Latent Embedding Feedback and Discriminative
Features for Zero-Shot Classification
ECCV 2020

[Papers with Code badges]

[Figure: overview]

Our proposed method, TF-VAEGAN, is the current state-of-the-art for ZSL and GZSL (as shown by the Papers with Code badges above). Please consider adding recent ZSL and GZSL results there as well.

Video/Overview

Abstract

Zero-shot learning strives to classify unseen categories for which no data is available during training. In the generalized variant, the test samples can further belong to seen or unseen categories. The state-of-the-art relies on Generative Adversarial Networks that synthesize unseen class features by leveraging class-specific semantic embeddings. During training, they generate semantically consistent features, but discard this constraint during feature synthesis and classification. We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification. We first introduce a feedback loop, from a semantic embedding decoder, that iteratively refines the generated features during both the training and feature synthesis stages. The synthesized features together with their corresponding latent embeddings from the decoder are then transformed into discriminative features and utilized during classification to reduce ambiguities among categories. Experiments on (generalized) zero-shot object and action classification reveal the benefit of semantic consistency and iterative feedback, outperforming existing methods on six zero-shot learning benchmarks.
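The feedback-and-classification pipeline described above can be sketched as follows. This is a minimal NumPy illustration with toy linear modules standing in for the trained networks; all dimensions, module names, the single-step feedback rule, and the feature concatenation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper).
Z_DIM, ATTR_DIM, FEAT_DIM, LATENT_DIM = 16, 8, 32, 12

# Toy linear weights standing in for the trained networks.
W_gen = rng.standard_normal((Z_DIM + ATTR_DIM, FEAT_DIM)) * 0.1  # generator
W_enc = rng.standard_normal((FEAT_DIM, LATENT_DIM)) * 0.1        # decoder hidden layer
W_dec = rng.standard_normal((LATENT_DIM, ATTR_DIM)) * 0.1        # decoder output
W_fb = rng.standard_normal((LATENT_DIM, Z_DIM)) * 0.1            # feedback module

def generate(z, attr):
    """Generator: noise + class attributes -> synthesized visual feature."""
    return np.concatenate([z, attr], axis=-1) @ W_gen

def decode(x):
    """Semantic embedding decoder: feature -> (latent embedding, reconstructed attributes)."""
    h = np.tanh(x @ W_enc)
    return h, h @ W_dec

def synthesize_with_feedback(attr, n_steps=2):
    """Iteratively refine a generated feature using the decoder's latent embedding."""
    z = rng.standard_normal(Z_DIM)
    x = generate(z, attr)
    for _ in range(n_steps):
        h, _ = decode(x)
        z = z + h @ W_fb  # feedback: adjust the generator input (illustrative rule)
        x = generate(z, attr)
    h, _ = decode(x)
    # Discriminative feature for the final classifier:
    # synthesized feature concatenated with its latent embedding.
    return np.concatenate([x, h])

attr = rng.standard_normal(ATTR_DIM)  # class-specific semantic embedding
feat = synthesize_with_feedback(attr)
print(feat.shape)  # (44,) = FEAT_DIM + LATENT_DIM
```

In the actual method the modules are deep networks trained adversarially, and the feedback loop runs during both training and feature synthesis; the sketch only mirrors the data flow.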

[Figure: overview]

Classification Results

Below you will find quantitative results for ZSL and GZSL classification, in comparison with previous methods.


[Figure: quantitative ZSL and GZSL results]

Below you will find qualitative results for ZSL and GZSL classification, in comparison with previous methods. The top row shows different instances of the ground-truth class; the second and third rows show the predictions of the baseline and of our proposed approach, respectively. Green and red boxes denote correct and incorrect predictions; the class name under each red box is the corresponding incorrectly predicted label.


[Figure: qualitative ZSL and GZSL results]

Image Reconstruction Results

Below you will find inverted images of features synthesized by the baseline and by our feedback-based approach, on four example classes of the Oxford Flowers dataset. These results suggest that the feedback module improves the quality of the synthesized features over the baseline, which has no feedback. Best viewed in color and zoomed in.


[Figure: image reconstruction results]

Citation