Abstract
We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder-based architecture, consisting of a probabilistic encoder
and a probabilistic conditional decoder, our model can
generate novel exemplars from seen/unseen classes, given
their respective class attributes. These exemplars can subsequently be used to train any off-the-shelf classification
model. One of the key aspects of our encoder-decoder architecture is a feedback-driven mechanism in which a discriminator (a multivariate regressor) learns to map the generated exemplars to the corresponding class attribute vectors, leading to an improved generator. Our model’s ability
to generate and leverage examples from unseen classes to
train the classification model naturally helps to mitigate the
bias towards predicting seen classes in generalized zero-shot learning settings. Through a comprehensive set of
experiments, we show that our model outperforms several state-of-the-art methods on several benchmark datasets, for both standard and generalized zero-shot learning.
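The data flow described above can be sketched as follows. This is a minimal, illustrative forward pass only, not the paper's implementation: all dimensions, weight initializations, and function names here are hypothetical, and no training or loss computation is shown.

```python
import random

random.seed(0)

X_DIM, Z_DIM, A_DIM = 8, 4, 3  # feature, latent, attribute dims (assumed toy sizes)

def linear(in_dim, out_dim):
    """Return a random linear map (stand-in for a learned layer; untrained)."""
    W = [[random.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
    return lambda v: [sum(w * x for w, x in zip(row, v)) for row in W]

# Probabilistic encoder: maps a feature vector x to the parameters of q(z | x).
enc_mu, enc_logvar = linear(X_DIM, Z_DIM), linear(X_DIM, Z_DIM)
# Conditional decoder: maps [z; class attributes] back to feature space.
decoder = linear(Z_DIM + A_DIM, X_DIM)
# Discriminator (multivariate regressor): maps a generated exemplar to
# its class-attribute vector, providing the feedback signal to the generator.
regressor = linear(X_DIM, A_DIM)

def generate_exemplar(attr):
    """Sample z from the prior and decode it conditioned on class attributes,
    so exemplars can be produced even for unseen classes."""
    z = [random.gauss(0, 1) for _ in range(Z_DIM)]
    return decoder(z + attr)

attr = [1.0, 0.0, 0.5]           # a hypothetical class-attribute vector
x_hat = generate_exemplar(attr)  # synthetic exemplar for that (possibly unseen) class
attr_hat = regressor(x_hat)      # after training, this should approximate `attr`
print(len(x_hat), len(attr_hat))
```

Synthetic exemplars produced this way for unseen classes can then be mixed with real seen-class features to train any off-the-shelf classifier, which is what counteracts the seen-class bias in the generalized setting.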