Abstract
Synthesizing photo-realistic images from text descriptions is a challenging problem. Previous studies have shown
remarkable progress in the visual quality of the generated
images. In this paper, we leverage the semantics of the input
text descriptions to help render photo-realistic images.
However, diverse linguistic expressions that depict the
same content pose challenges to extracting consistent
semantics. To this end, we propose a novel photo-realistic
text-to-image generation model that implicitly disentangles
semantics to achieve both high-level semantic consistency
and low-level semantic diversity. Specifically, we design
(1) a Siamese mechanism in the discriminator to learn consistent high-level semantics, and (2) a visual-semantic embedding strategy via semantic-conditioned batch normalization to capture diverse low-level semantics. Extensive experiments and ablation studies on the CUB and MS-COCO datasets
demonstrate the superiority of the proposed method over state-of-the-art approaches.
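
As a rough illustration of the second component (not the paper's exact formulation), semantic-conditioned batch normalization can be sketched as a conditional batch-norm layer whose per-channel scale and shift are predicted from a sentence embedding rather than learned as fixed parameters. All class, layer, and dimension names below are hypothetical:

```python
import torch
import torch.nn as nn

class SemanticConditionedBatchNorm(nn.Module):
    """Batch normalization whose affine parameters (gamma, beta)
    are predicted from a sentence embedding, so low-level visual
    statistics can vary with the input text semantics."""
    def __init__(self, num_features, embed_dim):
        super().__init__()
        # Normalize without a learned affine transform;
        # the affine part is supplied by the text condition.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # Map the sentence embedding to per-channel gamma and beta.
        self.gamma_fc = nn.Linear(embed_dim, num_features)
        self.beta_fc = nn.Linear(embed_dim, num_features)

    def forward(self, x, sent_embed):
        # x: (N, C, H, W) feature map; sent_embed: (N, embed_dim)
        out = self.bn(x)
        gamma = self.gamma_fc(sent_embed).unsqueeze(-1).unsqueeze(-1)
        beta = self.beta_fc(sent_embed).unsqueeze(-1).unsqueeze(-1)
        # Predict a residual scale around 1 so the layer starts near identity.
        return (1 + gamma) * out + beta

# Hypothetical usage inside a generator block:
bn = SemanticConditionedBatchNorm(num_features=256, embed_dim=128)
feat = torch.randn(4, 256, 16, 16)   # intermediate image features
sent = torch.randn(4, 128)           # sentence embedding of the caption
out = bn(feat, sent)                 # (4, 256, 16, 16)
```

The design intent, as described in the abstract, is that the normalized features are modulated by the text embedding, letting the same high-level concept be rendered with diverse low-level visual details.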