Abstract
In this work, we focus on the challenge of taking partial
observations of highly stylized text and generalizing them to generate unobserved glyphs in the ornamented
typeface. To generate a set of multi-content images following
a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content
along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in
the real world, such as those on movie posters or infographics. We seek to transfer both the typographic stylization (e.g.,
serifs and ears) and the textual stylization (e.g., color
gradients and effects). We base our experiments on our collected dataset of 10,000 fonts with different styles
and demonstrate effective generalization from a very small
number of observed glyphs.