Abstract. We introduce a deep learning-based method to generate full
3D hair geometry from an unconstrained image. Our method recovers
local strand details and runs in real time. State-of-the-art hair
modeling techniques rely on large hairstyle collections for nearest-neighbor retrieval followed by ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and runs 1000 times faster while generating a hair model with 30K strands. The convolutional
neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp.
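To make the input-output relation concrete, the following is a minimal PyTorch sketch of such an encoder-decoder, assuming a 2-channel 256x256 orientation field, a 32x32 scalp parameterization, and a 300-dimensional per-strand feature; these sizes and the layer layout are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class HairSketchNet(nn.Module):
    # Sketch only: all widths and depths are illustrative assumptions.
    def __init__(self, feat_dim=300):
        super().__init__()
        # Encoder: 2-channel orientation field -> compact hairstyle code.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128x128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64x64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16x16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 512),
        )
        # Decoder: code -> one feature vector per scalp-grid location.
        self.decoder = nn.Sequential(
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Unflatten(1, (64, 4, 4)),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.ConvTranspose2d(64, feat_dim, 4, stride=2, padding=1),       # 32x32
        )

    def forward(self, orient):             # orient: (B, 2, 256, 256)
        z = self.encoder(orient)           # (B, 512) hairstyle code
        return self.decoder(z)             # (B, feat_dim, 32, 32) strand features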
We introduce a collision loss to synthesize more plausible hairstyles, and we use the visibility of each strand as a weight term to improve reconstruction accuracy.
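A minimal sketch of how these two terms could look, assuming strands are sampled as point sequences, visibility is a per-strand scalar in [0, 1], and the body is approximated by an ellipsoid; the weighting scheme and the proxy geometry are assumptions, not the paper's exact formulation.

import torch

def reconstruction_loss(pred, target, visibility):
    # pred, target: (B, S, P, 3) strand points; visibility: (B, S) in [0, 1].
    # Visible strands get a larger weight so that errors on strands the
    # camera actually sees dominate the loss (the 0.1/0.9 mix is an assumption).
    w = (0.1 + 0.9 * visibility)[:, :, None, None]
    return (w * (pred - target).pow(2)).mean()

def collision_loss(pred, center, radii):
    # Penalize strand points falling inside an ellipsoid proxy for the
    # head/body; center: (3,), radii: (3,). Normalized squared distance
    # below 1 means "inside", so relu(1 - d) is zero outside the proxy.
    d = ((pred - center) / radii).pow(2).sum(dim=-1)
    return torch.relu(1.0 - d).mean()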
The encoder-decoder architecture of our network provides a compact and continuous representation of hairstyles, which allows natural interpolation between them.
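Interpolation then amounts to blending latent codes, as in this sketch; it assumes the encoder/decoder split of the network sketch above, and the function name is hypothetical.

import torch

def interpolate_hairstyles(net, orient_a, orient_b, t):
    # Encode two hair images, linearly blend their latent codes with
    # weight t in [0, 1], and decode the blend into strand features.
    with torch.no_grad():
        za = net.encoder(orient_a)
        zb = net.encoder(orient_b)
        return net.decoder((1 - t) * za + t * zb)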
We use a large set of rendered synthetic hair models to train our network. Our method generalizes to real images because an intermediate 2D orientation field, automatically computed from the real image, factors out the difference between synthetic and real hair.
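A common way to compute such an orientation field, sketched here with OpenCV, is to filter the image with a bank of oriented Gabor kernels and take the per-pixel argmax response; the filter parameters and the doubled-angle encoding are assumptions, since the abstract does not specify them.

import cv2
import numpy as np

def orientation_field(gray):
    # At every pixel, pick the angle of the maximally responding filter
    # from a bank of 16 oriented Gabor filters (parameters are assumptions).
    angles = np.linspace(0, np.pi, 16, endpoint=False)
    responses = np.stack([
        cv2.filter2D(gray.astype(np.float32), -1,
                     cv2.getGaborKernel((17, 17), sigma=2.0, theta=a,
                                        lambd=4.0, gamma=0.5))
        for a in angles
    ])
    best = angles[np.argmax(np.abs(responses), axis=0)]
    # Encode the sign-ambiguous orientation as a 2-channel field by
    # doubling the angle, so theta and theta + pi map to the same value.
    return np.stack([np.cos(2 * best), np.sin(2 * best)], axis=-1)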
We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures and show reconstructed hair sequences from videos.