Abstract
We present a novel framework that learns to predict human anatomy from the body surface. Specifically, our approach
generates a synthetic X-ray image of a person only from the
person's surface geometry. Furthermore, the synthetic X-ray image is parametrized and can be manipulated by adjusting a set of body markers, which are also generated during the X-ray image prediction. With the proposed framework, multiple synthetic X-ray images can easily be generated by varying the surface geometry. By perturbing the parameters, several additional synthetic X-ray images can be
generated from the same surface geometry. As a result, our
approach offers the potential to overcome the training data
barrier in the medical domain. This capability is achieved
by learning a pair of networks: one learns to generate the
full image from the partial image and a set of parameters,
and the other learns to estimate the parameters given the
full image. During training, the two networks are trained iteratively so that they converge to a solution where
the predicted parameters and the full image are consistent
with each other. In addition to medical data enrichment,
our framework can also be used for image completion as
well as anomaly detection.
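As an illustration of the iterative two-network training described above, the following is a minimal sketch in PyTorch. All module names, tensor shapes, loss choices, and the alternating update schedule here (e.g., GenerateNet, ParamNet, MSE losses, a 16-dimensional marker vector) are assumptions made for exposition, not the paper's actual implementation.

import torch
import torch.nn as nn

# Assumed toy dimensions: 64x64 images flattened, 16 body-marker parameters.
IMG = 64 * 64
N_PARAMS = 16

class GenerateNet(nn.Module):
    """Maps a partial image plus a parameter vector to a full synthetic X-ray."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG + N_PARAMS, 512), nn.ReLU(),
            nn.Linear(512, IMG), nn.Sigmoid())

    def forward(self, partial, params):
        return self.net(torch.cat([partial, params], dim=1))

class ParamNet(nn.Module):
    """Estimates the parameter (body-marker) vector from a full image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG, 256), nn.ReLU(),
            nn.Linear(256, N_PARAMS))

    def forward(self, full):
        return self.net(full)

gen, est = GenerateNet(), ParamNet()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(est.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Dummy batch standing in for (partial image, full X-ray, marker) triples.
partial = torch.rand(8, IMG)
full = torch.rand(8, IMG)
markers = torch.rand(8, N_PARAMS)

for step in range(100):
    # Step 1: update the generator so that the partial image plus the current
    # parameter estimate reproduces the full image.
    opt_g.zero_grad()
    pred_full = gen(partial, est(full).detach())
    loss_g = mse(pred_full, full)
    loss_g.backward()
    opt_g.step()

    # Step 2: update the parameter estimator so that its prediction on the
    # generated image stays consistent with the ground-truth markers.
    opt_p.zero_grad()
    pred_params = est(gen(partial, markers).detach())
    loss_p = mse(pred_params, markers)
    loss_p.backward()
    opt_p.step()

The alternating updates mimic the consistency objective stated in the abstract: the generator is trained against the estimator's current output, and the estimator is trained against the generator's current output, so the two converge toward mutually consistent predictions.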