Abstract
Computed Tomography (CT) reconstruction is a fundamental component of a wide variety of applications, ranging
from security to healthcare. Classical techniques require measuring projections, called sinograms, from a full
180° view of the object. However, obtaining a full view is
not always feasible, for example when scanning irregular objects that limit the flexibility of scanner rotation. The resulting limited-angle sinograms are known to produce highly
artifact-laden reconstructions with existing techniques. In
this paper, we propose to address this problem using CTNet
– a system of 1D and 2D convolutional neural networks
that operates directly on a limited-angle sinogram to predict the reconstruction. We apply the x-ray transform to this
prediction to obtain a “completed” sinogram, as if it came
from a full 180° view. We feed this to standard analytical
and iterative reconstruction techniques to obtain the final
reconstruction. We show through extensive experimentation on
a challenging real-world dataset that this combined strategy
outperforms many competitive baselines. We also propose
a measure of confidence for the reconstruction that enables
a practitioner to gauge the reliability of a prediction made
by CTNet. We show that this measure is a strong indicator of reconstruction quality, as measured by PSNR, while not requiring
ground truth at test time. Finally, using a segmentation experiment, we show that our reconstructions also preserve
the 3D structure of objects better than existing solutions.
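The sinogram-completion idea in the abstract can be illustrated with a minimal sketch using scikit-image's `radon`/`iradon`. Here the network's prediction is stood in for by the ground-truth phantom (a deliberate simplification, since CTNet itself is not reproduced here), purely to show how re-projecting a predicted image yields a full-view sinogram that standard reconstruction can consume:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

phantom = resize(shepp_logan_phantom(), (128, 128))
full_angles = np.arange(0.0, 180.0, 1.0)      # full 180° view
limited_angles = np.arange(0.0, 120.0, 1.0)   # limited-angle acquisition

# Direct reconstruction from the limited-angle sinogram (artifact-laden).
limited_sino = radon(phantom, theta=limited_angles)
recon_limited = iradon(limited_sino, theta=limited_angles)

# Stand-in for the network's predicted reconstruction (here: ground truth,
# only to illustrate the completion step, not the learning itself).
predicted_image = phantom

# "Complete" the sinogram by applying the x-ray (Radon) transform to the
# prediction, then feed it to a standard analytical method (FBP).
completed_sino = radon(predicted_image, theta=full_angles)
recon_completed = iradon(completed_sino, theta=full_angles)

def psnr(ref, img):
    """Peak signal-to-noise ratio against a reference image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

With a good prediction, the completed-sinogram reconstruction scores substantially higher PSNR than the direct limited-angle reconstruction, which is the effect the combined strategy exploits.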