Abstract
We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, and across cuboid layouts and more general layouts (e.g., "L"-shaped rooms). Our method operates directly on the panoramic image, rather than decomposing it into perspective images as recent works do. Our network architecture is similar to that of RoomNet [15], but we show improvements due to aligning the image based on vanishing points, predicting multiple layout elements (corners, boundaries, size, and translation), and fitting a constrained Manhattan layout to the resulting predictions. Our method compares well in speed and accuracy to other existing work on panoramas, achieves among the best accuracy for perspective images, and can handle both cuboid-shaped and more general Manhattan layouts.