Abstract
Many visual illusions are contextual by nature. In the orientation-tilt illusion, the perceived orientation of a central grating is repulsed from, or attracted towards, the orientation of a surrounding grating. An open question in vision science is whether such illusions reflect basic limitations of the visual system, or whether they correspond to corner cases of neural computations that are efficient in everyday settings. Here we develop a deep recurrent neural network architecture that approximates known visual cortical circuits linked to contextual illusions (Mély et al., 2018). We show that this architecture, which we refer to as the γ-Net, learns contour detection tasks with better sample efficiency than state-of-the-art feedforward networks, while also exhibiting an orientation-tilt illusion consistent with human data. Correcting this illusion significantly reduces γ-Net contour detection accuracy by driving it to prefer low-level edges over high-level object-boundary contours. Overall, our study suggests that the orientation-tilt illusion is a byproduct of neural circuits that help biological visual systems achieve robust and efficient contour detection, and that incorporating such circuits in artificial neural networks can improve computer vision.