Abstract
Advances in image super-resolution (SR) have recently
benefited significantly from rapid developments in deep
neural networks. Inspired by these recent discoveries, we
note that many state-of-the-art deep SR architectures can
be reformulated as a single-state recurrent neural network
(RNN) with finite unfoldings. In this paper, we explore new
structures for SR based on this compact RNN view, leading
us to a dual-state design, the Dual-State Recurrent Network
(DSRN). Compared to its single-state counterparts that operate at a fixed spatial resolution, DSRN exploits both low-resolution (LR) and high-resolution (HR) signals jointly.
Recurrent signals are exchanged between these states in
both directions (LR to HR and HR to LR) via delayed feedback. Extensive quantitative and qualitative evaluations on benchmark datasets and on a recent challenge
demonstrate that the proposed DSRN performs favorably
against state-of-the-art algorithms in terms of both memory consumption and predictive accuracy. The code for our
method is publicly available.
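The dual-state recurrence described above can be illustrated with a minimal sketch: two states, one at LR and one at HR resolution, where the LR state receives delayed feedback from the previous HR state (HR to LR) and the new HR state is driven by the updated LR state (LR to HR). This is a toy numpy illustration under our own assumptions, not the authors' implementation; the function name `dsrn_step`, the use of plain matrix products in place of convolutions, and the `tanh` nonlinearity are all illustrative choices.

```python
import numpy as np

def dsrn_step(s_lr, s_hr_prev, W_lr, W_up, W_down):
    """One unfolding of a dual-state recurrence (illustrative sketch).

    s_lr      : current LR state
    s_hr_prev : HR state from the *previous* step (delayed feedback)
    W_lr      : LR-to-LR recurrent weights
    W_up      : LR-to-HR weights (stands in for upsampling)
    W_down    : HR-to-LR weights (stands in for downsampling)
    """
    # HR -> LR direction: the LR update mixes its own recurrence with
    # the delayed HR signal.
    s_lr_new = np.tanh(W_lr @ s_lr + W_down @ s_hr_prev)
    # LR -> HR direction: the HR state is driven by the fresh LR state.
    s_hr_new = np.tanh(W_up @ s_lr_new)
    return s_lr_new, s_hr_new

# Unroll a finite number of steps, as in the RNN view of deep SR networks.
rng = np.random.default_rng(0)
d_lr, d_hr = 4, 8  # toy state sizes; real states are feature maps
W_lr = 0.1 * rng.standard_normal((d_lr, d_lr))
W_up = 0.1 * rng.standard_normal((d_hr, d_lr))
W_down = 0.1 * rng.standard_normal((d_lr, d_hr))

s_lr = rng.standard_normal(d_lr)
s_hr = np.zeros(d_hr)  # no HR feedback before the first step
for _ in range(3):
    s_lr, s_hr = dsrn_step(s_lr, s_hr, W_lr, W_up, W_down)
```

Because the same weights are reused at every unfolding, the parameter count stays fixed regardless of depth, which is the source of the memory savings the abstract refers to.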