Bipartite expander Hopfield networks as self-decoding high-capacity error correcting codes

2020-02-19

Abstract

Neural network models of memory and error correction famously include the Hopfield network, which can directly store—and error-correct through its dynamics—arbitrary N-bit patterns, but only for ~0.14N such patterns. On the other end of the spectrum, Shannon's coding theory established that it is possible to represent exponentially many states (~2^(αN), for a constant rate α > 0) using N symbols in such a way that an optimal decoder could correct all noise up to a threshold. We prove that it is possible to construct an associative content-addressable network that combines the properties of strong error correcting codes and Hopfield networks: it simultaneously possesses exponentially many stable states; these states are robust, with basins of attraction large enough that they can be correctly recovered despite errors in a finite fraction of all nodes; and the errors are intrinsically corrected by the network's own dynamics. The network is a two-layer Boltzmann machine with simple neural dynamics, low dynamic-range (binary) pairwise synaptic connections, and sparse expander graph connectivity. Thus, quasi-random sparse structures—characteristic of important error-correcting codes—may provide for high-performance computation in artificial neural networks and the brain.
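The classical Hopfield baseline the abstract contrasts against can be illustrated with a minimal sketch: Hebbian (outer-product) storage of ±1 patterns and asynchronous sign-threshold dynamics that clean up a corrupted input. All names and parameter choices here are illustrative, not taken from the paper, which instead builds a two-layer network with sparse bipartite expander connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64  # number of neurons
P = 3   # stored patterns, well below the ~0.14*N capacity limit

# Random +/-1 patterns to store.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product learning rule; no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    """Asynchronous sign-threshold updates until a fixed point (or step limit)."""
    state = state.copy()
    for _ in range(steps):
        prev = state.copy()
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):
            break
    return state

# Flip ~10% of the bits of the first pattern, then let the dynamics correct them.
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print(f"overlap with stored pattern = {(recovered == patterns[0]).mean():.3f}")
```

Note the scaling contrast: this construction stores only O(N) patterns in a dense, real-valued weight matrix, whereas the paper's expander construction achieves exponentially many robust stable states with sparse binary connections.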
