Neural ensembles – local information compression

A bioRxiv preprint: Johann Schumann and Gabriele Scheler

Memory is a difficult problem for standard neural network models. Ubiquitous synaptic plasticity introduces interference, which limits pattern recall and causes conflation errors. We present a lognormal recurrent neural network, load patterns (MNIST) into it, and test the resulting neural representation for information content with an output classifier. We identify neurons which ‘compress’ the pattern information into their own adjacency network. Learning is restricted to these neurons, which carry high information relative to a pattern, and requires only intrinsic and output synaptic plasticity for each identified neuron (‘localist plasticity’). This achieves high learning efficiency and prevents interference. By stimulating these neurons alone, we achieve an “unfolding” of their information and full pattern recall.
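The following is a minimal sketch of this pipeline, not the authors' code: it builds a sparse recurrent network with lognormally distributed weights, drives it with a pattern, selects the high-information neurons that would be the targets of localist plasticity, and then stimulates only those neurons to test recall. The network size, connection density, tanh rate dynamics, stand-in pattern, and the activation-based information proxy are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400   # number of neurons (assumed)
K = 20    # size of the high-information ensemble (assumed)

# Lognormal recurrent weight matrix: a few strong synapses, many weak ones.
W = rng.lognormal(mean=-2.0, sigma=1.0, size=(N, N))
W *= rng.random((N, N)) < 0.1            # sparse connectivity (assumed 10%)
np.fill_diagonal(W, 0.0)
W /= np.abs(np.linalg.eigvals(W)).max() * 1.1   # keep dynamics stable

def run(inp, steps=30):
    """Relax the network under a constant input; tanh rate units (assumed)."""
    x = inp.copy()
    for _ in range(steps):
        x = np.tanh(W @ x + inp)
    return x

# Stand-in for a loaded MNIST pattern: a sparse binary input vector.
pattern = (rng.random(N) < 0.15).astype(float)
rates = run(pattern)

# Proxy for 'high information relative to a pattern': the most strongly
# activated neurons. Under localist plasticity, only these neurons would
# undergo intrinsic and output synaptic plasticity.
ensemble = np.argsort(rates)[-K:]
print("ensemble neurons:", ensemble)

# Recall test: stimulate the ensemble alone and let the recurrent weights
# 'unfold' their adjacency network back toward the full pattern response.
cue = np.zeros(N)
cue[ensemble] = 1.0
recall = run(cue)
print(f"pattern/recall correlation: {np.corrcoef(recall, rates)[0, 1]:.2f}")
```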

Our first experiments show that this form of storage and recall is possible, with the caveat that recall is ‘lossy’, similar to human memory. Comparing our results with a standard Gaussian network model, we find that this effect relies critically on a power-law network model.
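A hedged sketch of why the weight distribution matters: a lognormal (power-law-like) distribution concentrates synaptic strength in a few hub connections, while a Gaussian distribution with matched mean and variance spreads it nearly uniformly. The distribution parameters and the top-1% concentration statistic below are illustrative proxies, not the measures used in the preprint.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Lognormal weights vs. a (rectified) Gaussian with matched mean and std.
logn = rng.lognormal(mean=-2.0, sigma=1.0, size=n)
gauss = np.clip(rng.normal(loc=logn.mean(), scale=logn.std(), size=n), 0, None)

for name, w in [("lognormal", logn), ("gaussian", gauss)]:
    top1 = np.sort(w)[-n // 100:].sum() / w.sum()   # weight share of top 1%
    print(f"{name}: top 1% of synapses carry {top1:.0%} of total weight")
```

In this toy comparison, the strongest 1% of lognormal synapses carry roughly twice the share of total weight that they do in the Gaussian case, which is one plausible reason a small stimulated ensemble can dominate recall in the lognormal network but not in the Gaussian one.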