Maximization of memory capacity in echo state neural networks
The method consists of orthogonalizing the reservoir of the neural network, which brings the network to the regime at the edge of stability and allows it to get close to the theoretically derived memory limits, as explained by Prof. Igor Farkaš from the Department of Applied Informatics, Comenius University in Bratislava.
In the field of machine learning and artificial neural networks, there has been a long-standing effort to design effective learning algorithms that allow neural networks to solve various tasks efficiently. In the context of recurrent neural networks, i.e. networks with feedback connections, a new paradigm called reservoir computing emerged 15 years ago, enabling an efficient design of such networks. One type is the echo state network (ESN), whose advantage is the quick initialization of the reservoir, that is, the matrix of weighted recurrent connections (an analogy to synapses between biological neurons). A proper setting of the weights (e.g. as expressed by the spectral radius) provides appropriate dynamic properties of an ESN, which has a positive impact on task solving (e.g. increased prediction accuracy of the network). It is interesting in this context that an ESN, as a dynamical system, can operate in two different modes: stable dynamics (for smaller spectral radii) and unstable dynamics (for larger spectral radii). The gradual transition between the two modes, the edge of stability, leads to optimized behavior, for example in terms of memory capacity. This regime is also of interest in physical and biological systems; there exists a theory according to which the brain dynamically self-organizes at the edge of stability in order to function effectively.
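To make the role of the spectral radius concrete, here is a minimal sketch of an ESN in Python with NumPy. All names here are illustrative, not taken from the paper: the reservoir weight matrix is drawn at random and rescaled so that its largest eigenvalue magnitude equals the desired spectral radius, which places the network in the stable or unstable regime.

```python
import numpy as np

def init_reservoir(n_neurons, spectral_radius, seed=0):
    """Random reservoir matrix rescaled to a target spectral radius."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_neurons, n_neurons))
    # Rescale so that the largest eigenvalue magnitude equals
    # spectral_radius: < 1 tends to give stable dynamics, > 1 unstable.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_esn(W, w_in, inputs):
    """Drive the reservoir with a scalar input sequence; return the states."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)  # standard ESN state update
        states.append(x.copy())
    return np.array(states)
```

A spectral radius below 1 is the commonly used rule of thumb for obtaining the echo state property (a fading memory of past inputs), while values above 1 typically produce unstable dynamics.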
"We have shown that the memory capacity of an ESN can substantially increase due to reservoir orthogonalization (using either of the two similar gradient descent methods), such that it approaches the theoretical limits, in various sizes of the reservoir. In particular, the number of reconstructed (and therefore memorized) random inputs at the output equals the number of neurons in the reservoir, which is usually unreachable. We started to look at this topic a few years ago in a bachelor thesis (P. Barančok) and a master thesis (R. Bosák), which both led to interesting results. Using the systematic computational analysis we subsequently managed to achieve results, together with a PhD student (P. Gergeľ) into a publishable form in the prestigious journal Neural Networks, "says prof. Igor Farkaš, who leads a working group on Cognition and Neural Computation at KAI, focusing on artificial neural networks and their use for modeling in cognitive science. "We are planning in further research to verify the results using different input data, as well as to find out if, and how, reservoir orthogonalization supports optimal behavior from the perspective of other quantitative measures, such as information transfer," concludes prof. Farkaš.
Farkaš I., Bosák R., Gergeľ P.: Computational analysis of memory capacity in echo state networks. Neural Networks, 83, 109-120, 2016.