New Results in Neuroscience Based AI for Memory and Learning

In our own work on NeuroAI we aim to leverage knowledge about the brain, here the cortex, for structural learning: learning that is fast, efficient and successful because it builds on pre-existing neuronal structures that have evolved over millions of years. In the terms of classical AI, what we have done is to prune the network before learning. This is also what distinguishes deep learning from universal neural networks with a single hidden layer.

Rather than continuing with the universal function approximator approach (ANNs, now simply "AI"), whose advantages and disadvantages have been discussed since the early 1990s, this is a different kind of approach, first exemplified here with simple patterns. In the future, this and similar approaches may lead us out of the conundrum captured by the quip attributed to von Neumann: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." The point being: yes, you can approximate arbitrary functions given millions, billions or trillions of parameters, but why would you want to? That is not mathematics or science, where we compress what we experience into manageable and applicable knowledge. And to compress data into applicable knowledge we need structural learning, not universal function learning. I have hopes for the future.
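The idea of pruning a network before learning can be sketched in a few lines. The example below is purely illustrative and not the method used in our work: it applies a fixed binary mask (standing in for a structural prior) to a single weight matrix before any training, and reapplies the mask at every gradient step so that pruned connections can never be revived. All names and sizes here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "structural prior": a fixed binary mask that removes most connections
# before any learning happens (illustrative stand-in, not a cortical model).
n_in, n_out = 8, 4
mask = (rng.random((n_in, n_out)) < 0.25).astype(float)  # keep roughly 25% of edges

# Initialise weights and prune immediately, before the first update.
W = rng.normal(size=(n_in, n_out)) * mask

# A few gradient steps on a least-squares toy objective; the mask is
# reapplied so updates only flow through the surviving connections.
x = rng.normal(size=(16, n_in))
y = rng.normal(size=(16, n_out))
for _ in range(100):
    pred = x @ W
    grad = x.T @ (pred - y) / len(x)
    W -= 0.1 * grad * mask

# Pruned weights remain exactly zero throughout learning.
assert np.all(W[mask == 0] == 0)
```

The contrast with the universal-approximator approach is that here the effective parameter count is set by the structure (the mask) rather than by the full weight matrix, which is the sense in which the network is pruned before, not after, learning.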