
The Power of Simplicity: Building a smarter deep learning model using fewer neurons

From controlling self-driving cars to mastering creative processes, artificial intelligence tasks are becoming more ambitious. Because of that, it may seem intuitive that the internal workings of AI must grow larger and more complicated to keep up. However, researchers from the Massachusetts Institute of Technology, Institute of Science and Technology Austria, and Vienna University of Technology have developed a novel AI system that challenges this notion. Not only has it been found to learn to steer a vehicle better than current state-of-the-art models, but it also happens to be sixty-three times smaller.

Typically, AI models use numerous mathematical functions (artificial “neurons”) that work together like real biological neurons. In this specific project, the researchers named their system a “neural circuit policy (NCP).” The NCP is an AI system inspired by the simple neural circuits of the nematode Caenorhabditis elegans. In contrast to previous deep learning models, NCPs have very few neurons, and not every neuron forms connections with every other neuron. “It is wired very sparsely, [with] only a few synapses, and the wiring is recurrent so it has a lot of feedback connections,” said Mathias Lechner, a PhD student at IST Austria and one of the leaders of the study. In addition, each NCP neuron has greater computational capability than a standard artificial neuron, and because there are so few of them, it is easier to understand the exact function of any neuron at any time.
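The contrast between sparse and dense wiring can be made concrete with a small sketch. The snippet below builds a connectivity mask in which each neuron receives input from only a handful of others, rather than from all of them; because connections can form cycles, the wiring is recurrent. This is an illustrative toy, not the researchers' actual NCP wiring scheme, and the neuron and synapse counts are arbitrary choices for the example.

```python
import random

def sparse_recurrent_mask(n_neurons, fan_in, seed=0):
    """Build a boolean adjacency mask where each neuron receives input
    from only `fan_in` randomly chosen neurons, instead of all n_neurons.
    Cycles in the mask give the network recurrent feedback paths."""
    rng = random.Random(seed)
    mask = [[False] * n_neurons for _ in range(n_neurons)]
    for post in range(n_neurons):
        # Each neuron gets a small random subset of presynaptic partners.
        for pre in rng.sample(range(n_neurons), fan_in):
            mask[post][pre] = True
    return mask

mask = sparse_recurrent_mask(n_neurons=19, fan_in=4)
total = sum(sum(row) for row in mask)
print(total)  # 19 * 4 = 76 synapses, versus 19 * 19 = 361 if fully connected
```

A dense network of the same size would have every entry of the mask set to True; sparsity here cuts the synapse count by roughly a factor of five, which is the kind of reduction that makes each remaining connection easier to inspect.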

The NCP is a type of recurrent neural network (RNN)—a model that retains memories of past observations—which translates real-world observations into a vehicle steering command. But remembering and learning from every observation is not always productive, since some occurrences happen by chance. “There could be some spurious correlations [in the training data]. For example, it could happen that always after we see a red car there is a left turn, just by accident. These dependencies are also captured,” Lechner said. Learning these accidental events, therefore, is not useful. However, an AI problem called the vanishing gradient, which makes it difficult for networks to learn from inputs seen far in the past, actually prevents RNNs from establishing these spurious long-term dependencies. This is beneficial for driving, since the task demands constant adaptation to new scenarios rather than reliance on distant history. NCPs “have some memory, but not too much,” as noted by Lechner, and they leverage the benefits of the vanishing gradient to maximize the accuracy of their decisions.
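The "some memory, but not too much" idea can be illustrated with a toy recurrent unit. In the sketch below, a single hidden state blends its previous value (scaled by a decay factor) with the current observation and emits a bounded steering command. With a decay below one, an observation made t steps ago influences the state by roughly decay**t, so old inputs fade away. This is a deliberately simplified stand-in, not the equations from the study, and all weights here are made-up values for demonstration.

```python
import math

def rnn_steering_step(hidden, observation, decay=0.5, w_in=1.0, w_out=0.8):
    """One step of a toy recurrent unit: the hidden state keeps a
    decaying trace of past observations, and tanh squashes the output
    into a bounded steering command. With decay < 1, the trace of an
    observation shrinks geometrically over time, a rough analogue of
    how vanishing gradients limit long-term dependencies."""
    hidden = decay * hidden + w_in * observation
    steering = math.tanh(w_out * hidden)
    return hidden, steering

# A strong observation seen 20 steps ago barely affects the state now:
h = 1.0
steer = 0.0
for _ in range(20):
    h, steer = rnn_steering_step(h, observation=0.0)
print(h)  # about 0.5**20, i.e. under one millionth of its original value
```

A spurious cue such as the red-car-then-left-turn coincidence would have to persist across many steps to matter, and in a unit like this its trace decays geometrically before it can dominate the decision.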

Autonomous driving was selected to assess NCP effectiveness, and the researchers found that NCPs outperform traditional AI models on multiple metrics. The ability of NCPs to adapt to differences in lighting, for example, made them significantly better at avoiding crashes. While driving, other networks focused on the curbside, whereas the NCP focused on the horizon. Since curbsides are highly variable and susceptible to lighting conditions, the other networks displayed more inconsistent focus; by concentrating on the steady horizon, NCPs maintained a more stable attention. In addition, NCPs learned a more concise representation of driving, making it easier to associate their behaviors with intuitive, human-like decisions. The specific action of every single neuron –– which is often difficult to assess clearly in other models –– could be visualized in NCPs.

NCPs are sparse, compact, and robust, making them an ideal option for technologies that require complex, spontaneous decision-making based on outside stimuli –– such as drones and aircraft. In this way, NCPs demonstrate that the increasingly ambitious problems assigned to AI do not always require large, convoluted networks to solve.