On a summer afternoon, a bumblebee hovers before a flower. Rather than darting straight to the nectar, it performs an elaborate dance: drifting left, pausing, tracing the petals in a careful arc. What looks like hesitation is, in fact, strategy. With each motion, the bee collects fragments of the scene, tiny snapshots stitched together over time. From these glimpses, its miniature brain can recognize flowers, tell apart symbols, and even distinguish human faces. How can an animal with fewer than a million neurons achieve such sophisticated feats of vision?
A recent study in eLife, led by postdoctoral researcher HaDi MaBouDi at the University of Sheffield, suggests the answer lies in the interplay between movement and minimal neural circuitry. Using a brain-inspired, or “neuromorphic,” model of bee vision, the researchers showed how simple adaptive rules combined with flight maneuvers can produce a remarkably efficient visual system. Their findings show that active scanning shapes what bees see and informs how visual neurons wire themselves to encode information.
At the heart of the study is the concept of active vision, the idea that animals do not passively absorb the world but actively structure their own input through movement. “Intelligence doesn’t lie in the brain alone; it emerges from the continuous interaction between brain, body, and environment,” MaBouDi said. Humans refresh visual details over time through rapid eye movements, or saccades. Bees, constrained by the coarse mosaic of their compound eyes, cannot extract enough detail from a single static view. Instead, they perform scanning maneuvers, coordinated movements of the head and body during flight that sequentially sample different regions of a visual scene. By integrating these glimpses over time, bees build a more detailed and comprehensive representation of their environment than any single snapshot could provide, and these rhythmic scanning strategies form the basis for their remarkable recognition abilities.
To investigate how scanning behavior shapes neural processing, MaBouDi and colleagues built a biologically inspired neural network model that mirrors the architecture of the insect visual pathway. The model consists of interconnected layers of simple processing units that simulate how neurons encode and transform visual information. Simulated photoreceptors feed into intermediate layers corresponding to the lamina and medulla of the bee brain before reaching the lobula, a structure in the optic lobe where complex visual features are integrated. From the model lobula, signals are transmitted to a simplified version of the mushroom body, the central brain structure responsible for learning and decision-making. The model incorporated progressive time delays to mimic the sequential flow of visual input during scanning; rather than receiving an entire image at once, the virtual bee processed five sequential patches, each representing a portion of its scanning trajectory. Learning occurred via non-associative plasticity, whereby synaptic connections strengthened or weakened through repeated exposure without requiring explicit rewards, enabling the network to gradually form coherent internal representations of visual scenes.
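For readers who think in code, here is a minimal Python sketch of this kind of pipeline, written under stated assumptions rather than the study’s actual parameters: a small bank of lobula-like units receives five sequential patches and adjusts its feedforward weights with a reward-free, Hebbian-style rule. The patch size, the decay term standing in for the progressive time delays, and the Oja-style normalization are all illustrative choices.

```python
# Minimal sketch of a layered, sequential-patch model (illustrative only;
# patch size, layer widths, and the exact plasticity rule are assumptions,
# not the parameters of the published model).
import numpy as np

rng = np.random.default_rng(0)

N_PATCHES = 5    # sequential patches sampled along one scan
PATCH_PIX = 75   # pixels per patch after the photoreceptor/lamina stage (assumed)
N_LOBULA = 36    # lobula-like units (the study reports roughly this many suffice)

# Feedforward weights from the medulla-like stage to the lobula-like units.
W = rng.normal(scale=0.1, size=(N_LOBULA, PATCH_PIX))

def scan_responses(patches, w):
    """Accumulate lobula activity over a sequence of patches; the decay is a
    stand-in for the progressive time delays of the scanning trajectory."""
    activity = np.zeros(w.shape[0])
    for patch in patches:
        activity = 0.8 * activity + np.maximum(w @ patch, 0.0)
    return activity

def hebbian_update(w, patch, activity, lr=0.01):
    """Non-associative (reward-free) plasticity: co-active inputs and units
    strengthen their connection; an Oja-style term keeps weights bounded."""
    return w + lr * (np.outer(activity, patch) - activity[:, None] ** 2 * w)

# Unsupervised exposure: repeated scans of random scenes, with no reward signal.
for _ in range(200):
    patches = [rng.normal(size=PATCH_PIX) for _ in range(N_PATCHES)]
    act = scan_responses(patches, W)
    for patch in patches:
        W = hebbian_update(W, patch, act)
```

In the study the lobula output then drives a simplified mushroom body that makes the final decision; the sketch stops at the lobula stage, where the unsupervised learning takes place.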
The results were fascinating. The model demonstrated that active scanning, combined with non-associative learning, allows bees’ visual systems to encode information efficiently and selectively. Lobula neurons developed spatiotemporally organized receptive fields and became tuned, or selectively responsive, to specific visual features such as orientation, motion direction, velocity, and contrast, closely mirroring insect neurophysiology. Through repeated exposure to sequential image patches, these neurons refined their responses, firing minimally but distinctly to different patterns, which reduced redundancy while emphasizing the most informative visual features. Sequential scanning also allowed the network to integrate inputs across both space and time, improving the discrimination of visually similar patterns, particularly when attention was focused on the most informative regions. Furthermore, the model’s performance depended on both the number of lobula neurons and the plasticity of their connections, highlighting that structured connectivity and selective sampling are essential for accurate visual recognition.
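The phrase “firing minimally but distinctly” can be made concrete with two generic measures: a lifetime-sparseness index for each unit, and the average correlation between units’ responses, where lower correlation means less redundancy. The sketch below illustrates these measures in general form; they are not necessarily the metrics used in the paper.

```python
# Two generic measures of efficient coding (illustrative; not necessarily the
# paper's metrics): lifetime sparseness per neuron, and mean pairwise
# correlation across the population (lower correlation = less redundancy).
import numpy as np

def lifetime_sparseness(rates):
    """rates: non-negative responses of one neuron to a set of stimuli.
    Returns a value in [0, 1]; closer to 1 means sparser (more selective)."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    mean_sq, sq_mean = r.mean() ** 2, (r ** 2).mean()
    return (1 - mean_sq / sq_mean) / (1 - 1 / n) if sq_mean > 0 else 0.0

def mean_pairwise_correlation(population):
    """population: (n_neurons, n_stimuli) response matrix."""
    c = np.corrcoef(population)
    return c[~np.eye(c.shape[0], dtype=bool)].mean()

# A selective neuron versus one that responds equally to everything.
print(lifetime_sparseness([0, 0, 9, 0, 0]))  # ~1.0: highly selective
print(lifetime_sparseness([5, 5, 5, 5, 5]))  # 0.0: responds to everything
```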
Perhaps most remarkable was how few neurons the system required. Only about thirty-six lobula-like neurons were sufficient to achieve high recognition accuracy, demonstrating the efficiency of this biologically inspired neural architecture. The study also revealed the critical role of inhibitory plasticity. When inhibition was fixed rather than adaptive, neurons became redundant, and recognition performance dropped. In living bees, this flexibility likely allows neurons to specialize, preventing overlap and boosting efficiency. Taken together, these findings support the efficient coding hypothesis, which proposes that sensory systems evolve to convey information as compactly and efficiently as possible. “When bees actively scan a stimulus rather than processing it as a static image, they can extract the necessary information with fewer neurons, more efficiently, and faster, achieving this only through interaction with their environment,” MaBouDi said.
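A toy two-unit circuit shows why letting inhibition adapt matters. One common way to model adaptive lateral inhibition is an anti-Hebbian rule, in which inhibition between two units strengthens whenever they fire together; this is an assumption for illustration, not necessarily the rule used in the study. With inhibition held fixed, both units end up carrying nearly the same signal; with the adaptive rule, their responses are pushed apart.

```python
# Toy contrast between fixed and adaptive lateral inhibition (an illustrative
# anti-Hebbian rule, not the study's exact mechanism): co-activity strengthens
# inhibition, so each unit specializes instead of duplicating its neighbor.
import numpy as np

rng = np.random.default_rng(1)

def run(adaptive, steps=2000, lr=0.05):
    inhibition = np.zeros((2, 2))            # lateral inhibitory weights
    shared = rng.normal(size=steps)          # shared input creates redundancy
    responses = np.zeros((steps, 2))
    for t in range(steps):
        drive = shared[t] + 0.3 * rng.normal(size=2)
        r = drive - inhibition @ drive       # inhibition shapes the responses
        responses[t] = r
        if adaptive:                         # anti-Hebbian: co-activity -> more inhibition
            dw = lr * np.outer(r, r)
            np.fill_diagonal(dw, 0.0)
            inhibition = np.clip(inhibition + dw, 0.0, 1.0)
    return np.corrcoef(responses.T)[0, 1]

print("fixed inhibition, response correlation:   ", round(run(False), 2))  # ~0.9
print("adaptive inhibition, response correlation:", round(run(True), 2))   # ~0.0
```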
To test the system’s perceptual abilities, the team simulated classic experiments in bee cognition. The model successfully distinguished plus and multiplication signs, angled bars, and even human faces, particularly when its scanning strategies mirrored those of real bees. Focusing on the lower half of a symbol at moderate speed produced recognition rates as high as ninety-eight percent, whereas scanning too quickly or from too great a distance, or not scanning at all, sharply reduced accuracy. For instance, increasing scanning speed, which enlarged the gaps between sampled image patches, reduced accuracy to seventy percent, while stationary simulated bees that did not scan at all chose correctly only sixty percent of the time.
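The speed penalty has a simple geometric reading: if the model samples a patch at a fixed interval, a faster scan spaces its five patches proportionally farther apart, so more of the pattern falls into the unseen gaps. The numbers in the sketch below are purely illustrative and not taken from the study.

```python
# Back-of-the-envelope illustration (numbers are made up, not from the study):
# with a fixed sampling interval, a faster scan leaves larger gaps between the
# five sampled patches, so less of the pattern is ever seen.
def patch_centers(scan_speed_mm_s, sample_interval_s=0.02, n_patches=5):
    """Center positions (mm) of successive patches along a linear scan."""
    step = scan_speed_mm_s * sample_interval_s
    return [i * step for i in range(n_patches)]

for speed in (50, 150):  # hypothetical slow vs. fast scan, in mm/s
    centers = patch_centers(speed)
    gap = centers[1] - centers[0]
    print(f"{speed} mm/s -> patch centers at {centers} mm (gap {gap:.1f} mm)")
```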
Despite these successes, the model remains a simplification of reality. In the bee brain, visual processing is likely shaped by dendritic and synaptic latencies, the small delays that accrue as signals travel along neurons and across synapses, as well as by relay neurons within the medulla that transform visual information before passing it to higher brain regions. The model approximates these processes in a simplified manner, using sequences of image patches and a reduced network of neurons to capture the functional outcomes without simulating the full biological complexity. Real bees not only perform far more complex, context-dependent maneuvers than the model’s linear scans, but their mushroom body also contains extensive circuitry crucial for memory and learning, which the model reduces to a single neuron. MaBouDi emphasizes that the model should be seen as a starting point. “We are at the beginning of neural recording of the brain,” he said. “Animal cognition is the first step, and the more direct approach, to understand intelligence.”
Future work aims to capture bees’ rapid head and body movements using an advanced tracking system integrated with neural recordings in a virtual arena. “We need to capture both body and head movements in six dimensions to truly understand how bees perceive their world,” MaBouDi said. He explained that this combination of behavioral tracking and neural recordings is essential for validating computational models. Without it, models alone cannot confirm how bees perceive and process visual information, because there is no way to know if the simulated neurons or behaviors truly reflect reality.
The implications of this work extend far beyond entomology. Bee-inspired models, which rely on sparse, motion-driven coding rather than pixel-by-pixel analysis, could inform artificial intelligence and robotics, where efficiency is a major concern. “If we can understand and mimic this natural solution, using simpler but dynamic systems that actively shape how they collect information, this can guide the next wave of AI,” MaBouDi said. Applications also extend to efficient computing and robot navigation. “This integration of modeling, behavioral observation, and neuroscience can give us some idea of how to build a robot that exactly follows the strategies we use for navigation,” MaBouDi said.