A major benefit of this approach is that, unlike the always-on von Neumann model, a spiking neural network (SNN) is effectively in “off” mode most of the time. Once triggered by incoming spikes, it can perform a huge number of interactions in parallel.
“It’s exactly the same as the way the brain doesn’t churn every single feature of its incoming data,” says Jason Eshraghian, an assistant professor at the University of California, Santa Cruz. “Imagine if you were to film a video of the space around you. You could be filming a blank wall, but the camera is still capturing pixels, whereas, as far as the brain is concerned, that’s nothing, so why process it?”
Because neuromorphic computing emulates the brain in this way, it can perform tasks using a fraction of the time and power needed by traditional machines.
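To make the event-driven idea concrete, the sketch below shows a leaky integrate-and-fire neuron layer, the basic unit SNNs are built from, in plain Python with NumPy. The function name, leak factor, and threshold are illustrative choices rather than values from any particular neuromorphic chip or framework; the point is simply that when few or no input spikes arrive, almost no arithmetic is performed.

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire (LIF) neuron layer.

    v          -- membrane potentials, shape (n_neurons,)
    spikes_in  -- binary input spikes for this step, shape (n_inputs,)
    weights    -- synaptic weights, shape (n_neurons, n_inputs)
    """
    # Only the columns of inputs that actually spiked contribute; a silent
    # input (the "blank wall") costs no multiply-accumulate work at all.
    active = np.flatnonzero(spikes_in)
    current = weights[:, active].sum(axis=1) if active.size else 0.0

    v = leak * v + current                        # leaky integration
    spikes_out = (v >= threshold).astype(float)   # fire where threshold is crossed
    v = np.where(spikes_out > 0, 0.0, v)          # reset neurons that fired
    return v, spikes_out

# Toy run: inputs are mostly silent, so most steps do almost nothing.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 10))
v = np.zeros(4)
for t in range(5):
    x = (rng.random(10) < 0.1).astype(float)   # roughly 10% of inputs spike
    v, out = lif_step(v, x, W)
```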
Neuromorphic systems are also highly adaptable: the connections between their neurons can change in response to new tasks, which makes them well suited to AI. Analysts have therefore described neuromorphic computing as a critical enabler of new technologies that could reach early-majority adoption within five years.
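As a rough illustration of that adaptability, the snippet below applies a Hebbian-style weight update, a much-simplified stand-in for the spike-timing-dependent plasticity rules many neuromorphic chips implement in hardware; the function name and learning-rate values are hypothetical, chosen only for the example.

```python
import numpy as np

def hebbian_update(W, pre_spikes, post_spikes, lr=0.01, decay=0.001):
    """Strengthen synapses whose input and output neurons fired together,
    and let unused connections slowly fade."""
    W = W + lr * np.outer(post_spikes, pre_spikes)   # "fire together, wire together"
    return W - decay * W                             # gradual forgetting

# Toy usage: input neuron 0 and output neuron 1 fire together,
# so the connection between them is strengthened.
W = np.zeros((3, 3))
W = hebbian_update(W, pre_spikes=np.array([1.0, 0.0, 0.0]),
                   post_spikes=np.array([0.0, 1.0, 0.0]))
```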
Half a decade ago, Intel established a community of researchers around the world to explore the potential applications of neuromorphic computing in specific business use cases, ranging from voice and gesture recognition to image retrieval and robotic navigation. The results so far have been impressive, showing energy efficiency improvements of up to 1,000 times and speed increases of up to 100 times compared to traditional computer processors.
The potential of the neuromorphic approach to enable less compute-hungry large language models for AI was recently demonstrated by Eshraghian and colleagues at UC Santa Cruz. Their “SpikeGPT” model for language generation, software that simulates an SNN in its algorithmic structure, uses approximately 30 times less computation than a comparable model built with typical deep learning methods.
“Large-scale language models rely on ridiculous amounts of compute power,” he says. “Using spikes is a much more efficient way to represent information.”
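A toy calculation suggests where that efficiency comes from, assuming spikes are sparse binary events (this is the general principle behind SNNs, not code from SpikeGPT itself): with a dense activation vector, every weight takes part in a multiply-accumulate, whereas a sparse spike vector reduces the same matrix-vector product to summing a handful of weight columns.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(256, 256))

# Conventional dense activation: every weight is multiplied, every time.
x_dense = rng.normal(size=256)
y_dense = W @ x_dense                               # 256 * 256 = 65,536 multiplications

# Spiking activation: a sparse binary vector. The same product becomes a
# sum over the weight columns of the few neurons that fired, with no
# multiplications needed.
x_spikes = (rng.random(256) < 0.05).astype(float)   # about 5% of neurons spike
active = np.flatnonzero(x_spikes)
y_spikes = W[:, active].sum(axis=1)                 # sums only the ~13 active columns

print("dense multiply-accumulates:", W.size)
print("spike-driven additions:    ", W.shape[0] * active.size)
```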
Taking it to the edge
One of the major potential benefits of neuromorphic computing’s greater efficiency and speed is the ability to bring low-power, rapid decision-making to the ever-growing number of devices that make up the Internet of Things. Think of autonomous vehicles, for instance. A neuromorphic chip removes the need to send data over an internet connection for remote processing by powerful computers in the cloud. Instead, the device can carry out on-the-spot, AI-based learning in isolation, an approach known as “edge” computing.
“The dimension of remote adaptability and personalization that neuromorphic brings opens the door for all kinds of new capabilities with AI,” adds Davies, who believes that smart robots carrying out chores in the home are a particularly ripe area for development.
The term AIoT has been coined to describe the combination of AI and IoT, and California-based company BrainChip is already commercializing the concept with those new capabilities in mind. Its first-to-market digital neuromorphic processor, called Akida, is billed as “a complete neural processing engine for edge applications”.