A biomimicking “spiking” neural network on a microchip has enabled KAUST researchers to lay the foundation for developing more efficient hardware-based artificial intelligence computing systems.
Artificial intelligence technology is developing rapidly, with an explosion of new applications across advanced automation, data mining and interpretation, healthcare and marketing, to name a few. Such systems are based on a mathematical artificial neural network (ANN) composed of layers of decision-making nodes. Labeled data is first fed into the system to “train” the model to respond a certain way, then the decision-making rules are locked in and the model is put into service on standard computing hardware.
While this method works, it is a clunky approximation of the far more complex, powerful and efficient neural network that actually makes up our brains.
“An ANN is an abstract mathematical model that bears little resemblance to real nervous systems and requires intensive computing power,” says Wenzhe Guo, a Ph.D. student in the research team. “A spiking neural network, on the other hand, is constructed and works in the same way as the biological nervous system and can process information in a faster and more energy-efficient way.”
Spiking neural networks (SNNs) emulate the structure of the nervous system as a network of synapses that transmit information via ion channels in the form of action potentials, or spikes, as they occur. This event-driven behavior, implemented mathematically as a “leaky integrate-and-fire model,” makes SNNs very energy efficient. Plus, the structure of interconnected nodes provides a high degree of parallelization, which further boosts processing power and efficiency. It also lends itself to implementation directly in computing hardware as a neuromorphic chip.
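To make the leaky integrate-and-fire idea concrete, here is a minimal Python sketch of a single discrete-time LIF neuron. The parameter names and values (threshold, leak time constant, reset potential) are illustrative defaults chosen for this example, not figures taken from the KAUST design:

```python
import numpy as np

# Minimal sketch of a discrete-time leaky integrate-and-fire (LIF) neuron.
# All parameter values here are illustrative, not from the published chip.
def lif_neuron(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
               tau=20.0, dt=1.0):
    """Simulate one LIF neuron; return its membrane trace and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest and integrates the input.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_thresh:      # crossing the threshold emits a spike ...
            spikes.append(t)
            v = v_reset        # ... and the potential resets
        voltages.append(v)
    return np.array(voltages), spikes

# Example: a constant input drives the neuron to fire periodically.
trace, spike_times = lif_neuron(np.full(100, 0.08))
print(spike_times)
```

The event-driven character the article describes shows up here directly: the neuron produces output only at threshold crossings, so downstream computation is needed only when spikes actually occur.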
“We used a standard low-cost FPGA microchip and implemented a spike-timing-dependent plasticity model, which is a biological learning rule discovered in our brain,” says Guo.
Importantly, this biological learning rule requires no teaching signals or labels, allowing the neuromorphic computing system to learn real-world data patterns without supervised training.
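The sketch below illustrates the classic pair-based form of spike-timing-dependent plasticity. The exponential learning windows and constants are textbook defaults used for illustration; they are not the specific rule or hardware parameters implemented on the team’s FPGA:

```python
import numpy as np

# Minimal sketch of pair-based spike-timing-dependent plasticity (STDP).
# Learning-window constants are illustrative, not the KAUST chip's values.
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Adjust a synaptic weight from the timing of one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:
        # Pre fires before post ("pre predicts post"): strengthen the synapse.
        w += a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        # Post fires before pre: weaken the synapse.
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

# Causal pairing (pre 2 ms before post) potentiates; the reverse depresses.
print(stdp_update(0.5, t_pre=10.0, t_post=12.0))  # > 0.5
print(stdp_update(0.5, t_pre=12.0, t_post=10.0))  # < 0.5
```

Because each weight update depends only on the relative timing of local spikes, the rule needs no external labels, which is what allows the system to learn patterns directly from raw event streams.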
“Since SNN models are very complex, our main challenge was to tailor the neural network settings for optimal performance,” says Guo. “We then designed the optimal hardware architecture considering a balance of cost, speed and energy consumption.”
The team’s brain-on-a-chip proved to be more than 20 times faster and 200 times more energy efficient than other neural network platforms.
“Our ultimate goal is to build a compact, fast and low-energy brain-like hardware computing system. The next step is to improve the design and optimize product packaging, miniaturize the chip and customize it for various industrial applications through collaboration,” Guo says.
###
Media Contact
Michael Cusack
[email protected]