Brain-like processors that mimic the functional model of the human brain could enable portable AI and usher in a new era of powerful computing.
As Moore’s Law nears its end, silicon processing technology is being pushed to its limits. A number of methods have been proposed to extend the computational power of traditional processors.
Yet, for the moment, chipmakers are milking silicon for all it’s worth, shrinking transistors from 14 nm to 10 nm to 7 nm and beyond.
The way things are going, it won’t be long before it is physically impossible to cram anything more onto a chip.
In short, we simply can’t continue down the miniaturization path forever.
To save the growth potential of this technology, computer engineers are working on new transistor design technologies like nanoscale valleytronic transistors.
Although valleytronics could be a promising venture, another way to extend the shelf life of Moore’s Law a little more is neuromorphic computing.
Neuromorphic Technology: The Key to Portable AI
Computer engineers have long taken cues from biology to build powerful biocomputing systems, and neuromorphic computing research follows suit: it focuses on the brain.
Neuromorphic computing aims to build computers capable of mimicking the biological processes of the human brain.
While quantum technology is expected to bring a paradigm shift in computing, neuromorphic technology is a promising avenue that could extend Moore’s Law and pave the way toward far more capable deep neural networks.
One of the secrets of human intelligence seems to be related to the brain’s neuronal architecture which relies on a complex network of synapses for optimal performance.
In a nutshell, neuromorphic computing scientists want to mimic neurons and synaptic connections and replicate the functioning model of the human brain in neuromorphic chips.
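To get a feel for what "mimicking a neuron" means, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the standard simplified neuron models used in neuromorphic research. This is purely illustrative, not the circuit of any specific chip; all parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative model,
# not how any particular neuromorphic chip is implemented.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the time steps at which the neuron fires."""
    v = 0.0          # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i          # potential leaks a little, then integrates input
        if v >= threshold:        # once the threshold is crossed, the neuron fires...
            spikes.append(t)
            v = reset             # ...and its potential resets
    return spikes

# A steady drip of input current makes the neuron fire periodically.
print(simulate_lif([0.3] * 20))   # → [3, 7, 11, 15, 19]
```

The appeal for hardware is that this behavior, accumulate charge, cross a threshold, emit a pulse, maps naturally onto analog circuits, which is one reason spiking models feature so heavily in neuromorphic designs.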
Scientists have been exploring the concept of neuromorphic computing since Carver Andress Mead coined the term in the late 1980s.
Although slow to start, work on the new technology is starting to show results.
Artificial Synapses: A Major Step in Neuromorphic Computing
Computer scientists built deep neural networks based on their understanding of neurons, but they haven’t yet done the same with synapses.
It’s the trillions of synapses that allow the brain to function by transferring information from neuron to neuron.
Without synapses, neurons wouldn’t even fire. And the more connected neurons are, the more efficient the biological system becomes at solving problems.
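The role of a synapse can be sketched in a few lines: it scales the signal passed from one neuron to the next, and its strength changes with use. The toy Hebbian rule below ("cells that fire together wire together") is a deliberate simplification; real plasticity rules, and those used on neuromorphic chips, are typically timing-based.

```python
# Toy synapse: a presynaptic spike is delivered to the postsynaptic neuron
# scaled by the synaptic weight; a simple Hebbian rule strengthens the
# synapse when both neurons are active together. Purely illustrative.

class Synapse:
    def __init__(self, weight=0.5, learning_rate=0.1):
        self.weight = weight
        self.learning_rate = learning_rate

    def transmit(self, pre_spike: bool) -> float:
        """Current injected into the postsynaptic neuron."""
        return self.weight if pre_spike else 0.0

    def hebbian_update(self, pre_spike: bool, post_spike: bool):
        """Strengthen the connection when both sides fire together."""
        if pre_spike and post_spike:
            self.weight += self.learning_rate

s = Synapse()
s.hebbian_update(pre_spike=True, post_spike=True)
print(s.transmit(True))   # a spike now injects more current than before
```

This is the property hardware designers are after: a tunable, persistent connection strength sitting between every pair of neurons.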
Reproducing synaptic connections is one of the biggest technical challenges scientists have to overcome to build neuromorphic processors.
MIT engineers tackled the issue, and last January they published a report on an artificial synapse they designed, which showed promising results.
The team went on to build a chip with arrays of artificial synapses and was able to precisely monitor the strength of the signals flowing through them.
The researchers found that, in simulations, the chip and its synapses could be used to recognize samples of handwriting with up to 95 percent accuracy.
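Arrays of artificial synapses lend themselves to pattern recognition because a grid of synaptic weights can compute a weighted sum of all its inputs at once, the core operation of a neural-network layer. The sketch below shows the idea on hypothetical 3×3 "images"; the weights and patterns are invented for illustration and have nothing to do with MIT's actual chip or dataset.

```python
# Why synapse arrays suit recognition tasks: a crossbar of weights turns an
# input pattern into per-class scores in a single weighted-sum operation.
# Weights and patterns below are hypothetical, chosen for illustration.

def crossbar_output(weights, inputs):
    """Each output line sums input * weight along its row of synapses."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# One row of weights per class, over a flattened 3x3 image:
weights = [
    [0, 1, 0,  0, 1, 0,  0, 1, 0],   # responds to a vertical bar
    [0, 0, 0,  1, 1, 1,  0, 0, 0],   # responds to a horizontal bar
]

vertical_bar = [0, 1, 0,  0, 1, 0,  0, 1, 0]
scores = crossbar_output(weights, vertical_bar)
print(scores.index(max(scores)))   # → 0, the "vertical bar" class
```

In hardware the same computation happens in analog: input voltages drive the rows, synaptic conductances act as the weights, and the summed currents on each line are the class scores.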
Intel: Breaking Down Walls Towards True Neuromorphic Computing
Intel may not appear on the shortlist of quantum supremacy contenders, but it does have its own quantum agenda.
At the Consumer Electronics Show, the historic chipmaker announced a neuromorphic chip alongside a new 49-qubit quantum chip.
Intel’s Loihi, named after a seamount in Hawaii, is a chip that mimics neurons and synaptic functions and can power self-learning neural networks.
NYU: Solitons Could Solve Data Transfer Issues
Last but not least, a research team at NYU identified a class of waves in magnets (known as magnetic droplets, or solitons) that could be used to build energy-efficient data-transfer systems and neuromorphic AI.
There’s still a long way to go before neuromorphic processors hit the market, but the first steps are being taken now.