Neuromorphic computers: a solution to energy-guzzling artificial intelligence?

Unlocking your phone with your face, finding your favorite music, talking to a digital assistant or driving a car with steering assistance: it is all possible thanks to artificial intelligence, which processes information somewhat the way our brain does and which, in roughly twenty years, has become important in countless computer applications.

The Russian chess grandmaster Garry Kasparov in 2005. In 1997 he was defeated by the computer Deep Blue, the first time a computer proved stronger than a human world champion.

A disadvantage is that artificial intelligence consumes a lot of energy. Scientists have calculated that fully self-driving electric cars are likely to spend ten to thirty percent of their energy on the computer that controls the car. Or take AlphaGo, the computer program that in 2016 defeated a human Go master for the first time. It ran on about two thousand processors that together reportedly consumed a million watts of electricity, quite a lot compared to the roughly twenty watts its human opponent needed.

Many applications of artificial intelligence run in large data centers, where plenty of energy is available (one to two percent of total global electricity consumption currently goes to data centers). But that makes us dependent on fast and reliable connections. Wouldn't it be much smarter to do these kinds of calculations where they are needed? A camera that interprets what it sees on its own, for example, would no longer have to send all its data to a computer located elsewhere.

Computing power is currently accumulating mainly at large tech companies. Scientists are working on computer components that could reverse this trend: computers that are ultra-efficient and still powerful enough for many of the applications above. They draw inspiration from our own brain, which differs fundamentally from classical computer architecture. A pile of tiny gold particles, it turns out, can behave like a group of artificial brain cells.

A brain in silicon

At first glance, a classical computer seems fundamentally unsuited to simulating a brain. True, an average computer chip contains billions of transistors (which you can picture as minuscule switches), but they work differently from neurons, the cells that make up our brain.

A transistor receives a signal and either passes it on or blocks it. It is like a floodgate that is completely open or completely closed: the transistor passes a '1' or a '0', all or nothing. A neuron also transmits a signal, but only when its input reaches a so-called threshold value. Think of it more as a cannon that only 'fires' when it receives a signal from several inputs at the same time.

Moreover, neurons retain a 'state': the connections between neurons (called synapses) are adaptive and transmit signals more or less easily depending on all the signals they have passed on before. In that sense, a transistor has no memory.
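To make the difference concrete, here is a minimal sketch in Python. It is purely illustrative, not a model of real hardware: all numbers, the threshold and the crude learning rule are assumptions chosen for readability.

```python
import numpy as np

# Transistor: an all-or-nothing switch.
def transistor(gate_signal: bool) -> int:
    return 1 if gate_signal else 0

# Neuron: fires only when the combined, weighted input crosses a
# threshold. The weights play the role of synapses.
class Neuron:
    def __init__(self, n_inputs, threshold=1.0, learning_rate=0.05):
        self.weights = np.full(n_inputs, 0.5)   # synapse strengths
        self.threshold = threshold
        self.learning_rate = learning_rate

    def step(self, inputs):
        fired = self.weights @ inputs >= self.threshold
        if fired:
            # Crude Hebbian-style plasticity: synapses that carried a
            # signal during firing get slightly stronger.
            self.weights += self.learning_rate * inputs
        return int(fired)

neuron = Neuron(n_inputs=3)
print(neuron.step(np.array([1, 0, 0])))  # one input alone: no spike (0)
print(neuron.step(np.array([1, 1, 1])))  # several at once: spike (1)
```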

Strong together

A simplified schematic representation of a neural network like the one in the brain. The red 'brain cells' receive a signal (left) and pass it on, to a greater or lesser extent, to the blue and then the yellow cells. Those eventually produce a result (right).

Artificial intelligence costs a lot of energy because it rests on an enormous number of calculations on large amounts of data. According to Wilfred van der Wiel, professor of nanoelectronics at the University of Twente, it essentially comes down to multiplying huge series of numbers (so-called vector-matrix multiplications). Every multiplication requires numbers to be fetched from memory and the result to be written back. "That's a lot of processing steps, and the only reason it works reasonably well at the moment is that a computer does these steps very quickly, one after the other," says Van der Wiel.
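As an illustration of what such a vector-matrix multiplication looks like, here is a small Python sketch. The sizes are made up, but they show how quickly the number of multiplications, and with it the memory traffic, adds up.

```python
import numpy as np

# One layer of an artificial neural network boils down to a
# vector-matrix multiplication: every output is a weighted sum of
# all inputs. The sizes below are illustrative.
inputs = np.random.rand(1024)         # e.g. pixel values from a camera
weights = np.random.rand(1024, 256)   # learned connection strengths

outputs = inputs @ weights            # 1024 x 256 = 262,144 multiplications

# On a classical computer, each multiplication means fetching numbers
# from memory and writing the result back, step after step; a deep
# network chains many such layers.
print(outputs.shape)                  # (256,)
```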

Now take our brain. In raw speed, a handful of neurons cannot match the billions of computation steps a modern processor churns through every second: neurons manage on the order of a few hundred 'operations' per second. The brain's big trump card, however, is that it carries out an enormous number of computation steps simultaneously. Shuttling numbers to and from memory is unnecessary (see also the box 'A brain in silicon'), because information is processed and stored in the same place. That turns out to be efficient.

Researchers are now copying this trick and making computers more 'parallel', so that they perform more computation steps at the same time. This can be done, for example, with graphics processing units (GPUs), processors that originated in gaming and specialize in many parallel computation steps. There are GPUs in the back of Teslas, and AlphaGo made use of them too. But if computers are to become as efficient as the brain, their architecture has to move even closer to that of the brain. The transistor has to go overboard.

Schematic representation of a programmable 'brain cell' of gold nanoparticles (center), driven by eight surrounding electrodes. The circuit currently operates at a temperature of -196 degrees Celsius.

Gold particle brain cells

The computer in Van der Wiel's laboratory has no transistors. 'Computer' is perhaps a big word anyway, because for now the circuit has at most twelve inputs and outputs, connected by a so-called nanomaterial of gold particles. With a little imagination you can see it as a small collection of brain cells.

The gold particles are twenty nanometers in diameter and lie on an insulating substrate of silicon oxide. Electrodes run to the gold particles from different sides; some of them are inputs that deliver a signal. Based on these inputs, the network ultimately produces one output: a signal that is passed on to the next network.

The circuits are programmable by applying a certain voltage to the control electrodes. This changes how current flows through the gold particles: both its path and its resistance. Van der Wiel and colleagues have already shown that in this way it is possible to make the kind of 'logic' circuits found in computers, which would otherwise require many more classical transistors.
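How do you 'program' a heap of nanoparticles that has no circuit diagram? Roughly, by treating it as a black box and searching for control voltages that produce the desired behaviour. The toy sketch below imitates that idea with a made-up stand-in function for the device and a plain random search; the Twente researchers work with the real material and smarter search strategies, so this is only an illustration of the principle, not their method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real device: a fixed random nonlinear map from
# two data inputs plus two control voltages to one output value.
# (The actual nanoparticle network is a physical black box.)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=16)

def device(x1, x2, controls):
    return np.tanh(np.array([x1, x2, *controls]) @ W1) @ W2

def error_as_and_gate(controls):
    # How far is the device from behaving like a logic AND gate?
    truth_table = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
    return sum((device(a, b, controls) - want) ** 2
               for a, b, want in truth_table)

# Plain random search over control voltages: keep the best found.
best = min((rng.uniform(-2.0, 2.0, size=2) for _ in range(5000)),
           key=error_as_and_gate)
print("best control voltages:", best)
print("remaining error:", error_as_and_gate(best))
```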

Ultimately, though, the intention is precisely not to imitate the classical computer. "It was a proof of principle of the circuit's programmability. The ultimate application will not consist of these circuits," says Van der Wiel.

Wilfred van der Wiel says that applications of energy-efficient neural networks can make a difference especially in places where a lot of computing power is required but limited energy is available, such as in the cameras of self-driving cars, which have to recognize other road users.

Forgetful chips

One of the challenges is making the system bigger so that it can take on more complex functions, says Van der Wiel. One brain cell does not make a brain, and the power of these kinds of systems lies precisely in the large number of units processing information simultaneously. The hundreds of nanoparticles of Van der Wiel and colleagues look a bit meager next to the hundred billion brain cells in a human brain. "We want to scale up, and that is a challenge. The question is how we give our network more connections and how we connect several of those networks to each other," says Van der Wiel. "Signals from this network also threaten to become immeasurably small."

Another issue is memory, which the networks lack. First, the researchers 'program' the heap of gold nanoparticles by applying different voltages to the electrodes until the nanoparticles give the desired response. But after this learning phase, the material can in fact lose the desired program again. "We are now looking at how we can lock in the functionality, for example with so-called phase-change materials, which are also found in rewritable CDs."

Van der Wiel sees a long road to applications, and he does not dare to say how long it will take. "Five years, fifty years, who can tell? In any case, we have made considerable progress over the past ten years. Back then it already seemed to me that we could teach chips with a random collection of nanoparticles a certain behaviour. And that is what is happening now."

A resistor with memory

Researchers at the University of Groningen are also working on components for neuromorphic computer chips. They have now succeeded in making a connection in a material that has the learning properties of a connection between brain cells in our brain. The special thing about such a connection is that it adapts and remembers its state: if many signals pass through it, for example, the connection automatically becomes stronger. With a lack of signals, it becomes weaker.

In the computer world this is called a memristor, a contraction of 'memory' and 'resistor'. In Groningen, the scientists make such a memristor from a piece of nickel on a substrate of strontium titanate. The resistance of this material turns out to depend on the current that has previously passed through it. The material can not only 'learn' but also 'forget', just as connections in the brain do. The Groningen scientists hope to shrink their memristors for use in a neuromorphic chip.
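The behaviour of a memristor can be captured in a deliberately simple toy model: a conductance that grows with the current that has flowed through it and slowly relaxes when left alone. The sketch below is illustrative only; all coefficients are assumptions, and it is not a model of the Groningen nickel-on-strontium-titanate device.

```python
# Illustrative memristor model: conductance grows with the current
# that has passed through it ('learning') and slowly relaxes back
# toward its resting value when left alone ('forgetting').
class Memristor:
    def __init__(self, g_min=0.1, g_max=1.0, decay=0.01):
        self.g = g_min                 # conductance = 1 / resistance
        self.g_min, self.g_max, self.decay = g_min, g_max, decay

    def apply_voltage(self, v, dt=1.0):
        current = self.g * v
        # Passing current strengthens the connection...
        self.g = min(self.g_max, self.g + 0.2 * abs(current) * dt)
        # ...while each time step it also forgets a little.
        self.g = max(self.g_min, self.g - self.decay * dt)
        return current

m = Memristor()
for _ in range(20):                    # repeated pulses: resistance drops
    m.apply_voltage(1.0)
print("resistance after training: %.2f" % (1 / m.g))
for _ in range(50):                    # no signal: it slowly forgets
    m.apply_voltage(0.0)
print("resistance after forgetting: %.2f" % (1 / m.g))
```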

Sources:

  • C. Kaspar, B.J. Ravoo, W.G. van der Wiel, S.V. Wegner, W.H.P. Pernice, The rise of intelligent matter, Nature (2021), doi:10.1038/s41586-021-03453-y
  • T.F. Tiotto, A.S. Goossens, J. Borst, T. Banerjee, N.A. Taatgen, Learning to Approximate Functions Using Nb-Doped SrTiO3 Memristors, Frontiers in Neuroscience (2021), doi:10.3389/fnins.2020.627276

Source: Kennislink, www.nemokennislink.nl.
