The human brain is nature’s most powerful processor, so it’s not surprising that developing computers that mimic it has been a long-term goal. Neural networks, the artificial intelligence systems that learn in a very human-like way, are the closest models we have, and now Stanford scientists have developed an organic artificial synapse, inching us closer to making computers more efficient learners.
In an organic brain, neuronal cells send electrical signals to each other to process and store information. Neurons are separated by small gaps called synapses, which allow the cells to pass the signals to each other, and every time that crossing is made, that connection gets stronger, requiring less energy each time after. That strengthening of a connection is how the brain learns, and the fact that processing the information also stores it is what makes the brain such a lean, mean, learning machine.
Neural networks model this on a software level. These AI systems are great for handling huge amounts of data, and like the human brain that inspired them, the more information they're fed, the better they become at their job. Recognizing and sorting images and sounds is their main area of expertise at the moment, and these systems are driving autonomous cars, beating humanity's best Go players, creating trippy works of art and even teaching each other. The problem is, these intelligent software systems still run on traditional computer hardware, meaning they aren't as energy efficient as they could be.
“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” says Yoeri van de Burgt, lead author of the study. “Instead of simulating a neural network, our work is trying to make a neural network.”
So the team set about building a physical, artificial synapse that mimics the real thing by processing and storing information simultaneously. Based on a battery and working like a transistor, the device is made up of two thin films and three terminals, with salty water acting as an electrolyte between them. Electrical signals jump between two of the three terminals at a time, controlled by the third.
First, the researchers trained the synapse by sending various electrical signals through it to figure out what voltage needed to be applied to switch it into a given electrical state. Digital transistors have two states – zero and one – but thanks to its three-terminal layout, the artificial synapse can be programmed into any of up to 500 different states, vastly expanding the computational power it could offer.
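To picture what those extra states buy, here is a minimal sketch of a synapse whose conductance is quantized into 500 levels and nudged up or down by programming pulses. The class name, the pulse scheme and the state-to-weight mapping are illustrative assumptions for this sketch, not the actual physics of the Stanford device.

```python
# Toy model of a multi-state synaptic device: the conductance is
# quantized into a fixed number of discrete states, and each
# programming pulse moves it one state up or down.

class ArtificialSynapse:
    """Hypothetical synapse with a quantized, pulse-programmable state."""

    def __init__(self, n_states=500):
        self.n_states = n_states
        self.state = n_states // 2  # start mid-range

    def pulse(self, direction):
        """Apply one programming pulse: +1 strengthens, -1 weakens."""
        self.state = max(0, min(self.n_states - 1, self.state + direction))

    @property
    def weight(self):
        """Map the discrete state to a connection weight in [0, 1]."""
        return self.state / (self.n_states - 1)


syn = ArtificialSynapse()
for _ in range(10):
    syn.pulse(+1)  # repeated use strengthens the connection
print(syn.state, round(syn.weight, 3))
```

Because processing a signal and updating the state happen in the same device, storage comes for free, which is the efficiency trick the article describes.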
Better still, switching between states takes a fraction of the energy of other systems. That’s still not in the ballpark of a brain – the artificial synapse uses 10,000 times the energy of a biological one – but it’s a step in the right direction, and with further testing in smaller devices, the researchers hope to eventually improve that efficiency.
“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” says A. Alec Talin, senior author of the study. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”
While only one artificial synapse has been built so far, the team ran extensive experiments on it and extrapolated from the data gathered to simulate how an array of artificial synapses could process information. Drawing on the visual recognition skills of neural networks, the researchers tested the simulated array's ability to identify handwritten numbers – 0 to 9 – written in three different styles, and found that the system could recognize the digits up to 97 percent of the time.
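That extrapolation step can be pictured with a toy simulation: take ordinary floating-point weights, quantize them to the device's roughly 500 discrete states, and check how much recognition accuracy survives. Everything below – the synthetic patterns, the nearest-prototype classifier and the quantization scheme – is an illustrative assumption, not the study's actual handwritten-digit setup.

```python
# Toy check of how quantizing weights to ~500 device states affects
# a simple pattern-recognition task (stand-in for the study's digits).
import numpy as np

rng = np.random.default_rng(0)

# Ten synthetic "digit" prototypes of 64 pixels each, plus noisy samples.
prototypes = rng.random((10, 64))
X = np.vstack([p + 0.1 * rng.standard_normal((20, 64)) for p in prototypes])
y = np.repeat(np.arange(10), 20)

def classify(weights, x):
    """Nearest-prototype classifier: pick the closest stored pattern."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

def quantize(weights, n_states=500):
    """Snap continuous weights onto n_states evenly spaced levels."""
    lo, hi = weights.min(), weights.max()
    levels = np.round((weights - lo) / (hi - lo) * (n_states - 1))
    return lo + levels / (n_states - 1) * (hi - lo)

acc_float = np.mean([classify(prototypes, x) == c for x, c in zip(X, y)])
acc_quant = np.mean([classify(quantize(prototypes), x) == c for x, c in zip(X, y)])
print(acc_float, acc_quant)
```

On this easy synthetic task the quantized weights lose essentially nothing, which loosely mirrors the article's point that a few hundred states per synapse are enough for high recognition accuracy.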
Earlier examples of artificial synapses, like that from USC in 2011, were not only less powerful, but weren't made completely from organic materials. Composed mostly of hydrogen and carbon and running on the same voltages as human neurons, the Stanford synapse could eventually integrate with biological brains, opening up the possibility of devices that can be more directly controlled by thought, such as prosthetics and brain-machine interfaces.
The next step for the researchers is to test the simulated results by producing a physical array of the artificial synapses.
The research was published in the journal Nature Materials.