New Chip Could Put AI in Palm of Your Hand

And it could mean big things for supercomputing and medical devices, among other tech.

The goal of an artificial intelligence system that works like the human brain got a step closer recently. Researchers at the Massachusetts Institute of Technology announced a new special-purpose chip that increases the speed of neural-network computations by three to seven times over current chips, while reducing power consumption by up to 95 percent. And it does it by working in an analog fashion more in line with how the brain works, rather than dealing with the long on/off strings of ones and zeros in traditional digital computing.

The upshot: AI neural networks could not only boost large-scale supercomputing, but also be practical in smartphones, robots, unmanned vehicles and medical devices, to name a few examples, where they could perform complex operations — ranging from advanced authentication to reactive decision-making — that would otherwise involve connecting with power-hogging servers via the cloud.

Working in the fairly new but burgeoning field of neuromorphic computing, the MIT team designed a chip with artificial synapses that can fluctuate with the flow of information, the way synapses in the brain work. This greatly reduces the amount of power required and holds potential for eventually packing immense computational capability into tiny devices.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Jeehwan Kim, who led the research team, told MIT News. “This opens a stepping stone to produce real artificial hardware.”

Brain Power

The appeal of neuromorphic computing and neural networks lies in the efficiency of the human brain. It can’t calculate pi like a digital processor, but it can handle information streams from multiple sources (creating thoughts, interpreting sights, sounds and smells, controlling bodily operations), and is fluid in applying resources to problems as the need arises.

And it does it all on very little power. In 2013, a research team simulating 1 second of activity in a network representing about 1 percent of the brain ran 82,944 processors and a petabyte of system memory for 40 minutes on the RIKEN research institute's K supercomputer, using enough power for 10,000 homes. The brain, by contrast, runs on about 20 watts, half the power of a refrigerator light bulb.
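The scale of that gap is easy to check with quick arithmetic. A sketch in Python, where the 10-megawatt figure for "10,000 homes" is an assumption (roughly 1 kilowatt per home), not a number from the researchers:

```python
# Back-of-the-envelope comparison using the figures above.
sim_wall_time_s = 40 * 60    # 40 minutes of supercomputer time...
biological_time_s = 1        # ...to simulate 1 second of brain activity
slowdown = sim_wall_time_s / biological_time_s

supercomputer_watts = 10_000_000  # assumed ~10 MW for "10,000 homes"
brain_watts = 20
power_ratio = supercomputer_watts / brain_watts

print(slowdown)     # 2400.0 -- the simulation ran 2,400x slower than real time
print(power_ratio)  # 500000.0 -- on roughly half a million times the power
```

In other words, even granting the assumptions, the brain comes out more than a billion times more efficient per simulated second.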

The MIT team is not alone in its pursuit of neural networks.

IBM, working with the Defense Advanced Research Projects Agency, has developed the TrueNorth neuromorphic chip architecture, which can run on 70 milliwatts and is capable of 46 billion synaptic operations per second, per watt. IBM developed TrueNorth under DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics program, and is using it to build a 64-chip supercomputer for the Air Force Research Laboratory. The lab plans to use it for advanced pattern and object recognition, and the "sensory processing" of audio, video and other sensor-generated data into something a computer can use.

The Intelligence Advanced Research Projects Activity is exploring another front, putting $100 million into its Machine Intelligence from Cortical Networks, or MICrONS, program to study the working of the brain to improve machine learning and AI algorithms.

China also is on the case, with Tsinghua University in Beijing developing a low-power neuromorphic chip called Thinker, which can run for a year on eight AA batteries. China’s Ministry of Industry and Information Technology laid out a 3-year plan in December that includes mass-production of neural-network chips by 2020.

AI Down to Size

Neural networks built from artificial synapses allow a freer flow of information than digital computing, which relies on long on/off sequences of ones and zeros sent at regular, clocked intervals. A neuromorphic system can operate in parallel, sending information in bursts only when needed, which requires far less power. IBM's TrueNorth, for example, has five times the transistors of an Intel processor, but its 70-milliwatt power consumption is up to 2,000 times less than the 35 to 140 watts drawn by an Intel chip.
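The "bursts" idea can be sketched as a toy spiking neuron in Python. This is a generic illustration of event-driven computation, not the actual circuitry of TrueNorth or the MIT chip:

```python
# Minimal leaky integrate-and-fire neuron: it accumulates input, leaks
# charge over time, and emits a spike (burst) only when its potential
# crosses a threshold -- staying silent, and cheap, the rest of the time.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a sequence of input currents."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current   # leaky integration
        if v >= threshold:
            spikes.append(1)     # fire a spike...
            v = 0.0              # ...and reset
        else:
            spikes.append(0)     # no event: no output, little power
    return spikes

print(lif_neuron([0.3, 0.3, 0.6, 0.0, 0.9, 0.4]))  # [0, 0, 1, 0, 0, 1]
```

Unlike a clocked digital pipeline, which does work on every tick, this neuron only produces output at the two moments its inputs push it over threshold.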

The challenge has been in controlling that flow. The MIT researchers designed an artificial synapse from silicon germanium that gave them precise control over the electric current passing through it, and published their results earlier this year in the journal Nature Materials. In one test, a simulated network built from the design recognized handwriting samples with 95 percent accuracy.
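Why precise control matters can be shown with a toy model in Python. This is an illustration of the general problem, not the MIT device: if a synapse can only hold a few coarse conductance levels, the stored weight lands far from its target, while fine-grained control keeps it close:

```python
# Toy model (not the MIT design): a synaptic weight is stored by snapping
# it to the nearest of a fixed number of conductance levels. Devices with
# coarser control land farther from the intended weight.

def store_weight(w, levels):
    """Snap a weight in [0, 1] to the nearest of `levels` conductance states."""
    step = 1.0 / (levels - 1)
    return round(w / step) * step

target = 0.62
precise = store_weight(target, levels=256)  # fine-grained analog control
coarse = store_weight(target, levels=4)     # only four usable states

print(abs(precise - target))  # ~0.0004: nearly exact
print(abs(coarse - target))   # ~0.047: a large, baked-in error
```

Errors like the coarse case accumulate across millions of synapses, which is why uniform, finely controllable devices were the key step in the MIT work.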

AI systems still won’t be human, but the team’s results could mean big things not just in supercomputing, but in the computing power you carry around in your pocket.