MIT researchers are developing new chips to address two pressing problems in modern technology. The researchers revealed a chip designed to perform public-key encryption for the Internet of Things, as well as a chip designed to reduce the power consumption of neural networks.
Public-key cryptography enables computers to share information securely without first exchanging a secret encryption key. According to the researchers, public-key encryption is typically executed in software, but that becomes a burden when Internet of Things devices must communicate with many different sensors. The new chip is hardwired to perform public-key encryption using 1/400 the power and 1/10 the memory of a software execution, while running 500 times faster, the researchers said.
The researchers built the chip around elliptic-curve cryptography. According to the research team, while most such chips are designed to handle only specific elliptic curves or families of curves, this chip is designed to handle any elliptic curve. “Cryptographers are coming up with curves with different properties, and they use different primes,” said Utsav Banerjee, an MIT graduate student in electrical engineering and computer science. “There is a lot of debate regarding which curve is secure and which curve to use, and there are multiple governments with different standards coming up that talk about different curves. With this chip, we can support all of them, and hopefully, when new curves come along in the future, we can support them as well.”
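To see what the chip is computing in hardware, the sketch below works through the core operations of elliptic-curve cryptography in software: the point-addition group law and scalar multiplication, used here for a toy Diffie-Hellman key exchange. The curve parameters and key sizes are illustrative only; real deployments use standardized curves with primes hundreds of bits long, which is exactly the arithmetic the MIT chip accelerates.

```python
# Toy elliptic-curve Diffie-Hellman over a small prime field (illustration only).
# Curve: y^2 = x^3 + ax + b over GF(p)
p = 97          # tiny prime for readability; real curves use ~256-bit primes
a, b = 2, 3
G = (3, 6)      # generator point: 3^3 + 2*3 + 3 = 36 = 6^2 (mod 97)

def point_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    y3 = (m * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Compute k*P by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, P)
        P = point_add(P, P)
        k >>= 1
    return result

# Each party picks a private scalar; public keys are multiples of G.
alice_priv, bob_priv = 13, 7
alice_pub = scalar_mult(alice_priv, G)
bob_pub = scalar_mult(bob_priv, G)

# Both sides derive the same shared secret: a*(b*G) == b*(a*G).
shared_a = scalar_mult(alice_priv, bob_pub)
shared_b = scalar_mult(bob_priv, alice_pub)
assert shared_a == shared_b
```

The modular inversions and multiplications inside `point_add` are the expensive steps that the hardwired chip performs directly, and supporting “any elliptic curve” means the hardware takes `p`, `a`, and `b` as inputs rather than baking one curve in.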
In addition, the chip implements the Datagram Transport Layer Security (DTLS) protocol and includes a general-purpose processor to save energy and handle encrypted data. Overall, the researchers believe this could provide better security for the Internet of Things.
A second set of MIT researchers unveiled a new chip for neural networks. According to the research team, while neural networks have helped advance AI systems such as speech and facial recognition, they require a lot of energy. This becomes a challenge when developers try to build AI solutions on handheld devices. The newly announced chip is designed to reduce the power consumption of neural networks by up to 95 percent, the researchers said.
“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” said Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip’s development. “Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
The chip converts a node’s input values into electrical voltages in order to calculate dot products for multiple nodes at once, reducing power consumption while speeding up computation three to seven times over its predecessors, according to the researchers.
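The dot product Biswas describes is simple to state in software; what follows is the digital equivalent of the operation the chip performs in analog, inside memory. The function and values below are illustrative, not from the MIT design.

```python
# A neuron's core computation is a weighted sum (dot product) of its inputs.
# A conventional processor shuttles each input/weight pair between memory
# and the ALU; the MIT chip instead computes the sum in place, in memory.

def dot(inputs, weights):
    """Weighted sum for one node: sum of x_i * w_i."""
    return sum(x * w for x, w in zip(inputs, weights))

# Example: one node with three inputs (illustrative values).
inputs  = [0.5, -1.0, 2.0]
weights = [0.8,  0.3, -0.5]
result = dot(inputs, weights)   # 0.4 - 0.3 - 1.0, approximately -0.9
```

On the chip, each input becomes a voltage and each product a current, so summing the currents on a shared wire yields the dot product for free; that physical summation is why the data never has to move back and forth.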
“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays. It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the Internet of Things] in the future,” said Dario Gil, VP of AI at IBM.