Researchers propose a novel magnetic RAM-based architecture that leverages spintronics to realize smaller, more efficient AI-capable circuits
Researchers from the Tokyo University of Science have proposed a novel magnetic RAM-based architecture that leverages spintronics to realize smaller, more efficient AI-capable circuits.
Figure: (a) Structure of the proposed neural network, which uses three-valued gradients during backpropagation (training) rather than real numbers, thus minimizing computational complexity. (b) A novel magnetic RAM cell leveraging spintronics for implementing the proposed technique in a computing-in-memory architecture.
Artificial intelligence (AI) and the Internet of Things (IoT) are two technological fields that have developed at an increasingly fast pace over the past decade. By excelling at tasks such as data analysis, image recognition, and natural language processing, AI systems have become undeniably powerful tools in both academic and industry settings. Meanwhile, miniaturization and advances in electronics have made it possible to massively reduce the size of functional devices capable of connecting to the Internet. Engineers and researchers alike foresee a world where IoT devices are ubiquitous, forming the foundation of a highly interconnected world.

However, bringing AI capabilities to IoT edge devices presents a significant challenge. Artificial neural networks (ANNs)—one of the most important AI technologies—require substantial computational resources, whereas IoT edge devices are inherently small, with limited power, processing speed, and circuit space. Developing ANNs that can learn, be deployed, and operate efficiently on edge devices remains a major hurdle.
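To illustrate the three-valued-gradient idea from the figure caption, here is a minimal sketch of ternarizing a backpropagation gradient to {-1, 0, +1} before a weight update. The threshold value and function names are hypothetical illustrations, not the authors' actual implementation; the point is that each gradient component collapses to one of three states, which suits compact in-memory storage such as an MRAM cell.

```python
import numpy as np

def ternarize(grad, threshold=0.05):
    """Map each gradient component to {-1, 0, +1}.

    Components with magnitude below `threshold` (a hypothetical
    hyperparameter chosen for illustration) are zeroed; the rest
    keep only their sign. Replacing real-valued gradients with one
    of three states is what reduces the arithmetic and storage cost
    of training.
    """
    return np.where(np.abs(grad) < threshold, 0.0, np.sign(grad))

# Toy gradient-descent step using the ternarized gradient.
w = np.array([0.5, -0.3, 0.2])            # current weights
grad = np.array([0.12, -0.01, -0.40])     # full-precision gradient
lr = 0.1                                  # learning rate
w_new = w - lr * ternarize(grad)          # update uses only {-1, 0, +1}
```

Because the update term is drawn from only three values, the multiply in the weight update degenerates to an add, subtract, or no-op per weight, which is far cheaper to realize in hardware than full floating-point multiplication.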