October 24, 2017

IBM scientists demonstrate in-memory computing with 1 million devices for applications in AI

A million processes are mapped to the pixels of a 1000 × 1000 pixel black-and-white sketch of Alan Turing. The pixels turn on and off in accordance with the instantaneous binary values of the processes. Credit: Nature Communications

"In-memory computing" or "computational memory" is an emerging concept that uses the physical properties of memory devices for both storing and processing information. This is counter to current von Neumann systems and devices, such as standard desktop computers, laptops and even cellphones, which shuttle data back and forth between memory and the computing unit, thus making them slower and less energy efficient.

Today, IBM Research is announcing that its scientists have demonstrated that an unsupervised machine-learning algorithm, running on one million phase change memory (PCM) devices, successfully found temporal correlations in unknown data streams. Compared with state-of-the-art classical computers, this prototype technology is expected to yield 200x improvements in both speed and energy efficiency, making it highly suitable for ultra-dense, low-power, and massively parallel computing systems for applications in AI.
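The paper details the actual algorithm; as a rough software sketch of the general idea (the sizes, rates and update rule here are our assumptions, not the authors'), each simulated device can accumulate state whenever its process fires together with unusually high collective activity, so that temporally correlated processes stand out:

```python
import numpy as np

rng = np.random.default_rng(0)
n_proc, n_steps, n_corr = 1000, 2000, 100    # hypothetical sizes

# The first n_corr binary processes share a hidden driver; the rest fire
# independently, so only the first group is temporally correlated.
driver = rng.random(n_steps) < 0.1
streams = rng.random((n_proc, n_steps)) < 0.05
streams[:n_corr] |= driver[None, :]

conductance = np.zeros(n_proc)               # stand-in for device conductance
for t in range(n_steps):
    fired = streams[:, t]
    activity = fired.sum()                   # instantaneous collective activity
    # Apply a "crystallizing pulse" only when collective activity exceeds
    # its expected background level, and only to devices whose process fired.
    if activity > n_proc * 0.05:
        conductance[fired] += activity

# Correlated devices accumulate conductance faster; rank and compare.
detected = np.argsort(conductance)[-n_corr:]
print(f"recovered {np.sum(detected < n_corr)} of {n_corr} correlated processes")
```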

The researchers used PCM devices made from a germanium antimony telluride alloy sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, they heat it, altering its state from amorphous (a disordered atomic arrangement) to crystalline (an ordered atomic configuration). The IBM researchers used these crystallization dynamics to perform computation in place.
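A crude behavioral model (illustrative assumptions on our part, not device physics from the paper) captures the property the computation relies on: each heating pulse crystallizes a little more of the cell, and conductance rises monotonically with the crystallized fraction, so the device itself integrates how often it has been pulsed.

```python
class PCMCellModel:
    """Crude accumulator model of a phase-change memory cell (illustrative)."""

    def __init__(self):
        self.crystal_fraction = 0.0   # 0 = fully amorphous, 1 = fully crystalline

    def apply_pulse(self, strength=0.05):
        """One partial heating pulse nudges the cell toward crystalline."""
        self.crystal_fraction = min(1.0, self.crystal_fraction + strength)

    def reset(self):
        """A strong melt-quench pulse returns the cell to amorphous."""
        self.crystal_fraction = 0.0

    @property
    def conductance(self):
        # Assumed monotonic map from crystallized fraction to conductance;
        # real cells are nonlinear, but monotonic accumulation is the key.
        return 1e-6 + 1e-4 * self.crystal_fraction

cell = PCMCellModel()
for _ in range(10):
    cell.apply_pulse()
print(f"conductance after 10 pulses: {cell.conductance:.2e} S")
```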

"This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures," says Dr. Evangelos Eleftheriou, an IBM Fellow and co-author of the paper. "As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today's computers. Given the simplicity, high speed and low energy of our in-memory computing approach, it's remarkable that our results are so similar to our benchmark classical approach run on a von Neumann computer."


The details are explained in their paper appearing today in the peer-reviewed journal Nature Communications. To demonstrate the technology, the authors chose two time-based examples and compared their results against traditional machine-learning methods such as k-means clustering run on a conventional computer.
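As a sketch of what such a classical baseline might look like (our assumption about the setup, not the authors' exact benchmark), one can score each process by its correlation with the collective activity and then split the scores into two groups with a minimal k-means:

```python
import numpy as np

def kmeans_1d(values, iters=20):
    """Minimal 1-D k-means with k=2; returns True for the high cluster."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return labels == centers.argmax()

def correlated_group(streams):
    """streams: (n_processes, n_steps) binary array -> boolean membership."""
    activity = streams.mean(axis=0)                  # collective activity
    centered = streams - streams.mean(axis=1, keepdims=True)
    score = centered @ (activity - activity.mean())  # correlation score
    return kmeans_1d(score)
```

Applied to the simulated streams from the earlier sketch, correlated_group(streams) should flag the driver-coupled processes; the paper's point is that the PCM array reaches a comparable answer without shuttling the data to a CPU at all.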

"Memory has so far been viewed as a place where we merely store information. But in this work, we conclusively show how we can exploit the physics of these to also perform a rather high-level computational primitive. The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes." said Dr. Abu Sebastian, exploratory and cognitive technologies scientist, IBM Research and lead author of the paper.

A schematic illustration of the in-memory computing algorithm. Credit: IBM Research


More information: Abu Sebastian et al., "Temporal correlation detection using computational phase-change memory," Nature Communications (2017). DOI: 10.1038/s41467-017-01481-9


Provided by IBM Research
