Novel semiconductor tech paves the way for next-gen AI


Friday, 09 August, 2024


A team of researchers from Pohang University of Science and Technology (POSTECH) has demonstrated that analog hardware using Electrochemical Random Access Memory (ECRAM) devices can maximise the computational performance of artificial intelligence, showcasing its potential for commercialisation. Their research findings have been published in the journal Science Advances.

The advancement of AI technology has pushed the scalability of existing digital hardware (CPUs, GPUs and ASICs, among others) to its limits. Consequently, researchers are looking into analog hardware specialised for AI computation. Analog hardware adjusts the resistance of semiconductors based on external voltage or current and utilises a cross-point array structure with vertically crossed memory devices to process AI computation in parallel. Although it offers advantages over digital hardware for specific computational tasks and continuous data processing, meeting the diverse requirements for computational learning and inference remains challenging.
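The parallelism described above comes from basic circuit laws: each cell in the cross-point array stores a conductance, applying voltages to the rows drives currents through every cell at once, and the currents summing on each column are exactly a matrix-vector product. A minimal NumPy sketch of this idealised behaviour (the array size and voltage ranges are illustrative assumptions, not the paper's device parameters):

```python
import numpy as np

# Idealised cross-point array: each cell (i, j) stores a conductance G[i, j].
# Driving the rows with voltages V gives, per cell, a current G[i, j] * V[i]
# (Ohm's law); each column sums its cell currents (Kirchhoff's current law),
# so the column currents are the matrix-vector product G^T @ V in one step.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(64, 64))  # cell conductances in siemens
V = rng.uniform(0.0, 0.2, size=64)          # row input voltages in volts

I = G.T @ V  # column output currents: the analog matrix-vector product

print(I.shape)  # one summed current per column
```

A digital processor would need 64 × 64 multiply-accumulate operations for the same result; the array produces all 64 column currents simultaneously, which is the source of the efficiency claim for analog AI hardware.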

To address the limitations of analog hardware memory devices, the researchers focused on ECRAM devices, which manage electrical conductivity through the movement and concentration of ions. Unlike traditional semiconductor memory, these devices feature a three-terminal structure with separate paths for reading and writing data, allowing for operation at relatively low power.
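The three-terminal behaviour can be captured in a toy model: write pulses on the gate shift the ion concentration, nudging the channel conductance up or down between physical limits, while reads probe the source-drain channel without disturbing the stored state. This is an illustrative behavioural sketch, not a model of the POSTECH device; all parameter values are assumptions:

```python
# Toy behavioural model of a three-terminal ECRAM cell (illustrative only):
# the gate (write path) moves ions to set the channel conductance, while the
# source-drain channel (read path) is probed without altering the state.

class ECRAMCell:
    def __init__(self, g_min=1e-6, g_max=1e-4, g_step=1e-6):
        self.g_min, self.g_max, self.g_step = g_min, g_max, g_step
        self.g = g_min  # channel conductance in siemens

    def write(self, pulses):
        # Positive gate pulses raise conductance (potentiation), negative
        # pulses lower it (depression); clamp to the device's physical range.
        self.g = min(self.g_max, max(self.g_min, self.g + pulses * self.g_step))

    def read(self, v_read=0.1):
        # Reading drives a small voltage across the channel only, so the
        # stored ion state (and hence the conductance) is left untouched.
        return self.g * v_read  # read current in amperes

cell = ECRAMCell()
cell.write(+10)   # ten potentiation pulses
print(cell.read())
```

Separating the read and write paths like this is what lets the device be programmed with small, low-power gate pulses rather than the large currents two-terminal memories typically require.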

The researchers fabricated ECRAM devices using three-terminal-based semiconductors in a 64 × 64 array. Experiments revealed that hardware incorporating the team’s devices demonstrated excellent electrical and switching characteristics, along with high yield and uniformity. Applying the Tiki-Taka algorithm, an analog-based learning algorithm, to this high-yield hardware further increased the accuracy of AI neural network training computations. The team also demonstrated the impact of the hardware’s “weight retention” property on learning and confirmed that their technique does not overload artificial neural networks, highlighting the technology’s potential for commercialisation.
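The core idea behind Tiki-Taka-style training is to split learning across two weight arrays: noisy outer-product gradient updates accumulate on a fast auxiliary array, whose contents are periodically transferred into the main array the network actually uses, filtering out device noise and asymmetry. The sketch below is a schematic of that two-array structure under assumed toy dimensions and rates; it is not the authors' implementation, and the error signal is a stand-in for real backpropagation:

```python
import numpy as np

# Schematic of the Tiki-Taka two-array training idea: rank-one gradient
# updates land on an auxiliary array A (fast but noisy on real devices),
# and A is periodically transferred into the main weight array C, which
# the network uses for inference. All values here are illustrative.

rng = np.random.default_rng(1)
C = np.zeros((4, 4))   # main weights (one cross-point array in hardware)
A = np.zeros((4, 4))   # auxiliary array absorbing gradient updates
lr, transfer_rate = 0.1, 0.5

for step in range(100):
    x = rng.normal(size=4)            # input activation (stand-in)
    err = rng.normal(size=4) * 0.01   # backpropagated error (stand-in)
    A -= lr * np.outer(err, x)        # rank-one outer-product update on A
    if step % 10 == 9:                # periodic transfer step
        C += transfer_rate * A        # leak accumulated updates into C
        A *= 1.0 - transfer_rate      # decay A by the transferred fraction
```

On analog hardware the outer-product update is itself performed in place on the array with voltage pulses, which is why an algorithm tolerant of imperfect, asymmetric updates matters for training accuracy.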

“By realising large-scale arrays based on novel memory device technologies and developing analog-specific AI algorithms, we have identified the potential for AI computational performance and energy efficiency that far surpass current digital methods,” Professor Seyoung Kim said.

Image credit: iStock.com/BlackJack3D



  • All content Copyright © 2024 Westwick-Farrow Pty Ltd