Novel semiconductor tech paves the way for next-gen AI


Friday, 09 August, 2024


A team of researchers from Pohang University of Science and Technology (POSTECH) has demonstrated that analog hardware using Electrochemical Random Access Memory (ECRAM) devices can maximise the computational performance of artificial intelligence, showcasing its potential for commercialisation. Their research findings have been published in the journal Science Advances.

The advancement of AI technology has pushed the scalability of existing digital hardware (CPUs, GPUs and ASICs, among others) to its limits. Consequently, researchers are looking into analog hardware specialised for AI computation. Analog hardware adjusts the resistance of semiconductors based on external voltage or current and utilises a cross-point array structure with vertically crossed memory devices to process AI computation in parallel. Although it offers advantages over digital hardware for specific computational tasks and continuous data processing, meeting the diverse requirements for computational learning and inference remains challenging.
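A cross-point array performs a matrix-vector multiplication in a single physical step: each memory cell's conductance acts as a weight, and the currents summed along each column form the output. The minimal Python sketch below illustrates the equivalent computation only in general terms; it is an assumption-laden toy, not the POSTECH hardware or code.

```python
import numpy as np

# Minimal sketch of the cross-point array principle (not the POSTECH hardware):
# each cell at row i, column j holds a conductance G[i, j]; applying input
# voltages V to the rows yields output currents I = G^T @ V on the columns
# via Ohm's and Kirchhoff's laws, i.e. a matrix-vector product in one step.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(64, 64))   # hypothetical conductances (arbitrary units)
V = rng.uniform(-1.0, 1.0, size=64)        # input voltages applied to the rows

I = G.T @ V   # all 64x64 multiply-accumulate operations happen in parallel in the array
print(I.shape)  # (64,)
```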

To address the limitations of analog hardware memory devices, the researchers focused on ECRAM devices, which manage electrical conductivity through ion movement and concentration. Unlike traditional semiconductor memory, these devices feature a three-terminal structure with separate paths for reading and writing data, allowing them to operate at relatively low power.
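As an illustration only (a toy model, not a device-accurate description of the team's ECRAM), the sketch below captures the basic behaviour of a three-terminal cell: write pulses on the gate shift ion concentration and nudge the channel conductance, while low-voltage reads on a separate source-drain path sense that conductance without disturbing it.

```python
# Toy model of a three-terminal ECRAM-like cell (illustrative assumptions only,
# not a physical device model): gate write pulses change ion concentration and
# hence channel conductance; reads use the separate source-drain path.

class ECRAMCell:
    def __init__(self, g_min=0.1, g_max=1.0, step=0.01):
        self.g = 0.5 * (g_min + g_max)   # channel conductance (arbitrary units)
        self.g_min, self.g_max, self.step = g_min, g_max, step

    def write(self, n_pulses):
        # Positive gate pulses raise conductance, negative pulses lower it.
        self.g = min(self.g_max, max(self.g_min, self.g + n_pulses * self.step))

    def read(self, v_read=0.1):
        # Low-voltage read on the source-drain path; returns the sensed current.
        return self.g * v_read
```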

The researchers fabricated ECRAM devices using three-terminal-based semiconductors in a 64x64 array. Experiments showed that hardware incorporating these devices exhibited excellent electrical and switching characteristics, along with high yield and uniformity. The team then applied the Tiki-Taka algorithm, an analog-based learning algorithm, to this high-yield hardware, increasing the accuracy of AI neural network training computations. They also demonstrated the impact of the hardware’s “weight retention” property on learning and confirmed that their technique does not overload artificial neural networks, highlighting the potential for commercialising the technology.
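For readers unfamiliar with Tiki-Taka, the sketch below gives a highly simplified picture of the two-array idea described in the analog-AI literature: gradient outer-product updates accumulate on a fast auxiliary array, which is periodically transferred into a slower core array holding the learned weights. The parameter names, transfer schedule and decay factor here are assumptions for illustration, not the POSTECH implementation.

```python
import numpy as np

# Simplified sketch of Tiki-Taka-style two-array training (assumed schedule and
# hyperparameters, not the POSTECH implementation): noisy rank-one updates land
# on auxiliary array A, and A is periodically transferred into core array C.

rng = np.random.default_rng(1)
n_in, n_out, lr, transfer_every = 64, 64, 0.01, 10
A = np.zeros((n_out, n_in))              # fast auxiliary array, absorbs noisy updates
C = rng.normal(0, 0.1, (n_out, n_in))    # slow core array holding the learned weights

for step in range(100):
    x = rng.normal(size=n_in)            # stand-in input vector
    err = rng.normal(size=n_out)         # stand-in error signal from backpropagation
    A -= lr * np.outer(err, x)           # rank-one update, done in-array on hardware
    if (step + 1) % transfer_every == 0:
        C += lr * A                      # transfer a portion of A into C
        A *= 0.9                         # partial decay of A (assumed schedule)
```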

“By realising large-scale arrays based on novel memory device technologies and developing analog-specific AI algorithms, we have identified the potential for AI computational performance and energy efficiency that far surpass current digital methods,” Professor Seyoung Kim said.

