Modeling and Compensation of IR Drop in Crosspoint Accelerators of Neural Networks

Lepri N.; Baldo M.; Mannocci P.; Glukhov A.; Milo V.; Ielmini D.
2022-01-01

Abstract

In-memory computing (IMC) has the potential to accelerate data-intensive computing tasks, such as inference and training of neural networks, via matrix-vector multiplication (MVM) in the crosspoint array. As the array size increases, however, a significant problem arises due to the parasitic IR drop along row and column lines. Understanding and compensating for IR drop is essential to enable high-density MVM accelerators. This work presents an analytical model for the IR drop and a numerical algorithm capable of accelerating the nodal analysis of crosspoint arrays by up to five orders of magnitude with respect to SPICE analysis. The numerical algorithm is used to study the impact of array size, data density, device/wire resistance, and nonlinearity on the IR drop and the related loss of accuracy in IMC. We also derive two simple compensation schemes and an architectural solution for the mitigation of IR drop that are able to increase the accuracy of neural networks from 59% up to 96.6%, compared to an ideal accuracy of 97%.
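
To make the effect described in the abstract concrete, the sketch below is a minimal, hedged illustration (not the paper's model or algorithm): it compares an ideal MVM against a brute-force nodal-analysis solution of a small crosspoint array with a fixed wire resistance per segment. The biasing scheme (rows driven at one end, columns sensed at virtual ground at the other end), the array size, and the resistance/conductance values are illustrative assumptions.

```python
# Hedged sketch: ideal MVM vs. MVM degraded by parasitic wire IR drop in a
# crosspoint array. The biasing scheme and all numerical values are assumptions
# for illustration, not the paper's exact model.
import numpy as np

def ideal_mvm(G, v_in):
    """Ideal column currents I_j = sum_i G[i, j] * v_in[i] (no parasitics)."""
    return G.T @ v_in

def mvm_with_ir_drop(G, v_in, r_wire=1.0):
    """Full nodal analysis of an N x M crosspoint with wire resistance r_wire
    per segment. Unknowns are the row-node and column-node voltages at every
    cross-point, stacked as x = [v_row.ravel(), v_col.ravel()]."""
    N, M = G.shape
    g_w = 1.0 / r_wire
    n = N * M
    A = np.zeros((2 * n, 2 * n))
    b = np.zeros(2 * n)
    r = lambda i, j: i * M + j          # index of row node (i, j)
    c = lambda i, j: n + i * M + j      # index of column node (i, j)

    for i in range(N):
        for j in range(M):
            # KCL at row node (i, j): device current + wire segments
            k = r(i, j)
            A[k, k] += G[i, j]
            A[k, c(i, j)] -= G[i, j]
            if j == 0:                  # driven end: one segment to the source
                A[k, k] += g_w
                b[k] += g_w * v_in[i]
            else:
                A[k, k] += g_w
                A[k, r(i, j - 1)] -= g_w
            if j < M - 1:               # segment towards the next cell
                A[k, k] += g_w
                A[k, r(i, j + 1)] -= g_w

            # KCL at column node (i, j)
            k = c(i, j)
            A[k, k] += G[i, j]
            A[k, r(i, j)] -= G[i, j]
            if i > 0:
                A[k, k] += g_w
                A[k, c(i - 1, j)] -= g_w
            if i < N - 1:
                A[k, k] += g_w
                A[k, c(i + 1, j)] -= g_w
            else:                       # last segment to the virtual ground
                A[k, k] += g_w

    x = np.linalg.solve(A, b)
    v_col_last = x[n:].reshape(N, M)[N - 1]   # voltages at the grounded end
    return g_w * v_col_last                   # sensed column currents

# Example: the MVM error caused by IR drop grows with array size and conductance.
rng = np.random.default_rng(0)
G = rng.uniform(10e-6, 100e-6, size=(32, 32))   # device conductances [S], assumed
v_in = rng.uniform(0.0, 0.2, size=32)           # read voltages [V], assumed
i_ideal = ideal_mvm(G, v_in)
i_real = mvm_with_ir_drop(G, v_in, r_wire=1.0)  # 1 ohm per wire segment, assumed
print("relative MVM error:", np.linalg.norm(i_ideal - i_real) / np.linalg.norm(i_ideal))
```

The dense solve above is used only for clarity on a small array; the paper's contribution is precisely to accelerate this kind of nodal analysis, by up to five orders of magnitude with respect to SPICE, and to derive compensation schemes for the resulting accuracy loss.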
2022
Hardware accelerator
In-memory computing (IMC)
Neural network
Nonvolatile memory
Resistive switching memory
Files in this record:
2022_ted_IR.pdf: Publisher's version, Adobe PDF, 3.25 MB, restricted access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1206538
Citations
  • PMC: n/a
  • Scopus: 16
  • Web of Science (ISI): 14