Mitigating read-program variation and IR drop by circuit architecture in RRAM-based neural network accelerators
Lepri N.;Glukhov A.;Ielmini D.
2022-01-01
Abstract
In-memory computing (IMC) with memory arrays reduces the time and energy consumption of matrix-vector multiplication (MVM) for artificial neural network (ANN) inference. However, IMC accuracy is affected by nonidealities, such as program/read variations of the device conductance and the parasitic voltage (IR) drop along the wires, whose impact grows rapidly with the array size. This work presents new IMC circuit architectures that mitigate variations and IR drop at the same time. The new schemes improve the accuracy of an ANN from 72.7% to 94.9%, compared to a software accuracy of 96.9%, at the cost of an increased memory array area.
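To make the two nonidealities named in the abstract concrete, the following is a minimal NumPy sketch of a crossbar MVM with a multiplicative program/read variation and a first-order series-resistance approximation of the IR drop. It is illustrative only: the function name, the variation model, the wire-resistance term, and all parameter values are assumptions for this sketch, not the simulation framework or the mitigation architectures of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(V, G, sigma_rel=0.0, r_wire=0.0):
    """Illustrative first-order crossbar MVM model (not the paper's simulator).

    V         : input voltages applied to the rows, shape (N,)
    G         : programmed conductance matrix, shape (N, M)
    sigma_rel : relative std. dev. of program/read conductance variation
    r_wire    : wire resistance of one crossbar segment (ohms)
    Returns the column (bitline) currents, shape (M,).
    """
    N, M = G.shape
    # Program/read variation: multiplicative Gaussian spread on each conductance.
    G_var = G * (1.0 + sigma_rel * rng.standard_normal(G.shape))
    # First-order IR-drop model: each cell sees extra series wire resistance
    # proportional to its distance from the row driver and the column sense node.
    i_idx, j_idx = np.indices((N, M))
    r_series = r_wire * ((j_idx + 1) + (N - i_idx))
    G_eff = 1.0 / (1.0 / np.clip(G_var, 1e-12, None) + r_series)
    return V @ G_eff  # column currents approximate the matrix-vector product

# Example: relative MVM error of a 256x256 array with 5% variation, 1-ohm segments.
N = 256
G = rng.uniform(1e-6, 100e-6, size=(N, N))   # conductances in siemens
V = rng.uniform(0.0, 0.2, size=N)            # read voltages in volts
I_ideal = V @ G
I_real = crossbar_mvm(V, G, sigma_rel=0.05, r_wire=1.0)
err = np.linalg.norm(I_real - I_ideal) / np.linalg.norm(I_ideal)
print(f"relative MVM error: {err:.1%}")
```

Increasing N or r_wire in this sketch increases the relative error, which mirrors the abstract's point that the impact of IR drop and variations grows with the array size.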
File: 2022_irps_irdrop.pdf (Adobe PDF, 10.7 MB, restricted access)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.