From GPUs to RRAMs: Distributed In-Memory Primal-Dual Hybrid Gradient Method for Solving Large-Scale Linear Optimization Problems
Published in the 2026 SIAM Conference on Parallel Processing for Scientific Computing (PP26), 2026
We present a distributed primal-dual hybrid gradient (PDHG) method co-designed for resistive random-access memory (RRAM) in-memory computing, enabling large-scale linear optimization with dramatically reduced energy and latency compared to GPU baselines.
Recommended citation: Huynh Q. N. Vo, Md Tawsif Rahman Chowdhury, Paritosh Ramanan, Gozde Tutuncuoglu, Junchi Yang, Feng Qiu, and Murat Yildirim. (2026). From GPUs to RRAMs: Distributed In-Memory Primal-Dual Hybrid Gradient Method for Solving Large-Scale Linear Optimization Problems. In: Proceedings of the 2026 SIAM Conference on Parallel Processing for Scientific Computing (PP26).
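As background for the method named in the title (this is the textbook Chambolle-Pock PDHG update for an equality-constrained LP, not the paper's distributed RRAM implementation), a minimal NumPy sketch of PDHG applied to min cᵀx subject to Ax = b, x ≥ 0 looks like:

```python
import numpy as np

def pdhg_lp(c, A, b, iters=20000):
    """Standard PDHG (Chambolle-Pock) for: min c^T x  s.t.  Ax = b, x >= 0.

    Convergence requires step sizes with tau * sigma * ||A||_2^2 < 1.
    """
    m, n = A.shape
    norm_A = np.linalg.norm(A, 2)      # spectral norm of the constraint matrix
    tau = sigma = 0.9 / norm_A         # tau * sigma * ||A||^2 = 0.81 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal step: gradient move on the Lagrangian, then project onto x >= 0.
        x_new = np.maximum(0.0, x - tau * (c + A.T @ y))
        # Dual ascent step with the usual extrapolation term (2*x_new - x).
        y = y + sigma * (A @ (2 * x_new - x) - b)
        x = x_new
    return x, y

# Toy LP: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0; optimum is x = (1, 0).
x, y = pdhg_lp(np.array([1.0, 2.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```

The paper's contribution lies in distributing these matrix-vector products (the dominant cost per iteration) across RRAM crossbar arrays, which compute them in the analog domain rather than on a GPU.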
