From GPUs to RRAMs: Distributed In-Memory Primal-Dual Hybrid Gradient Method for Solving Large-Scale Linear Optimization Problems
Date:
Linear programs are central to operations research and arise in domains such as transportation, logistics, and power systems, where faster solution methods can significantly improve large-scale decision-making. Recent advances in primal-dual hybrid gradient (PDHG) methods have made GPUs a viable platform for solving large-scale linear optimization problems, but their performance remains constrained by the von Neumann bottleneck: repeated data movement between memory and processors incurs substantial latency and energy costs. In this talk, we present MELISO: a distributed in-memory computing framework that uses resistive random-access memory (RRAM) devices to accelerate PDHG for large-scale linear optimization. Our approach integrates an RRAM-oriented design framework, the MELISO large-scale distributed simulator, and hardware-aware modifications to PDHG, including matrix encoding strategies, in-memory operator norm estimation, and step-size adaptations suited to noisy analog computation. We further discuss theoretical guarantees for the inexact algorithm under realistic noise assumptions and show that RRAM-based implementations can achieve substantial gains in latency and energy efficiency while maintaining solution quality comparable to GPU-based methods and commercial solvers.
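To make the setting concrete, the sketch below shows a textbook PDHG iteration for a standard-form LP (min cᵀx subject to Ax = b, x ≥ 0), with every matrix-vector product routed through an additive-Gaussian-noise model standing in for inexact analog RRAM computation, and with the operator norm estimated by power iteration using only those noisy matvecs. This is a minimal illustration of the general ideas in the abstract, not the MELISO implementation; the noise model, function names, and step-size rule here are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matvec(M, v, noise_std=1e-3):
    """Matrix-vector product with additive Gaussian noise -- a crude
    stand-in for inexact analog computation on an RRAM crossbar
    (assumption for this sketch, not MELISO's actual device model)."""
    out = M @ v
    return out + noise_std * rng.standard_normal(out.shape)

def estimate_op_norm(A, iters=50, noise_std=1e-3):
    """Estimate ||A||_2 by power iteration on A^T A, using only noisy
    matvecs, in the spirit of in-memory operator norm estimation."""
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = noisy_matvec(A.T, noisy_matvec(A, v, noise_std), noise_std)
        v = w / np.linalg.norm(w)
    return np.linalg.norm(noisy_matvec(A, v, noise_std))

def pdhg_lp(c, A, b, iters=5000, noise_std=1e-3):
    """Inexact PDHG for  min c^T x  s.t.  Ax = b, x >= 0."""
    # Step sizes chosen so tau * sigma * ||A||^2 < 1 (standard PDHG rule),
    # using the noisy in-memory norm estimate.
    tau = sigma = 0.9 / estimate_op_norm(A, noise_std=noise_std)
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        # Primal step: gradient step on c - A^T y, projected onto x >= 0.
        x_new = np.maximum(0.0, x - tau * (c - noisy_matvec(A.T, y, noise_std)))
        # Dual step with the usual PDHG extrapolation 2*x_new - x.
        y = y + sigma * (b - noisy_matvec(A, 2.0 * x_new - x, noise_std))
        x = x_new
    return x, y

# Tiny example LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0 (optimal value 1).
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
```

Even with per-matvec noise, the iterates settle near the optimal face, which is the qualitative behavior the inexact-algorithm guarantees in the talk are meant to formalize.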
