Talks and Presentations

From GPUs to RRAMs: Distributed In-Memory Primal-Dual Hybrid Gradient Method for Solving Large-Scale Linear Optimization Problems

March 03, 2026

Conference Talk, 2026 SIAM Conference on Parallel Processing for Scientific Computing (SIAM PP26), Berlin, Germany

Linear programs are central to operations research and arise in domains such as transportation, logistics, and power systems, where faster solution methods can significantly improve large-scale decision-making. Recent advances in primal-dual hybrid gradient (PDHG) methods have made GPUs a viable platform for large linear optimization problems, but their performance is still constrained by the von Neumann bottleneck, since repeated data movement between memory and processors incurs substantial latency and energy costs. In this talk, we present MELISO: a distributed in-memory computing framework that uses resistive random-access memory (RRAM) devices to accelerate PDHG for large-scale linear optimization. Our approach integrates an RRAM-oriented design framework, the MELISO large-scale distributed simulator, and hardware-aware modifications to PDHG, including matrix encoding strategies, in-memory operator norm estimation, and step-size adaptations suited to noisy analog computation. We further discuss theoretical guarantees for the inexact algorithm under realistic noise assumptions and show that RRAM-based implementations can achieve substantial gains in latency and energy efficiency while maintaining solution quality comparable to GPU-based methods and commercial solvers.
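The core PDHG iteration for an equality-form linear program can be sketched in a few lines. The toy problem, step sizes, and dense NumPy operations below are illustrative assumptions for exposition only, not the RRAM/MELISO implementation discussed in the talk:

```python
import numpy as np

# Sketch of vanilla PDHG for  min c^T x  s.t.  Ax = b, x >= 0,
# via the saddle point  min_{x>=0} max_y  c^T x + y^T (b - Ax).
# Toy data, chosen for illustration only.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Step sizes must satisfy tau * sigma * ||A||_2^2 < 1 for convergence,
# which is why an operator norm estimate of A is needed up front.
opnorm = np.linalg.norm(A, 2)
tau = sigma = 0.9 / opnorm  # here tau * sigma * ||A||^2 = 0.81 < 1

x = np.zeros(2)
y = np.zeros(1)
for _ in range(2000):
    # Primal step: gradient move on c - A^T y, then projection onto x >= 0.
    x_new = np.maximum(0.0, x - tau * (c - A.T @ y))
    # Dual step: ascent on the residual, with extrapolation 2*x_new - x.
    y = y + sigma * (b - A @ (2 * x_new - x))
    x = x_new
```

For this toy problem the iterates converge to the optimal vertex x = (1, 0) with dual price y = 1. The extrapolated point 2*x_new - x in the dual step is what distinguishes PDHG from plain alternating gradient steps and is essential for convergence.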

Decentralized Importance Sampling in Variational Autoencoders to Generate Industrial Scenarios

May 31, 2025

Conference Talk, IISE Annual Conference & Expo 2025, Atlanta, Georgia, USA

Decarbonization of industrial systems increasingly relies on stochastic optimization models that require high-quality scenarios to represent uncertainty in renewable energy and process operations. In practice, however, the data needed to construct such scenarios are often siloed across stakeholders and may be heterogeneous, making centralized scenario generation difficult. In this talk, we present a decentralized framework based on variational autoencoders (VAEs), SplitVAEs, and importance-weighted autoencoders (IWAEs) to generate industrial scenarios together with their probability masses for stochastic programming applications. Through experiments on solar and wind energy data in Texas, we show that the proposed decentralized approach retains essential spatiotemporal information, produces scenario quality comparable to centralized methods, and achieves favorable computational performance. These results suggest that decentralized deep generative models offer a practical and privacy-aware pathway for large-scale industrial scenario generation.

Importance Sampling in Variational Autoencoders to Generate Industrial Carbon Emissions Scenarios

October 20, 2024

Conference Talk, INFORMS Annual Meeting 2024, Seattle, Washington, USA

Concerns about the negative environmental impact of fossil fuels have intensified interest in decarbonization, with renewable energy playing a central role despite substantial uncertainties in supply and demand. Stochastic programming provides a principled framework for decision-making under such uncertainty, but its performance depends critically on the quality of generated scenarios and their associated probability masses. In this talk, we explore the use of variational autoencoders (VAEs), together with decentralized variants and importance-weighted autoencoders (IWAEs), for generating high-fidelity industrial carbon-emissions scenarios. Using industrial-sector CO$_2$ emissions data from Texas oil refineries, we demonstrate that these generative models can preserve key spatial and temporal structures while also supporting the estimation of scenario likelihoods required for stochastic optimization. The results highlight the promise of deep generative models as scalable tools for scenario generation in decarbonization-driven industrial decision-making.
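The importance-weighting idea behind IWAEs can be illustrated on a one-dimensional toy model: draw several latent samples from an encoder distribution q(z|x), weight each by p(x|z)p(z)/q(z|x), and use the normalized weights as probability masses. All distributions below are illustrative Gaussian stand-ins, not the models from the talk:

```python
import numpy as np

# Toy importance-weighted estimate of log p(x) with K latent samples.
# Assumed model: z ~ N(0, 1), x | z ~ N(z, 0.5^2); assumed encoder
# q(z | x) = N(0.8 * x, 0.6^2). All parameters are hypothetical.
rng = np.random.default_rng(0)
K = 1000
x = 0.5  # one observed "scenario" value

mu_q, sig_q = 0.8 * x, 0.6
z = rng.normal(mu_q, sig_q, size=K)

def log_normal(v, mu, sig):
    """Log-density of N(mu, sig^2) evaluated at v."""
    return -0.5 * np.log(2 * np.pi * sig**2) - (v - mu) ** 2 / (2 * sig**2)

# log importance weights: log p(x|z) + log p(z) - log q(z|x)
log_w = log_normal(x, z, 0.5) + log_normal(z, 0.0, 1.0) - log_normal(z, mu_q, sig_q)

# Log-sum-exp for a numerically stable marginal-likelihood estimate ...
log_w_max = log_w.max()
iwae_bound = log_w_max + np.log(np.mean(np.exp(log_w - log_w_max)))

# ... and self-normalized weights usable as scenario probability masses.
probs = np.exp(log_w - log_w_max)
probs /= probs.sum()
```

For this toy model the true log-marginal is analytically about -1.13 (x is marginally N(0, 1.25)), so the estimate can be sanity-checked; in a real VAE the same log-sum-exp computation runs on learned encoder and decoder densities.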

SplitVAEs: Decentralized scenario generation from siloed data for stochastic optimization problems

May 18, 2024

Student Paper Competition, 2024 Institute of Industrial and Systems Engineers (IISE) Annual Conference & Expo, Montreal, Quebec, Canada

Stochastic optimization problems in large-scale multi-stakeholder networked systems (e.g., power grids and supply chains) rely on data-driven scenarios to encapsulate uncertainties and complex spatiotemporal interdependencies. However, centralized aggregation of stakeholder data is challenging due to privacy, computational, and logistical bottlenecks. In this paper, we present SplitVAEs, a decentralized scenario generation framework that leverages the split learning paradigm and variational autoencoders to generate high-quality scenarios without moving stakeholder data. Through large-scale experiments on distributed-memory systems, we demonstrate the broad applicability of SplitVAEs in three distinct domains: power systems, industrial carbon emissions, and supply chains. Our experiments indicate that SplitVAEs can learn spatial and temporal interdependencies in large-scale networks and generate scenarios that match the joint historical distribution of stakeholder data in a decentralized manner. Our results show that SplitVAEs outperform conventional state-of-the-art methods, providing a computationally efficient and privacy-compliant alternative for scenario generation.
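The split-learning data flow can be sketched with plain matrix operations: each stakeholder applies a local encoder to its private data and transmits only a low-dimensional embedding, which a central server concatenates and decodes into a joint scenario. The shapes, random weights, and single-layer "networks" below are illustrative assumptions, not the SplitVAEs architecture itself:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_encoder(data, W):
    """Stakeholder-side encoder: raw data never leaves the stakeholder."""
    return np.tanh(data @ W)

# Three stakeholders, each holding private 8-dimensional data and producing
# a 2-dimensional embedding (all dimensions are hypothetical).
private_data = [rng.normal(size=(1, 8)) for _ in range(3)]
local_weights = [rng.normal(size=(8, 2)) for _ in range(3)]
embeddings = [local_encoder(d, W) for d, W in zip(private_data, local_weights)]

# Server side: only the embeddings (3 x 2 = 6 values) cross the network.
z = np.concatenate(embeddings, axis=1)   # shape (1, 6)
W_dec = rng.normal(size=(6, 24))
scenario = z @ W_dec                     # jointly decoded scenario, shape (1, 24)
```

During training, gradients flow back across the same cut: the server backpropagates to the embedding boundary and each stakeholder updates its encoder locally, so neither raw data nor full model weights are ever exchanged.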

Deep Learning Models for Fault Detection and Diagnosis in Photovoltaic Modules Manufacture

June 06, 2023

Conference Talk, 2023 IEEE Conference on Artificial Intelligence (IEEE CAI), Santa Clara, CA, USA

The use of photovoltaic (PV) systems has grown exponentially. This growth, however, places enormous pressure on the solar energy industry's manufacturing sector and in turn raises concerns about the quality of PV systems, especially PV modules. Fault detection and diagnosis (FDD) remains challenging for many reasons, including the need for sophisticated measurement instruments and expert knowledge. Recent advances in deep learning (DL) have demonstrated its effectiveness in image classification and object detection, so DL can be extended to visual fault detection using data generated by electroluminescence (EL) imaging instruments. In this talk, we present an in-depth exploratory data analysis of EL data and several supervised learning techniques for detecting and diagnosing visual faults and defects present in a module.