MARCH 23rd, 2026

13:30 – 18:00

Room: Allegheny

MCCSys@ASPLOS'26

5th Workshop on Memory-Centric Computing Systems

In conjunction with the ACM International Conference on Architectural Support for Programming Languages
and Operating Systems (ASPLOS 2026)

Pittsburgh, USA


About

Processing-in-Memory (PIM) is a computing paradigm that aims to overcome data movement bottlenecks by making memory systems compute-capable. Explored over several decades since the 1960s, PIM systems are now becoming a reality with the advent of the first commercial products and prototypes. PIM can improve performance and energy efficiency for many modern applications. However, there are many open questions spanning the entire computing stack and many challenges for widespread adoption.

This combined tutorial and workshop will focus on the latest advances in PIM technology, spanning both hardware and software. It will cover novel PIM ideas, tools and frameworks for conducting PIM research, and programming techniques and optimization strategies for PIM kernels. First, a series of lectures and invited talks will introduce PIM, including an overview and a rigorous analysis of existing PIM hardware from industry and academia. Second, we invite the broad PIM research community to submit and present their ongoing work on memory-centric systems. The program committee will favor papers that bring new insights on memory-centric systems or novel PIM-friendly applications, address key system integration challenges in academic or industrial PIM architectures, or put forward controversial points of view on the memory-centric execution paradigm. We also welcome position papers, especially from industry, that outline design and process challenges affecting PIM systems, new PIM architectures, or system solutions for real, state-of-the-art PIM devices.


Agenda & Workshop Materials

Monday, March 23rd (13:30 – 18:00), Room: Allegheny

Time Talk Materials
13:30 – 13:40
Logistics/Welcome
Ismail Emir Yuksel
Slides
13:40 – 14:30
Memory-Centric Computing: Solving Memory's Computing Problem
Prof. Onur Mutlu
Slides
14:30 – 14:50
Understanding the Computational Capabilities of Real DRAM Chips and Robustness Issues They Introduce
Ismail Emir Yuksel
Slides
14:50 – 15:10
PiDRAM: An FPGA-based Framework for End-to-end Evaluation of Processing-in-DRAM Techniques
Ataberk Olgun
Slides
15:10 – 15:30
Revisiting Main Memory-Based Covert and Side Channel Attacks in the Context of Processing-in-Memory
Nisa Bostanci
Slides
15:30 – 16:00 Coffee Break
16:00 – 17:00
Fast and Energy-Efficient Databases using Processing-in-Memory
Prof. Phillip Gibbons
Slides
17:00 – 17:20
DeepFuse: Speculative Load Micro-Op Fusion
Deepanjali Mishra
Slides
17:20 – 17:40
Enabling Near-Accelerator Performance With Generic Processing-Using-Memory Architectures
Ryan Wong
Slides
17:40 – 18:00
UM-PIM: DRAM-based PIM with Uniform and Shared Memory Space
Yilong Zhao
Slides
18:00 – 18:05
Closing Remarks
Ismail Emir Yuksel

Invited Speakers

Prof. Phillip Gibbons

Carnegie Mellon University
"Fast and Energy-Efficient Databases using Processing-in-Memory"

Short Bio: Phillip Gibbons is a Professor in the Computer Science Department and the Electrical & Computer Engineering Department at Carnegie Mellon University (CMU). He received his Ph.D. in Computer Science from the University of California at Berkeley. Prior to joining CMU, Gibbons was a researcher at Bell Laboratories and Intel Research Pittsburgh, and co-director of the Intel Science and Technology Center for Cloud Computing. His research areas include parallel computing, databases, machine learning systems, computer architecture, and distributed systems. Recent projects range from the theory and practice of processing-in-memory (e.g., best paper runner-up at VLDB’23) to computer architecture support for robotics (e.g., best paper at Sigmetrics’24). His 200+ publications span theory and systems, and have been cited 46,000+ times with an h-index of 87. Gibbons won the ACM Paris Kanellakis Theory and Practice Award for pioneering the foundations of streaming data analytics (2019). He was founding Editor-in-Chief for the ACM Transactions on Parallel Computing, Associate Editor for the Journal of the ACM and other journals, and program/area/general chair for over a dozen conferences. Gibbons is both an ACM and IEEE Fellow.

Abstract: Data movement is fast becoming the dominant cost in computing. Processing-in-Memory (PIM), an idea dating back to 1970, is now emerging as a key technique for reducing costly data movement, by enabling computation to be executed on compute resources embedded in memory modules. But how can database indexes and transactional systems (OLTP) best take advantage of PIM, especially given conventional wisdom that PIM is good only for efficient predicate filtering (in OLAP)? This talk first highlights our work designing PIM-friendly indexes, i.e., what are PIM-optimized replacements for B-trees, radix trees, and kd-trees? Our indexes address head-on the inherent tension between minimizing communication and achieving load balance in PIM systems, achieving provable guarantees regardless of query or data skew. Experimental results on UPMEM’s 2,560-module PIM system demonstrate that our indexes outperform prior PIM indexes by up to 59x. Second, the talk highlights our design of OLTPim, a PIM-friendly in-memory OLTP database system that decreases costly memory channel data movement by up to 6.1x while also increasing transaction throughput by up to 1.7x, compared to MosaicDB, the state-of-the-art multicore OLTP system. This work appeared in VLDB’23 (best paper runner-up), SPAA’23, SPAA’25, VLDB’25, and PPoPP'26.

Ryan Wong

University of Illinois Urbana-Champaign
"Enabling Near-Accelerator Performance With Generic Processing-Using-Memory Architectures"

Short Bio: Ryan Wong is a fifth-year Ph.D. student at the University of Illinois Urbana-Champaign working under Prof. Saugata Ghose. His research interests are in the broad area of computer architecture, with particular emphasis in memory and storage systems, as well as accelerators for scientific computing and database systems. He is a Mavis Future Faculty Fellow, and has won the UIUC CS Outstanding TA Award. He received his B.S. in Computer Science, B.A. in Chemistry, and M.S. in Electrical Engineering all from the University of Rochester. For more information, please visit his website at https://rwong.cs.illinois.edu.

Abstract: Processing-using-memory (PUM) is an emerging computing paradigm that offers orders-of-magnitude gains in performance and energy over its processing-near-memory (PNM) counterparts. PUM architectures leverage the inherent properties of the underlying memory devices to perform computation using the memory devices themselves, eliminating the need for off-chip data movement. Despite these benefits, many PUM architectures fall into the trap of specialization. At one end, highly specialized, application-specific PUM accelerators focus on a limited number of subkernels (e.g., matrix–vector multiplication), providing large benefits to a constrained number of applications. In contrast, general-purpose PUM architectures implement Boolean operators, leveraging their logical completeness across a wide range of applications, but trade off some of PUM’s benefits. Given the ever-evolving nature of applications, we revisit the classic trade-off of generality vs. peak benefits. Our first work towards this endeavor, ANVIL, leverages PUM inside a modern SSD, substantially improving the performance of name–value pair workloads, a generic and widely used abstraction across numerous applications in both hardware and software (e.g., arrays/dictionaries, key–value stores, relational databases, content-addressable memories). Through careful hardware–software co-design of optional optimization opportunities, ANVIL demonstrates that it is possible to develop a single PUM hardware design capable of specializing at runtime to many application domains, from key–value stores to transactional and analytical databases to graph processing, all while maintaining support for other types of SSD functionality.

Deepanjali Mishra

Carnegie Mellon University
"DeepFuse: Speculative Load Micro-Op Fusion"

Short Bio: Deepanjali Mishra is a Ph.D. student in the Electrical and Computer Engineering (ECE) department at Carnegie Mellon University (CMU), advised by Prof. Akshitha Sriraman. Her research focuses on designing efficient computer systems, bridging computer architecture and operating systems to enable high-performance and sustainable hyperscale data center solutions across the compute stack. Deepanjali’s work has been recognized with the 2024 Carnegie Institute of Technology Dean's Fellowship and the ACM-W Scholarship.

Abstract: Modern memory-bound applications, such as graph analytics and key-value stores, spend a large fraction of execution time waiting on memory. Even when computation happens close to memory, processors can issue multiple memory requests for the same data. These redundant requests consume bandwidth, energy, and backend resources, limiting overall performance. To address this, we propose I-Fuse, a technique that tracks load instructions frequently accessing the same cache block. When two loads are commonly co-accessed, I-Fuse speculatively merges them into a single fused memory request. Issuing one fused request instead of two reduces memory operations to the cache and backend, lowering cache traffic and reorder buffer pressure. I-Fuse is lightweight and can be applied alongside other memory-centric optimizations, such as PIM architectures. The design focuses on detecting redundant load patterns and eliminating unnecessary memory requests before they occur. We evaluate I-Fuse on a set of modern memory-bound applications. Across eight backend-bound applications, I-Fuse achieves an average speedup of 7.4%, with a maximum of 22.3%, approaching 79% of the ideal fusion upper bound. These results show that reducing redundant memory operations improves bandwidth utilization and efficiency. This work demonstrates that microarchitectural techniques for removing redundant memory requests complement memory-centric and PIM systems, and highlights that improving memory efficiency requires considering both the processor and near-memory computation.

Yilong Zhao

Shanghai Jiao Tong University
"UM-PIM: DRAM-based PIM with Uniform and Shared Memory Space"

Short Bio: Yilong Zhao is a Ph.D. candidate at Shanghai Jiao Tong University, advised by Professor Li Jiang. He is a member of the Advanced Computer Architecture Lab (ACA-IMPACT). His research focuses on processing-in-memory (PIM) architectures and AI accelerators. He received his M.S. and B.S. degrees from Shanghai Jiao Tong University in 2021 and 2018, respectively.

Abstract: DRAM-based Processing-in-Memory (PIM) mitigates the memory wall by integrating computing units into main memory for high-bandwidth data access. However, memory interleaving fragments contiguous regions visible to PIM units, forcing fine-grained offloading with high CPU coordination overhead. Existing solutions disable interleaving or isolate PIM memory to ensure efficiency, but sacrifice CPU bandwidth and create another system memory wall. We present UM-PIM, a unified memory architecture enabling zero-copy PIM offloading without compromising CPU performance. UM-PIM transparently co-locates interleaved CPU pages and contiguous PIM pages via: (i) dual-track memory management with independent allocation and translation; (ii) DRAM-side dynamic address remapping hardware; and (iii) communication-optimized APIs for efficient CPU access to distributed PIM data.

Livestream

🔴 Can't attend in person? Join us live!

The workshop will be livestreamed on YouTube. A replay will also be available afterwards.

▶️ Watch on YouTube

Call for Presentations

This workshop consists of invited talks on the general topic of memory-centric computing systems. There are a limited number of slots for invited talks. If you would like to deliver a talk on related topics, please contact us by filling out this form.

We invite abstract submissions related to (but not limited to) the following topics in the context of memory-centric computing systems:


Key Dates

Submission Deadline: February 16, 2026 (23:59 AoE)
Notification of Acceptance: February 18, 2026
Workshop Date: March 23, 2026 (Half Day)

Organizers

Ismail Emir Yuksel

ETH Zürich

Ismail Emir Yuksel is a 2nd-year PhD student in the SAFARI Research Group at ETH Zurich under the supervision of Prof. Onur Mutlu. His broader research interests are in computer architecture, processing-in-memory, and hardware security, focusing on understanding, enhancing, and exploiting the fundamental computational capabilities of modern DRAM architectures. His recent publications show that commodity DRAM chips, without any modification to the chip itself (only with modifications to the memory controller), can execute bulk-bitwise computation and data movement operations (including NAND, NOR, NOT, AND, OR, MAJority, multi-row copy, and initialization functions) in a reasonably robust manner.

F. Nisa Bostanci

ETH Zürich

F. Nisa Bostanci is a fourth-year PhD student in the SAFARI Research Group at ETH Zurich, under the supervision of Prof. Onur Mutlu. She is broadly interested in computer architecture, with a focus on the security, reliability, and safety (robustness) of memory systems; emerging memory and computation paradigms, including processing-in-memory (PIM) architectures; and effective, efficient solutions to robustness issues in modern and future systems. Her recent works uncover and mitigate new security vulnerabilities that emerge with the adoption of read-disturbance solutions and PIM architectures, to aid in designing robust future systems.

Ataberk Olgun

ETH Zürich

Ataberk Olgun is a senior PhD student at ETH Zurich, working with Prof. Onur Mutlu. His broad research interests include designing secure, high-performance, and energy-efficient DRAM architectures. As the RowHammer vulnerability worsens, it is increasingly difficult to design new DRAM architectures that satisfy all three characteristics. His current research focuses on (i) deeply understanding and (ii) efficiently mitigating the RowHammer vulnerability in modern systems.

Dr. Zhiheng Yue

ETH Zürich

Zhiheng Yue is a postdoctoral researcher at ETH Zurich, working with Prof. Onur Mutlu. He received the B.S. degree in electronic science and technology from the Beijing University of Posts and Telecommunications, Beijing, China, in 2017; the M.S. degree in electrical and computer engineering from the University of Michigan, Ann Arbor, MI, USA, in 2019; and the Ph.D. degree in electronic science and technology from Tsinghua University, Beijing, in 2024. His current research interests include deep learning, processing-in-memory, AI acceleration, 3D stacking, and very-large-scale integration (VLSI) design.

Dr. Mohammad Sadrosadati

ETH Zürich

Mohammad Sadrosadati received the B.Sc., M.Sc., and Ph.D. degrees in Computer Engineering from Sharif University of Technology, Tehran, Iran, in 2012, 2014, and 2019, respectively. He spent one year, from April 2017 to April 2018, as an academic guest at ETH Zurich, hosted by Prof. Onur Mutlu, during his Ph.D. program. He is currently a senior researcher and lecturer at ETH Zurich, working under the supervision of Prof. Onur Mutlu. His research interests are in the areas of heterogeneous computing, processing-in-memory, memory systems, and interconnection networks. For his achievements and impact in improving the energy efficiency of GPUs, he received the Khwarizmi Youth Award, one of the most prestigious awards in his field, as the first laureate in 2020.

Dr. Geraldo F. Oliveira

ETH Zürich

Geraldo F. Oliveira received a B.S. degree in computer science from the Federal University of Viçosa, Viçosa, Brazil, in 2015, an M.S. degree in computer science from the Federal University of Rio Grande do Sul, Porto Alegre, Brazil, in 2017, and a Ph.D. degree in computer science from ETH Zürich, Zürich, Switzerland, in 2025, advised by Prof. Onur Mutlu. His current research interests include system support for processing-in-memory and processing-using-memory architectures, data-centric accelerators for emerging applications, approximate computing, and emerging memory systems for consumer devices. He has several publications on these topics.

Professor Onur Mutlu

ETH Zürich

Onur Mutlu is a Professor of Computer Science at ETH Zurich. He previously held the William D. and Nancy W. Strecker Early Career Professorship at Carnegie Mellon University. His research interests are in computer architecture, computing systems, hardware security, memory & storage systems, and bioinformatics, with a major focus on designing fundamentally energy-efficient, high-performance, and robust computing systems. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held product, research and visiting positions at Intel Corporation, Advanced Micro Devices, VMware, Google, and Stanford University. He received various honors for his research, including the 2025 IEEE Computer Society Harry H. Goode Memorial Award "for seminal contributions to computer architecture research and practice, especially in memory systems." He is an ACM Fellow, IEEE Fellow, and an elected member of the Academy of Europe. He enjoys teaching, mentoring, and enabling & democratizing access to high-quality research and education. He has supervised 25 PhD graduates, many of whom received major dissertation awards, 18 postdoctoral trainees, and more than 70 Master's and Bachelor's students. His computer architecture and digital logic design course lectures and materials are freely available on YouTube and his research group makes a wide variety of artifacts freely available online. For more information, please see his webpage at https://people.inf.ethz.ch/omutlu/.


Past Editions


Event Location

Venue

The Landing Hotel

757 Casino Dr.
Pittsburgh, PA 15212
USA

The workshop will be held in conjunction with ASPLOS 2026.

For registration and accommodation information, please visit the ASPLOS 2026 website.


Contact

For questions about the workshop, please contact the organizers:

General Inquiries: ismail.yuksel@safari.ethz.ch

SAFARI Research Group: safari.ethz.ch