Research Projects
Utilizing Concurrent Data Accesses for High-Performance Data Processing
In modern computing, data-intensive applications frequently hit memory bottlenecks because opportunities for concurrent data access go underexploited. This research investigates strategies that maximize the benefits of concurrent access in cache management, offering effective solutions to the “Memory Wall” problem. Each study introduces specialized frameworks, metrics, and adaptive techniques that optimize cache performance by exploiting access concurrency to hide and reduce memory latency.
Publications
- CARE – Concurrency-Aware Enhanced Cache Management
- CHROME – Holistic Cache Management with Reinforcement Learning
- APAC – An Accurate and Adaptive Prefetch Framework
- Premier – Concurrency-Aware Cache Pseudo-Partitioning
Each study in this research project addresses specific challenges in cache management by integrating concurrency-aware metrics and adaptive frameworks. Together, they showcase substantial performance gains achievable through the strategic management of concurrent memory accesses, offering effective solutions to the memory wall problem that hinders data-intensive applications in multi-core systems.
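To make the idea of a concurrency-aware metric concrete, here is a minimal illustrative sketch (an exposition aid, not the published CARE, CHROME, APAC, or Premier designs): a replacement policy that weights each block's miss cost by how many misses were outstanding when it was fetched. A miss that overlapped with many others had its latency largely hidden, so its block is a cheaper eviction candidate than one whose miss stalled the core alone.

```python
class ConcurrencyAwareCache:
    """Toy fully-associative cache that evicts the block whose original
    miss was cheapest, i.e. the one fetched under the most concurrency."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.blocks = {}  # tag -> estimated miss cost

    def access(self, tag, outstanding_misses):
        if tag in self.blocks:
            return True  # hit
        # Miss cost shrinks as concurrency rises: overlapped misses
        # amortize memory latency across one another.
        cost = 1.0 / max(1, outstanding_misses)
        if len(self.blocks) >= self.num_ways:
            victim = min(self.blocks, key=self.blocks.get)
            del self.blocks[victim]
        self.blocks[tag] = cost
        return False
```

Under this metric, a block brought in during a burst of eight concurrent misses is evicted before a block whose miss was isolated, even if the isolated miss is older.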
Efficient Hardware Design for AI Computing
As AI applications grow in complexity, they demand increasingly efficient hardware architectures to manage huge memory demands and high-dimensional computations. This research focuses on designing hardware solutions that not only accelerate AI computation but also optimize memory access efficiency—a critical factor in achieving high performance in data-intensive AI tasks.
Publications
This project underscores the importance of advanced dataflow and concurrency-aware memory optimizations that mitigate data movement overhead and accelerate core AI computations. These innovations contribute significantly to the efficiency and scalability of AI hardware, enabling broader adoption across diverse and demanding fields.
Leveraging Processing-in-Memory for Data-Intensive Applications
Data-intensive applications, particularly those involving complex graph computations, face significant performance bottlenecks due to heavy data movement and irregular memory access patterns. This research explores innovative processing-in-memory (PIM) architectures to tackle these challenges, reducing reliance on the CPU and minimizing data transfer overhead. Each study introduces a unique approach to enhance graph computing efficiency through PIM by leveraging memory locality and concurrency.
Publications
- CoPIM – Concurrency-Aware PIM Architecture for Efficient Graph Offloading
- AceMiner – Accelerating Graph Pattern Matching with In-DRAM Caching
Each study in this research project addresses specific obstacles in PIM for graph processing, integrating optimized caching, concurrency-aware workload partitioning, and adaptive offloading. Together, they demonstrate substantial efficiency gains, underscoring the potential of PIM to accelerate data-driven applications by alleviating the memory wall problem inherent in traditional architectures.
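The offloading decision at the heart of such systems can be sketched with a simple heuristic (a hypothetical illustration, not CoPIM's or AceMiner's published policy): run a graph kernel near memory when the data it touches is too large, or too rarely reused, for the host caches to exploit.

```python
def should_offload(degree, bytes_per_edge, expected_reuse, cache_capacity):
    """Decide whether one vertex's neighbor traversal goes to PIM.

    degree          -- number of neighbors the traversal streams through
    bytes_per_edge  -- storage per edge in the adjacency list
    expected_reuse  -- average accesses per cache line for this kernel
    cache_capacity  -- host cache budget in bytes (illustrative parameter)
    """
    footprint = degree * bytes_per_edge
    # High-degree vertices with little reuse stream data once; hauling
    # that data to the CPU wastes bandwidth, so execute it in memory.
    return footprint > cache_capacity or expected_reuse < 1.5
```

Real designs refine this with runtime feedback and concurrency-aware partitioning, but the trade-off — data volume and reuse versus data movement cost — is the same one the publications above optimize.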
Concurrency-Aware Memory Performance Modeling
Driven by the demands of big data applications and high-performance computing, modern computing requires precise memory performance models that effectively capture the impact of concurrency. This research delves into how concurrent data access influences memory system performance, introducing advanced modeling techniques to deepen our understanding of memory bottlenecks, particularly within multi-core systems where concurrency is a critical factor.
Publications
- A Generalized Model for Modern Hierarchical Memory Systems
- The Memory-Bounded Speedup Model and Its Impacts in Computing
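The memory-bounded speedup model (the Sun–Ni law, revisited in the second publication above) can be sketched numerically. With serial fraction f, p processors, and a function g(p) describing how the parallel workload grows as memory scales with p, it generalizes both Amdahl's law (g(p) = 1) and Gustafson's law (g(p) = p); the parameter values below are illustrative.

```python
def memory_bounded_speedup(f, p, g):
    """Sun-Ni memory-bounded speedup for serial fraction f on p processors,
    where g(p) scales the parallel work with available memory."""
    work = f + (1 - f) * g(p)        # total scaled work
    time = f + (1 - f) * g(p) / p    # parallel part split across p processors
    return work / time

f, p = 0.1, 16
amdahl    = memory_bounded_speedup(f, p, lambda p: 1)         # fixed-size: 6.4
gustafson = memory_bounded_speedup(f, p, lambda p: p)         # fixed-time: 14.5
bounded   = memory_bounded_speedup(f, p, lambda p: p ** 0.5)  # sublinear memory growth
```

Because realistic memory scaling is sublinear, the memory-bounded speedup falls between the Amdahl and Gustafson extremes, which is precisely why capturing concurrency and memory constraints together matters for the models in this project.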
This project highlights the importance of concurrency-aware memory models that capture concurrent access patterns within hierarchical memory systems, enabling the design of architectures optimized for performance and efficiency. Collectively, these studies demonstrate that data concurrency is as crucial as locality, offering new directions to overcome the memory wall problem—a persistent bottleneck in data-intensive computing.