Research Projects

Utilizing Concurrent Data Accesses for High-Performance Data Processing

In modern computing, data-intensive applications frequently face memory bottlenecks because opportunities for concurrent data access go underutilized. This research investigates strategies to maximize the benefits of concurrent access in cache management, offering solutions to the “Memory Wall” problem. Each study introduces specialized frameworks, metrics, and adaptive techniques that optimize cache performance by leveraging data access concurrency to reduce effective data access latency.
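To illustrate why concurrency among data accesses matters, the C sketch below is purely illustrative; the array size, the prefetch distance, and the index function are assumptions, not taken from the publications, and __builtin_prefetch is the GCC/Clang intrinsic. It contrasts a dependent pointer chase, whose cache misses must be served one after another, with an independent streaming sum, whose misses an out-of-order core can overlap, optionally helped by software prefetching.

#include <stdio.h>
#include <stdlib.h>

#define N (1u << 20)

/* Dependent pointer chase: the address of each load depends on the previous
 * load's result, so cache misses are served one at a time. */
static size_t chase(const size_t *next, size_t steps) {
    size_t i = 0;
    for (size_t s = 0; s < steps; s++)
        i = next[i];
    return i;
}

/* Independent streaming sum: consecutive loads carry no data dependence, so
 * the hardware can keep many misses in flight at once; the prefetch hint
 * (a distance of 64 elements is an assumption) raises that concurrency. */
static long stream_sum(const long *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64]);
        sum += a[i];
    }
    return sum;
}

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    long   *a    = malloc(N * sizeof *a);
    if (!next || !a) return 1;
    for (size_t i = 0; i < N; i++) {
        next[i] = (i * 2654435761u + 12345u) % N;  /* affine map: a permutation of 0..N-1 */
        a[i] = (long)i;
    }
    printf("chase ends at %zu, streaming sum = %ld\n", chase(next, N), stream_sum(a, N));
    free(next);
    free(a);
    return 0;
}

Both loops touch the same amount of data, but only the second exposes enough independent accesses for the memory system to service several misses concurrently; exploiting that kind of opportunity systematically in cache management is the focus of this research.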

Publications


Efficient Hardware Design for AI Computing

As AI applications grow in complexity, they require increasingly efficient hardware architectures to manage large memory footprints and high-dimensional computations. This research focuses on designing hardware solutions that not only accelerate AI computation but also optimize memory access efficiency, a critical factor in achieving high performance in data-intensive AI tasks.
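To make the memory-efficiency point concrete, the short C sketch below performs a roofline-style, back-of-the-envelope calculation; the peak compute and bandwidth figures are assumed round numbers, not any particular accelerator's specification. A kernel is limited by memory rather than compute whenever its arithmetic intensity, in FLOPs per byte moved, falls below the machine balance.

#include <stdio.h>

/* Roofline-style estimate (all hardware numbers are assumptions).
 * A kernel is memory-bound when its arithmetic intensity (FLOPs per byte
 * moved from DRAM) falls below the machine balance = peak FLOP/s / peak B/s. */
int main(void) {
    double peak_flops = 100e12;   /* assumed accelerator peak: 100 TFLOP/s */
    double peak_bw    = 2e12;     /* assumed memory bandwidth: 2 TB/s      */
    double balance    = peak_flops / peak_bw;   /* FLOPs needed per byte    */

    /* GEMM C = A*B with M=N=K=n and 2-byte (fp16) elements.
     * FLOPs = 2*n^3; a minimal traffic estimate reads A and B once, writes C. */
    for (int n = 64; n <= 4096; n *= 4) {
        double flops = 2.0 * n * n * n;
        double bytes = 3.0 * n * n * 2.0;
        double ai    = flops / bytes;
        printf("n=%4d  arithmetic intensity=%7.1f FLOP/B  -> %s\n",
               n, ai, ai < balance ? "memory-bound" : "compute-bound");
    }
    printf("machine balance (assumed) = %.1f FLOP/B\n", balance);
    return 0;
}

Many AI operators, such as element-wise activations, move on the order of one FLOP per byte and therefore sit far below any realistic balance point, which is why memory access efficiency rather than raw FLOP throughput often decides end-to-end performance.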

Publications


Leveraging Processing-in-Memory for Data-Intensive Applications

Data-intensive applications, particularly those involving complex graph computations, face significant performance bottlenecks due to heavy data movement and irregular memory access patterns. This research explores processing-in-memory (PIM) architectures that tackle these challenges by reducing reliance on the CPU and minimizing data transfer overhead. Each study introduces a distinct approach to improving graph computing efficiency through PIM by exploiting memory locality and concurrency.
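As a rough sketch of the idea, and not of any specific architecture from the publications, the C program below simulates the PIM-style division of labor: a toy CSR graph is split into contiguous vertex ranges, each "bank" performs one neighbor-accumulation step (the core of SpMV- or PageRank-style kernels) over its own range, and the host merely dispatches the ranges instead of streaming every edge through its caches. The bank count, graph, and kernel are all illustrative assumptions.

#include <stdio.h>

/* Toy CSR graph and a simulated PIM layout: each "bank" owns a contiguous
 * slice of vertices and runs one neighbor-sum step over its own slice only,
 * instead of the host CPU walking the whole graph.
 * Everything here is illustrative: 2 banks, 6 vertices, hand-written edges. */

#define V 6
#define BANKS 2

static const int row_ptr[V + 1] = {0, 2, 4, 5, 7, 8, 10};        /* CSR offsets */
static const int col_idx[10]    = {1, 2, 0, 3, 4, 1, 5, 2, 0, 3}; /* neighbors   */

/* One "PIM unit": accumulate neighbor values for vertices [lo, hi). */
static void pim_bank_step(int lo, int hi, const double *val, double *out) {
    for (int v = lo; v < hi; v++) {
        double acc = 0.0;
        for (int e = row_ptr[v]; e < row_ptr[v + 1]; e++)
            acc += val[col_idx[e]];     /* may still touch vertices owned by other banks */
        out[v] = acc;
    }
}

int main(void) {
    double val[V] = {1, 1, 1, 1, 1, 1};
    double out[V] = {0};
    int per_bank = (V + BANKS - 1) / BANKS;

    /* The host only dispatches ranges; each bank works on its local slice. */
    for (int b = 0; b < BANKS; b++) {
        int lo = b * per_bank;
        int hi = lo + per_bank > V ? V : lo + per_bank;
        pim_bank_step(lo, hi, val, out);
    }
    for (int v = 0; v < V; v++)
        printf("vertex %d: neighbor sum = %.0f\n", v, out[v]);
    return 0;
}

In a real PIM design the per-bank loops would execute inside the memory devices themselves and could proceed concurrently, while accesses to neighbors owned by other banks (noted in the comment) are exactly the irregular, locality-breaking accesses this research seeks to minimize.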

Publications


Concurrency-Aware Memory Performance Modeling

Driven by the demands of big data applications and high-performance computing, modern systems require precise memory performance models that capture the impact of concurrency. This research examines how concurrent data access influences memory system performance, introducing modeling techniques that deepen our understanding of memory bottlenecks, particularly in multi-core systems where concurrency is a critical factor.
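One simple way to see what capturing the impact of concurrency means is to extend the classical average memory access time (AMAT) so that each latency component is divided by the number of accesses that overlap during that phase; the form below is an illustrative simplification, not necessarily the exact metric used in the publications.

\mathrm{AMAT} = H + \mathrm{MR} \times \mathrm{AMP}

\mathrm{AMAT}_{\text{concurrent}} = \frac{H}{C_H} + \mathrm{MR} \times \frac{\mathrm{AMP}}{C_M}

Here H is the hit time, MR the miss rate, AMP the average miss penalty, and C_H and C_M the average numbers of overlapping accesses during hit and miss phases. For example, with H = 1 cycle, MR = 0.1, and AMP = 100 cycles, the classical model gives 11 cycles per access; if hits are not overlapped (C_H = 1) but four misses are outstanding on average (C_M = 4), the apparent cost drops to 1 + 0.1 × 100 / 4 = 3.5 cycles, showing how concurrency, not latency alone, determines observed memory performance.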

Publications