About Me
I am a Research Assistant Professor in the College of Computing at Illinois Tech, where I work closely with Prof. Xian-He Sun at Gnosis Research Center. My research spans several areas in Computer Architecture and High-Performance Systems, with a particular focus on optimizing memory hierarchies, enhancing data-centric designs, leveraging AI to improve system performance, and enabling efficient algorithm-hardware co-design.
Email: xlu40@iit.edu
Click here to see the up-to-date version of my CV.
Prospective students: I am looking for self-motivated students who are interested in computer architecture and in working together at the Gnosis Research Center. If you are interested, please feel free to email me your resume and transcripts.
Research Interests
I conduct research in Computer Architecture and High-Performance Systems, focusing on:
- Memory performance modeling
- Cache/memory performance optimizations
- Data-centric design for computer architecture
- AI-assisted design for computer architecture (AI for Systems)
- Efficient algorithm-hardware co-design for AI (Systems for AI)
Updates
- Apr 2025: I am serving as a TPC member for ICCD 2025.
- Apr 2025: Our paper Pyramid: Accelerating LLM Inference with Cross-Level Processing-in-Memory has been accepted by IEEE Computer Architecture Letters! Congrats to all collaborators!
- Aug 2024: Our paper AceMiner: Accelerating Graph Pattern Matching using PIM with Optimized Cache System has been accepted by ICCD 2024! Congrats to all collaborators!
- May 2024: I received the Best Student Paper Award for the 2023–2024 academic year from the Illinois Tech Computer Science Awards Committee!
- May 2024: I was awarded a DAC 61 PhD Forum Travel Grant.
- May 2024: My dissertation Utilizing Concurrent Data Accesses for Data-Driven and AI Applications was accepted for the Ph.D. Forum at DAC 2024! Cheers!
- Mar 2024: I won the Best Poster Award at the 2024 College of Computing Poster Session at Illinois Tech!
- Mar 2024: I was awarded an ASPLOS Student Travel Grant.
- Feb 2024: Our paper ACES: Accelerating Sparse Matrix Multiplication with Adaptive Execution Flow and Concurrency-Aware Cache Optimizations was accepted by ASPLOS 2024! Congrats to all collaborators! (Acceptance rate: 11.5%)