Overview

Our EIC Lab is dedicated to advancing the practicality of artificial intelligence (AI), bringing ubiquitous intelligence to everyday platforms and numerous real-world applications to improve quality of life. To achieve this, our research focuses on two core missions:

1. Enhancing AI Efficiency: In the era of rapid technological advancements, modern AI models boast unprecedented power but are also burdened with steep memory and computational costs. This creates a significant disparity, as the growing demands of AI surpass the capabilities of typical computing platforms. Our research tackles this critical gap by innovating efficient AI algorithms and developing advanced AI accelerators. These efforts are crucial for enabling immersive and seamless AI experiences previously deemed impossible, integrating cutting-edge intelligence into daily life applications more extensively, and contributing to green AI initiatives and sustainability.


2. Developing AI Agents: As AI technology rapidly advances, recent models have demonstrated astonishing capabilities in understanding and interacting with complex environments. In parallel, the complexity and diversity of real-world tasks increasingly demand AI systems capable of autonomously reasoning, planning, and acting. In response, our lab has embarked on developing AI-powered intelligent agents that can effectively and autonomously address a wide array of real-world challenges, particularly in enhancing AI efficiency, such as in hardware design. Such AI agents promise not only to streamline routine tasks but also to tackle more complex problems, significantly impacting various sectors and improving the quality of human life.

Naturally, we further strive to explore and leverage the synergy between efficient AI and AI agents: We leverage efficient AI techniques to make essential components of AI agents (e.g., large language models (LLMs) and 3D intelligence) more accessible and interactive, while also developing AI agents to automate the design of efficient AI algorithms and hardware. We believe this bidirectional optimization holds great promise for democratizing AI and integrating it seamlessly into daily life. To provide concrete examples of our recent progress, we highlight some representative work below:

  • Mission 1: Efficient AI
  • To advance the achievable accuracy-efficiency frontier of AI models, we develop both efficient AI algorithms and accelerators, as well as their co-design solutions. Representative workloads that we focus on are LLMs, Vision-Language Models (VLMs), and 3D intelligence (such as NeRF and Gaussian Splatting). Recent exemplary work includes Fusion-3D at MICRO'24 (best paper, ranked No. 1), Omni-Recon at ECCV'24 (oral paper, ranked top 2%), ShiftAddLLM at NeurIPS'24, AmoebaLLM at NeurIPS'24, Instant-3D at ISCA'23, HW-NAS-Bench at ICLR'21 (spotlight paper, ranked top 3%), Cyclic Precision Training at ICLR'21 (spotlight paper, ranked top 3%), and Early-Bird Tickets at ICLR'20 (spotlight paper, ranked top 3%).

  • Mission 2: AI Agents
  • Our recent focus in this emerging area is developing AI agents that can optimize and automate hardware design or algorithm-hardware co-exploration, facilitating the rapid development of efficient AI solutions. Recent exemplary work includes MG-Verilog at LAD'24 (best paper award), GPT4AIGChip at ICCAD'23, Auto-NBA at ICML'21, G-CoS at ICCAD'21, SACoD at ICCV'21, and AutoAI2C at TCAD'24.

  • System Integration and Demonstration
  • We validate our techniques on real-world applications deployed on commercial hardware and/or custom FPGA/ASIC accelerators. Notable recognitions include First Place in the ACM/IEEE TinyML Design Contest at ICCAD'22 and EyeCoD being selected as one of IEEE Micro's Top Picks of 2023.

    i-FlatCam (ASIC):
    Won 1st Place in the Best University Demo at DAC'22
    A 253 FPS, 91.49 µJ/frame ultra-compact intelligent lensless camera

    Gen-NeRF (FPGA):
    Won 2nd Place in the Best University Demo at DAC'23
    Real-time, low-power, and generalizable scene rendering and segmentation based on NeRFs with interactive view control