Fully Funded PhD Position in Large-Scale Digital AI Systems for Large Language Models (LLMs)
Summary of PhD Program:
The project aims to build large-scale AI accelerators for Large Language Models (LLMs), with a specific focus on Transformers. Diverse hardware optimization techniques will be explored, targeting a scalable tile-based Tensor Processing Unit (TPU) engine with massive on-chip global buffers for data-stationary execution. Systolic Array (SA) architectures with novel spatial dataflows will be employed at large scale for energy-efficient LLM training and inference. The project targets full-system prototypes in advanced CMOS/FinFET technology nodes (28 nm, 16 nm). These exact digital computing systems are later planned to host, or operate alongside, unconventional AI architectures based on emerging technologies.
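To give a flavour of the data-stationary systolic dataflow mentioned above, the following is a minimal Python sketch of a weight-stationary matrix multiplication, the core operation of Transformer layers. The function name, tile size, and blocking scheme are illustrative assumptions for this posting, not a description of the project's actual architecture.

    import numpy as np

    def systolic_matmul(A, B, tile=4):
        """Functional sketch of a weight-stationary systolic multiply of A (M x K) by B (K x N).

        A tile of B is held stationary in the processing-element (PE) grid while
        activations from A stream through; partial sums are accumulated and
        drained to a global buffer, as in TPU-style systolic dataflows.
        """
        M, K = A.shape
        K2, N = B.shape
        assert K == K2
        C = np.zeros((M, N))
        # Process the output in tile x tile blocks, one pass over the PE grid per block.
        for k0 in range(0, K, tile):
            for n0 in range(0, N, tile):
                W = B[k0:k0 + tile, n0:n0 + tile]   # weight tile held stationary in the PEs
                acc = np.zeros((M, W.shape[1]))
                for k in range(W.shape[0]):         # stream one activation column of A at a time
                    acc += np.outer(A[:, k0 + k], W[k, :])
                C[:, n0:n0 + W.shape[1]] += acc     # drain partial sums to the global buffer
        return C

    A = np.random.rand(8, 8)
    B = np.random.rand(8, 8)
    assert np.allclose(systolic_matmul(A, B), A @ B)

This models only the arithmetic and blocking, not cycle-level skewing or buffer sizing; those are exactly the design parameters the PhD project would investigate in hardware.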