Advanced Topics in Very-Large-Scale Integration (VLSI): Deep Learning
Deep learning has emerged as an important technique for solving critical problems across a diverse set of applications. However, designing VLSI architectures for efficient learning systems remains an open question and a very active research area. Topics include:
(1) a detailed overview of the computations, data patterns, and neural network architectures used in deep learning, with the goal of understanding the operations we want to accelerate in hardware;
(2) emerging applications of deep learning (e.g., augmented reality, mixed reality);
(3) advanced VLSI topics for designing efficient hardware acceleration of deep learning systems, with a focus on VLSI design trade-offs, techniques, and optimizations (dataflow design, flexible interconnects, buffering and memory architectures, power estimation and reduction strategies, data precision, and exploiting data sparsity);
(4) techniques and optimizations for mapping deep learning workloads to general-purpose (CPU, GPU) and custom hardware; and
(5) exploration of the latest research in both academic and commercial deep learning hardware acceleration.
By the end of the course, students will have a solid overview of this exciting and emerging field, which offers many open research opportunities.
Students will complete hands-on assignments to build a thorough understanding of deep learning computational workloads (e.g., backpropagation, vectorized implementations). Students will also be expected to participate in class discussions and complete a series of written and oral paper reviews. There will be a midterm and a final exam.
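As a flavor of the workloads the assignments cover, the sketch below (illustrative only; not course material, and the network shape and hyperparameters are arbitrary choices) shows a fully vectorized forward and backward pass for a small two-layer network in NumPy. Every step reduces to batched matrix multiplies and elementwise operations, which is precisely why these workloads map well onto hardware accelerators.

```python
import numpy as np

# Illustrative sketch: a 2-layer MLP (8 -> 16 -> 1) trained on toy data with
# vectorized backpropagation. All shapes and hyperparameters are arbitrary.
rng = np.random.default_rng(0)

X = rng.standard_normal((64, 8))   # 64 samples, 8 features
y = rng.standard_normal((64, 1))   # scalar regression target

W1 = rng.standard_normal((8, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros(1)

lr = 0.01
losses = []
for step in range(100):
    # Forward pass: GEMMs plus elementwise ops over the whole batch at once.
    h_pre = X @ W1 + b1            # (64, 16)
    h = np.maximum(h_pre, 0.0)     # ReLU
    y_hat = h @ W2 + b2            # (64, 1)
    losses.append(np.mean((y_hat - y) ** 2))

    # Backward pass (backpropagation), also fully vectorized.
    d_yhat = 2.0 * (y_hat - y) / len(X)   # dLoss/dy_hat for MSE
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    dh = d_yhat @ W2.T
    dh_pre = dh * (h_pre > 0)             # ReLU gradient mask
    dW1 = X.T @ dh_pre
    db1 = dh_pre.sum(axis=0)

    # Plain SGD update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Note that the per-sample loops one might write naively are entirely absent: the batch dimension is carried through every matrix product, which is the "vectorized implementation" style the assignments emphasize.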
Required Skills: Students must have completed ELEC 402 or an equivalent fourth-year VLSI course. Knowledge of deep learning and Python is useful but not required.
Instructors: Scott Chin and Brad Quinton