E-book, English, 146 pages
Krishna / Kwon / Parashar, Data Orchestration in Deep Learning Accelerators
1st edition, 2022
Series: Synthesis Lectures on Computer Architecture
ISBN: 978-3-031-01767-4
Publisher: Springer International Publishing
Format: PDF
Copy protection: 1 - PDF Watermark
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations, which necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; DNN accelerators therefore require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
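To make the data-reuse argument concrete, the sketch below (not taken from the book) shows one generic orchestration idea in the spirit of the dataflows it covers: an output-stationary tiled matrix multiply in which each output tile stays resident in a simulated on-chip buffer while operand tiles stream past it. The tile size and the DRAM-read counter are hypothetical illustration devices, not an interface from the book.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Output-stationary tiled matmul. Each (tile x tile) output block stays
    resident on-chip while tiles of A and B stream in, so every fetched
    operand element is reused ~`tile` times before it is evicted."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    dram_reads = 0  # operand elements fetched from (simulated) external DRAM
    for i0 in range(0, M, tile):
        for j0 in range(0, N, tile):
            # Accumulator tile is "stationary": it never leaves the buffer
            # until it is fully computed and written back once.
            acc = np.zeros((min(tile, M - i0), min(tile, N - j0)), dtype=A.dtype)
            for k0 in range(0, K, tile):
                a = A[i0:i0 + tile, k0:k0 + tile]  # fetch one input tile
                b = B[k0:k0 + tile, j0:j0 + tile]  # fetch one weight tile
                dram_reads += a.size + b.size
                acc += a @ b                       # reuse both tiles on-chip
            C[i0:i0 + tile, j0:j0 + tile] = acc
    return C, dram_reads

A = np.random.rand(128, 128).astype(np.float32)
B = np.random.rand(128, 128).astype(np.float32)
C, reads = tiled_matmul(A, B, tile=32)
assert np.allclose(C, A @ B, atol=1e-3)
# A naive inner-product loop reads 2*K operands per output element:
# 128*128*2*128 = 4,194,304 reads; tiling cuts this by a factor of `tile`.
print(f"operand elements read from DRAM: {reads}")  # 131,072 here
```

The same loop-nest-and-tiling view underlies the dataflow taxonomies (weight-, input-, output-, and row-stationary) and the automated design-space exploration that the lecture discusses.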