Book, English, 118 pages, format (W × H): 160 mm × 241 mm, weight: 402 g
Series: Springer Theses
ISBN: 978-981-97-3476-4
Publisher: Springer Nature Singapore
Neural network (NN) algorithms are driving the rapid development of modern artificial intelligence (AI). Energy-efficient NN processors have become an urgent requirement for practical NN applications on widespread low-power AI devices. To address this challenge, this dissertation investigates pure-digital and digital computing-in-memory (digital-CIM) solutions through four major studies.
For pure-digital NN processors, this book analyses the insufficient data reuse of conventional architectures and proposes a kernel-optimized NN processor. This dissertation also adopts a structural frequency-domain compression algorithm, named CirCNN. The fabricated processor achieves 8.1×/4.2× higher area/energy efficiency than the state-of-the-art NN processor. For digital-CIM NN processors, this dissertation combines the flexibility of digital circuits with the high energy efficiency of CIM. The fabricated CIM processor is the first to validate the sparsity improvement of the CIM architecture. This dissertation further designs a processor that is the first to address the weight-updating problem on the CIM architecture.
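As background for the compression approach named above: CirCNN-style structural compression represents each weight block as a circulant matrix, so only the first column of a block is stored and the matrix-vector product can be computed with FFTs. The sketch below illustrates that general idea only; it is not code from the book, and the function names are illustrative.

```python
import numpy as np

# A circulant matrix is fully defined by its first column c, so a
# CirCNN-style scheme stores k values per k x k block instead of k^2.
def circulant_matvec_fft(c, x):
    # O(k log k) product via circular convolution: C @ x = IFFT(FFT(c) * FFT(x)).
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_matvec_naive(c, x):
    # Build the dense circulant matrix explicitly, for comparison only.
    k = len(c)
    C = np.array([[c[(i - j) % k] for j in range(k)] for i in range(k)])
    return C @ x

rng = np.random.default_rng(0)
c = rng.standard_normal(8)   # stored block parameters (first column)
x = rng.standard_normal(8)   # input activations
assert np.allclose(circulant_matvec_fft(c, x), circulant_matvec_naive(c, x))
```

The FFT route is what makes a frequency-domain hardware implementation attractive: storage per block drops from quadratic to linear, and the multiply count drops accordingly.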
This dissertation demonstrates that combining digital and CIM circuits is a promising technical route to energy-efficient NN processors, which can promote the large-scale application of low-power AI devices.
Target audience
Research
Authors/Editors
Subject areas
Further information & material
Introduction.- Basis and research status of neural network processor.- Neural network processor for specific kernel optimized data reuse.- Neural network processor with frequency domain compression algorithm optimization.- Neural network processor combining digital and computing in memory architecture.- Digital computing in memory neural network processor supporting large scale models.- Conclusion and prospect.