Artificial Intelligence Hardware Design. Albert Chun-Chen Liu
Table of Contents
Cover
Preface
1 Introduction
   1.1 Development History
   1.2 Neural Network Models
   1.3 Neural Network Classification
   1.4 Neural Network Framework
   1.5 Neural Network Comparison
   Exercise
   References
2 Deep Learning
   2.1 Neural Network Layer
   2.2 Deep Learning Challenges
   Exercise
   References
3 Parallel Architecture
   3.1 Intel Central Processing Unit (CPU)
   3.2 NVIDIA Graphics Processing Unit (GPU)
   3.3 NVIDIA Deep Learning Accelerator (NVDLA)
   3.4 Google Tensor Processing Unit (TPU)
   3.5 Microsoft Catapult Fabric Accelerator
   Exercise
   References
4 Streaming Graph Theory
   4.1 Blaize Graph Streaming Processor
   4.2 Graphcore Intelligence Processing Unit
   Exercise
   References
5 Convolution Optimization
   5.1 Deep Convolutional Neural Network Accelerator
   5.2 Eyeriss Accelerator
   Exercise
   References
6 In‐Memory Computation
   6.1 Neurocube Architecture
   6.2 Tetris Accelerator
   6.3 NeuroStream Accelerator
   Exercise
   References
7 Near‐Memory Architecture
   7.1 DaDianNao Supercomputer
   7.2 Cnvlutin Accelerator
   Exercise
   References
8 Network Sparsity
   8.1 Energy Efficient Inference Engine (EIE)
   8.2 Cambricon‐X Accelerator
   8.3 SCNN Accelerator
   8.4 SeerNet Accelerator
   Exercise
   References
9 3D Neural Processing
   9.1 3D Integrated Circuit Architecture
   9.2 Power Distribution Network
   9.3 3D Network Bridge
   9.4 Power‐Saving Techniques
   Exercise
   References
Appendix A: Neural Network Topology
Index
List of Tables
Chapter 1
   Table 1.1 Neural network framework.
Chapter 2
   Table 2.1 AlexNet neural network model.
Chapter 3
   Table 3.1 Intel Xeon family comparison.
   Table 3.2 NVIDIA GPU architecture comparison.
   Table 3.3 TPU v1 applications.
   Table 3.4 Tensor processing unit comparison.
Chapter 5
   Table 5.1 Efficiency loss comparison.
   Table 5.2 DNN accelerator performance comparison.
   Table