Efficient Hardware Realization of Convolutional Neural Networks using Intra-Kernel Regular Pruning
Convolutional neural networks (CNNs) have proven their success in a wide range of applications. While CNNs boast remarkable performance, they require significant computational and memory resources to operate. As research strives towards higher classification accuracy, CNN topologies have increased in depth, complexity, and size. In response, algorithmic-level optimizations have been proposed to reduce the size of CNNs while retaining classification accuracy. While these advances promise savings in theory, they often underperform in practice, especially when adopted into hardware. In order to achieve practical savings, algorithmic changes must be considered from the perspective of hardware, thus necessitating a software-hardware codesign philosophy. We propose an Intra-Kernel Regular (IKR) pruning scheme to reduce the size and computational complexity of CNNs by removing redundant weights at a fine-grained level without loss in classification accuracy. Unlike other pruning methods such as fine-grained pruning, IKR pruning maintains regular kernel structures and employs data compression techniques that translate well into hardware. At the hardware level, we propose an FPGA design framework targeting IKR-pruned CNNs. The organizational structure of the design enables potential for high parallelism and efficient utilization of on-chip resources. Experimental results in software demonstrate up to 10× reduction in weights and 7× reduction in computation at a cost of less than 1% degradation in accuracy versus the un-pruned case. Evaluation of the accelerator indicates computational speeds up to 77.7 GOP/S (effectively 403 GOP/S), with each DSP effectively performing 0.53 GOP/S.
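The core idea of intra-kernel pruning with a regular structure can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy version, not the thesis's actual IKR algorithm: it selects intra-kernel weight positions by aggregate magnitude and applies the same sparsity pattern to every kernel, so the surviving weights form a regular structure that maps cleanly onto hardware indexing.

```python
import numpy as np

def ikr_prune(kernels, keep_ratio=0.25):
    """Toy intra-kernel pruning with a shared (regular) sparsity pattern.

    kernels: array of shape (num_kernels, k*k), i.e. flattened kernels.
    A single index pattern is chosen from the aggregate weight magnitudes
    and applied to every kernel, so all kernels share one regular structure.
    (Illustrative only; the actual IKR scheme is defined in the thesis.)
    """
    num_keep = max(1, int(round(keep_ratio * kernels.shape[1])))
    # Rank intra-kernel positions by total magnitude across all kernels
    importance = np.abs(kernels).sum(axis=0)
    keep_idx = np.sort(np.argsort(importance)[-num_keep:])
    mask = np.zeros(kernels.shape[1], dtype=bool)
    mask[keep_idx] = True
    return kernels * mask, keep_idx

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 9))            # 16 kernels of size 3x3
pruned, kept = ikr_prune(w, keep_ratio=1/3)
print(kept.size)                            # 3 positions kept per kernel
print(np.count_nonzero(pruned, axis=1))     # every kernel keeps the same count
```

Because every kernel shares one index set, only that set (not per-weight coordinates) needs to be stored, which is why regular pruning compresses better in hardware than unstructured fine-grained pruning.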
Cite this version of the work
Maurice Yang (2019). Efficient Hardware Realization of Convolutional Neural Networks using Intra-Kernel Regular Pruning. UWSpace. http://hdl.handle.net/10012/14716