Abstract
Compared to conventional artificial neural networks, spiking neural networks (SNNs) are more biologically plausible and require less computation, thanks to the event-driven nature of spiking neurons. However, the inherently asynchronous execution of SNNs poses great challenges to accelerating them on FPGAs.
In this work, we present a novel synchronous approach for rate-encoding-based SNNs, which is more hardware-friendly than conventional asynchronous approaches. We first evaluate quantitatively and prove mathematically that the proposed synchronous approach and asynchronous implementation alternatives of rate-encoding-based SNNs achieve similar inference accuracy, and we highlight the computational performance advantage of SyncNN over an asynchronous approach. We then design and implement the SyncNN framework to accelerate SNNs on Xilinx ARM-FPGA SoCs in a synchronous fashion. To improve computation and memory access efficiency, we quantize the network weights to 16-bit, 8-bit, and 4-bit fixed-point values using SNN-friendly quantization techniques. Moreover, to fully exploit the event-driven characteristics of SNNs, we encode only the activated neurons by recording their positions and corresponding numbers of spikes, instead of using the common binary encoding (i.e., 1 for a spike and 0 for no spike).
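The activated-neuron encoding described above can be sketched as follows. This is an illustrative sketch, not the paper's exact on-chip data layout: a dense vector of per-neuron spike counts is compacted into (position, count) pairs for the activated neurons only, so downstream compute skips silent neurons entirely.

```python
import numpy as np

def encode_active_neurons(spike_counts):
    """Compact a dense per-neuron spike-count vector into (position, count)
    pairs for activated neurons only (hypothetical helper for illustration)."""
    active = np.nonzero(spike_counts)[0]  # positions of neurons that fired
    return list(zip(active.tolist(), spike_counts[active].tolist()))

# A layer of 7 neurons where only neurons 1, 4, and 6 fired:
counts = np.array([0, 3, 0, 0, 1, 0, 2])
print(encode_active_neurons(counts))  # [(1, 3), (4, 1), (6, 2)]
```

Compared with a dense binary spike map, this representation makes both storage and the per-layer work proportional to the number of activated neurons rather than the layer size.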
For the encoded neurons, whose access patterns are dynamic and irregular, we design parameterized compute engines to accelerate their processing on the FPGA, exploring various parallelization strategies and memory access optimizations. Our experimental results on multiple Xilinx ARM-FPGA SoC boards demonstrate that SyncNN scales to multiple networks, such as LeNet, Network in Network, and VGG, on various datasets such as MNIST, SVHN, and CIFAR-10. SyncNN not only achieves competitive accuracy (99.6%) but also achieves state-of-the-art performance (13,086 frames per second) on the MNIST dataset. Finally, we compare the performance of SyncNN with conventional CNNs using Vitis AI and find that SyncNN can achieve similar accuracy and better performance than Vitis AI for image classification using small networks.
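A minimal sketch of the synchronous, rate-encoded layer evaluation that the encoded representation enables is shown below. All names here are hypothetical, and the thresholding rule (spike count ≈ potential divided by threshold, a common model in rate-based SNN conversion) is an assumption for illustration rather than the paper's exact neuron model; the key point is that the inner loop visits only activated inputs.

```python
import numpy as np

def sync_layer(weights, encoded_in, threshold=1.0):
    """One synchronous layer step for a rate-encoded SNN (illustrative sketch).

    weights    : (out_neurons, in_neurons) synaptic weight matrix
    encoded_in : list of (position, spike_count) pairs for activated inputs
    Returns per-output-neuron spike counts.
    """
    potential = np.zeros(weights.shape[0])
    # Event-driven accumulation: only activated inputs contribute work.
    for pos, n_spikes in encoded_in:
        potential += weights[:, pos] * n_spikes
    # Rate-based firing: number of output spikes grows with the potential.
    return np.floor(np.maximum(potential, 0.0) / threshold).astype(int)

w = np.array([[1.0, 0.5],
              [0.0, 2.0]])
print(sync_layer(w, [(0, 2), (1, 1)]))  # [2 2]
```

Because each layer is evaluated in one pass over its activated inputs, the schedule is deterministic and layer-by-layer, which is what makes this formulation more amenable to FPGA pipelining than event-queue-driven asynchronous execution.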
SyncNN: Evaluating and Accelerating Spiking Neural Networks on FPGAs