Abstract
We present a compiler for Loihi, a novel manycore neuromorphic processor that features a programmable, on-chip learning engine for training and executing spiking neural networks (SNNs). An SNN is distinguished from other neural networks in that (1) its independent computing units, or "neurons", communicate with others only through spike messages; and (2) each neuron evaluates local learning rules, which are functions of spike arrival and departure timings, to modify its local state. The collective neuronal state dynamics of an SNN form a nonlinear dynamical system that can be cast as an unconventional model of computation. To realize such an SNN on Loihi requires each constituent neuron to locally store and independently update its own spike timing information. However, each Loihi core has limited resources for this purpose and these must be shared by neurons assigned to the same core. In this work, we present a compiler for Loihi that maps the neurons of an SNN onto and across Loihi's cores efficiently. We show that a poor neuron-to-core mapping can incur significant energy costs and address this with a greedy algorithm that compiles SNNs onto Loihi in a power-efficient manner. In so doing, we highlight the need for further development of compilers for this new, emerging class of architectures.
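The abstract describes a greedy algorithm for assigning neurons to cores so that spike traffic between cores (and the energy it costs) stays low, while respecting each core's limited per-neuron resources. The paper's actual algorithm is not reproduced here; the following is only a minimal sketch of one plausible greedy heuristic of this kind, with hypothetical names (`greedy_map`, `edges`, `rates`) and the simplifying assumptions that each neuron consumes one unit of core capacity and that total capacity suffices for all neurons:

```python
def greedy_map(num_neurons, num_cores, capacity, edges, rates):
    """Greedily assign neurons to cores, preferring the core that already
    holds the most spike traffic to or from the neuron, subject to a
    per-core capacity limit. Illustrative sketch only; assumes
    num_neurons <= num_cores * capacity."""
    assignment = {}            # neuron -> core
    load = [0] * num_cores     # neurons placed on each core
    # neighbors[n] = list of (other_neuron, weight), where weight is the
    # estimated spike rate across that synapse (default 1.0 if unknown).
    neighbors = {n: [] for n in range(num_neurons)}
    for (u, v) in edges:
        w = rates.get((u, v), 1.0)
        neighbors[u].append((v, w))
        neighbors[v].append((u, w))
    # Place the most communication-heavy neurons first, so they get
    # first pick of cores.
    order = sorted(range(num_neurons),
                   key=lambda n: -sum(w for _, w in neighbors[n]))
    for n in order:
        # Spike traffic already co-located on each candidate core.
        gain = [0.0] * num_cores
        for m, w in neighbors[n]:
            if m in assignment:
                gain[assignment[m]] += w
        # Pick the feasible core with the largest co-located traffic,
        # breaking ties toward the least-loaded core for balance.
        best = max((c for c in range(num_cores) if load[c] < capacity),
                   key=lambda c: (gain[c], -load[c]))
        assignment[n] = best
        load[best] += 1
    return assignment
```

Under this heuristic, heavily connected neurons tend to land on the same core, so their spikes never cross the on-chip interconnect; the capacity check models the shared per-core resources (e.g., spike-timing state) that the abstract highlights.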
Mapping spiking neural networks onto a manycore neuromorphic architecture
Published in PLDI 2018: Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation.