Hardware Acceleration for Embedded Keyword Spotting: Tutorial and Survey

Published: 18 October 2021

Abstract

In recent years, Keyword Spotting (KWS) has become a crucial human–machine interface for mobile devices, allowing users to interact more naturally with their gadgets through voice. Due to privacy, latency, and energy requirements, executing KWS tasks on the embedded device itself, rather than in the cloud, has attracted significant attention from the research community. However, the constraints of embedded systems, including limited energy, memory, and computational capacity, pose a real challenge for the embedded deployment of such interfaces. In this article, we guide the reader through the design of KWS systems. To support this overview, we extensively survey the approaches taken by the recent state-of-the-art (SotA) at the algorithmic, architectural, and circuit levels to enable KWS tasks on edge devices. A quantitative and qualitative comparison of relevant SotA hardware platforms is carried out, highlighting current design trends and pointing out future research directions for this technology.
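The memory and compute constraints the abstract refers to can be made concrete with a back-of-the-envelope budget. The sketch below is purely illustrative: the layer sizes, feature dimensions, and 8-bit weight assumption are hypothetical examples, not figures taken from the article or from any surveyed chip.

```python
# Rough resource budget for a tiny fully connected keyword-spotting
# network. All sizes here are hypothetical, chosen only to illustrate
# why parameter count and MACs/inference matter on embedded devices.

def fc_layer_cost(n_in, n_out):
    """Parameters (weights + biases) and MACs for one fully connected layer."""
    params = n_in * n_out + n_out
    macs = n_in * n_out
    return params, macs

def model_budget(layer_sizes, bytes_per_weight=1):
    """Total parameters, MACs per inference, and weight memory in bytes.

    bytes_per_weight=1 models aggressive 8-bit weight quantization,
    a common technique in the embedded KWS literature.
    """
    total_params = total_macs = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        p, m = fc_layer_cost(n_in, n_out)
        total_params += p
        total_macs += m
    return total_params, total_macs, total_params * bytes_per_weight

# Hypothetical model: 40 spectral features x 25 frames -> 128 -> 128 -> 12 keywords
params, macs, mem = model_budget([40 * 25, 128, 128, 12])
print(params, macs, mem)  # ~146k parameters, ~146k MACs, ~146 KB at 8 bits
```

Budgets like this explain why quantization and pruning dominate the algorithmic side of the surveyed work: at 32-bit floating point the same toy model would need roughly four times the weight memory, which is already beyond the on-chip SRAM of many always-on KWS accelerators.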



Published in

ACM Transactions on Embedded Computing Systems, Volume 20, Issue 6 (November 2021), 256 pages.
ISSN: 1539-9087
EISSN: 1558-3465
DOI: 10.1145/3485150
Editor: Tulika Mitra


          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 18 October 2021
          • Accepted: 1 July 2021
          • Received: 1 January 2021

          Qualifiers

          • research-article
          • Refereed
