ATCN: Resource-efficient Processing of Time Series on Edge

Published: 08 October 2022

Abstract

This article presents a scalable deep learning model called Agile Temporal Convolutional Network (ATCN) for fast, highly accurate classification and prediction of time series in resource-constrained embedded systems. ATCN is a family of compact networks with formalized hyperparameters that enable application-specific adjustments to the model architecture. It is primarily designed for embedded edge devices with very limited compute performance and memory, such as wearable biomedical devices and real-time reliability monitoring systems. ATCN makes fundamental improvements over mainstream temporal convolutional networks, including residual connections that increase network depth and accuracy, and depthwise-separable convolutions that reduce the model's computational complexity. As part of the present work, two ATCN model families, T0 and T1, are presented and evaluated on two classes of embedded processors: the Arm Cortex-M7 and Cortex-A57. An evaluation of the ATCN models against the best-in-class InceptionTime and MiniRocket shows that ATCN largely maintains accuracy while improving execution time across a broad range of embedded and cyber-physical applications that demand real-time processing on the embedded edge. Moreover, in contrast to existing solutions, ATCN is the first deep learning based time series classifier that can run bare-metal on embedded microcontrollers (Cortex-M7) with limited computational performance and memory capacity while delivering state-of-the-art accuracy.
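The complexity reduction that the abstract credits to depthwise-separable convolution can be illustrated with a back-of-the-envelope weight count. The sketch below is not from the paper; the channel widths and kernel size are hypothetical, chosen only to show the typical order of savings:

```python
# Compare the weight count of a standard 1-D convolution against a
# depthwise-separable one (depthwise filtering + pointwise 1x1 channel mixing).

def standard_conv1d_params(c_in: int, c_out: int, k: int) -> int:
    """Standard conv: one k-tap filter per (input, output) channel pair."""
    return c_in * c_out * k

def separable_conv1d_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise conv (k taps per input channel) plus pointwise 1x1 mixing."""
    return c_in * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 64, 64, 7          # hypothetical layer sizes
    std = standard_conv1d_params(c_in, c_out, k)   # 64*64*7  = 28672
    sep = separable_conv1d_params(c_in, c_out, k)  # 64*7 + 64*64 = 4544
    print(std, sep, round(std / sep, 1))           # roughly 6.3x fewer weights
```

The ratio works out to about `1/c_out + 1/k` of the standard cost, which is why separable convolutions are attractive on memory-limited microcontrollers such as the Cortex-M7.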

REFERENCES

  [1] M. Baharani, M. Biglarbegian, B. Parkhideh, and H. Tabkhi. 2019. Real-time deep learning at the edge for scalable reliability modeling of Si-MOSFET power electronics converters. IEEE Internet of Things Journal 6, 5 (2019), 7375–7385.
  [2] Mohammadreza Baharani, Ushma Sunil, Kaustubh Manohar, Steven Furgurson, and Hamed Tabkhi. 2021. DeepDive: An integrative algorithm/architecture co-design for deep separable convolutional neural networks. In Proceedings of the 2021 Great Lakes Symposium on VLSI (GLSVLSI'21). ACM, New York, NY, 247–252.
  [3] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. CoRR abs/1803.01271 (2018). http://arxiv.org/abs/1803.01271
  [4] M. Biglarbegian, M. Baharani, N. Kim, H. Tabkhi, and B. Parkhideh. 2018. Scalable reliability monitoring of GaN power converter through recurrent neural networks. In 2018 IEEE Energy Conversion Congress and Exposition (ECCE'18). 7271–7277.
  [5] M. Carreras, G. Deriu, L. Raffo, L. Benini, and P. Meloni. 2020. Optimizing temporal convolutional network inference on FPGA-based accelerators. IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2020).
  [6] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop.
  [7] Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, and Eamonn Keogh. 2019. The UCR time series archive. IEEE/CAA Journal of Automatica Sinica 6, 6 (2019), 1293–1305.
  [8] Angus Dempster, François Petitjean, and Geoffrey I. Webb. 2020. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. Data Mining and Knowledge Discovery 34, 5 (Sept. 2020), 1454–1495.
  [9] Angus Dempster, Daniel F. Schmidt, and Geoffrey I. Webb. 2021. MiniRocket: A very fast (almost) deterministic transform for time series classification. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'21). ACM, New York, NY, 248–257.
  [10] Janez Demšar. 2006. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research 7 (2006), 1–30.
  [11] Nachiket Deo and Mohan M. Trivedi. 2018. Convolutional social pooling for vehicle trajectory prediction. In 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops'18). IEEE Computer Society, 1468–1476.
  [12] Milton Friedman. 1940. A comparison of alternative tests of significance for the problem of m rankings. Annals of Mathematical Statistics 11, 1 (1940), 86–92.
  [13] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. CoRR abs/1705.03122 (2017). http://arxiv.org/abs/1705.03122
  [14] Sebastian D. Goodfellow, Andrew Goodwin, Robert Greer, Peter C. Laussen, Mjaye Mazwi, and Danny Eytan. 2018. Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. In Proceedings of Machine Learning Research, Vol. 85. PMLR, 83–101. http://proceedings.mlr.press/v85/goodfellow18a.html
  [15] Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations (ICLR'16).
  [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR abs/1512.03385 (2015). http://arxiv.org/abs/1512.03385
  [17] Yihui He, Xiangyu Zhang, and Jian Sun. 2017. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV'17).
  [18] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2014. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, abs/1503.02531 (2014).
  [19] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (Nov. 1997), 1735–1780.
  [20] Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 2 (1979), 65–70. http://www.jstor.org/stable/4615733
  [21] Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. 2019. Deep learning for time series classification: A review. Data Mining and Knowledge Discovery 33, 4 (2019), 917–963.
  [22] Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F. Schmidt, Jonathan Weber, Geoffrey I. Webb, Lhassane Idoumghar, Pierre-Alain Muller, and François Petitjean. 2020. InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery 34, 6 (Nov. 2020), 1936–1962.
  [23] Brian Kenji Iwana and Seiichi Uchida. 2021. An empirical survey of data augmentation for time series classification with neural networks. PLOS ONE 16, 7 (July 2021), e0254841.
  [24] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'18).
  [25] Arthur Le Guennec, Simon Malinowski, and Romain Tavenard. 2016. Data augmentation for time series classification using convolutional neural networks. In ECML/PKDD Workshop on Advanced Analytics and Learning on Temporal Data.
  [26] C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager. 2017. Temporal convolutional networks for action segmentation and detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'17). 1003–1012.
  [27] Qin Li, Xiaofan Zhang, JinJun Xiong, Wen-mei Hwu, and Deming Chen. 2019. Implementing neural machine translation with bi-directional GRU and attention mechanism on FPGAs using HLS. In Proceedings of the 24th Asia and South Pacific Design Automation Conference (ASPDAC'19). ACM, New York, NY, 693–698.
  [28] Y. Li, Z. Xia, and Y. Zhang. 2020. Standalone systolic profile detection of non-contact SCG signal with LSTM network. IEEE Sensors Journal 20, 6 (2020), 3123–3131.
  [29] Jason Lines, Sarah Taylor, and Anthony Bagnall. 2016. HIVE-COTE: The hierarchical vote collective of transformation-based ensembles for time series classification. In 2016 IEEE 16th International Conference on Data Mining (ICDM'16). 1041–1046.
  [30] Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, and Guillermo Pita Gil. 2020. Multi-head attention for multi-modal joint vehicle motion forecasting. In 2020 IEEE International Conference on Robotics and Automation (ICRA'20). IEEE, 9638–9644.
  [31] Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2017. Pruning convolutional neural networks for resource efficient transfer learning. In International Conference on Learning Representations (ICLR'17).
  [32] Kewei Ouyang, Yi Hou, Shilin Zhou, and Ye Zhang. 2021. Convolutional neural network with an elastic matching mechanism for time series classification. Algorithms 14, 7 (2021).
  [33] A. Pandey and D. Wang. 2019. TCNN: Temporal convolutional neural network for real-time speech enhancement in the time domain. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'19). 6875–6879.
  [34] S. Saadatnejad, M. Oveisi, and M. Hashemi. 2020. LSTM-based ECG classification for continuous monitoring on personal wearable devices. IEEE Journal of Biomedical and Health Informatics 24, 2 (2020), 515–523.
  [35] Rajat Sen, Hsiang-Fu Yu, and Inderjit S. Dhillon. 2019. Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting. In Advances in Neural Information Processing Systems. 4837–4846.
  [36] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. 2017. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI'17). AAAI Press, 4278–4284.
  [37] Terry T. Um, Franz M. J. Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić. 2017. Data augmentation of wearable sensor data for Parkinson's disease monitoring using convolutional neural networks. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI'17). ACM, New York, NY, 216–220.
  [38] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio. ArXiv abs/1609.03499 (2016).
  [39] D. van Kuppevelt, C. Meijer, F. Huber, A. van der Ploeg, S. Georgievska, and V. T. van Hees. 2020. Mcfly: Automated deep learning on time series. SoftwareX 12 (2020), 100548.
  [40] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang. 1989. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing 37, 3 (1989), 328–339.
  [41] Frank Wilcoxon. 1992. Individual comparisons by ranking methods. In Breakthroughs in Statistics. Springer, 196–202.
  [42] Xu Xie, Chi Zhang, Yixin Zhu, Ying Nian Wu, and Song-Chun Zhu. 2021. Congestion-aware multi-agent trajectory prediction for collision avoidance. CoRR abs/2103.14231 (2021). https://arxiv.org/abs/2103.14231
  [43] B. Zhang, D. Xiong, J. Xie, and J. Su. 2020. Neural machine translation with GRU-gated attention model. IEEE Transactions on Neural Networks and Learning Systems (2020), 1–11.
  [44] Y. Zhang, R. Xiong, H. He, and M. G. Pecht. 2018. Long short-term memory recurrent neural network for remaining useful life prediction of lithium-ion batteries. IEEE Transactions on Vehicular Technology 67, 7 (2018), 5695–5705.
  [45] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2016. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2921–2929.


• Published in

  ACM Transactions on Embedded Computing Systems, Volume 21, Issue 5
  September 2022, 526 pages
  ISSN: 1539-9087
  EISSN: 1558-3465
  DOI: 10.1145/3561947
  • Editor: Tulika Mitra


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 15 July 2021
• Revised: 25 February 2022
• Accepted: 3 March 2022
• Online AM: 21 March 2022
• Published: 8 October 2022

          Qualifiers

          • research-article
          • Refereed