
Contrastive Predictive Coding for Human Activity Recognition

Published: 24 June 2021

Abstract

Feature extraction is crucial for human activity recognition (HAR) using body-worn movement sensors. Recently, learned representations have been used successfully, offering promising alternatives to manually engineered features. Our work focuses on effective use of small amounts of labeled data and the opportunistic exploitation of unlabeled data that are straightforward to collect in mobile and ubiquitous computing scenarios. We hypothesize and demonstrate that explicitly considering the temporality of sensor data at representation level plays an important role for effective HAR in challenging scenarios. We introduce the Contrastive Predictive Coding (CPC) framework to human activity recognition, which captures the temporal structure of sensor data streams. Through a range of experimental evaluations on real-life recognition tasks, we demonstrate its effectiveness for improved HAR. CPC-based pre-training is self-supervised, and the resulting learned representations can be integrated into standard activity recognition chains. It leads to significantly improved recognition performance when only small amounts of labeled training data are available, thereby demonstrating the practical value of our approach. Through a series of experiments, we also develop guidelines to help practitioners adapt and modify the framework towards other mobile and ubiquitous computing scenarios.
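The core idea the abstract describes — self-supervised CPC pre-training that predicts future latent representations of a sensor stream against in-batch negatives — can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact architecture: the layer sizes, the single-layer convolutional encoder, the GRU context network, and the number of prediction steps are all assumptions made for brevity.

```python
import torch
import torch.nn as nn


class CPCSketch(nn.Module):
    """Minimal CPC pre-training sketch for windowed wearable sensor data."""

    def __init__(self, in_channels=3, z_dim=64, c_dim=128, num_future_steps=4):
        super().__init__()
        # Encoder maps raw sensor frames (e.g., 3-axis accelerometer) to latents z_t.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, z_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Autoregressive model summarizes z_1..z_t into a context vector c_t.
        self.gru = nn.GRU(z_dim, c_dim, batch_first=True)
        # One linear predictor per future step k, mapping c_t -> predicted z_{t+k}.
        self.predictors = nn.ModuleList(
            nn.Linear(c_dim, z_dim) for _ in range(num_future_steps)
        )
        self.k = num_future_steps

    def forward(self, x):
        # x: (batch, channels, time), e.g., a batch of sliding windows.
        z = self.encoder(x).transpose(1, 2)   # (batch, time, z_dim)
        c, _ = self.gru(z)                    # (batch, time, c_dim)
        t = z.size(1) - self.k - 1            # anchor timestep with k future frames left
        loss = 0.0
        for k, predictor in enumerate(self.predictors, start=1):
            pred = predictor(c[:, t])         # predicted future latent, (batch, z_dim)
            target = z[:, t + k]              # true future latent, (batch, z_dim)
            # InfoNCE: the matching sample is the positive; the rest of the
            # batch provides the negatives. Row i should score highest at column i.
            logits = pred @ target.T          # (batch, batch) similarity scores
            labels = torch.arange(x.size(0))
            loss = loss + nn.functional.cross_entropy(logits, labels)
        return loss / self.k


x = torch.randn(8, 3, 50)       # 8 windows, 3 accelerometer axes, 50 timesteps
loss = CPCSketch()(x)           # scalar pre-training loss, minimized by SGD/Adam
```

After pre-training on unlabeled windows, the encoder (and optionally the GRU) would be reused as a feature extractor, with a small classifier fine-tuned on the limited labeled data — the "integration into standard activity recognition chains" the abstract refers to.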




Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 2
June 2021
932 pages
EISSN:2474-9567
DOI:10.1145/3472726
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 June 2021
Published in IMWUT Volume 5, Issue 2


Author Tags

  1. contrastive predictive coding
  2. human activity recognition
  3. representation learning

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (Last 12 months)375
  • Downloads (Last 6 weeks)40
Reflects downloads up to 28 Jan 2025

Cited By

  • (2025) Temporal Contrastive Learning for Sensor-Based Human Activity Recognition: A Self-Supervised Approach. IEEE Sensors Journal 25:1, 1839-1850. DOI: 10.1109/JSEN.2024.3491933. Online publication date: 1-Jan-2025.
  • (2025) Hydra-TS: Enhancing Human Activity Recognition With Multiobjective Synthetic Time-Series Data Generation. IEEE Sensors Journal 25:1, 763-772. DOI: 10.1109/JSEN.2024.3483108. Online publication date: 1-Jan-2025.
  • (2025) Application of human activity/action recognition: a review. Multimedia Tools and Applications. DOI: 10.1007/s11042-024-20576-2. Online publication date: 8-Jan-2025.
  • (2025) An Analysis of Time-Frequency Consistency in Human Activity Recognition. Intelligent Systems, 66-81. DOI: 10.1007/978-3-031-79035-5_5. Online publication date: 30-Jan-2025.
  • (2025) Impact of Pre-training Datasets on Human Activity Recognition with Contrastive Predictive Coding. Intelligent Systems, 306-320. DOI: 10.1007/978-3-031-79035-5_21. Online publication date: 30-Jan-2025.
  • (2024) Towards Learning Discrete Representations via Self-Supervision for Wearables-Based Human Activity Recognition. Sensors 24:4 (1238). DOI: 10.3390/s24041238. Online publication date: 15-Feb-2024.
  • (2024) Active contrastive coding reducing label effort for sensor-based human activity recognition. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology 46:2, 3987-3999. DOI: 10.3233/JIFS-234804. Online publication date: 14-Feb-2024.
  • (2024) Cross-Domain HAR: Few-Shot Transfer Learning for Human Activity Recognition. ACM Transactions on Intelligent Systems and Technology 16:1, 1-35. DOI: 10.1145/3704921. Online publication date: 21-Nov-2024.
  • (2024) SemiCMT: Contrastive Cross-Modal Knowledge Transfer for IoT Sensing with Semi-Paired Multi-Modal Signals. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8:4, 1-30. DOI: 10.1145/3699779. Online publication date: 21-Nov-2024.
  • (2024) Self-supervised Learning for Accelerometer-based Human Activity Recognition: A Survey. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8:4, 1-42. DOI: 10.1145/3699767. Online publication date: 21-Nov-2024.
