Human Activity Recognition from Multiple Sensors Data Using Multi-fusion Representations and CNNs

Abstract
With the growing interest in ubiquitous sensing, it has become possible to build assistive technologies that accompany people through their daily activities and provide personalized feedback and services. For instance, an individual's behavioral patterns (e.g., physical activity, location, and mood) can be detected using sensors embedded in smartwatches and smartphones. Multi-sensor environments, however, also raise challenges, such as how to fuse and combine the different sources of data. In this article, we explore several methods for fusing multiple representations of sensor data. Multiple representations are generated from the raw sensor signals and then fused at the data level, feature level, and decision level using Deep Convolutional Neural Networks (CNNs), and a generic architecture for fusing different sensors is proposed. The presented methods are evaluated on three publicly available human activity recognition (HAR) datasets and show promising performance, with the best results reaching an overall accuracy of 98.4% on the Context-Awareness via Wrist-Worn Motion Sensors (HANDY) dataset and 98.7% on the Wireless Sensor Data Mining (WISDM version 1.1) dataset; both results outperform previous approaches.
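To make the fusion idea concrete, the following is a minimal sketch of feature-level fusion: two CNN branches process different representations of the same sensor window (a raw tri-axial accelerometer segment and an image-like representation such as a recurrence plot), and their learned features are concatenated before a shared classifier head. It assumes TensorFlow/Keras; the input shapes, layer sizes, and number of activity classes are illustrative assumptions, not the exact architecture evaluated in the article.

```python
# Minimal sketch of feature-level fusion of two sensor representations
# with CNN branches. All shapes and hyperparameters are illustrative.
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # assumed number of activity classes

# Branch 1: raw tri-axial accelerometer window (128 samples x 3 axes).
raw_in = layers.Input(shape=(128, 3), name="raw_signal")
x = layers.Conv1D(32, 5, activation="relu")(raw_in)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(64, 5, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)

# Branch 2: image-like representation of the same window,
# e.g. a recurrence plot (assumed 32x32, single channel).
img_in = layers.Input(shape=(32, 32, 1), name="recurrence_plot")
y = layers.Conv2D(32, (3, 3), activation="relu")(img_in)
y = layers.MaxPooling2D((2, 2))(y)
y = layers.Conv2D(64, (3, 3), activation="relu")(y)
y = layers.GlobalAveragePooling2D()(y)

# Feature-level fusion: concatenate the learned features of both
# branches before the shared classifier head.
fused = layers.concatenate([x, y])
fused = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = models.Model(inputs=[raw_in, img_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Data-level fusion would instead combine the representations before the first convolution (e.g., stacking them as input channels), while decision-level fusion would train a separate classifier per representation and merge their predictions, for instance by averaging the softmax outputs.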