Abstract
Emotional cognitive ability is a key technical indicator of the friendliness of human–robot interaction. This research therefore explores robots with human-like emotion cognition. After discussing the prospects of 5G technology and cognitive robots, the study takes cognitive robots as its main direction. Because the human-like analysis logic behind emotional cognition is difficult to imitate directly, this study divides robot information processing into three levels, by analogy with human information processing: cognitive algorithm, feature extraction, and information collection. A multi-scale rectangular histogram of oriented gradients is used for facial expression recognition, and a robust principal component analysis (RPCA) algorithm is used for micro-expression recognition. For pictures in which humans intuitively perceive a smile within a sad emotion, the proportions of emotions obtained by the proposed method are as follows: calmness 0%, sadness 15.78%, fear 0%, happiness 76.53%, disgust 7.69%, anger 0%, and astonishment 0%. For micro-expressions in which humans intuitively perceive negative emotions such as surprise and fear, the proportions obtained are: calmness 32.34%, sadness 34.07%, fear 6.79%, happiness 0%, disgust 0%, anger 13.91%, and astonishment 15.89%. The algorithm explored in this study can therefore recognize emotions accurately. These results show that the proposed method intuitively reflects the proportions of human expressions, and that the recognition methods based on facial expressions and micro-expressions achieve good recognition performance, in line with human intuitive experience.
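The two techniques named in the abstract can be sketched as follows. This is a minimal NumPy illustration only, not the authors' implementation: the cell sizes, the two scales, and the RPCA parameters (`lam`, `mu`, iteration count) are assumptions, and the RPCA solver is a simplified fixed-penalty ADMM variant of principal component pursuit.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Histogram-of-oriented-gradients features over square cells."""
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]      # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]      # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2 normalize
    return np.concatenate(feats)

def multiscale_hog(img, cells=(8, 16)):
    """Concatenate HOG descriptors computed at several cell scales."""
    return np.concatenate([hog_features(img, cell=c) for c in cells])

def rpca(M, lam=None, mu=None, iters=200):
    """Decompose M ~ L (low rank) + S (sparse) by alternating
    singular-value thresholding and entrywise soft thresholding."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / (np.abs(M).sum() + 1e-9)
    S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0)) @ Vt
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y = Y + mu * (M - L - S)
    return L, S
```

In a pipeline like the one the abstract describes, `multiscale_hog` would feed a classifier for macro-expressions, while `rpca` would separate a face-image matrix into a low-rank neutral component and a sparse component carrying the subtle micro-expression motion.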