Abstract
Multimedia scientists have largely focused their research on the recognition of tangible properties of data, such as objects and scenes. Recently, the field has started evolving toward the modeling of more complex properties. For example, the understanding of social, affective, and subjective attributes of visual data has attracted the attention of many research teams at the crossroads of computer vision, multimedia, and the social sciences. These intangible attributes include, for example, visual beauty, video popularity, and user behavior. Multiple, diverse challenges arise when modeling such properties from multimedia data. The papers in this special section address technical aspects such as reliable ground-truth collection, the effective learning of subjective properties, and the impact of context on subjective perception; see Refs. [2] and [3].
[1] Xavier Alameda-Pineda, Andrea Pilzer, Dan Xu, Nicu Sebe, and Elisa Ricci. 2017. Viraliency: Pooling local virality. In IEEE CVPR.
[2] Xavier Alameda-Pineda, Miriam Redi, Nicu Sebe, Shih-Fu Chang, and Jiebo Luo. 2018. ACM MM’18 workshop on understanding subjective attributes of data: Multimodal recognition of evoked emotions. In ACM International Conference on Multimedia.
[3] Xavier Alameda-Pineda, Miriam Redi, Mohammad Soleymani, Nicu Sebe, Shih-Fu Chang, and Samuel Gosling. 2017. MUSA2: First ACM workshop on multimodal understanding of social, affective and subjective attributes. In ACM Multimedia.
[4] Xavier Alameda-Pineda, Elisa Ricci, Yan Yan, and Nicu Sebe. 2016. Recognizing emotions from abstract paintings using non-linear matrix completion. In IEEE CVPR.
[5] Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. 2013. The interestingness of images. In ICCV.
[6] Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, and Shih-Fu Chang. 2015. Visual affect around the world: A large-scale multilingual visual sentiment ontology. In ACM International Conference on Multimedia.
[7] Aditya Khosla, Atish Das Sarma, and Raffay Hamid. 2014. What makes an image popular? In WWW. 867--876.
[8] Lorenzo Porzi, Samuel Rota Bulò, Bruno Lepri, and Elisa Ricci. 2015. Predicting and understanding urban perception with convolutional neural networks. In ACM MM. 139--148.
[9] Aliaksandr Siarohin, Gloria Zen, Cveta Majtanovic, Xavier Alameda-Pineda, Elisa Ricci, and Nicu Sebe. 2017. How to make an image more memorable? A deep style transfer approach. In ACM ICMR.
[10] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. 2014. Learning deep features for scene recognition using places database. In NIPS. 487--495.
Index Terms
Special Section on Multimodal Understanding of Social, Affective, and Subjective Attributes