Abstract
Data musicalization is the process of automatically composing music based on given data, as an approach to perceptualizing information artistically. The aim of data musicalization is to evoke subjective experiences in relation to the information rather than merely to convey it objectively. This article is written as a tutorial for readers interested in data musicalization. We start by providing a systematic characterization of musicalization approaches, based on their inputs, methods, and outputs. We then illustrate data musicalization techniques with examples from several applications: one that perceptualizes physical sleep data as music, several that artistically compose music inspired by the sleep data, one that musicalizes online chat conversations to provide a perceptualization of the liveliness of a discussion, and one that uses musicalization in a game-like mobile application that allows its users to produce music. We additionally provide a number of electronic samples of music produced by the different musicalization applications.
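To make the parameter-mapping idea behind many musicalization systems concrete, the following sketch quantizes a numeric data series onto a musical scale so that rising data yields a rising melody. This is a minimal illustration only: the pentatonic scale, octave range, and the example "heart rate" values are assumptions for this sketch, not the mappings used by the applications described in the article.

```python
# Minimal parameter-mapping sketch: map a numeric data series onto MIDI
# pitches in a C-major pentatonic scale, so low data values become low
# notes and high values become high notes. The scale choice, octave
# range, and input data are illustrative assumptions.

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

def musicalize(series, base_note=60, octaves=2):
    """Quantize each value in `series` to a scale degree.

    Values are rescaled to the data's own range, then mapped to one of
    len(PENTATONIC) * octaves notes starting at `base_note` (middle C).
    """
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0  # avoid division by zero on flat data
    degrees = len(PENTATONIC) * octaves
    notes = []
    for x in series:
        i = min(int((x - lo) / span * degrees), degrees - 1)
        octave, step = divmod(i, len(PENTATONIC))
        notes.append(base_note + 12 * octave + PENTATONIC[step])
    return notes

# Example: a slowly rising trace becomes a rising pentatonic melody.
print(musicalize([55, 58, 62, 70, 85]))  # → [60, 62, 64, 72, 81]
```

Real musicalization systems typically map several data dimensions at once (e.g., to tempo, dynamics, and instrumentation, not just pitch), but the quantize-to-scale step shown here is a common building block.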
Supplemental Material
Supplemental movie, appendix, image, and software files (media files) for "Data Musicalization" are available for download.