Research Article

Data Musicalization

Published: 25 April 2018

Abstract

Data musicalization is the process of automatically composing music from given data, as an approach to perceptualizing information artistically. The aim of data musicalization is to evoke subjective experiences in relation to the information rather than merely to convey it objectively. This article is written as a tutorial for readers interested in data musicalization. We start by providing a systematic characterization of musicalization approaches based on their inputs, methods, and outputs. We then illustrate data musicalization techniques with examples from several applications: one that perceptualizes physical sleep data as music, several that artistically compose music inspired by sleep data, one that musicalizes online chat conversations to perceptualize the liveliness of a discussion, and one that uses musicalization in a game-like mobile application that allows its users to produce music. We additionally provide a number of electronic samples of music produced by the different musicalization applications.
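At its simplest, composing music from data means mapping each data sample to a musical parameter such as pitch. The sketch below is purely illustrative and is not taken from the article: it normalizes a numeric data series (e.g., heart-rate samples) into a pentatonic scale, a common mapping choice because any resulting note sequence sounds consonant. The function name, scale, and example values are all hypothetical.

```python
# Hypothetical sketch: map a numeric data series to MIDI pitches in a
# C-major pentatonic scale. The article's applications use richer,
# music-theoretically informed methods; this only shows the basic idea.

PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers: C4 D4 E4 G4 A4

def musicalize(series):
    """Scale each data point into the range of the scale and pick a note."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1  # avoid division by zero for a constant series
    notes = []
    for x in series:
        idx = int((x - lo) / span * (len(PENTATONIC) - 1))
        notes.append(PENTATONIC[idx])
    return notes

print(musicalize([55, 60, 72, 64, 58]))  # one pentatonic note per sample
```

A real musicalization system would also map data features to rhythm, dynamics, and harmony, and would impose musical structure (phrases, repetition) rather than translating samples one-to-one.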



