EXPRESSION OF EMOTION IN MUSIC AND VOCAL COMMUNICATION Topic Editors Anjali Bhatara, Petri Laukka and Daniel J. Levitin PSYCHOLOGY Frontiers in Psychology August 2014 | Expression of emotion in music and vocal communication | 1 ABOUT FRONTIERS Frontiers is more than just an open-access publisher of scholarly articles: it is a pioneering approach to the world of academia, radically improving the way scholarly research is managed. The grand vision of Frontiers is a world where all people have an equal opportunity to seek, share and generate knowledge. Frontiers provides immediate and permanent online open access to all its publications, but this alone is not enough to realize our grand goals. FRONTIERS JOURNAL SERIES The Frontiers Journal Series is a multi-tier and interdisciplinary set of open-access, online journals, promising a paradigm shift from the current review, selection and dissemination processes in academic publishing. All Frontiers journals are driven by researchers for researchers; therefore, they constitute a service to the scholarly community. At the same time, the Frontiers Journal Series operates on a revolutionary invention, the tiered publishing system, initially addressing specific communities of scholars, and gradually climbing up to broader public understanding, thus serving the interests of the lay society, too. DEDICATION TO QUALITY Each Frontiers article is a landmark of the highest quality, thanks to genuinely collaborative interactions between authors and review editors, who include some of the world’s best academicians. Research must be certified by peers before entering a stream of knowledge that may eventually reach the public - and shape society; therefore, Frontiers only applies the most rigorous and unbiased reviews. Frontiers revolutionizes research publishing by freely delivering the most outstanding research, evaluated with no bias from both the academic and social point of view. 
By applying the most advanced information technologies, Frontiers is catapulting scholarly publishing into a new generation. WHAT ARE FRONTIERS RESEARCH TOPICS? Frontiers Research Topics are very popular trademarks of the Frontiers Journals Series: they are collections of at least ten articles, all centered on a particular subject. With their unique mix of varied contributions from Original Research to Review Articles, Frontiers Research Topics unify the most influential researchers, the latest key findings and historical advances in a hot research area! Find out more on how to host your own Frontiers Research Topic or contribute to one as an author by contacting the Frontiers Editorial Office: researchtopics@frontiersin.org FRONTIERS COPYRIGHT STATEMENT © Copyright 2007-2014 Frontiers Media SA. All rights reserved. All content included on this site, such as text, graphics, logos, button icons, images, video/audio clips, downloads, data compilations and software, is the property of or is licensed to Frontiers Media SA (“Frontiers”) or its licensees and/or subcontractors. The copyright in the text of individual articles is the property of their respective authors, subject to a license granted to Frontiers. The compilation of articles constituting this e-book, wherever published, as well as the compilation of all other content on this site, is the exclusive property of Frontiers. For the conditions for downloading and copying of e-books from Frontiers’ website, please see the Terms for Website Use. If purchasing Frontiers e-books from other websites or sources, the conditions of the website concerned apply. Images and graphics not forming part of user-contributed materials may not be downloaded or copied without permission. Individual articles may be downloaded and reproduced in accordance with the principles of the CC-BY licence subject to any copyright or other notices. They may not be re-sold as an e-book. 
As author or other contributor you grant a CC-BY licence to others to reproduce your articles, including any graphics and third-party materials supplied by you, in accordance with the Conditions for Website Use and subject to any copyright notices which you include in connection with your articles and materials. All copyright, and all rights therein, are protected by national and international copyright laws. The above represents a summary only. For the full conditions see the Conditions for Authors and the Conditions for Website Use. ISSN 1664-8714 ISBN 978-2-88919-263-2 DOI 10.3389/978-2-88919-263-2 Two of the most important social skills in humans are the ability to determine the moods of those around us, and to use this to guide our behavior. To accomplish this, we make use of numerous cues. Among the most important are vocal cues from both speech and non-speech sounds. Music is also a reliable method for communicating emotion. It is often present in social situations and can serve to unify a group’s mood for ceremonial purposes (funerals, weddings) or general social interactions. Scientists and philosophers have speculated on the origins of music and language, and the possible common bases of emotional expression through music, speech and other vocalizations. They have found increasing evidence of commonalities among them. However, the domains in which researchers investigate these topics do not always overlap or share a common language, so communication between disciplines has been limited. The aim of this Research Topic is to bring together research across multiple disciplines related to the production and perception of emotional cues in music, speech, and non-verbal vocalizations. This includes natural sounds produced by human and non-human primates as well as synthesized sounds. 
Research methodology includes survey, behavioral, and neuroimaging techniques investigating adults as well as developmental populations, including those with atypical development. Studies using laboratory tasks as well as studies in more naturalistic settings are included. EXPRESSION OF EMOTION IN MUSIC AND VOCAL COMMUNICATION The owner of this image is Petri Laukka Topic Editors: Anjali Bhatara, Université Paris Descartes, France Petri Laukka, Stockholm University, Sweden Daniel J. Levitin, McGill University, Canada Table of Contents 05 Expression of Emotion in Music and Vocal Communication: Introduction to the Research Topic Anjali Bhatara, Petri Laukka and Daniel J. Levitin 07 Emotional Expression in Music: Contribution, Linearity, and Additivity of Primary Musical Cues Tuomas Eerola, Anders Friberg and Roberto Bresin 19 Music, Emotion, and Time Perception: The Influence of Subjective Emotional Valence and Arousal? Sylvie Droit-Volet, Danilo Ramos, Lino José L. O. Bueno and Emmanuel Bigand 31 Preattentive Processing of Emotional Musical Tones: A Multidimensional Scaling and ERP Study Thomas F. Münte, Katja Spreckelmeyer, Eckart Altenmüller and Hans Colonius 42 Changing the Tune: Listeners Like Music that Expresses a Contrasting Emotion E. Glenn Schellenberg, Kathleen A. Corrigall, Olivia Ladinig and David Huron 51 Effects of Voice on Emotional Arousal Psyche Loui, Justin P. Bachorik, Hui C. Li and Gottfried Schlaug 57 Predicting Musically Induced Emotions From Physiological Inputs: Linear and Neural Network Models Frank A. Russo, Naresh N. Vempala and Gillian M. 
Sandstrom 65 Play it Again, Sam: Brain Correlates of Emotional Music Recognition Eckart Altenmüller, Susann Siggel, Bahram Mohammadi, Amir Samii and Thomas F. Münte 73 Emotion Felt by the Listener and Expressed by the Music: Literature Review and Theoretical Perspectives Emery Schubert 91 Dynamic Musical Communication of Core Affect Nicole K. Flaig and Edward W. Large 103 The Same, Only Different: What Can Responses to Music in Autism Tell Us About the Nature of Musical Emotions? Rory Allen, Reubs Walsh and Nick Zangwill 107 Valence, Arousal, and Task Effects in Emotional Prosody Processing Silke Paulmann, Martin Bleichner and Sonja A. E. Kotz 117 Feeling Backwards? How Temporal Order in Speech Affects the Time Course of Vocal Emotion Recognition Simon Rigoulot, Eugen Wassiliwizky and Marc D. Pell 131 The Siren Song of Vocal Fundamental Frequency for Romantic Relationships Sarah Weusthoff, Brian R. Baucom and Kurt Hahlweg 140 Voice Quality in Affect Cueing: Does Loudness Matter? Irena Yanushevskaya, Christer Gobl and Ailbhe Ní Chasaide 154 Encoding Conditions Affect Recognition of Vocally Expressed Emotions Across Cultures Rebecca Jürgens, Matthis Drolet, Ralph Pirow, Elisabeth Scheiner and Julia Fischer 164 Perception of Emotionally Loaded Vocal Expressions and Its Connection to Responses to Music. A Cross-Cultural Investigation: Estonia, Finland, Sweden, Russia, and The USA Teija Waaramaa and Timo Leisiö 177 Cross-Cultural Differences in the Processing of Non-Verbal Affective Vocalizations by Japanese and Canadian Listeners Michihiko Koeda, Pascal Belin, Tomoko Hama, Tadashi Masuda, Masato Matsuura and Yoshiro Okubo 185 Cross-Cultural Decoding of Positive and Negative Non-Linguistic Emotion Vocalizations Petri Laukka, Hillary Anger Elfenbein, Nela Söder, Henrik Nordström, Jean Althoff, Wanda Chui, Frederick K. Iraki, Thomas Rockstuhl and Nutankumar S. 
Thingujam 193 The Role of Motivation and Cultural Dialects in the In-Group Advantage for Emotional Vocalizations Disa Sauter 202 What Does Music Express? Basic Emotions and Beyond Patrik N. Juslin 216 Repetition and Emotive Communication in Music Versus Speech Elizabeth Hellmuth Margulis 220 Emotional Communication in Speech and Music: The Role of Melodic and Rhythmic Contrasts Lena Quinto, William Forde Thompson and Felicity Louise Keating 228 On the Acoustics of Emotion in Audio: What Speech, Music, and Sound Have in Common Felix Weninger, Florian Eyben, Björn W. Schuller, Marcello Mortillaro and Klaus R. Scherer 240 The “Musical Emotional Bursts”: A Validated Set of Musical Affect Bursts to Investigate Auditory Affective Processing Sébastien Paquette, Isabelle Peretz and Pascal Belin 247 A Vocal Basis for the Affective Character of Musical Mode in Melody Daniel Bowling 253 Animal Signals and Emotion in Music: Coordinating Affect Across Groups Gregory A. Bryant 266 Speech vs. Singing: Infants Choose Happier Sounds Marieve Corbeil, Sandra E. Trehub and Isabelle Peretz 277 Child Implant Users’ Imitation of Happy- and Sad-Sounding Speech David Jueyu Wang, Sandra E. Trehub, Anna Volkova and Pascal van Lieshout 285 Age-Related Differences in Affective Responses to and Memory for Emotions Conveyed by Music: A Cross-Sectional Study Sandrine Vieillard and Anne-Laure Gilet EDITORIAL published: 05 May 2014 doi: 10.3389/fpsyg.2014.00399 Expression of emotion in music and vocal communication: Introduction to the research topic Anjali Bhatara 1,2 *, Petri Laukka 3 and Daniel J. 
Levitin 4 1 Sorbonne Paris Cité, Université Paris Descartes, Paris, France 2 Laboratoire Psychologie de la Perception, CNRS, UMR 8242, Paris, France 3 Department of Psychology, Stockholm University, Stockholm, Sweden 4 Department of Psychology, McGill University, Montreal, QC, Canada *Correspondence: bhatara@gmail.com Edited and reviewed by: Luiz Pessoa, University of Maryland, USA Keywords: music, speech, emotion, voice, cross-domain cognition In social interactions, we must gauge the emotional state of others in order to behave appropriately. We rely heavily on auditory cues, specifically speech prosody, to do this. Music is also a complex auditory signal with the capacity to communicate emotion rapidly and effectively and often occurs in social situations or ceremonies as an emotional unifier. Scientists and philosophers have speculated about the common cognitive origins of music and language. Perhaps their common origin lies in their efficacy for emotional expression. Unlike semantic or syntactic aspects of language (and music), many of their acoustic and emotional aspects are shared with sounds made by other species (Fitch, 2006); music and speech share a common acoustic code for expressing emotion (Juslin and Laukka, 2003). Until recently, however, scientists working in the two domains of music and speech rarely communicated, so research was restricted to one domain or the other. The purpose of this Research Topic was to bring these researchers together and encourage cross-talk. Over 25 groups of researchers contributed their expertise, and the included papers give an overview of the diversity of current research, both in terms of research questions and methodology. Some articles focus on aspects in one of the two domains, whereas other articles directly compare, contrast, or combine music and vocal communication. Empirical studies on music perception include work by Eerola et al. 
(2013), in which they systematically manipulated musical cues to determine their effects on perception of emotion, and Droit-Volet et al. (2013), who altered acoustic elements associated with emotion to examine the effect of these changes on time perception. Effects of context on music understanding were also investigated: Spreckelmeyer et al. (2013) examined preattentive processing of emotion, measuring ERPs during the processing of a sad tone within the context of happy tones and the reverse. Schellenberg et al. (2012) demonstrated a listener preference for music that expressed emotion contrasting with an established context, and Loui et al. (2013) examined the role of vocals on perceived arousal and valence in songs. Turning to emotional responses to music, Russo et al. (2013) developed models aimed at predicting the emotion being experienced using information in the listeners’ physiological signals, and Altenmüller et al. (2014) used fMRI to investigate the neural basis of episodic memory for arousing film music. Following up on Gabrielsson’s (2002) distinction between emotion felt by a listener and emotion expressed by a piece of music, Schubert (2013) provided a review and suggestions for future research on the internal and external loci of musical emotion. There were also two theoretical papers on musical emotions: Flaig and Large (2014) speculated that music may induce affective response by speaking to the brain in its own language by way of neurodynamics, and Allen et al. (2013) presented a view of the general nature of musical emotions based on studies on autism. In the speech domain, Paulmann et al. (2013) used EEG to investigate influences of arousal and valence on cortical responses to emotional prosody. Rigoulot et al. (2013) used a gating paradigm to demonstrate the importance of utterance-final syllables in emotion recognition. Two papers focused on the role of specific acoustic cues in vocal expression: Weusthoff et al. 
(2013) discussed the role of fundamental frequency in the success of romantic relationships, and Yanushevskaya et al. (2013) examined the role of loudness, both independently and in conjunction with voice quality. Several researchers undertook cross-cultural studies of emotion perception in speech and non-verbal vocalizations. Jürgens et al. (2013) examined the perception of German emotional speech tokens across three cultures. Waaramaa and Leisiö (2013) examined the recognition of emotion in Finnish pseudo-sentences by listeners from five countries. There were also three cross-cultural investigations of non-verbal vocalizations: Koeda et al. (2013) examined perception of emotional vocalizations by Canadian and Japanese listeners, Laukka et al. (2013) examined Swedish listeners’ perception of vocalizations from four countries, and Sauter (2013) examined the role of motivation in the in-group advantage for emotion recognition by presenting listeners with vocalizations produced by in- or out-group members. Discussing the similarity between music and speech emotion expression, Juslin (2013) forwarded the argument that this similarity lies at the “core” or basic emotion level, and that more complex emotions are more domain-specific. Several authors empirically tested the similarity and contrasts between music and vocal expression. Margulis (2013) posited that the relative preponderance of repetition in music compared to speech contributes to a fundamental difference between the two domains. Quinto et al. (2013) showed differences in the functions of pitch and rhythm between these domains. Weninger et al. (2013) synthesized information from databases including speech, music, and environmental sounds, and thereby took a step toward a holistic computational model of affect in sound. To aid future cross-domain research, Paquette et al. 
(2013) presented a new validated set of stimuli—a musical equivalent to vocal affective bursts. Bowling (2013) reviewed the affective character of musical modes, based in the biology of human vocal emotion expression, and Bryant (2013) further argued that research on music and emotion might benefit from research on form and function in non-human animal signals. Three papers examined developmental and lifespan changes. Corbeil et al. (2013) contrasted the perception of speaking and singing in infancy, and found that it is not the domain (music or speech) that matters but rather the level of (positive) emotion. Wang et al. (2013) examined early auditory deprivation, asking children with cochlear implants to imitate happy and sad utterances. Vieillard and Gilet (2013) found an increase in positive responding to music with aging. In sum, the main contribution of this Research Topic, along with highlighting the variety of research being done already, is to show the places of contact between the domains of music and vocal expression that occur at the level of emotional communication. In addition, we hope it will encourage future dialog among researchers interested in emotion in fields as diverse as computer science, linguistics, musicology, neuroscience, psychology, speech and hearing sciences, and sociology, who can each contribute knowledge necessary for studying this complex topic. REFERENCES Allen, R., Walsh, R., and Zangwill, N. (2013). The same, only different: what can responses to music in autism tell us about the nature of musical emotions? Front. Psychol. 4:156. doi: 10.3389/fpsyg.2013.00156 Altenmüller, E., Siggel, S., Mohammadi, B., Samii, A., and Münte, T. (2014). Play it again Sam: brain correlates of emotional music recognition. Front. Psychol. 5:114. doi: 10.3389/fpsyg.2014.00114 Bowling, D. L. (2013). A vocal basis for the affective character of musical mode in melody. Front. Psychol. 4:464. doi: 10.3389/fpsyg.2013.00464 Bryant, G. A. (2013). 
Animal signals and emotion in music: coordinating affect across groups. Front. Psychol. 4:990. doi: 10.3389/fpsyg.2013.00990 Corbeil, M., Trehub, S. E., and Peretz, I. (2013). Speech vs. singing: infants choose happier sounds. Front. Psychol. 4:372. doi: 10.3389/fpsyg.2013.00372 Droit-Volet, S., Ramos, D., Bueno, J. L. O., and Bigand, E. (2013). Music, emotion, and time perception: the influence of subjective emotional valence and arousal? Front. Psychol. 4:417. doi: 10.3389/fpsyg.2013.00417 Eerola, T., Friberg, A., and Bresin, R. (2013). Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Front. Psychol. 4:487. doi: 10.3389/fpsyg.2013.00487 Fitch, W. T. (2006). The biology and evolution of music: a comparative perspective. Cognition 100, 173–215. doi: 10.1016/j.cognition.2005.11.009 Flaig, N. K., and Large, E. W. (2014). Dynamic musical communication of core affect. Front. Psychol. 5:72. doi: 10.3389/fpsyg.2014.00072 Gabrielsson, A. (2002). Emotion perceived and emotion felt: same or different? Music. Sci. 5, 123–147. doi: 10.1177/10298649020050S105 Jürgens, R., Drolet, M., Pirow, R., Scheiner, E., and Fischer, J. (2013). Encoding conditions affect recognition of vocally expressed emotions across cultures. Front. Psychol. 4:111. doi: 10.3389/fpsyg.2013.00111 Juslin, P. N. (2013). What does music express? Basic emotions and beyond. Front. Psychol. 4:596. doi: 10.3389/fpsyg.2013.00596 Juslin, P. N., and Laukka, P. (2003). Communication of emotions in vocal expression and music performance: different channels, same code? Psychol. Bull. 129, 770–814. doi: 10.1037/0033-2909.129.5.770 Koeda, M., Belin, P., Hama, T., Masuda, T., Matsuura, M., and Okubo, Y. (2013). Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and Canadian listeners. Front. Psychol. 4:105. doi: 10.3389/fpsyg.2013.00105 Laukka, P., Elfenbein, H. A., Söder, N., Nordström, H., Althoff, J., Chui, W., et al. 
(2013). Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations. Front. Psychol. 4:353. doi: 10.3389/fpsyg.2013.00353 Loui, P., Bachorik, J. P., Li, H. C., and Schlaug, G. (2013). Effects of voice on emotional arousal. Front. Psychol. 4:675. doi: 10.3389/fpsyg.2013.00675 Margulis, E. H. (2013). Repetition and emotive communication in music versus speech. Front. Psychol. 4:167. doi: 10.3389/fpsyg.2013.00167 Paquette, S., Peretz, I., and Belin, P. (2013). The “musical emotional bursts”: a validated set of musical affect bursts to investigate auditory affective processing. Front. Psychol. 4:509. doi: 10.3389/fpsyg.2013.00509 Paulmann, S., Bleichner, M., and Kotz, S. A. (2013). Valence, arousal, and task effects in emotional prosody processing. Front. Psychol. 4:345. doi: 10.3389/fpsyg.2013.00345 Quinto, L., Thompson, W. F., and Keating, F. L. (2013). Emotional communication in speech and music: the role of melodic and rhythmic contrasts. Front. Psychol. 4:184. doi: 10.3389/fpsyg.2013.00184 Rigoulot, S., Wassiliwizky, E., and Pell, M. D. (2013). Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition. Front. Psychol. 4:367. doi: 10.3389/fpsyg.2013.00367 Russo, F. A., Vempala, N. N., and Sandstrom, G. M. (2013). Predicting musically induced emotions from physiological inputs: linear and neural network models. Front. Psychol. 4:468. doi: 10.3389/fpsyg.2013.00468 Sauter, D. A. (2013). The role of motivation and cultural dialects in the in-group advantage for emotional vocalizations. Front. Psychol. 4:814. doi: 10.3389/fpsyg.2013.00814 Schellenberg, E. G., Corrigall, K. A., Ladinig, O., and Huron, D. (2012). Changing the tune: listeners like music that expresses a contrasting emotion. Front. Psychol. 3:574. doi: 10.3389/fpsyg.2012.00574 Schubert, E. (2013). Emotion felt by the listener and expressed by the music: literature review and theoretical perspectives. Front. Psychol. 4:837. 
doi: 10.3389/fpsyg.2013.00837 Spreckelmeyer, K. N., Altenmüller, E. O., Colonius, H., and Münte, T. F. (2013). Preattentive processing of emotional musical tones: a multidimensional scaling and ERP study. Front. Psychol. 4:656. doi: 10.3389/fpsyg.2013.00656 Vieillard, S., and Gilet, A.-L. (2013). Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study. Front. Psychol. 4:711. doi: 10.3389/fpsyg.2013.00711 Waaramaa, T., and Leisiö, T. (2013). Perception of emotionally loaded vocal expressions and its connection to responses to music. A cross-cultural investigation: Estonia, Finland, Sweden, Russia, and the USA. Front. Psychol. 4:344. doi: 10.3389/fpsyg.2013.00344 Wang, D. J., Trehub, S. E., Volkova, A., and van Lieshout, P. (2013). Child implant users’ imitation of happy- and sad-sounding speech. Front. Psychol. 4:351. doi: 10.3389/fpsyg.2013.00351 Weninger, F., Eyben, F., Schuller, B. W., Mortillaro, M., and Scherer, K. R. (2013). On the acoustics of emotion in audio: what speech, music, and sound have in common. Front. Psychol. 4:292. doi: 10.3389/fpsyg.2013.00292 Weusthoff, S., Baucom, B. R., and Hahlweg, K. (2013). The siren song of vocal fundamental frequency for romantic relationships. Front. Psychol. 4:439. doi: 10.3389/fpsyg.2013.00439 Yanushevskaya, I., Gobl, C., and Ní Chasaide, A. (2013). Voice quality in affect cueing: does loudness matter? Front. Psychol. 4:335. doi: 10.3389/fpsyg.2013.00335 Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Received: 26 March 2014; accepted: 15 April 2014; published online: 05 May 2014. Citation: Bhatara A, Laukka P and Levitin DJ (2014) Expression of emotion in music and vocal communication: Introduction to the research topic. Front. Psychol. 5:399. 
doi: 10.3389/fpsyg.2014.00399 This article was submitted to Emotion Science, a section of the journal Frontiers in Psychology. Copyright © 2014 Bhatara, Laukka and Levitin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Frontiers in Psychology | Emotion Science May 2014 | Volume 5 | Article 399 | 6 ORIGINAL RESEARCH ARTICLE published: 30 July 2013 doi: 10.3389/fpsyg.2013.00487 Emotional expression in music: contribution, linearity, and additivity of primary musical cues Tuomas Eerola 1 *, Anders Friberg 2 and Roberto Bresin 2 1 Department of Music, University of Jyväskylä, Jyväskylä, Finland 2 Department of Speech, Music, and Hearing, KTH - Royal Institute of Technology, Stockholm, Sweden Edited by: Anjali Bhatara, Université Paris Descartes, France Reviewed by: Frank A. Russo, Ryerson University, Canada Dan Bowling, University of Vienna, Austria *Correspondence: Tuomas Eerola, Department of Music, University of Jyväskylä, Seminaarinkatu 35, Jyväskylä, FI-40014, Finland e-mail: tuomas.eerola@jyu.fi The aim of this study is to manipulate musical cues systematically to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). 
The results exhibited robust effects for all cues and the ranked importance of these was established by multiple regression. The most important cue was mode followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of variance in ratings. Quadratic encoding of cues did lead to minor but significant increases in model fit (0–8%). Finally, the interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010). Keywords: emotion, music cues, factorial design, discrete emotion ratings INTRODUCTION One of the central reasons that music engages the listener so deeply is that it expresses emotion (Juslin and Laukka, 2004). Not only do composers and performers capitalize on the potent emotional effects of music, but so do the gaming, film, marketing, and music therapy industries. The way music arouses listeners’ emotions has been studied from many different perspectives. One such method involves the use of self-report measures, where listeners note the emotions that they either recognize or actually experience while listening to the music (Zentner and Eerola, 2010). Another method involves the use of physiological and neurological indicators of the emotions aroused when listening to music (a recent overview of the field is given in Eerola and Vuoskoski, 2012). 
Although many extra-musical factors are involved in the induction of emotions (e.g., the context, associations, and individual factors, see Juslin and Västfjäll, 2008), the focus of this paper is on those properties inherent in the music itself that cause emotions to be perceived by the listener, generally related to the mechanism of emotional contagion (Juslin and Västfjäll, 2008). Scientific experiments since the 1930s have attempted to determine the impact of such individual musical cues in the communication of certain emotions to the listener (Hevner, 1936, 1937). A recent summary of this work can be found in Gabrielsson and Lindström’s (2010) study, which states that the most potent musical cues, also the most frequently studied, are mode, tempo, dynamics, articulation, timbre, and phrasing. For example, the distinction between happiness and sadness has received considerable attention—these emotions are known to be quite clearly distinguished through cues of tempo, pitch height, and mode: the expression of happiness is associated with faster tempi, a high pitch range, and a major rather than minor mode, and these cues are reversed in musical expressions of sadness (Hevner, 1935, 1936; Wedin, 1972; Crowder, 1985; Gerardi and Gerken, 1995; Peretz et al., 1998; Dalla Bella et al., 2001). Other combinations of musical cues have been implicated for different discrete emotions such as anger, fear, and peacefulness (e.g., Bresin and Friberg, 2000; Vieillard et al., 2008). In real music, it is challenging to assess the exact contribution of individual cues to emotional expression because all cues are highly intercorrelated. Here, the solution is to independently and systematically manipulate the cues in music by synthesizing variants of a given piece of music. Such a factorial design allows assessment of the causal role of each cue in expressing emotions in music. 
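To make the factorial logic concrete, a full crossing of cue levels can be enumerated with a Cartesian product. The sketch below uses invented cue levels purely for illustration (the study derived its actual levels from prior work and rated an optimized subset of 200 stimuli, not the full crossing); it also shows why an exhaustive crossing quickly becomes unfeasible.

```python
# Sketch: enumerate a full factorial design over six musical cues.
# All cue levels here are hypothetical placeholders for illustration.
from itertools import product

cue_levels = {
    "mode": ["major", "minor"],
    "tempo": [70, 110, 150],          # bpm (invented values)
    "dynamics": [-5, 0, 5],           # dB relative to a reference
    "articulation": ["legato", "staccato"],
    "timbre": ["flute", "horn", "trumpet"],
    "register": [-12, 0, 12],         # semitones relative to original
}

# The Cartesian product of all levels yields every stimulus in the design.
names = list(cue_levels)
stimuli = [dict(zip(names, combo)) for combo in product(*cue_levels.values())]

print(len(stimuli))  # 2*3*3*2*3*3 = 324 combinations per music example
```

Even with these modest level counts, one music example already yields 324 stimuli, which is why factorial studies either restrict levels per cue or sample an optimized subset of the full design.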
Previous studies on emotional expression in music using factorial design have often focused on relatively few cues, as one has to manipulate each level of the factors separately, and the ensuing exhaustive combinations will quickly amount to an unfeasible total number of trials needed to evaluate the design. Because of this complexity, the existing studies have usually evaluated two or three separate factors using typically two or three discrete levels in each. For example, Dalla Bella et al. (2001) studied the contribution of tempo and mode to the happiness-sadness continuum. In a similar vein, Ilie and Thompson (2006) explored the contributions of intensity, tempo, and pitch height on three affect dimensions. Interestingly, the early pioneers of music and emotion research did include a larger number of musical factors in their experiments. For example, Rigg’s experiments (1937, 1940a,b, cited in Rigg, 1964) might have only used five musical phrases, but a total of seven cues were manipulated in each of these examples (tempo, mode, articulation, pitch level, loudness, rhythm patterns, and interval content). He asked listeners to choose between happy and sad emotion categories for each excerpt, as well as further describe the excerpts using precise emotional expressions. His main findings nevertheless indicated that tempo and mode were the most important cues. Hevner’s classic studies (1935, 1937) manipulated six musical cues (mode, tempo, pitch level, rhythm quality, harmonic complexity, and melodic direction) and she observed that mode, tempo and rhythm were the determinant cues for emotions in her experiments. More recently, complex manipulations of musical cues have been carried out by Scherer and Oshinsky (1977), Juslin (1997c), and Juslin and Lindström (2010). 
Scherer and Oshinsky manipulated seven cues in synthesized sequences (amplitude variation, pitch level, pitch contour, pitch variation, tempo, envelope, and filtration cut-off level, as well as tonality and rhythm in their follow-up experiments), but again mostly with only two levels. They were able to account for 53–86% of the listeners' ratings on emotionally relevant semantic differential scales using linear regression. This suggests that a linear combination of the cues is able to account for most of the ratings, although some interactions did occur between the cues. Similar overall conclusions were drawn by Juslin (1997c), who manipulated synthesized performances of "Nobody Knows The Trouble I've Seen" in terms of five musical cues (tempo, three levels; dynamics, three levels; articulation, two levels; timbre, three levels; and tone attacks, two levels). The listeners rated happiness, sadness, anger, fearfulness, and tenderness on Likert scales. Finally, Juslin and Lindström (2010) carried out the most exhaustive study to date by manipulating a total of eight cues (pitch, mode, melodic progression, rhythm, tempo, sound level, articulation, and timbre), although seven of the cues were limited to two levels (for instance, tempo had 70 bpm and 175 bpm versions). This design yielded 384 stimuli that were rated by 10 listeners for happiness, anger, sadness, tenderness, and fear. The cue contributions were determined by regression analyses. In all, 77–92% of the listener ratings could be predicted with a linear combination of the cues. Interactions between the cues provided only a small (4–7%) increase in the predictive accuracy of the models, and hence Juslin and Lindström concluded that the "backbone of emotion perception in music is constituted by the main effects of the individual cues, rather than by their interactions" (p. 353).
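The regression logic behind these studies can be illustrated with simulated data. Everything below (cue values, effect sizes, noise level) is made up for the demonstration; the point is only to show how one compares a main-effects model with a model that also includes a cue interaction, as Juslin and Lindström (2010) did.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ratings: two cues (tempo and mode) predict "happiness".
# Effect sizes and noise are arbitrary choices for illustration.
n = 200
tempo = rng.choice([70.0, 175.0], size=n)      # bpm
mode = rng.choice([0.0, 1.0], size=n)          # 0 = minor, 1 = major
rating = 0.02 * tempo + 1.5 * mode + rng.normal(0.0, 0.5, n)

def r_squared(X, y):
    """Proportion of variance explained by an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Main effects only vs. main effects plus the tempo-by-mode interaction.
r2_main = r_squared(np.column_stack([tempo, mode]), rating)
r2_full = r_squared(np.column_stack([tempo, mode, tempo * mode]), rating)

print(round(r2_main, 2), round(r2_full, 2))  # interaction adds little here
```

In this simulation the data are generated additively, so the interaction term barely improves the fit, mirroring the empirical pattern reported in the studies above, where interactions added only a few percentage points of explained variance.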
A challenge to the causal approach (experimental manipulation rather than correlational exploration) is choosing appropriate values for the cue levels. To estimate whether the cue levels operate in a linear fashion, they should also be varied in a linear manner. Another significant problem is determining a priori whether the range of each cue is musically appropriate in the context of all the other cues and musical examples used. Fortunately, a recent study on emotional cues in music (Bresin and Friberg, 2011) established plausible ranges for seven musical cues, which can be used as a starting point for a systematic factorial study of the cues and emotions. In their study, a synthesis approach was taken, in which participants could simultaneously adjust all seven cues of emotional expression to produce a compelling rendition of five emotions (neutral, happy, scary, peaceful, and sad) for four music examples. The results identified the optimal values and ranges for the individual musical cues, which can be directly utilized to establish both a reasonable range for each cue and an appropriate number of levels, so that each of the emotions can be well-represented in at least one position in the cue space for these same music examples.
AIMS AND RATIONALE
The general aim of the present study is to corroborate and test the hypotheses on the contribution of musical cues to the expression of emotions in music. The specific aims were: (1) to assess predictions from studies on musical cues regarding the causal relationships between primary cues and expressed emotions; (2) to assess whether the cue levels operate in a linear or non-linear manner; and (3) to test whether cues operate in an additive or interactive fashion. For these aims, a factorial manipulation of the musical cues is required, since the cues are completely intercorrelated in a correlational design.
Unfortunately, the full factorial design is especially demanding for such an extensive number of factors and their levels, as it requires a substantial number of trials (the product of the numbers of levels across all factors) and a priori knowledge of the settings for those factor levels. We already have an answer to the latter in the form of the previous study by Bresin and Friberg (2011). With regard to all the combinations required for such an extensive factorial design, we can reduce the full factorial design by using optimal design principles, in other words, by focusing on the factor main effects and low-order interactions while ignoring the high-order interactions that are confounded in the factor design matrix.
MATERIALS AND METHODS
A factorial listening experiment was designed in which six primary musical cues (register, mode, tempo, dynamics, articulation, and timbre) were varied on two to six scalar or nominal levels across four different music structures. First, we will go through the details of these musical cues, and then we will outline the optimal design which was used to create the music stimuli.
MANIPULATION OF THE CUES
The six primary musical cues were, with one exception (mode), the same cues that were used in the production study by Bresin and Friberg (2011). Each of these cues has previously been implicated as having a central impact on the emotions expressed by music [see the summary in Gabrielsson and Lindström (2010) and past factorial studies, e.g., Scherer and Oshinsky, 1977; Juslin and Lindström, 2010], and each has a direct counterpart in speech expression (see Juslin and Laukka, 2003; except for mode, see Bowling et al., 2012). Five cues—register, tempo, dynamics, timbre, and articulation (the scalar factors)—could be seen as having linear or scalar levels, whereas mode (a nominal factor) contains two categories (major and minor).
Based on observations from the production study, we chose to represent register with six levels, tempo and dynamics with five levels each, articulation with four levels, and timbre with three levels. This meant that certain cues were deemed to need a larger range in order to accommodate different emotional characteristics, while others (articulation and timbre) required fewer, less subtle differences between the levels. Finally, we decided to manipulate these factors across different music structures derived from a past study in order to replicate the findings using four different music excerpts, which we treat as an additional seventh factor. Because we assume that physiological states have led to the configuration of cue codes, we derive predictions for the direction of each cue for each emotion based on the vocal expression of affect [from Juslin and Scherer (2005), summarized for our primary cues in Table 3]. For mode, which is not featured in speech studies, we draw on recent cross-cultural findings, which suggest a link between emotional expression in modal music and speech, mediated by the relative size of melodic/prosodic intervals (Bowling et al., 2012). The comparisons of our results with those of past studies on the musical expression of emotions rely on a summary by Gabrielsson and Lindström (2010) and on individual factorial studies (e.g., Scherer and Oshinsky, 1977; Juslin and Lindström, 2010), which present a more or less comparable pattern of results to those obtained in studies on the vocal expression of emotions (Juslin and Laukka, 2003).
OPTIMAL DESIGN OF THE EXPERIMENT
A full factorial design with these particular factors would have required 14,400 unique trials to completely exhaust all factor and level couplings (6 × 5 × 5 × 4 × 2 × 3 × 4). As such an experiment is impractically large by any standards, a form of reduction was required.
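The size of the full factorial design is simply the product of the level counts, which a quick computation confirms (the factor-to-level mapping follows the design described above):

```python
from math import prod

# Levels per factor: register (6), tempo (5), dynamics (5),
# articulation (4), mode (2), timbre (3), and music structure (4),
# the last treated as an additional seventh factor.
levels = {
    "register": 6, "tempo": 5, "dynamics": 5,
    "articulation": 4, "mode": 2, "timbre": 3, "structure": 4,
}

full_factorial = prod(levels.values())
print(full_factorial)  # 14400 unique trials -- impractically large
```

A fractional or optimal design keeps only a carefully chosen subset of these 14,400 combinations, selected so that the main effects (and, optionally, the low-order interactions) remain estimable.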
Reduced designs called fractional fa