Nina Sun Eidsheim: How covert metaphors shape timbral theories, listening, and practices
Abstract. If music and sound are “thick events” that necessarily exceed our ability to grasp them fully, what resources can we employ to make (at least partial) sense of them? Metaphorical language works as one resource, shaping the ways in which we perceive and understand not only music, but also one another and the world. Western musical thought has been shaped by several dominant metaphors, which not only influence the vocabulary we use to describe and analyze music but also impact our musical imaginaries, performance practices, and sensory access to timbre. In this keynote, I want to shift attention from what I call lexical metaphors (metaphors that are evident in language, such as “amber voice”) to covert metaphors (metaphors that aren’t evident in language, but are rather hidden in listening and performance practices, theories, methodologies, and sensibilities), and to show how they contribute to the formation of our timbral understanding. Moreover, I will show that many of these covert-metaphor paradigms were established during a time of fervent European nationalism and continued to develop under colonialist violence and expansion, many in the service of imagining nations and defining the self in meetings with the Other. Finally, I will argue that timbral studies must re-examine the covert-metaphor paradigms that have shaped the field, critically record their impact on the world, and ask whether they best support our aspirations for the field.
Nina Sun Eidsheim (she/her) is the author of Sensing Sound: Singing and Listening as Vibrational Practice and The Race of Sound: Listening, Timbre, and Vocality in African American Music; co-editor of the Oxford Handbook of Voice Studies; and co-editor of the Refiguring American Music book series for Duke University Press. She received her bachelor of music from the voice program at the Agder Conservatory (Norway); MFA in vocal performance from the California Institute of the Arts; and Ph.D. in Musicology from the University of California, San Diego. Eidsheim is Professor of Musicology at the UCLA Herb Alpert School of Music and founder and director of the UCLA Practice-based Experimental Epistemology Research (PEER) Lab, an experimental research lab dedicated to decolonizing data, methodology, and analysis, in and through multisensory creative practices. Current projects include a book collaboration with Wadada Leo Smith and a multi-modal project that maps networks of metaphors that structure musical community, discourse, and practice.
Makis Solomos: on timbre and listening (Title TBC)
Makis Solomos was born in Greece and now lives in France, working as Professor of Musicology at the Université Paris 8 and as director of the research team MUSIDANSE. He has published many books and articles about new music, and he is one of the main international specialists in Xenakis’s music. His book From Music to Sound: The Emergence of Sound in 20th- and 21st-Century Music (Routledge, 2019) examines how new music brought about a change of paradigm—a mutation—in listening, moving from a culture centred on the note to a culture of sound. His new book Towards an Ecology of Sound: The Living World, the Mental and the Social in Today’s Music, Sound Art and Artivisms (Routledge, 2023) deals with an enlarged notion of ecology for music and sound, mixing environmental issues and socio-political questions. He co-organized the Xenakis22: Centenary International Symposium, and he is the editor of Révolutions Xenakis (Éditions de l’Œil – Philharmonie de Paris, 2022).
Charles Spence: Crossmodal correspondences involving timbre: theory and application
Abstract. There has been an explosion of interest in the crossmodal correspondences in recent years. The term refers to the (almost synaesthetic) connections between the senses, which have most frequently been studied between the modalities of audition and vision. Historically, the auditory dimension of pitch has attracted the most interest from cognitive researchers, as well as from composers and artists interested in translating between the senses, given the circular nature of pitch and colour. In recent years, however, there has been growing interest from those working in the field of sensory substitution research in those correspondences involving instrumental timbre. I will review our research concerning correspondences between timbre and tastes, aromas, and visual stimuli, and illustrate how such insights can be used in the development of everything from more effective advertising communications to sonic seasoning.
Charles Spence is a world-famous experimental psychologist with a specialisation in neuroscience-inspired multisensory design. He has worked with many of the world’s largest companies across the globe since establishing the Crossmodal Research Laboratory (CRL) at the Department of Experimental Psychology, Oxford University in 1997. Prof. Spence has published over 1,100 academic articles and edited or authored 16 books, including the PROSE prize-winning The Perfect Meal (2014; Wiley Blackwell) and the international bestseller Gastrophysics: The New Science of Eating (2017; Penguin Viking), winner of the 2019 Le Grand Prix de la Culture Gastronomique from the Académie Internationale de la Gastronomie. His latest book, Sensehacking, was published in 2021 (Penguin). Much of Prof. Spence’s work focuses on the design of enhanced multisensory food and drink experiences, through collaborations with chefs, baristas, mixologists, chocolatiers, perfumiers, and the food and beverage, and flavour and fragrance industries. Prof. Spence has worked extensively in the world of multisensory experiential wine and coffee, and on the question of how technology will transform our dining and drinking experiences in the future.
Jesse Engel: Disentangling style and structure in generative music models
Abstract. We often take concepts such as timbre, melody, harmony, and rhythm as givens of musical understanding. But what do they mean to a generative model? How useful are these priors for both human control and statistical modeling, and how can we inject them into the learning process? Finally, what underlying structure do these models discover in music, and how well does it map to our preexisting notions? In this talk, I’ll describe a series of works from the Magenta research group targeting these questions over the past several years. Specifically, we’ll look at generative music models that factorize control over style and structure (both implicitly and explicitly), examine the tradeoffs of the different approaches, and consider the remaining open questions in the field.
Jesse Engel is lead research scientist on Magenta, a research team within Google Brain exploring the role of machine learning in creative applications. He has a UC Berkeley ‘cubed’ degree (BA, PhD, Postdoc) and his research background is diverse, including work in Astrophysics, Materials Science, Chemistry, Electrical Engineering, Computational Neuroscience, and now Machine Learning. His research on Magenta includes creating new generative models for audio (DDSP, NSynth), symbolic music (MusicVAE, GrooVAE), adapting to user preferences (Latent Constraints, MIDI-Me), and work to close the gap between research and musical applications (NSynth Super, Magenta Studio). Previously he worked with Andrew Ng to help found the Baidu Silicon Valley AI Lab and was a key contributor to DeepSpeech2, a speech recognition system named one of the ‘Top 10 Breakthroughs of 2016’ by MIT Technology Review. Outside of work, he is also a professional-level jazz guitarist, and likes to include in his bio that he once played an opening set for the Dalai Lama at the Greek Theatre.