Music of the Mind
Where does music live in the mind? Is it an evolutionary byproduct, a trick of perception—or something deeper? From Pinker’s auditory cheesecake to sonic illusions like the McGurk effect and Deutsch’s phantom melodies, music plays with our senses in ways we barely comprehend. Yet its ties to language, memory, and emotion suggest it is fundamental to human thought. Can we see its traces in the brain? This lecture explores whether music is a fleeting illusion—or the key to understanding the mind itself.
Music of the Mind
Professor Milton Mermikides
“The human brain is by far the most complex physical object known to us in the entire cosmos.”
Owen Gingerich, astronomer
We find ourselves in an age obsessed with technology, where we routinely fork over fistfuls of cash to update our already-extravagant smartphones, laptops and TVs. For what? Incremental changes on what would have been perceived as pure magic just decades ago. Amid such an obsession with technological advancement, it’s easy to take for granted that we carry with us – indeed, that we are embedded within – the “most complex physical object known to us in the entire cosmos”[1]: the human brain. If marketed like an item of contemporary technology it sounds too good to be true: a product of several million years of research and development, it weighs just three pounds, is the size of a medium melon (fitting most human skulls), can last over a century, and runs on just 20 watts of power. That’s equivalent to – rather appropriately – one light bulb. Within this elegant and portable package lies the most extraordinarily complex supercomputer: 86 billion neurons capable of forming a network of 100 trillion synaptic connections. This allows a theoretical storage capacity of 2.5 petabytes (although retrieval success is variable), with complex and highly refined sensory apparatus to respond and react to the external world. But the real value is that the human brain comes preinstalled with the latest version of MindOS™️, an astonishingly adaptive software suite compatible with language, social interaction, emotion, logical reasoning, music – and perhaps one of the most miraculous of all phenomena: consciousness. As such, the mind/brain is able to do the extraordinary: to be aware of, to appreciate – and even learn to understand – itself.[2]
“Musical activity involves nearly every region of the brain we know about, and nearly every neural subsystem”
Daniel Levitin (2006)
Given the immense complexity, capacity and diverse functions of the brain and mind, it is all the more remarkable that – for some reason – the peculiar and mysterious activity of music engages so deeply with so much of it. As the neuroscientist and musician Daniel Levitin puts it, “musical activity involves nearly every region of the brain we know about, and nearly every neural subsystem”.[3] This includes resources not just associated with listening (the auditory cortices), but also those associated with movement, cognitive deliberation and prediction – even when we listen passively – (including the basal ganglia, cerebellum and motor cortices), and all manner of emotion (in the amygdala, hypothalamus, nucleus accumbens and others). Some key areas are illustrated in Figure 1.
Figure 1: Key regions of the brain involved in networks relevant to music. Image ©2021 Kringelbach. [See downloadable document]
However, it is not so much the location of music’s activation in the brain that is relevant as the interconnectivity that it promotes. When listening, and particularly when making music, the human brain creates uniquely expansive orchestrations, linking areas that are otherwise rarely associated. It is telling that the corpus callosum – the largest bundle of white matter in the brain (200–300 million axons), which acts like a bridge – a conductor – between the left and right cerebral hemispheres – is “significantly larger in musicians” (Schlaug et al. 1995).
So extraordinary is music’s hold on the brain that some definitions of music are made not in terms of sonic structure or cultural purpose, but by its effect on the brain: Music – for neuroscientist Lawrence Sherman – is simply “changes in air molecules that induce activity in both hemispheres”.
To support such a position, there are many accounts of music’s robustness in the brain, and of how it survives in memory. Clive Wearing (1938– ), the British musicologist, conductor, singer and pianist, contracted a herpes virus of the nervous system in 1985, resulting in profound amnesia and an inability to form new long-term memories. He spends his day ‘waking up’ every 15 seconds, attempting in vain to cling on to any semblance of time through repeated diary entries, and greeting his wife with unbridled joy, as if after a long separation, every time she returns to the room. However, despite not remembering the names of his children, the food he has just started eating, or the names of any musical pieces, he remains completely capable of conducting a choir, sight-reading and performing complex piano pieces.
The jazz guitarist Pat Martino (1944–2021) provides a parallel account of how the procedural memory of music (knowing how to perform) can be dissociated from explicit biographical memory (an awareness of one’s life story and identity). A virtuoso and award-winning jazz musician (and a prodigy in his teen years), Martino suffered a traumatic brain aneurysm. Emergency life-saving surgery removed a lemon-sized blood clot from his brain, leaving him with profound amnesia – including of the knowledge that he was a musician. He had to be convinced of his skills through record sleeves and recordings, and – by transcribing his own recordings – learned to play (and progress) again. The neat story that he learned to play again from scratch is beguiling, but not as interesting as the more likely scenario: that much of his musical skill remained dormant. It had to be carefully reignited, but was so deeply embedded that it survived such a trauma.
Brain trauma can even induce a musical passion. In 1994, the American orthopaedic surgeon Dr. Tony Cicoria (1952– ), after being struck by lightning in the face, experienced a profound near-death experience and transformation. After recovering from some memory issues, and with no apparent permanent damage to the brain, he developed a deep interest in Romantic-era piano music. Cicoria had no prior practical experience of the instrument or genre, but started listening, practising and composing obsessively. His interest has not waned, and he continues to practise, arrange and perform. His 2008 solo piano recital in New York included the music of Chopin, Brahms and his very own Lightning Sonata.
In the great composer Maurice Ravel’s (1875–1937) neurodegenerative aphasia, it was the ability to remember words and letters that disappeared first. The ability to connect musical symbols to notes was lost later, compromising his ability to commit his inner music to the page, and then to perform at all. But in a sad beauty, Ravel continued to his last days to attest: ‘I still have so much music in my head. I have said nothing. I have so much more to say.’
Indeed, music seems to be the stickiest of memories, and its use can be a profoundly powerful non-pharmacological tool in dementia care, reducing agitation, anxiety and depression while improving mood, cognition and quality of life. There are many cases of non-verbal subjects, and of those with profound learning difficulties, who have astonishing musical capabilities – such as the blind savant pianist Derek Paravicini (1978– ), who, with absolute pitch, can play back with great accuracy pieces of great duration and complexity. He clearly does so not as mindless parroting, but with a level of absorption: he can interpret and improvise around melodies on first hearing in a range of styles, including his favourite, stride piano. What is also fascinating, and illuminating, is that his ability greatly diminishes if he does not ‘get’ – hear the inner logic of – the music; even his absolute pitch fails with Schoenberg.
Even a typically functioning brain has a staggering capacity for music. I have run informal tests on fellow guitarists who can recognise the player and piece from a single open low E note. Beatles superfans like the musician Jack White can recognise any track from very short audio extracts. I ran a similar test on the vocalist Glyn Protheroe, who can achieve the same feat with 500–1000ms audio samples from anywhere in the Beatles catalogue. On the rare occasions when it doesn’t come to him immediately, you can see him ‘walking through’ the song in his mind, picking up the aural breadcrumbs until the appropriate lyric ignites a full recollection.
One could – and should – ask why our brains are so musically dedicated. Musical activity has both ubiquity and antiquity: we have made music for at the very least 60,000 years, and we are yet to find a culture on Earth without it. But what purpose does it serve? If we could show that music provides an evolutionary advantage – through, say, sexual selection, group bonding or cognitive sharpening – that might take us some way towards explaining its prevalence. However, there is as yet insufficient consensus and supporting evidence for such a claim. An alternative explanation is put forward (in)famously by Steven Pinker: "Compared with language, vision, social reasoning, and physical know-how, music could vanish from our species and the rest of our lifestyle would be virtually unchanged. Music appears to be a pure pleasure technology, a cocktail of recreational drugs that we ingest through the ear to stimulate a mass of pleasure circuits at once". And, more succinctly, "music is auditory cheesecake, an exquisite confection crafted to tickle [...] of our mental faculties". The argument is that we have the capacity for music as a by-product of (evolutionarily) more important resources, not as a central purpose, and that music is crafted to trigger such purposeful pleasure circuits most deeply. This statement of course left many musicians clutching their guitar strings in horror, but we should not be offended or dismiss it out of hand. There are many deeply valuable and meaningful things that are ‘evolutionarily useless’: empathy (beyond our immediate family, tribe and even species), art, coffee, most instances of humour and play, and – yes – cheesecake. Such trivialities may not be central to existence, but they can be to a meaningful life. Still, it is hard to fully accept that music is just a by-product. How can something that means so much to us mean nothing at all?
In order to understand the power – and even purpose – of music, it can be useful to understand how it forms in the mind from our listening faculties, and how it relates to another type of human sonic communication: speech. Even ‘everyday’ listening is itself an extraordinary feat. From the mass of subtle, fast and contradictory movement of air molecules we are able to build a vivid inner world, separating and integrating these waveforms into likely sound sources and their positions in space, and even predicting their movements. This involves extraordinary physiological and brain mechanics, and even a cursory introduction to the faculties and processes involved can fill a book.[4] In short, when sound enters the brain through our ears, it is distributed to various centres, each feasting on the information in terms of its spectral content (the ‘colour’ of the sound), loudness, pitch, position and timing. Such complex parallel processes are required for us to differentiate the sound of the waves from animals and from footsteps, a familiar voice from a stranger’s, and even compound information like distance, age, gender, movement and emotional state. We should pause to consider what our minds are capable of here: it is akin to looking at ripples on the bank of a lake and unconsciously identifying what animals or vessels exist within it, along with their positions and movements. Except that sound waves approach us three-dimensionally, travel over 30 times faster than water waves, vary in length from 2 to 700cm, and their fluctuations are detectable from a billionth of a centimetre up to several orders of magnitude greater in height. This extraordinary superpower, which allows us to integrate and segregate such varied, subtle and intertwined waveforms back into their sources, provides us with a toolkit for the sophisticated two-way communication of speech, whereby we can invent the sound sources themselves.
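The scale of those wavelengths can be checked with a line of arithmetic: a sound wave’s length is its speed divided by its frequency. A minimal sketch (assuming a speed of sound of 343 m/s in air; the 2–700cm range quoted above then corresponds to roughly 17 kHz down to about 50 Hz):

```python
# Wavelength of a sound wave: lambda = speed / frequency
SPEED_OF_SOUND = 343.0  # metres per second, in air at ~20 degrees C


def wavelength_cm(freq_hz: float) -> float:
    """Return the wavelength, in centimetres, of a tone at freq_hz."""
    return SPEED_OF_SOUND / freq_hz * 100.0


# Near the extremes of the quoted range:
print(round(wavelength_cm(17_000), 1))  # ~2.0 cm
print(round(wavelength_cm(50), 1))      # ~686.0 cm
```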
Human speech – an apparently ubiquitous modern human skill across cultures – involves the use of mutually agreed, distinguishable sound fragments (‘phonemes’) which, like Lego blocks, can be used to build syllables and words linked to a shared library of referents (for example, the six phonemes m uː n l aɪ t create the syllables ‘moon’ and ‘light’, which are symbolically connected to a definable phenomenon in the world). The creation of a spoken language involves the selection of phonemes which a) can be distinguished when heard, and b) are producible by the human mouth. This intersection of the practically perceptible and the reproducible forms the canvas of the IPA (the International Phonetic Alphabet), a library (and symbolic dictionary) of the phonemes used across languages. What we find across languages are some extremely common (nearly universal) sounds, presumably because they are the easiest to produce and discern. These include such phonemes as /m/, /a/ and /p/ (useful for mama and papa). The phonemic possibilities are vast (though not infinite), and languages around the world can be seen as carving subsets in this field of possibilities. Some languages share many phonemes (like English and Greek) but may differ on a few: Modern Greek has the ‘voiced velar fricative’ /ɣ/ – a soft guttural g – and the voiceless fricative /x/, a little like the ch of the Scottish “loch”. However, the Greek phonemic repertoire lacks the “h” of “hat”, “sh” of “ship”, “j” of “judge” and “ch” of “church” (“Hello Judge Judy” would be “χello Tzutze Tzudy”). English has a moderate number of phonemes at 44, while some languages have far fewer, such as Rotokas of Papua New Guinea, which has just 11 (and 12 letters). Perhaps the largest repertoire belongs to the Botswanan language !Xóõ, which uses (depending on definitions) around 161 sounds, thanks to its variety of diverse click and pop phonemes. An illustration of how languages use overlapping subsets of phonemic space is shown in Figure 2.
Figure 2: An illustration of how spoken languages employ various subsets of a shared phonemic space. [See downloadable document]
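This carving of overlapping subsets from a shared phonemic field can be sketched with simple set operations. The tiny inventories below are illustrative fragments built from the IPA symbols mentioned above, not complete phoneme lists:

```python
# Illustrative fragments of two phoneme inventories, as IPA symbols.
# (Real inventories are much larger: English has ~44 phonemes.)
english = {"m", "a", "p", "s", "t", "h", "ʃ", "tʃ", "dʒ"}
greek = {"m", "a", "p", "s", "t", "ɣ", "x"}

shared = english & greek        # near-universal sounds like /m/, /a/, /p/
english_only = english - greek  # e.g. the 'h' of 'hat', 'ʃ' of 'ship'
greek_only = greek - english    # e.g. the voiced velar fricative /ɣ/

print(sorted(shared))           # the overlap of the two subsets
print(sorted(english_only))
print(sorted(greek_only))
```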
Useful to our discussion of music is how we construct and perceive these phonemes. It turns out that listening is not entirely objective: it depends on what we expect to hear. For example, many of us have mistakenly heard our name called in whispers, wind or city noise. Looped, overlapped voices can produce ‘phantom words’ as our mind desperately tries to resolve the ambiguity. A distorted voice can sound meaningless out of context, but if the listener is told or reads the message first, they recognise it clearly. This clarity lasts for as long as they remember what the message is supposed to be, then disappears again. The subjective nature of hearing is particularly apparent at the boundaries between phonemes. The McGurk effect, for example, is a perceptual phenomenon whereby conflicting visual and auditory speech cues combine to create a new, illusory sound: watching a video of mouth movements for "fa" while hearing the audio "ba" often causes the brain to perceive "fa" or "tha", highlighting that vision significantly influences speech perception.[5] The Yanny/Laurel meme of 2018, which divided the nation in a 53/47% split, was the audio equivalent of the black/blue vs white/gold dress. A short fragment of audio seemed to sit at the boundary of these two words, and how individuals heard it seemed to depend on their sensitivity to higher frequencies and their own dialect. The critical boundary between these phonemic categories seemed to differ between listeners; the astonishing fact, however, is that the alternate hearing can sometimes suddenly ‘pop out’ for a listener, particularly when the timbre is slowly manipulated, and once switched, the listener can find themselves ‘stuck’ in that listening mode. In short, listening is not a wholly passive, objective sense, but contains ‘top-down’ components whereby the listener’s expectations, visual cues and inner templates can dictate what they hear.
Music shares, and perhaps relies on, such subjective sonic categorisations and illusions.
The core material of spoken language is derived from phonemes: categories of timbral objects. Literal meaning is thus derived from (often subtle) categories of timbre. The pitch and speed of these phonemes – how high or how quickly we say them – do not generally change the literal meaning of words. There are tonal languages where pitch can change literal meaning: in the Nigerian language Mambila, for example, the word ‘ba’ can be intoned with four different pitch gestures, giving four different literal meanings. Nonetheless, in the vast majority of contexts and spoken languages, the role of pitch (and loudness) is to carry implied meaning, emphasis and emotionality. Music (again generally, not universally) tends to house its core material in pitch and timing (as represented in musical symbols), and timbre becomes the expressive layer. We can play the recognisably ‘same’ melody with different expressive timbres (even on different instruments).
What is fascinating is the boundary between speech and music. There is evidence of shared gestures in how we construct and intone both spoken and melodic phrases. Both seem to follow up-down ‘arch’ structures, unless they expect an answering continuation: “Did you remember the passports?” with an upward pitch motion expects a quick (and hopefully positive) downward response completing the arch. “Did you remember the passports?” ending lower is more ominous, and expects another long arch of explanation or apology. Melodies have been shown to follow such question/arch shapes, and even the speech rhythms of their composer’s language. Conversely, there are musical forms where language components are embedded in the musical objects themselves, like the talking-drum language of Ghana, where drum strokes have specific literal meanings, or the sophisticated syllabic library of the Hindustani tabla.
What is key, however, is that repetition in speech tends to aid memory and to bring the listener’s focus to the pitch and rhythmic parameters over literal meaning. Perhaps music emerged as a way of embedding language in collective memories? The eminent researcher Diana Deutsch demonstrated that when individuals hear a spoken phrase once, they reproduce it accurately; but when it is repeated ten times, they emphasise its pitch and rhythmic ‘shape’, transforming it from speech to melody.
What could possibly be gained by losing the literal meaning of phonemes and transforming them into pitch and rhythmic shapes? Perhaps the ‘expressive dressing’ of spoken language – a question, an intention, an emotion – is preserved. But what else do musical shapes provide beyond this? What meaning do they hold? The composer Felix Mendelssohn, in an 1842 letter to his friend the poet and composer Marc-André Souchay, provides an insight:
“People usually complain that music is so ambiguous, and what they are supposed to think when they hear it is so unclear, while words are understood by everyone. But for me it is exactly the opposite...what the music I love expresses to me are thoughts not too indefinite for words, but rather too definite.”
― Felix Mendelssohn (1842)
So what are the “definite thoughts” found in music?
Pitch and rhythmic categories provide unique opportunities to the communicator. Whereas we prefer to hear one word (and phoneme) at a time for comprehensibility, pitch and rhythm are readily layered, combined and ‘scaled’: we can discern multiple pitches and rhythms at once, and a pitch and rhythm can repeat exactly or be dissociated (the same pitch can repeat at a different duration, or the same rhythm can carry different pitches). Imitation, variation and novelty can happen to single musical objects or to clusters of notes. We can readily infer ‘second-order’ patterns: if a pitch ascends, we can expect another ascent. Unlike timbre, pitch and rhythm can be scaled – a step higher or lower in pitch, or a halving or doubling of a note duration, suggests a general principle of organisation – and complex musical spaces can emerge: a note can ascend yet ‘return to itself’ an octave higher, acting as both a ladder and a circle (a helix) in virtual space. Our brains actively seek out repetitions, and form predictive patterns based on preceding musical material and on memories of music we have heard before. Even in the simplest musical forms, rich networks of perceived connections emerge. The researcher Adam Ockelford (long-time teacher and mentor of the aforementioned musical savant Derek Paravicini) posits his zygonic theory,[6] which holds that imitation, occurring across multiple domains of perceived sound, is the ultimate organising and expressive force in music. It is through this elaboration of perceived imitations and variations that music is constructed, perceived, remembered, composed, improvised and felt. Consider how we can recognise the melody of hundreds of tunes not just from a known recording, but from a pattern of pitches irrespective of starting note, tempo or the instrument on which it is played. This suggests that an important component of music cognition is like a vector – a geometric shape – in pitch and rhythmic space.
This is much like how we readily recognise faces, and how contemporary face-recognition technologies work. The ‘delicious cheesecake’ here is the multiple thrills of successful predictions and surprises that come from well-cooked music. Figure 3 – an extract from Ockelford’s Comparing Notes – demonstrates the complex predictive patterns that emerge from the ‘simple’ opening phrase of The Carpenters’ Goodbye to Love, and hints at the rich patterning that is the stuff of music.
Figure 3: An illustration of zygonic connections – perceived imitations and variations – in just the opening phrase of The Carpenters’ Goodbye to Love. Repetitions and expectations are annotated; adapted from Ockelford (2005: 276-278). [See downloadable document]
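The idea that a melody is carried by its shape – its pattern of intervals and duration ratios – rather than by absolute pitches or tempo can be sketched in a few lines. This is only a minimal illustration (the helper name `contour` is mine, and real melodic cognition is of course far richer), but it shows how a transposed, faster performance still yields the identical ‘vector’:

```python
def contour(midi_pitches, durations):
    """Reduce a melody to a transposition- and tempo-invariant shape:
    successive pitch intervals (in semitones) and duration ratios."""
    intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
    ratios = [b / a for a, b in zip(durations, durations[1:])]
    return intervals, ratios


# The opening of "Happy Birthday" starting on C (C C D C F E),
# with durations in beats...
tune_in_c = ([60, 60, 62, 60, 65, 64], [0.75, 0.25, 1, 1, 1, 2])
# ...and the same tune transposed up a fourth, played twice as fast.
tune_in_f = ([65, 65, 67, 65, 70, 69], [0.375, 0.125, 0.5, 0.5, 0.5, 1])

# Different notes, different speed, same recognisable 'shape':
assert contour(*tune_in_c) == contour(*tune_in_f)
```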
We have explained how a zygonic framework can harness the brain’s pattern-seeking hunger, offering a particular kind of cognitive and perceptual pleasure: the thrill that arises when expectations are fulfilled, subtly varied, or artfully thwarted. This interplay between prediction and surprise operates at a fundamental level of musical experience, drawing listeners into a dynamic relationship with unfolding sound. Yet this is only one layer of musical expression. A more complete picture of emotional engagement is offered in Juslin’s BRECVEMA framework, which proposes multiple, parallel mechanisms through which music can evoke affective responses.
Importantly, these mechanisms can be understood along three interrelated dimensions: their relative speed of response, their developmental emergence across the lifespan, and their position on a continuum from universal to highly personal experience. At one end are rapid, automatic, and biologically ingrained responses; at the other, slower, reflective, and individually shaped meanings.
- Brainstem Reflexes (B) – These are the fastest and most automatic responses, triggered by basic acoustic features such as sudden loudness, dissonance, or sharp attacks. A fortissimo orchestral hit or a sudden cymbal crash can startle the listener almost instantaneously. Developmentally, these responses are present from infancy and are shared across cultures, reflecting deeply rooted auditory processing mechanisms.
- Rhythmic Entrainment (R) – This involves the synchronization of internal bodily rhythms with external musical pulse. Foot tapping, head nodding, or dancing to a groove are clear examples. While still relatively fast and widespread, entrainment develops slightly later than reflexes and depends on exposure to rhythmic regularity. It remains broadly universal, though shaped by cultural rhythmic norms.
- Evaluative Conditioning (E) – Here, music becomes associated with positive or negative experiences through repeated pairing. A song played at a joyful event may later evoke happiness when heard again. This mechanism develops through life experience and is more personal, though still grounded in general associative learning processes.
- Emotional Contagion (C) – Listeners internally mimic or simulate the emotional expression perceived in music. A slow, minor melody may induce sadness through this empathic process. This response is relatively quick but requires some degree of emotional and cognitive development, and while widely shared, it can vary in intensity between individuals.
- Visual Imagery (V) – Music can evoke mental images, scenes, or narratives. For example, Debussy might conjure water or light, while film music may recall specific visual sequences. This mechanism is slower and more cognitively mediated, emerging with imaginative capacity and shaped by cultural and personal experience.
- Episodic Memory (E) – Music can trigger vivid recollections of specific events or periods in one’s life. Hearing a song from adolescence might instantly transport a listener back to a particular place or emotional state. This is highly personal and depends on autobiographical memory, developing over time.
- Musical Expectancy (M) – Closely related to the zygonic processes discussed earlier, this involves the anticipation of musical structure based on learned conventions. A cadence resolving as expected, or deviating from expectation, produces satisfaction or surprise. This mechanism develops with musical exposure and training, and while partly universal, is strongly influenced by stylistic familiarity.
- Aesthetic Judgement (A) – The most reflective and slowest mechanism, involving conscious evaluation of music’s quality, meaning, or beauty. A listener might admire the craftsmanship of a fugue or the innovation of a jazz solo. This response is highly individual, shaped by education, culture, and personal taste. It is about what we – as individuals – value.
Taken together, these mechanisms operate not in isolation but in parallel, layering immediate, bodily reactions with learned associations, emotional simulations, and reflective judgements. It is this rich, simultaneous interplay of responses, ranging from the universal to the deeply personal, that helps us unpick the complexity and depth of the musical listening experience.
Figure 4: Extract from Atasoy et al. (2016) Figure 1. Laplace eigenfunctions and connectome harmonics on a metal plate surface, mammalian skin, atomic structure, and brain interconnectivity in the 9 lowest frequency connectome harmonics. [See downloadable document]
Research into music cognition has often taken the form of “digging into” the brain – locating centres, circuits, and responses that explain how music is processed. Yet a striking shift in contemporary research suggests a reversal of this perspective: rather than simply finding music in the brain, we may begin to see the brain itself as fundamentally musical in its organisation. This idea is beautifully encapsulated in Figure 4. Here, the work of Selen Atasoy and colleagues situates the brain within a much broader natural context. The figure draws together examples from across scales: the elegant standing-wave patterns formed by sand on vibrating metal plates, the emergence of stripe and spot patterns across animal skins, the structured wave functions of electrons in atomic orbitals, and patterns arising in electromagnetic systems. In each case, the structure itself determines a set of natural modes – preferred patterns through which energy or activity can organise. These are not imposed from outside; they are intrinsic possibilities of the system.
The crucial insight is that the brain appears to operate in exactly this way. Rather than being a collection of isolated regions exchanging signals, large-scale brain activity organises into spatial wave patterns shaped by the brain’s wiring – the connectome. These connectome harmonics function much like the resonant modes of a musical instrument: a repertoire of patterns through which neural activity can coherently unfold.
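The idea can be sketched in miniature. A network’s ‘natural modes’ are the eigenvectors of its graph Laplacian; on a toy ring network (a stand-in here for a real connectome, which Atasoy and colleagues derive from brain imaging data) these modes are literal standing waves, and the check fits in pure Python:

```python
import math

# A toy 'connectome': n nodes joined in a ring. Its graph Laplacian
# acts on an activity pattern x as (Lx)_i = 2*x[i] - x[i-1] - x[i+1].
n = 8


def laplacian(x):
    return [2 * x[i] - x[i - 1] - x[(i + 1) % n] for i in range(n)]


def mode(k):
    """The k-th candidate harmonic: a standing wave around the ring."""
    return [math.cos(2 * math.pi * k * i / n) for i in range(n)]


# Each mode is an eigenvector of the Laplacian, with eigenvalue
# 2 - 2cos(2*pi*k/n) acting like a squared spatial frequency:
for k in range(n):
    lam = 2 - 2 * math.cos(2 * math.pi * k / n)
    lx = laplacian(mode(k))
    assert all(abs(a - lam * b) < 1e-9 for a, b in zip(lx, mode(k)))
```

The lowest mode (k = 0) is constant, with frequency zero; higher modes oscillate ever faster around the ring, exactly as the lowest connectome harmonics in Figure 4 range from broad, slow spatial patterns to finer ones.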
In this light, the relationship between music and the brain becomes more than metaphor. Musical structures – harmony, resonance, interference, expectation – may reflect the same fundamental principles that govern brain activity itself. When we listen to music, we are not simply processing sound; we may be engaging with patterns that echo the brain’s own intrinsic modes of organisation. Perhaps, then, music is not something external that we decode (or a delicious but unimportant treat), but something deeply continuous with the way the brain itself works. Music may be nature’s way of understanding itself.
© Professor Milton Mermikides 2026
Footnotes
[1] This quote is commonly attributed to the astronomer Owen Gingerich, but such sentiments about the brain’s complexity are (justifiably) commonplace.
[2] The ability for our brains to appreciate themselves is one thing, but to fully understand themselves is fraught with a paradox, for – as is attributed to physicist Emerson Pugh – “If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”
[3] The only activities that come close to music’s breadth of brain activity it seems are dance (with music), meditation and the use of psychedelics.
[4] For one such accessible book see Nina Kraus’s Of sound mind: How our brain constructs a meaningful sonic world.
[5] The effect can be so surprising that one might say it's floody bantastic.
[6] Like the biological counterpart of the zygote, the name is derived from the Greek zygon (ζυγόν) meaning a yoke (as in joining two oxen together).
Acknowledgments
Professor Morten Kringelbach for the brain illustration, and the many tours of the human brain given from his impressive – to my struggling – one. Eminent linguistic scholar Bruce Connell for sharing research on the wonderful Mambila language. Selen Atasoy for the harmonic brain data. Daniel Solomons for lending his brain activity to the musical cause.
References and Further Reading
Atasoy, S., Donnelly, I., & Pearson, J. (2016). Human brain networks function in connectome-specific harmonic waves. Nature Communications, 7, Article 10340.
Ball, P. (2010). The music instinct: How music works and why we can’t do without it. Oxford University Press.
Connell, B. (2012). Tones in tunes: A preliminary look at speech and song melody in Mambila. Speech and Language Technology, 14, 137–146.
Connell, B. (2000). The perception of lexical tone in Mambila. Language and Speech, 43(2), 163–182.
Deutsch, D. (2013). Musical illusions and phantom words: How music and speech unlock mysteries of the brain. Oxford University Press.
Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. MIT Press.
Faria, R. & Antunes da Silva, M. (2016). Lowering dissonance by relating spectra on equal tempered scales.
Foster Vander Elst, O., Foster, N. H. D., Vuust, P., & Kringelbach, M. L. (2021). The neuroscience of dance: A systematic review of the present state of research and suggestions for future work.
Juslin, P. N. (2013). From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Physics of Life Reviews, 10(3), 235–266.
Kraus, N. (2021). Of sound mind: How our brain constructs a meaningful sonic world. MIT Press.
Levitin, D. J. (2006). This is your brain on music: The science of a human obsession. Dutton.
Margulis, E. H. (2014). On repeat: How music plays the mind. Oxford University Press.
Mendelssohn, F. (1864). Letters of Felix Mendelssohn Bartholdy from 1833 to 1847 (P. Mendelssohn, Ed.). Longman, Green, Longman, Roberts, & Green.
Ockelford, A. (2005). Comparing notes: How we make sense of music. Oxford University Press.
Ockelford, A. (2008). In the key of genius: The extraordinary life of Derek Paravicini. Arrow.
Ockelford, A. (2009). Zygonic theory: Introduction, scope, and prospects. Zeitschrift der Gesellschaft für Musiktheorie, 6(1), 91–172.
Patel, A. D. (2008). Music, language, and the brain. Oxford University Press.
Pinker, S. (1997). How the mind works. W. W. Norton & Company.
Sacks, O. (2007). Musicophilia: Tales of music and the brain. Knopf.
Schlaug, G., Jäncke, L., Huang, Y., Staiger, J. F., & Steinmetz, H. (1995). Increased corpus callosum size in musicians. Neuropsychologia.
Sherman, L. S., & Plies, D. (2023). Every brain needs music: The neuroscience of making and listening to music. Columbia University Press.
Zaatar, M. T., Alhakim, K., Enayeh, M., & Tamer, R. (2023). The transformative power of music: Insights into neuroplasticity, health, and disease. Brain, Behavior, & Immunity - Health, 35, 100716.
This event was on Wed, 15 Apr 2026