New psychology research sheds light on the mystery of music enjoyment

by Eric W. Dolan
January 23, 2024
in Cognitive Science

(Photo credit: OpenAI's DALL·E)


In a new study, scientists have shed light on how our brains process and enjoy music by distinguishing between the sensory and cognitive elements of musical experience. They found that sensory factors, like the actual sound of the music, and cognitive factors, such as our learned familiarity with musical styles, independently contribute to our expectations of music and to our enjoyment of it. The research has been published in Philosophical Transactions of the Royal Society B.

Music has been a part of human culture since prehistoric times, and most people find music deeply rewarding. Studies have shown that our pleasure in music often stems from the way it meets, violates, or delays our expectations. These expectations are believed to arise from two sources: sensory expectations, based on the actual sounds we hear, and cognitive expectations, which come from our learned understanding of music patterns. Until now, however, the distinct roles and interactions of these sensory and cognitive elements in shaping our musical experiences were not fully clear.

“Music is fascinating to me because it can evoke strong emotions with just patterns of sounds. Intuitively, we seem to like music that is somewhat predictable, but not overly so. I want to find out how people come to form these musical expectations and their role in shaping how much we like a song,” explained Vincent K. M. Cheung of Sony Computer Science Laboratories, who conducted the research as a PhD student at the Max Planck Institute for Human Cognitive and Brain Sciences.

The study draws upon the predictive coding of music (PCM) model, a concept suggesting that musical expectations and subsequent surprises – whether fulfilling or violating these expectations – are a source of pleasure. The PCM model proposes that the brain creates musical expectations, and any deviation from these expectations results in a surprise element. This surprise is a key factor in the pleasure we derive from music.
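To make the idea concrete, surprise of this kind is often quantified as the negative log-probability of a chord given what came before it. The following minimal sketch, using invented probabilities and chord symbols rather than anything from the study, shows how an expected chord yields low surprise and a rare one yields high surprise:

```python
import math

# Hypothetical probabilities a listener might assign to the next chord after
# hearing C - F - G in a pop song. These numbers are invented for illustration,
# not taken from the study.
next_chord_probs = {"C": 0.55, "Am": 0.25, "F": 0.12, "Bb": 0.08}

def surprisal(chord: str, probs: dict[str, float]) -> float:
    """Surprise in bits: low for expected chords, high for unexpected ones."""
    return -math.log2(probs[chord])

for chord in ("C", "Bb"):
    print(f"{chord}: {surprisal(chord, next_chord_probs):.2f} bits")
# The expected resolution to C yields about 0.86 bits; the unusual Bb about 3.64 bits.
```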

At the heart of this study are chord progressions from commercially successful pop songs. These progressions, which form the backbone of many musical pieces, were extracted from the McGill Billboard dataset. This dataset is a rich repository containing over 80,000 chords from 745 pop songs that hit the U.S. Billboard ‘Hot 100’ chart between 1958 and 1991. The chosen chord progressions, each consisting of 30 to 38 chords, were transposed to the key of C major and played using a combination of marimba, jazz guitar, and acoustic guitar timbres.
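As a rough sketch of what transposing a progression to C major involves (the progression, source key, and plain triad labels below are invented; the actual dataset uses a far richer chord vocabulary), each chord root is simply shifted by the interval separating the original key from C:

```python
# Minimal sketch of transposing chord roots to C major. Chord qualities
# (major/minor/seventh) are ignored here for brevity.
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
INDEX = {name: i for i, name in enumerate(PITCH_CLASSES)}

def transpose_to_c(progression: list[str], original_key: str) -> list[str]:
    """Shift every chord root by the interval from the original key down/up to C."""
    shift = (INDEX["C"] - INDEX[original_key]) % 12
    return [PITCH_CLASSES[(INDEX[root] + shift) % 12] for root in progression]

# A progression in E major becomes the same pattern of scale degrees in C major.
print(transpose_to_c(["E", "A", "B", "C#"], original_key="E"))
# -> ['C', 'F', 'G', 'A']
```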

The research consisted of two separate experiments.

Experiment 1 focused on two distinct groups: musicians and non-musicians. The study involved 25 healthy adults, including 13 musicians and 12 non-musicians. Participants were presented with the selected chord progressions and asked to rate their surprise at each chord using a mechanical slider. This continuous rating system provided nuanced data on how each chord compared to the listeners’ expectations. The researchers aimed to capture the instant reaction to each chord, gauging how predictable or surprising the chord was based on the listeners’ musical experience.

The second experiment expanded the participant pool to 39 healthy adults, with no specific requirements for musical training. Participants underwent a similar procedure as in Experiment 1, but this time, they rated how pleasant each chord was, again using a mechanical slider. This setup aimed to explore the emotional response to music, particularly how expectancy and surprise translate into feelings of pleasure or displeasure.

In both experiments, the research team controlled for various factors, such as the duration of each chord and the presence of background rhythms, to ensure that the focus remained solely on the harmony and structure of the music.

The researchers used four computational models to simulate and predict the participants’ responses. These models were designed to represent different aspects of musical expectancy: two focused on sensory elements, one on a mix of sensory and cognitive elements, and one solely on cognitive aspects.

The models included the Spectral Distance (SD) model, the Periodicity Pitch (PP) model, the Tonal Expectation (TE) model, and the Information Dynamics of Music (IDyOM) model. These models calculated expectancy in terms of spectral similarity, neural encoding in the auditory system, a mix of sensory and cognitive processing, and internal cognitive representations of musical styles, respectively.
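The distinction between the model families can be pictured roughly like this: a sensory-style predictor responds to how acoustically similar a chord is to what immediately preceded it, while a cognitive-style predictor responds to how statistically likely the chord is given patterns learned from a corpus. The toy functions below illustrate that contrast only; they are not reimplementations of SD, PP, TE, or IDyOM, and the bigram counts are invented.

```python
import math
from collections import Counter

# Toy "sensory" predictor: surprise grows as the pitch-class content of a chord
# overlaps less with the previous chord (a crude stand-in for spectral similarity).
def sensory_surprise(prev_chord: frozenset, chord: frozenset) -> float:
    overlap = len(prev_chord & chord) / len(prev_chord | chord)
    return 1.0 - overlap  # 0 = identical pitch content, 1 = nothing shared

# Toy "cognitive" predictor: surprise is the negative log-probability of a chord
# given the previous one, estimated from bigram counts over a hypothetical corpus.
def cognitive_surprise(prev_label: str, label: str, bigrams: Counter) -> float:
    context_total = sum(c for (p, _), c in bigrams.items() if p == prev_label)
    prob = bigrams[(prev_label, label)] / context_total
    return -math.log2(prob)

# Invented bigram counts standing in for statistics learned from pop songs.
corpus_bigrams = Counter({("G", "C"): 80, ("G", "Am"): 15, ("G", "Bb"): 5})
print(cognitive_surprise("G", "C", corpus_bigrams))   # common resolution -> low surprise
print(cognitive_surprise("G", "Bb", corpus_bigrams))  # rare continuation -> high surprise
print(sensory_surprise(frozenset({"G", "B", "D"}), frozenset({"C", "E", "G"})))
```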

In the first experiment, both the IDyOM and PP models predicted the participants’ surprise ratings, suggesting that our brains use both the actual sounds and our learned knowledge of music to form expectations. Notably, the IDyOM model, which captures the cognitive aspect, accounted for more of the variance in the surprise ratings than the PP model, and this advantage was most pronounced among musicians, indicating a stronger role for cognitive factors in shaping musical expectations.

In simpler terms, both sensory and cognitive aspects were important in predicting how surprised the participants were by the chords. However, the cognitive aspect (like familiarity with music styles) seemed to play a bigger role, especially for musicians.

In the second experiment, focusing on pleasantness ratings, the results were equally revealing. Here, the researchers found that both sensory and cognitive expectations, as modeled by PP and IDyOM, independently predicted how pleasant the participants found the chords. This suggests that the sensory aspects of music (the raw sound of the chords themselves) and our cognitive understanding of music (like recognizing a familiar style or pattern) both play distinct roles in how much we enjoy music.

In other words, when it came to how pleasant the participants found the music, sensory and cognitive factors were independently important. This means that what we actually hear and what we understand or know about music both contribute to how much we enjoy it.
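One way to picture what “independently” means here is a regression in which both surprise estimates enter as separate predictors of the pleasantness ratings; if each keeps a reliable coefficient while the other is included, their contributions are additive. The sketch below uses randomly generated data and ordinary least squares, a much simpler setup than the analyses reported in the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # number of rated chords (simulated)

# Simulated per-chord predictors: one "sensory" and one "cognitive" surprise value.
sensory = rng.normal(size=n)
cognitive = rng.normal(size=n)

# Simulated pleasantness ratings that depend additively on both predictors plus noise.
pleasantness = 0.4 * sensory + 0.7 * cognitive + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([sensory, cognitive]))
model = sm.OLS(pleasantness, X).fit()
print(model.summary())  # each predictor retains its own reliable coefficient
```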

“Our finding that cognitive and sensory surprise contributed additively was quite unexpected,” Cheung told PsyPost. “Basically, it suggests that there are two separate systems in the brain—one higher level, and another lower level—that monitor music structure and generate subsequent predictions. It is reminiscent of the two systems of thought popularized by Daniel Kahneman (author of ‘Thinking, Fast and Slow’), where System 1 is fast and intuitive, and System 2 is slow and logical.”

In short, this study tells us that our enjoyment and surprise from music come not just from the music itself, but also from how it interacts with our knowledge and expectations of music.

“People derive pleasure from music by the confirmation and violations of predictions,” Cheung said. “These predictions are mostly learnt through extended exposure to music from a particular genre, but are also influenced by the way sounds are processed in the brain. So it really is shaped by both nature and nurture.”

However, the study is not without limitations. One of the main constraints is that the computational models used may not fully capture the complexity of how humans process music. Moreover, the study primarily used chord progressions from Western pop music, raising questions about whether these findings would apply to other musical styles or cultural contexts.

“Our focus was on music that participants had never heard before,” Cheung added. “Although the chord progressions were taken from real pop songs, none of the participants could identify the original song. An important question that remains to be addressed is why people still enjoy songs that they are familiar with, despite them being completely predictable.”

The study, “Cognitive and sensory expectations independently shape musical expectancy and pleasure”, was authored by Vincent K. M. Cheung, Peter M. C. Harrison, Stefan Koelsch, Marcus T. Pearce, Angela D. Friederici, and Lars Meyer.