Anyone who notices the poetry in song lyrics knows that rhythm underlies both speech and music. It is no surprise, then, that researchers in the neurobiology of language development have long tried to map the connections. What is surprising is how little progress has been made. Noting the rhythmic similarities between music and speech, and the substantial research interest to date, a recent paper in Neuropsychology (Fiveash A, et al., 2021 Nov;35(8):771-791. doi: 10.1037/neu0000766) attempts a synthesis of the theoretical and empirical work and proposes a way forward.
The authors call that way forward the 'PRISM framework', which describes three mechanisms underlying rhythm processing in both music and speech: precise auditory processing, synchronization/entrainment of neural oscillations to external rhythms, and sensorimotor coupling.
The motivation for this framing is the body of data on timing impairments observed across speech and language disorders, including dyslexia, developmental language disorder (DLD), and stuttering. Given those shared impairments, it seems reasonable that music and speech rhythm processing rely on shared neural circuitry. As the authors put it, 'overlapping mechanisms involved in encoding, perception, prediction, and production of … speech signal, rhythmic training, in particular when exploiting metrical structures and other benefits of musical material, appears to be a promising avenue for future research[.]'
MyoNews from BreatheWorks™ is a report on trends and developments in oromyofunctional disorder and therapy. These updates are not intended to diagnose, treat, cure, or prevent any disease or syndrome.