When we talk face to face, we exchange many more signals than just words. We communicate through our body posture, facial expressions, and head and eye movements, but also through the rhythms produced when someone speaks. A good example is the rate at which we produce syllables in continuous speech – about three to seven times per second. In a conversation, a listener tunes in to this rhythm and uses it to predict the timing of the syllables the speaker will produce next. This makes it easier to follow what is being said.
Many other things are also going on. From brain-imaging studies we know, for instance, that even when no one is talking, the part of our brain responsible for hearing produces rhythmic activity at a similar rate to the syllables in speech. When we listen to someone talking, these brain rhythms align to the syllable structure. As a result, the brain rhythms match the incoming acoustic speech signal, tracking it in both frequency and time.
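To make "alignment" a little more concrete: one standard way to quantify it is coherence, a measure of how consistently two signals keep the same frequency and timing relationship. The Python sketch below uses simulated signals – a toy speech envelope and a toy brain signal, with assumed sampling rate, lag and noise values – purely to illustrate the idea, not to reproduce any study's actual analysis.

```python
import numpy as np
from scipy.signal import coherence

# Simulated example only: quantifying "alignment" between a brain
# rhythm and the syllable rhythm of speech using coherence.
fs = 200                          # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)      # one minute of signal

# Toy speech envelope with syllables arriving at roughly 4 Hz,
# within the three-to-seven-per-second range mentioned above.
envelope = 1 + np.sin(2 * np.pi * 4 * t)

# Toy "auditory cortex" signal: it tracks the envelope with a short
# lag and is buried in background noise (all values are arbitrary).
rng = np.random.default_rng(0)
brain = np.roll(envelope, int(0.05 * fs)) + 2 * rng.standard_normal(len(t))

# High coherence near 4 Hz means the brain rhythm matches the speech
# rhythm in frequency and keeps a consistent timing relationship.
f, Cxy = coherence(envelope, brain, fs=fs, nperseg=4 * fs)
print(f"Peak coherence {Cxy.max():.2f} at {f[np.argmax(Cxy)]:.2f} Hz")
```

Turning up the noise in this simulation shrinks the coherence peak – one way to picture how a noisy room degrades the brain's tracking of speech.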
We know that a speaker's lip movements help the listener, too. These movements often precede the speech – opening your mouth, for example – and provide important cues about what the person will say. Even on their own, lip movements contain enough information to allow trained observers to understand speech without hearing any words – which is why some people can lip-read. What has been unclear until now is how these movements are processed in the listener's brain.
Lip-synching
This was the subject of our latest study. We already knew that it is not just a speaker's vocal cords that produce a syllable rhythm, but also their lip movements. We wanted to see whether listeners' brain waves align to speakers' lip movements during continuous speech in a comparable way to how they align to the acoustic speech itself – and whether this alignment matters for understanding speech.
Our study has revealed for the first time that this is indeed the case. We recorded the brain activity of 44 healthy volunteers while they watched movies of someone telling a story. We found that, just like the auditory part of the brain, the visual part also produces rhythms, and that these align to the syllable rhythm produced by the speaker's lips during continuous speech. And when we made the listening conditions more difficult by adding distracting speech, so that the storyteller's lip movements became more important for understanding what they were saying, the alignment between the two rhythms became more precise.
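Alignment between lip movements and brain rhythms can be measured in a similar spirit. One common metric in this field is the phase-locking value, which tests whether two signals keep a consistent phase relationship within a frequency band. The sketch below is again a simulation with assumed parameters – not the analysis pipeline used in our study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(3, 7)):
    """Phase-locking value of two signals within a frequency band.

    A common way to measure rhythmic alignment; not necessarily the
    metric used in the study described here.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Hypothetical inputs: lip aperture over time, and a visual-cortex
# signal that follows it at a fixed phase delay, plus noise.
fs = 200
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
lips = np.sin(2 * np.pi * 5 * t)                     # 5 Hz lip rhythm
visual = np.sin(2 * np.pi * 5 * t - 0.4) + rng.standard_normal(len(t))

print(f"Phase-locking value: {plv(lips, visual, fs):.2f}")
```

A value near 1 means the two rhythms are tightly locked; a value near 0 means their timing relationship is essentially random.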
In addition, we found that the parts of the listener’s brain that control lip movements also produce brain waves that are aligned to the lip movements of the speaker. And the better these waves are aligned to the speaker's lip movements, the better the listener understands the speech. This supports the idea that brain areas used for producing speech are also important for understanding it, and could have implications for studying lip-reading among people with hearing difficulties. Having shown this for a speaker and a listener, the next step will be to look at whether the same alignment of brain rhythms occurs during a two-way conversation.
Why are these insights interesting? If speech comprehension normally works by establishing a communication channel through the alignment of brain rhythms to speech rhythms – much as tuning a radio to a particular frequency lets you listen to a particular station – our results suggest that other, complementary channels can take over when necessary. Not only can we tune ourselves to the rhythms produced by someone's vocal cords, we can also tune in to the equivalent rhythms of their lip movements. Instead of doing this with the auditory part of the brain, we do it through the parts associated with seeing and movement.
Nor do you need to be a trained lip-reader to benefit – this is why, even in a noisy environment such as a pub or a party, most people can still communicate with each other.
Joachim Gross has received funding in the past from the Wellcome Trust, BBSRC, ESRC, MRC and Volkswagen Stiftung.
Hyojin Park does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Joachim Gross, Professor in Psychology, University of Glasgow
Hyojin Park, Research Associate, University of Glasgow
This article was originally published on The Conversation. Read the original article.


