The Neuroscience of Speech, Language and Music
By Dr. Andrew Huberman & Dr. Erich Jarvis
Recently, I came across Dr. Andrew Huberman’s podcast. He is a neuroscientist at Stanford University whose work concentrates on brain development, brain function and neural plasticity.
While going through his playlist, I found almost all the topics really interesting, to the point where I got into a state of choice paralysis. What caught my eye were his episodes covering music and language.
I decided to watch his interview with Dr. Erich Jarvis. Yes, a guy talking about speech is named Jarvis! According to his short bio on Rockefeller University’s website, where he works as a professor, Dr. Jarvis studies the neural and genetic aspects of spoken language and vocal learning using song-learning birds and other animals.
Language
He starts off by explaining that speech and language, while being different entities, still share the same pathway. He believes there aren’t two separate areas in the brain, one responsible for understanding language and another for producing sound/speech. He argues that since there is a lack of convincing evidence for such a split, the more logical theory is that a single system controls the muscles used for speech. This system, or speech pathway, contains the necessary algorithms to learn words and speak a language. He also suggests that a separate auditory system houses a similar algorithm that helps us understand others when they speak.
They shed light on how genetics plays a role in the way we learn a language and in our overall quality of speech. One exciting study they shared showed how a bird with vocal learning ability was able to learn and imitate the song of a completely different species, although it learned the sounds of its own species better than those of another. They also discussed whether the same might be true for people raised in different environments.
For example, a person born in Yemen to an Arab family but raised in the UK by an English family will learn to speak English fluently, but had they learned Arabic, they would have found it comparatively easier to pick up. Strange, right? This is attributed to a specific genetic predisposition in humans, caused by cultural and traditional practices affecting the path of genetic evolution.
The beauty of genes controlling our vocals
You can skip this part if it’s getting too science-y!
Jarvis then talks about three types of genes which control our motor functions, our speech muscles and our overall ability to learn and speak:
Genes which prevent new connections from forming; these are turned off when speech circuits are active, allowing us to learn new words as we speak, read or listen.
Genes which produce what he called “heat shock proteins”, released when our brain gets too ‘hot’. The need arises because neurons have to control the fast-moving muscles in the larynx.
Genes involved in neuroplasticity, which is basically our brain rewiring itself based on new things we learn.
Furthermore, they discussed how emotions play a huge role in speech. In music, the singer’s voice conveys the happiness or pain which the writer wishes to express through written words. Speaking on his research into the brain circuits that get activated when one talks or sings, Jarvis said that both the right and left sides of our brain are used for speech and singing, although the right side is more dominant in understanding musical sounds.
Dr. Huberman inquired about the connection between vocal learning and dancing. Interestingly, only birds that have the ability to imitate sounds can also learn how to dance (think of a dancing cockatoo).
Moreover, vocal learning pathways in humans and parrots are embedded within circuits that also help us learn how to move. I feel this connection between vocal learning and movement explains our innate urge to tap our feet or hands to the beat of a song.
Communication through writing
When we read a text message or make notes in a meeting or classroom, the visual signal travels from our eyes to our brain, which interprets the signal and allows us to ‘read’. Using EMG electrodes to monitor muscle movement, Jarvis said they observed activity in the laryngeal muscles (used to produce sound/voice) even when the person wasn’t speaking at all! He terms this silent reading, or speaking in our heads. He theorizes that this speech signal is then sent to our auditory pathway, so we end up listening to our own thoughts. Finally, the auditory signal is translated into movement, and we end up writing it down, turning it back into words. This may explain how we write down what we read, in a lecture for example, or how we reply to a text message.
The talk concludes with them discussing modern-day communication through typing and texting, the neuroscience behind stuttering, and more.
This was the first podcast I’ve attentively listened to from start to finish. Honestly, spending two hours listening to people talk is tedious! However, the various avenues of knowledge we discovered and the intricate studies shared make me admire the positive side of the internet age. We are getting the results of decades’ worth of research and experiments served to us on a silver platter. There are countless podcasts, YouTubers and documentaries to aid our learning. You may follow a person you admire or a specific topic you want to know more about; as long as you want to learn something new, you are better placed to do so today than at any other time in history.
Share this with your friends and family if you learned something new today! Happy holidays, and look out for my next post.
References:
https://www.brainfacts.org/archives/2013/erich-jarvis-connecting-birdsong-to-human-speech
http://www.unm.edu/~quadl/Principles/PrC-Innate_Predispositions
https://byjus.com/question-answer/what-are-semantic-barriers-of-communication/
https://www.bbc.com/future/article/20140410-can-we-drive-our-own-evolution
https://www.sciencedaily.com/releases/2014/09/140916112242.htm
Check out the full podcast episode here.