
Ears are Just the Beginning

How Sound Turns into Language and Language into Literacy

by Lydia Denworth

When I learned 10 years ago that my youngest son Alex had a significant hearing loss, all I could think about, frankly, was ears. I thought I needed to know everything there was to know about them. Putting Alex to bed at night, I stared at his small earlobes, caressed them, and whispered into them.

As we grappled with questions of language and learning, however, I quickly came to understand that I needed to think more broadly. To really know how to help a child who doesn’t hear, one needs to understand the chain reaction that is triggered by sound, which goes far beyond the inner ear and deep into the networks of the brain.

AG Bell recognizes this important fact, too, and that’s why the theme for the 2015 Listening and Spoken Language (LSL) Symposium is “The Brain Science of Hearing: Connecting New Pathways to Spoken Language.” In addition to being the mother of a child with hearing loss, I am also a science writer. At the LSL Symposium, my keynote presentation will explore sound, language and literacy, which is the topic of my recently published book, I Can Hear You Whisper.

By the time I was done with all my research, I understood far more clearly that the circuits sound creates in a child’s brain lay the groundwork for the networks that will bring spoken language, and that those networks, in turn, are the foundation for learning to read. The more clearly professionals and parents understand what goes on behind the scenes in the brain of a child with hearing loss, the better able they will be to help that child maximize his or her potential.

Brain Basics

The mechanics of the ear are probably familiar. The folds of the outer ear collect sound waves and funnel them to the eardrum, which vibrates and sets the tiny bones of the middle ear in motion. Those bones trigger another vibration in a membrane-covered opening called the oval window, the boundary between the middle and inner ear. Waves of fluid wash through the cochlea in response to the vibrations, stimulating the ribbon-like basilar membrane that runs through the inner ear and the hair cells found there. The movement of the fluid causes the tiny stereocilia on each hair cell to bend and send an electrical impulse into the brain.

The central auditory system begins where the inner ear passes a signal to the auditory nerve. In the brainstem, the auditory nerve ends at two collections of neurons called the cochlear nuclei, and things start to get complicated from there. That is as it should be. The cochlear nuclei sort the incoming auditory signal along two different tracks, and the various types of cells in the cochlear nuclei each specialize in just one kind of signal. It is thought that features such as the ability to tell where a sound comes from, or the fact that we jump at loud noises, can be traced to specific cells. From the cochlear nuclei, sound signals follow an ascending path to the auditory cortex in the temporal lobe, just above the ear where the sound started.

Sound Signals

In a child who can hear, sound sets off a cascade of responses in the brain. Waves of electrical pulses ripple along complex though predictable routes on a schedule tracked in milliseconds.

In a baby, as University of Colorado auditory neuroscientist Anu Sharma has shown, the response to sound is naturally slow. By adulthood a signal that once took 300 milliseconds to register in the brain takes about 60. That increase in speed, which represents neurons communicating with each other more and more efficiently, requires practice, otherwise known as listening. From the minute a child is born—even earlier actually, in the mother’s womb—every experience that child has is being etched into his or her brain. Sound, or its absence, is part of that experience. Neurons make connections with each other, or don’t, based on experience.

Cochlear implants allowed Sharma and her colleagues to investigate what happened when sound was re-introduced to a child with profound hearing loss. They found that there is an important window, a “sensitive period,” between the ages of 3 ½ and 7 (or the equivalent number of years of deafness) during which the brain can be changed by experience and is still receptive to sound. The networks of hearing—the pathways that sound travels through the brain—could still be laid down. But after that, those brain areas would reorganize and be used for something else. “That’s important real estate,” Sharma told me. It’s not going to sit idle.

Furthermore, we now know quite clearly that earlier is better. According to several studies, such as one headed by John Niparko, M.D., of the University of Southern California, which has been following nearly 200 children for more than five years, the earlier a child’s brain is stimulated by sound, the better that child seems to be able to make use of that sound to develop language.

Learning Language

Take-Home Messages

  1. Don’t Wait: To develop spoken language and listening skills, children need as much sound as possible reaching their brains as early as possible. Their brains won’t respond in the same way later on.
  2. Get Rhythm: Infants and young children learn best through repetition, rhythm and rhyme. Use baby talk, music and poetry to help develop auditory circuits, which will in turn develop language circuits.
  3. Know the Windows of Opportunity: Different language systems (phonology, grammar, semantics, etc.) develop over different periods of time in children’s brains. Emphasize sound and rhythm first, and then grammar, as those windows close first.
  4. Establish a Sound Basis for Reading: Before kids can relate letters to sounds, they have to be able to hear the separate syllables and phonemes in speech. Use music, poetry and rhythm to help them do that.
  5. See the Big Picture: Always remember that for children with hearing loss, the brain matters as much as the ears. 
 

If we want to know how language develops in children with hearing loss, we need to understand how it works in children with typical hearing. Researchers who study early language acquisition have long known that sound and the networks it creates through listening lay the neural groundwork for spoken language. What’s new is the growing understanding of just how much language learning is happening before a child’s first birthday.

As kids practice listening and the circuits in their brains get more efficient, they begin to notice patterns in the speech around them. As they notice patterns, they begin to learn language because learning language means learning patterns. Essentially, as Jenny Saffran of the University of Wisconsin discovered, kids are taking statistics, noticing how often one sound, such as “ba,” is paired with another, such as “by.” “Input matters far earlier than we thought,” said Roberta Golinkoff, a developmental psychologist at the University of Delaware and language acquisition expert, in a recent presentation. “During the first six months of life, babies are pulling apart the speech stream, finding the words, calculating statistics, storing frequently occurring words and more.”

By 3 months of age, babies with typical hearing show a preference for speech over other sounds. At 4 ½ months, babies recognize their own names, and by 6 months, Golinkoff has shown, they are using their own names as an auditory wedge to help them make sense of everything else they’re hearing. By 9 months, babies can tell whether the sound sequences they are hearing belong to their native language, and they already understand quite a few words, such as names for body parts and food items. The exaggerated cadences of baby talk, the repetition of so many common words, and the playful rhymes that are part of children’s music and poetry all help the process along.

In any one of the nearly 7,000 languages of the world, babies begin to talk around their first birthday. They are able to do that and then go on to add words at an impressive clip through toddlerhood, because of all the practice they’ve had in that first year of life.

A child who had no access to sound during that first year has to do some heavy lifting to catch up, notes Golinkoff.

Windows of Opportunity

For much of the last century, it was thought that once a person acquired language, it was organized in two general brain regions: Broca’s area governed speech production and Wernicke’s area governed speech perception. But to some researchers, that model seemed far too simplistic. About 10 years ago, New York University neuroscientist David Poeppel and his colleague Greg Hickok of the University of California, Irvine, suggested that perhaps the brain organizes language the same way linguists and grammar teachers do: according to systems such as phonology, syntax and semantics. Their idea caught on and has become the dominant one.

It was also thought that language, like hearing, had a sensitive period that ended around puberty. Now researchers believe that there is not just one window of opportunity for learning language but several. Each of the skills around which language is now thought to be organized, from phonology to grammar to meaning, has its own timetable of acquisition.

Helen Neville, a neuroscientist at the University of Oregon who pioneered this kind of work, refers to the various windows of language learning as “profiles in plasticity.” The window for phonology, our ability to hear the varying phonemes of different languages, closes first. Generally, we will have an accent in any language we learn past the age of 7. Grammatical skills are next, falling off steeply through adolescence. The good news is that semantic skills, the capacity to add vocabulary, remain with us for life.

The newest research on critical periods seeks to understand at a molecular level just how these windows work. “What processes open them, mediate their operation, and close (or reopen) them?” asked developmental psychologist Janet Werker of the University of British Columbia and her colleague Takao Hensch of Harvard University in reviewing the literature. Perhaps, they suggest, each of the cascading steps of perceptual development that lead to language has its own sensitive period, one that can be altered and, importantly for children with hearing loss, perhaps extended by experience. Windows that stay open longer would allow children more time to fully develop language skills.

Reading


As complicated as I’ve just made it sound, spoken language, though not necessarily easy, comes naturally to those with typical hearing. Reading, on the other hand, is not natural. As the cognitive scientist and writer Steven Pinker has put it, “it’s an optional accessory that must be painstakingly bolted on.”

The link between sound and reading was among the most surprising things I learned in my research. In retrospect, I was surprised at my ignorance, but I came to find out that I was not alone in it. Not everyone understands that the term “phonological awareness” refers not to letters but to sound, specifically to the ability to hear the syllables and phonemes in speech and then to work with those smaller chunks. Significantly, in 2000, the National Reading Panel listed phonological awareness as the first of its five principles required for reading success. In fact, you can think of phonological awareness as the foundation of the house a child needs to build as he or she learns to read, beginning with decoding and progressing to fluency. Without that foundation, most children struggle to read.

Problems with phonological awareness are reflected in how sound is processed in the brain. Research from Bruce McCandliss’s lab at Vanderbilt University (he is now at Stanford) showed that how a child reacts to sound on the first day of kindergarten correlates with how many words per minute that child will read in fourth grade. And it turns out that problems with processing sound appear to be at the heart of the majority of reading problems.

Reading is an exercise in plasticity, says Ken Pugh, a cognitive psychologist at Haskins Laboratories in New Haven, Connecticut, who studies the neuroscience of reading and will also be speaking at the 2015 LSL Symposium in July. “What reading demands in children with typical hearing is to get away from vision and into language as quickly as possible, because ultimately you want to use the biologically specialized systems for phonology, syntax and comprehension,” Pugh told me.

Neural activity shifts and concentrates as children learn. The beginning reader looking at a word will use more of the brain in both hemispheres and from front to back—making use of areas that control vision, language and executive processes. Over time, the activity coalesces primarily in the left hemisphere, using less of the frontal lobe (anything automatic requires less thinking) and less of the right hemisphere (because language networks have become more fully engaged). Reading fluency has a signature as clear as John Hancock’s—a concentrated response that runs along the side and front of the brain, right through the language processing areas. 

Putting it all Together

It’s that signature that all of us are striving to create in the brains of children with hearing loss, and that I came to understand I needed to help build in my son so he could keep up with the growing academic demands that will be placed on him.

When I help Alex with his sixth grade homework these days, I can’t reach out and touch his brain the way I can his ears, but sometimes I visualize the sound he’s hearing. I picture it coursing through his ears to the brainstem, up into the auditory cortex and then fanning out through the language networks and reasoning areas of the brain, connecting new pathways and fine-tuning old ones. His ears truly were just the beginning.

References

Denworth, L. (2014). I Can Hear You Whisper: An Intimate Journey through the Science of Sound and Language. New York, NY: Dutton.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402.

Maurer, U., Brem, S., Kranz, F., Bucher, K., Benz, R., Halder, P., Steinhausen, H. C., & Brandeis, D. (2006). Coarse neural tuning for print peaks when children learn to read. Neuroimage, 33(2), 749-758.

Niparko, J. K., Tobey, E. A., Thal, D. J., Eisenberg, L. S., Wang, N.-Y., Quittner, A. L., & Fink, N. E. (2010). Spoken language development in children following cochlear implantation. JAMA: The Journal of the American Medical Association, 303(15), 1498-1506.

Parish-Morris, J., Golinkoff, R. M., & Hirsh-Pasek, K. (2013). From coo to code: Language acquisition in early childhood. In P. Zelazo (Ed.), The Oxford handbook of developmental psychology, Vol. 1 (pp. 867-908). New York, NY: Oxford University Press.

Schlaggar, B. L., & McCandliss, B. D. (2007). Development of neural systems for reading. Annual Review of Neuroscience, 30, 457-503.

Sharma, A., Nash, A. A., & Dorman, M. (2009). Cortical development, plasticity, and re-organization in children with cochlear implants. Journal of Communication Disorders, 42(4), 272-279.

Stevens, C., & Neville, H. (2009). Profiles of development and plasticity in human neurocognition. In M. Gazzaniga (Ed.), The cognitive neurosciences (4th ed.). Cambridge, MA: MIT Press.

Werker, J. F., & Hensch, T. K. (2014). Critical periods in speech perception: New directions. Annual Review of Psychology, 66(1). doi: 10.1146/annurev-psych-010814-015104

Lydia Denworth is the author of the best-selling book I Can Hear You Whisper: An Intimate Journey through the Science of Sound and Language. Denworth is one of the keynote presenters at the upcoming 2015 AG Bell Listening and Spoken Language Symposium to be held July 9-11, 2015 in Baltimore, Maryland. She lives with her family in Brooklyn, New York. Her third son, Alex, was diagnosed with moderate to profound progressive hearing loss.

Source: Volta Voices (2015): Volume 22, Issue 1.