Communication: How Babies Learn
If you want to study how a baby's experiences influence his or her speech perception, the city of Vancouver in British Columbia, Canada, is a good place to start. An increasing proportion of the population there speaks English as a second language. Many children are raised in homes where two or more languages are spoken, and an estimated 67 different languages are used with some level of frequency in the local schools. For Dr. Janet Werker, a leader in developmental cross-language and speech research, Vancouver offers a rich natural research environment in which she can study the speech perception and language learning abilities of young children.
A prolific author and fellow of both the Royal Society of Canada and the American Association for the Advancement of Science, Werker is consumed with questions about how and when young children develop the skills to differentiate environmental sounds from meaningful speech.
"Children [with typical hearing] rapidly acquire language, and children with exposure rapidly acquire the language or languages in their environment," she said at the 2007 Talk for a Lifetime Summer Conference. "One of the mysteries that keeps me jumping out of bed every morning is how does this happen so quickly?"
A theory that Werker shares with many of her fellow researchers is that babies' language learning is supported by the development of perceptual systems that integrate listening and the visual cues offered by the speakers around them. At one time, researchers thought that babies developed speech perception in a sequential process, Werker explained. Now, the research community is relatively confident that babies are listening at multiple levels, even very early in life.
What does listening at multiple levels entail, exactly? In brief, babies are learning the properties of a language - rhythm, acceptable sound sequences, language categorization and visual cues such as lip movements and hand gestures - simultaneously. In turn, those skills set the stage for language acquisition with a focus on word learning.
Despite the seeming complexity of this developmental process, most babies get a natural head start to listening in utero so that, by the time they are born, they already show a predisposition for their native language. Babies even have the ability to use the acoustic and phonological cues in words to discriminate between content words (such as nouns, verbs and adjectives) and function words, said Werker.
"Ultimately, by age 6 months, [babies] prefer content words. By 8 months, they hear a function word and know to expect a content word to come," Werker explained. "It allows them to learn when to listen for meaning, when to listen for structure. It's a kind of boot-strapping mechanism for learning the native language."
Speech Versus Nonspeech
Let's return to the concept of infant preferences at birth. In research published in Developmental Science, Werker and doctoral candidate Athena Vouloumanos outlined an experiment that gave babies the choice of listening to speech versus nonspeech. Babies were laid in a bassinet and given a pacifier connected to a system that enabled the researchers to measure the baby's sucking pattern. After establishing a baseline, the researchers "rewarded" babies with a speech or nonspeech sound for each high-amplitude sucking motion. The study showed that babies delivered more high-amplitude sucks in order to hear speech.
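The contingency at the heart of this procedure can be sketched in a few lines of code. Everything below is illustrative: the amplitude values are invented, and the threshold rule (a high quantile of the baseline sucks) is an assumption standing in for the study's actual criterion.

```python
# A minimal sketch (hypothetical data and an assumed threshold rule) of the
# high-amplitude sucking procedure: sucks above a baseline-derived amplitude
# cutoff trigger the sound "reward," and preference is read off the count of
# rewarded sucks in each condition.

def ha_threshold(baseline_amplitudes, quantile=0.8):
    """Pick an amplitude cutoff from the baseline phase (assumed rule)."""
    ordered = sorted(baseline_amplitudes)
    return ordered[int(quantile * (len(ordered) - 1))]

def rewarded_sucks(amplitudes, threshold):
    """Count sucks strong enough to trigger the sound reward."""
    return sum(1 for a in amplitudes if a > threshold)

# Hypothetical recordings (arbitrary pressure units).
baseline = [3, 4, 5, 4, 6, 5, 4, 3, 5, 7]
speech_phase = [8, 9, 6, 8, 7, 9, 8, 7]      # more vigorous sucking
nonspeech_phase = [4, 5, 6, 4, 5, 4, 3, 5]

cutoff = ha_threshold(baseline)
speech_count = rewarded_sucks(speech_phase, cutoff)
nonspeech_count = rewarded_sucks(nonspeech_phase, cutoff)
# A higher count in the speech condition indicates a preference for speech.
```

The key design point the sketch captures is that the baby controls the stimulus: working harder (sucking above threshold) is the only way to hear more of the preferred sound.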
Their research is corroborated by studies of adults that show how certain parts of the brain are selectively activated when hearing speech. In addition, brain imaging research conducted on newborn babies by Marcela Peña, Jacques Mehler and colleagues at the International School for Advanced Studies in Trieste, Italy, shows greater activation in the form of oxygenated blood flow to the language areas of the brain's left hemisphere in response to speech.
Werker and colleagues have gone on to explore whether babies listen selectively to some languages over others. One recent study of word learning in bilingual infants tested whether babies exposed in utero to English, to Tagalog (a major language of the Philippines) or to both languages showed a language preference based on rhythmic differences. As in other studies, the babies exposed only to English showed a preference for English.
When it comes to sound sequences, adults often have difficulty hearing the differences among a language's nuances. Consider English speakers stumbling through the consonants and inflections of Russian or Werker's example of Japanese adults distinguishing between "ra" and "la." The ability to distinguish between consonants is simply a matter of tuning in to the sounds that make up a listener's experience.
"Babies begin life with the ability to discriminate virtually all of the consonant distinctions they might need," she said. "Over time, they become more specific about what they need to learn as a function of their listening experience."
To investigate this ability further, Werker used a procedure in which babies were conditioned to turn their heads when hearing a change between two slightly different "da" sounds that exist between English and Hindi. The babies were rewarded for the appropriate response with a visual of toy animals performing off to one side. Researchers then were able to count the number of correct head turns, indicating the children's ability to discriminate between the different speech sounds.
The study's findings showed that, early on, both English and Hindi learning babies could distinguish between the two variations in sound. By 10 to 12 months of age, however, babies learning English began to have difficulty making the distinction. The reason? The differentiation is not critical to the English language, whereas it is in Hindi. This example of phonetic discrimination illustrates how babies attune to the sound system of their native language.
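Scoring in the conditioned head-turn procedure comes down to comparing turns on sound-change trials with turns on no-change control trials. The sketch below uses invented trial outcomes, and its readout (hits clearly exceeding false alarms) is a simplification of the actual scoring rules.

```python
# A minimal sketch (hypothetical trial data) of scoring the conditioned
# head-turn procedure: a head turn on a sound-change trial counts as a hit,
# while a turn on a no-change control trial counts as a false alarm.

def score_head_turns(trials):
    """trials: list of (changed, turned) booleans.
    Returns (hit rate on change trials, false-alarm rate on controls)."""
    hits = sum(1 for changed, turned in trials if changed and turned)
    change_trials = sum(1 for changed, _ in trials if changed)
    fas = sum(1 for changed, turned in trials if not changed and turned)
    control_trials = sum(1 for changed, _ in trials if not changed)
    return hits / change_trials, fas / control_trials

# Hypothetical young English-learning baby: still turns reliably for the
# Hindi "da" contrast, as the younger infants in the study did.
young = [(True, True), (True, True), (False, False), (True, True),
         (False, False), (True, False), (False, True), (True, True)]
hit_rate, fa_rate = score_head_turns(young)
# Discrimination is inferred when hits clearly exceed false alarms.
```

By 10 to 12 months, an English-learning baby's hit rate on this contrast would fall toward the false-alarm rate, which is exactly the attunement pattern the study describes.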
Werker's research is also heavily influenced by colleagues in her lab and around the world. Previous work conducted by Sebastian-Galles and Bosch (2005) with babies learning Spanish and Catalan showed that, at 4 months of age, both monolingual and bilingual babies could differentiate between "e" and "ε." By age 8 months, Catalan babies continued to hear the distinction but Spanish babies did not. Once again, the reason is simple: Catalan has nine vowels, whereas Spanish has only five.
Notably, bilingual babies learning both Spanish and Catalan also did not demonstrate an ability to hear the difference at this age. When tested again at age 12 months, however, the babies once again had started to discriminate between the vowels. Werker stressed that such a "hiccup" is part of a pattern common to bilingual babies in a number of areas of language acquisition.
Based on this and other research, Werker currently is exploring how bilingual babies learning French and English distinguish between the "bah" and "pah" sounds, used in both languages but with different meanings in each.
In another recent study, Werker partnered with Stephanie Baker of Dartmouth College to test babies' ability to discriminate visual signs. Their work was based on Baker's previous research conducted with Roberta Michnick-Golinkoff of the University of Delaware and other colleagues, which showed that both 4-month-old infants with typical hearing and adults with hearing loss who use sign language draw sharp distinctions between different visual cues. Yet after the first year, infants with typical hearing no longer demonstrate that same distinction or boundary.
Werker and Baker then tested babies using an eye tracker and replicated the finding that babies with typical hearing did, in fact, look back and forth between the face and hand, just like babies who were learning sign language. Babies who used sign as their native language continued to make the distinction, a finding that shows that the ability to discriminate between cues and to sharpen those skills is critical to both verbal and visually signed languages.
But do babies pick up other visual cues, such as how lip movements correspond to the sounds of a language? Research published in Science had already shown that adults can tell by sight alone, though not very reliably, whether a speaker is using their native language.
So, Werker's team presented babies with a rotation of three bilingual English/French speakers reciting a series of sentences from a children's book, but in silence. One group of babies "saw" the sentences in English and the other in French. Once the babies were habituated to the speakers and grew bored, the team had each speaker either continue to present new sentences in the same language or present new sentences in the alternate language. The team found that at 4 and 6 months of age, monolingual babies responded to the change in language. By age 8 months, the monolingual babies no longer responded to the change; they simply treated language as language. The bilingual babies continued to discern the change, and Werker's team is now looking at how bilingual babies maintain that distinction.
A major area of Werker's research is determining how the properties of speech, specifically the categorization of speech sounds, influence word learning. She describes the process as a "really long journey" because, although babies recognize highly familiar words by 6 to 8 months of age, they still have not developed a system for categorizing them, a skill that generally develops around 14 months of age.
To test whether infants could use speech sound categories to direct word learning, Werker's team used a "switch" procedure, creating two different consonant-vowel sounds, neither of which formed a recognizable word. They then paired each word with an object and taught the baby to associate the word and object. Next, they performed a test using two types of trials, one using the familiar word-object pairing and the other switching the word-object pairs to determine if the baby could discriminate the change.
The researchers found that the babies could perform a similar task with ease at 14 months if the words were recognizable. When discriminating between the made-up words "bi" and "di" for which they had no contextual support, the babies were not able to perform the task. In essence, babies appear to have difficulty with minimal pair word learning until age 17 months.
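The readout of the switch procedure is a looking-time comparison, which can be sketched as below. The looking times are hypothetical numbers chosen only to mirror the pattern the study reports, not data from the study.

```python
# A minimal sketch (hypothetical looking times, in seconds) of how the
# "switch" procedure is read out: after habituation to word-object pairings,
# longer looking on switch trials than on same trials is taken as evidence
# that the baby noticed the change.

def switch_effect(same_trials, switch_trials):
    """Mean looking-time difference: switch minus same."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(switch_trials) - mean(same_trials)

# Hypothetical 14-month-old on the "bi"/"di" minimal pair:
# looking times barely differ, so no discrimination is inferred.
bi_di = switch_effect(same_trials=[6.1, 5.8, 6.0],
                      switch_trials=[6.2, 5.9, 6.1])

# The same baby on two dissimilar-sounding words: a clear switch effect.
dissimilar = switch_effect(same_trials=[5.0, 5.4, 5.2],
                           switch_trials=[8.1, 7.8, 8.4])
```

A near-zero effect for the minimal pair alongside a large effect for dissimilar words is the signature of the 14-month-old pattern described above.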
Predicting Language Development
Building on her research into minimal pair word learning, Werker's team partnered with Barbara Bernhardt, a speech-language pathologist at the University of British Columbia, to link the babies' performance on the "bi/di" minimal pair task to vocabulary acquisition and phonological development at 2 ½ and 4 years of age. The test involved a follow-up with the babies from the previous switch task study to determine how much longer they looked at the switch trial task.
The latest trial found a high correlation between performance on the minimal pair word learning task and the length of time a baby looked at the switch trial task as opposed to the familiar word-object pairing. The team now has funding to conduct a three-year longitudinal study to examine whether the switch task might be used as an indicator of later language delay.
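The statistic behind such a finding is an ordinary correlation between two per-baby measures. The sketch below uses entirely invented numbers to show the shape of the analysis; it is not the study's data or its exact method.

```python
# A minimal sketch (hypothetical numbers) of the kind of correlation the
# follow-up examined: each baby's switch-trial looking advantage (seconds)
# paired with a later vocabulary score.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

looking_advantage = [0.2, 0.5, 1.1, 1.6, 2.3, 2.8]   # hypothetical
vocab_at_30_months = [180, 220, 300, 340, 430, 470]  # hypothetical

r = pearson_r(looking_advantage, vocab_at_30_months)
# A large positive r is what would motivate testing the switch task
# as an early indicator of later language development.
```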
The local health department has loaned the researchers its audiometric van so that the team can expand the study to include a diverse group of children from families representing a range of ethnic, linguistic and socioeconomic backgrounds. The team also is working with the Vancouver audiology clinic, located at the confluence of a number of heavily traveled bus lines, to expand participation in the study. In addition, speech pathologists working throughout the area helped to identify infants who had siblings with a language delay.
Through this new avenue, Werker hopes to build on what she and her colleagues already have learned about infants' discrimination of perceptual categories and minimal pairs to identify points at which practitioners can intervene to accelerate the development of infants struggling through their first steps of language learning.
Source: Volta Voices, January/February 2008