From Language to Literacy
When a baby coos back at her mother from her crib or a toddler imitates his father’s laugh or favorite saying, their actions clearly show that babies learn language by imitating the sounds they hear around them. Years of research support the basic principle that language acquisition depends on early exposure to the sound patterns of one’s native language.
Or, as Maria Mody, Ph.D., CCC-SLP summed it up at the recent Talk for a Lifetime Summer Conference, “The development of language abilities depends on one’s early sensitivity to the sound patterns of language in order to make distinctions that are meaningful.”
Mody explained that dyslexia is characterized by a lack of sensitivity to the phonological structure of language, i.e., a deficit in phonological awareness. She then outlined three essential components of language comprehension: word recognition, decoding and spelling.
“Think of any experience traveling in a foreign country with a little book and your cheat sheet of words you are going to remember,” she said. “After a while, your brain says, ‘Give up! I can’t hold on to all this new information!’” Like the child learning language, says Mody, you have to learn to code the information in memory, in the form of small, manageable units (e.g., phonemes, syllables) so that you can then draw on that information to produce meaningful words and sentences and communicate with other people.
Mody uses that analogy to illustrate the complexity of language organization and the brain’s innate wiring, which allows people first to categorize information, then combine it into words and sentences and later to attach those words to symbols. She describes this transition as ‘magical’ and one that never comes into question — until a child is born with some form of sensory or cognitive deficit.
“Our ability for language is eons old in terms of our cognitive circuitry and has had a chance to set into our brain’s wiring,” Mody explained. “Evolutionarily, reading is about 5,000 years old, not very long, so you can think of reading as having co-opted some of the brain’s language circuits.”
Although the field of reading disabilities is often divided on issues, Mody says, the consensus is that a phonological deficit is at the core of all reading disabilities. She adds that although reading usually is not taught until a child enters school, children as young as 3 years old can handle language testing that includes phonological awareness.
The Neuroimaging “Toy Chest”
Unlike previous studies of language and reading, which focused predominantly on behavior, Mody’s work and that of many others in the field now uses highly specialized brain imaging techniques to show how different parts of the brain work when listening, speaking and reading. To best understand Mody’s work, it is helpful to know how the various tools she uses on the job — her “cool toys” — help her decipher which parts of the brain influence spoken and written language.
- Electroencephalography (EEG) measures higher brain function by detecting electrical activity in various areas of the brain. Testing is done by arranging a number of electrodes at specific sites on an individual’s head using a conductive gel. Although EEG was once commonly used in the diagnosis and management of disorders and diseases that disrupt brain function, tools such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) have largely replaced it because their superior spatial resolution makes it easier to localize the source of aberrant brain activity.
- Magnetoencephalography (MEG) is a non-invasive technique used to measure magnetic fields generated by electrical currents in the brain’s neurons. Like EEG, MEG allows real-time recording of brain activity (temporal resolution), with the data presented in a topographical layout that represents averages over repeated stimuli, motor responses or continuous raw data. Benefits of using MEG include the millisecond temporal resolution combined with a spatial resolution of a few millimeters to a couple centimeters and the ability to conduct tests with subjects without electrodes pasted to their scalps. Combined, these characteristics make MEG well-suited for delineating where and when the brain processes speech and for investigating the workings of the typical and atypical brain.
- MRI scans use radio waves and large magnets to map the distribution of water in the body; a functional variant, fMRI, uses the same scanner to track the blood-oxygenation changes that accompany neural activity. Because water molecules contain hydrogen nuclei, which are magnetic, they respond to radio wave pulses sent by the scanner, with their location encoded in the frequency and phase of the radio waves they emit. Researchers then use image reconstruction algorithms run on computers to recreate a high-spatial-resolution two- or three-dimensional image of the body. Although MRI scanners are costly to purchase and maintain, they are a valuable tool for researchers studying the brain’s response to therapeutic treatment because fMRI images link human functions to specific locations in the brain.
- PET scans give researchers a tool for showing the chemical functioning of organs and tissues; however, the technique is moderately invasive because subjects are administered a low-dosage, short-half-life radioactive tracer that is absorbed into the bloodstream and accumulates in the target tissue. Invaluable for diagnosing disease early, before anatomical changes are apparent, PET has seen limited research use in the past 10 years due to the advent of non-invasive MRI technology and the need to simplify and standardize development of the drugs used in PET imaging.
Each of these tools, when used with carefully designed behavioral experiments, offers invaluable insight into the relationships between the components of language and their influences on success with spoken and written language in everyday life.
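The real-time recordings that EEG and MEG produce are typically averaged over many repetitions of a stimulus, as noted above, so that the brain’s evoked response stands out from background noise. A minimal sketch of that averaging idea, using simulated numbers rather than real sensor data (the trial counts, noise level and “true” response here are all invented for illustration):

```python
import random

random.seed(0)

N_TRIALS = 200   # repeated presentations of the same stimulus
N_SAMPLES = 50   # time points recorded per trial

def evoked_response(t):
    """A toy 'true' brain response: a brief peak around sample 20."""
    return 1.0 if 15 <= t < 25 else 0.0

# Simulate noisy single-trial recordings: true response + sensor noise.
trials = [
    [evoked_response(t) + random.gauss(0, 2.0) for t in range(N_SAMPLES)]
    for _ in range(N_TRIALS)
]

# Averaging across trials cancels the random noise and leaves the
# stimulus-locked evoked response visible.
average = [
    sum(trial[t] for trial in trials) / N_TRIALS
    for t in range(N_SAMPLES)
]

peak = max(range(N_SAMPLES), key=lambda t: average[t])
print(f"peak of averaged response at sample {peak}")
```

A single simulated trial is dominated by noise, but the average across all 200 trials recovers the underlying peak, which is why repeated stimuli are central to these recording techniques.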
Language as a Picture Puzzle
Mody has used the neuroimaging tools at her disposal in a series of studies designed to analyze components of reading in children with dyslexia. By comparing research subjects with typical language and reading skills to those whose brains struggle with traditional learning strategies, she hopes to help clinicians design interventions tailored to the profiles of individual children with disabilities so they can develop fluency with language and reading.
Looking at the various brain maps Mody presented, it is readily apparent that language does not reside in just one part of the brain. Areas that process spoken and written language work together as a dynamic whole. For example, the front part of the brain, responsible for speech articulation, appears to be engaged when posterior reading circuits fail to activate due to difficulties with phonologic analysis. Scans of children with reading difficulties show activity in Broca’s area, a region in the left frontal lobe, because these children appear to engage in covert articulation to facilitate decoding words that they have difficulty reading.
Although oral and written language share the same circuitry for the most part, neuroimaging shows a differential pattern of activation within this circuitry depending on whether a person is listening, speaking or reading. For example, the brain uses middle temporal and posterior superior temporal areas when listening to words. When speaking, the inferior frontal and motor areas appear to be activated. While reading, neuroimaging shows initial activation of the primary visual cortex, as would be expected because the brain is working with print stimuli. Initial activation is followed by inferior temporal/visual ventral stream activity in response to recognition of letter and word forms, in addition to activation of inferior frontal and superior temporal areas needed to analyze the words’ sounds.
“The beginning reader’s brain starts by linking orthographic patterns with auditory word forms (vocabulary) and meaning (semantics), and gradually learns to automate these linkages with increasing sophistication. That is, the fluent reader gets faster and better at word recognition as fewer resources are spent decoding,” Mody said. “This is why auditory working memory is such a critical factor in language and reading success, and something I’ve observed as problematic for some children with hearing loss.”
By combining behavioral research with the wealth of imaging technology at her disposal, Mody has positioned herself on the cutting edge of neurodevelopmental science. Her recent studies funded by the National Institutes of Health (NIH) and other sources provide a unique glimpse into the brain’s responses to language stimuli.
Phonology and Semantics
In one study, Mody and colleagues looked at the interaction between phonology and semantics in the bottom-up and top-down processing involved in learning to map sound to meaning. During the test, college-educated adults and typically developing children were given a pair of printed words such as “plane” and “plain” and asked if they sounded the same. Test subjects also were presented with a control pair, “plane” and “dog,” and a trick pair of synonyms, “plane” and “jet.”
The study showed that both the adults and the children rejected the synonym pairs more slowly than the control foils (called a synonym interference effect, SIE) because they detected a relationship between the words. Yet, the adults’ brains showed activation in the frontal areas, suggesting that when phonology and semantics are tightly coupled, the brain is more efficient at rejecting interference from a false stimulus, evident in the smaller magnitude of the adult SIE compared to that of the children. The study also suggests that the relationship between phonology and semantics changes as typically developing children grow up to become adults who are sophisticated language users and fluent readers.
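The synonym interference effect is, at the behavioral level, simply a difference in reaction times between trial types. A hypothetical sketch of how such an effect might be computed from per-trial data (the reaction times and field names below are invented for illustration, not taken from the study):

```python
# Hypothetical reaction times (ms) for "do these words sound the same?" trials.
# Synonym foils ("plane"/"jet") should be rejected more slowly than unrelated
# control foils ("plane"/"dog"); that gap is the synonym interference effect.
trials = [
    {"pair_type": "synonym", "rt_ms": 812},
    {"pair_type": "synonym", "rt_ms": 790},
    {"pair_type": "control", "rt_ms": 701},
    {"pair_type": "control", "rt_ms": 689},
]

def mean_rt(pair_type):
    """Average reaction time across all trials of the given type."""
    rts = [t["rt_ms"] for t in trials if t["pair_type"] == pair_type]
    return sum(rts) / len(rts)

# A positive SIE means the synonym relationship slowed the "no" response.
sie = mean_rt("synonym") - mean_rt("control")
print(f"synonym interference effect: {sie:.1f} ms")  # -> 106.0 ms
```

In the study’s terms, a smaller SIE (as seen in the adults) indicates more efficient rejection of the interfering synonym pairs.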
The Effects of Auditory Stimuli
Mody also has explored how children with and without reading difficulties respond to auditory stimuli to understand speech perception deficits implicated in dyslexia and the speech-language-reading connection. In a study with doctoral student Daniel Wehner, children were given the word “pat” and asked to press a button every time they heard a deviant, such as “bat,” “cat” or “rat.”
At a behavioral level, the study showed no group differences. However, the MEG studies showed that the brains of fluent readers activate bilaterally, with much stronger left-side activation. Atypical readers, for whom phonological contrasts were difficult, showed a reduced and delayed reaction in the brain’s left hemisphere with a tendency to resort to the right hemisphere in an effort to compensate for their difficulties with phonological discrimination.
Language in Context
Mody’s most recent study explored how readers use context to help comprehend what they hear. She and her colleagues hypothesized that children who have difficulty reading, because of difficulties with speech perception, may rely more on context than good readers do to differentiate between words that sound similar (i.e., phonetically similar but phonologically contrastive).
Both groups of children were asked to listen to sentences ending in words that were either semantically congruent with the sentence context, as in “the fireman slid down the pole,” or incongruent: either phonetically similar (PS), as in “the fireman slid down the ‘bowl,’” or phonetically dissimilar (PD), as in “the fireman slid down the ‘sole.’” The hypothesis was that the phonetically similar sentences would pose the biggest problem because they sounded closest to the correct sentences.
According to Mody, the results showed that both groups had greater difficulty with the similar than with the dissimilar condition, and that the impaired readers, like normal readers, were capable of doing the task. The critical finding, however, was the difference in MEG activity between the good and poor readers. Brain images of good readers showed left-lateralized but delayed activity in the PS compared to the PD condition, indicating that, although good readers were able to identify PS anomalous sentences, they were slower at making a decision involving these stimuli. In contrast, poor readers’ brain activity was bilateral, with early and overall sustained right-hemisphere activation, suggestive of less efficient or not fully developed left-hemisphere language networks.
In essence, the results show that demanding conditions require more of both impaired and unimpaired readers, but that understanding reading-related problems hinges on how differently their brains work to process language information.
The brain images available through Mody’s research clearly illustrate how essential neuroimaging tools can be in validating the role phonological processing plays in reading development and the interaction between the various components of language.
“We have good neurobiological markers and behavioral markers of reading problems at early ages,” Mody said in closing. “Next we need to investigate more closely the role of language in potential reading disability subtypes so that we can design the most effective interventions.”
Source: Volta Voices, 2007