Story Retelling Patterns Among Children With and Without Hearing Loss: Effects of Repeated Practice and Parent-Child Attunement
By Lyn Robertson, Ph.D., Gina Annunziato Dow, Ph.D., and Sarah Lynn Hainzinger, B.S.
In two analyses, transcripts from 21 children (ages 3–6) reading and retelling stories with a parent over a six-week period were studied. Ten children with moderate-to-profound hearing loss used assistive technology and the Auditory-Verbal approach for language learning; eleven had typical hearing. In Analysis 1, no significant difference between groups was found in the recall of words and phrases from the text, suggesting that children with hearing loss can benefit from shared reading experiences in ways similar to children with typical hearing. Analysis 2 showed that the factors correlating with word recall differed: for children without hearing loss, joint attention was related to remembering; for children with hearing loss, explicit parental scaffolding techniques correlated with memory for the text, suggesting that the involvement of parents of children with hearing loss may need to be more directed to the children’s listening needs.
Reading aloud to children is a valuable first step in helping them acquire literacy. The number of excellent books for children and the focus on family literacy (e.g., Sapin & Padak, 1998) attest to this belief, as does the popular influence of Jim Trelease (2001), who exhorts parents and caregivers to read to their children from infancy. Numerous studies on children with typical hearing support the notion and cite a long-documented connection between acquisition of reading and reading to children, as well as surrounding them with print materials (see, for example, Durkin, 1966; Teale, 1978; and Watson, 2002).
Shared book reading has been investigated as a tool in developing emergent literacy skills in children with typical hearing who have language impairments and is seen as offering valuable additional support toward reading progress (e.g., Boudreau & Hedberg, 1999; American Speech-Language-Hearing Association [ASHA], 2001, as cited in Kaderavek & Justice, 2002). This population tends to develop literacy skills at a slower pace than their peers without language impairments (Marvin & Mirenda, 1993). Similarly, Musselman (2000) argues for a parallel impediment to literacy acquisition for children with hearing loss: “Language delay – which is the hallmark of deafness – increases the challenges of acquiring (literacy) skills. If one lacks normal hearing, spoken language develops slowly and may never progress beyond a minimal level. Deaf children, therefore, have only limited knowledge of the spoken language that print represents” (p. 9). If one assumes that processes for literacy acquisition are qualitatively similar for people with and without hearing loss (in particular, the roles of phonological awareness, knowledge of syntactical and semantic features of the spoken language, and vocabulary), then it is not surprising that degree of spoken language impairment predicts difficulties in literacy acquisition both in people with hearing loss (Tye-Murray, 1998) and in people without hearing loss (e.g., Edmiaston, 1984; Heath, 1994; Scarborough, 1989). In addition, although research on shared book reading has been conducted with typical (e.g., Whitehurst et al., 1988) and language-delayed children without hearing loss (e.g., Crain-Thoreson & Dale, 1999; Dale, Crain-Thoreson, Notari-Syverson, & Cole, 1996; Rabidoux & MacDonald, 2000), there is a dearth of research on the correlates of coreading with children with hearing loss.
In addition, research on reading achievement of children with hearing loss has rarely included comparisons with the reading achievement of children with typical hearing.
A growing body of literature reporting results for children with typical hearing demonstrates that frequent and effective shared book reading during the first six years of life correlates with development of oral and written language components (Snow, 1983; Crain-Thoreson & Dale, 1992; Bus, van IJzendoorn, & Pellegrini, 1995). For example, exposure to print aids in understanding the concept of “words,” the clusters of sounds between the distinct pauses in long speech streams (Ehri, 1975). Further, “[c]hildren who are successful in learning to read English learn that in the English writing system letters (actually graphemes) correspond to speech sounds, and they use this knowledge in actual reading” (Perfetti & Sandak, 2000, p. 34).
Interactive book reading also aids in vocabulary development (Whitehurst & Lonigan, 1998), as the child is able to hear more words through commentary, predictive/comprehension questions (Notari-Syverson, Maddox, & Cole, 1998) and labeling of objects (Ninio & Bruner, 1978). Significantly, parents typically use more complex language in such rich literary settings than in many other contexts (Crain-Thoreson, Dahlin, & Powell, 2001). Shared reading passes along cultural knowledge, prepares the child for “school-type” questions and answers, and generally sets the scene for literacy acquisition during the school years (Watson, 2002, p. 49). In the case of children with hearing loss, parents who reported their children read at average to above-average grade levels also reported having read with their children on as close to a daily basis as they could manage (Robertson & Flexer, 1993).
A major part of shared reading is the retelling of the text. Retelling “provid[es] ‘on-task’ practice of a range of literacy skills (reading, writing, listening, talking, thinking, interacting, comparing, matching, selecting and organizing information, remembering, comprehending)” (Brown & Cambourne, 1987, p. 1). Preschool children engage in and begin to hone these processes as they share a book with an adult reader; these events “should be seen as an opportunity to use language to learn about language” (Brown & Cambourne, 1987, p. 115).
Being read aloud to helps the listener build spoken language, the basis for written language. The greater the spoken language capability, regardless of hearing status, the better chance an individual has for becoming a fluent, competent reader and writer. Adams observes that “. . . listening comprehension is shown to exceed reading comprehension until students can read above the sixth-grade level” (2001, p. 73), implying that spoken language typically precedes reading competence during the formative stages, pulling the nascent reader through the various steps in “learning to read” until she or he can use reading skill more independently so as to “read to learn” (see Chall, 1990, for a discussion of the progression involved in reading at beginning and then more complex levels).
Reading, aloud or silent, is a complex of multiple and interactive processes. Connectionist and parallel-distributed processing (PDP) models posit subprocesses utilizing a context processor (interpretation), a semantic processor (meaning), an orthographic processor (letters and spellings) and a phonological processor (speech sounds) (Adams, 2001). A deficit in any of the subprocesses presents a real challenge to the process overall; for example, in the absence of appropriate hearing technology that delivers as clear a speech signal as possible, the child with hearing loss will have diminished ability in the category of speech sounds. A diminished speech signal will usually result in diminishment of the other processes associated with meaning making and interpretation capabilities in spoken language, because phonological information is foundational to spoken language. Although speechreading can help, it supplies incomplete information; therefore, it is much easier and more efficient to support hearing and listening with hearing aids and cochlear implants. The child with hearing loss may also have other processing deficits, compounding problems in developing literacy.
Consequently, listening to and building a large, flexible vocabulary in the language to be read is essential to literacy acquisition. Vocabulary growth and phonological skills then can be seen as informing each other (Goswami, 2001). One hypothesis is that throughout language acquisition, a “lexical restructuring” (Metsala & Walley, 1998) takes place that allows for “phonological representations [to] become increasingly segmental and distinctly specified in terms of phonetic features” (Goswami, 2001, p. 113). Memory becomes more efficient as sounds and syllables find multiple placements in a flexible set of categories that correspond to phonology and meaning. Remembering words “in terms of their constituent parts rather than as wholes” occurs as vocabulary grows, spurring further vocabulary growth (Whitehurst & Lonigan, 2001, p. 19). Individuals whose vocabularies remain small enough that they can remember each individual word are bound to a system that need not become flexible and representational, and this keeps the vocabulary small. If vocabulary growth can be stimulated, however, the individual can be pushed to “move from global to segmented representations of words” (Whitehurst & Lonigan, 2001, p. 19) and can move toward a position of generative reading, that is, using processes that generate greater word and conceptual understanding based on a system that allows for flexible combinations of syllables, sounds and meanings. On average, children with typical hearing comprehend around 14,000 words by the age of six (Goswami, 2001, p. 113), suggesting that literacy acquisition requires a broad and flexible meaning system stored in terms of efficient overlapping and connected categories of sounds and syllables.
Development of phonemic awareness is apt to suffer in the child with hearing loss, unless good listening situations are created with great frequency. Therefore, shared reading is of interest, as it provides such listening opportunities. Phonemic awareness includes knowledge of the smallest sound units of spoken language; English has 41–44 phonemes, depending upon the speaker’s dialect (McCardle & Chhabra, 2004). Phonemes combine in various ways to form predictable spoken syllables and words. However, written English does not always present phonemic information in invariant ways, so reading is not mere transformation from print to sound (Smith, 2004). Adams (1990) cautions that phonemic awareness, a psychological process, is not to be confused with phonics, an approach to teaching beginning reading. Coming to understand how spoken words are composed has an important relationship to understanding an alphabetic and flexible written representation of language. According to Adams (2001), “. . . the goal in conducting phonemic awareness activities is to induce children to understand something . . . every word can be conceived as a sequence of phonemes . . . without [this insight], neither phonics nor spelling can make any sense . . . explicit phonemic awareness training is about developing in children the attentional and metacognitive control that renders unnecessary the drill and skill of traditional phonics . . . once children ‘get it’ with two or three letters, they need barely a word to transfer that understanding to the rest” (p. 76).
Whitehurst and Lonigan (2001) regard the emerging spoken language user as progressing from large concrete units to subsyllabic units and finally to small abstract units (phonemes) as they learn language. Auditory-Verbal therapy for children with hearing loss includes intensive work in all of these phonological areas as part of a holistic approach to stimulating spoken language comprehension and production (Estabrooks, 1994).
The child who does not experience frequent verbal interactions with adults (Hart & Risley, 1995), perhaps because she doesn’t hear well, or the child who hears but does not pay attention to the sounds of words around her, may in the course of schooling experience what teachers refer to as “the fourth grade problem,” ordinarily described in terms of a child failing to make progress in reading beyond the fourth grade level and becoming limited to identification of a simple corpus of words (Leach, Scarborough, & Rescorla, 2003). If listening comprehension stagnates at the same level as reading comprehension, the child faces difficulty as a reader, as the odds for making progress are slim. Instead, however, if the child has developed some proficiency in organizing word parts into “neighborhoods” (Goswami, 2001), memory for words will be more efficient, and the child will be able to read more generatively, matching up words on the page with word parts stored in categories in phonological memory. Furthermore, one investigator has found “there is some evidence that parents’ speech is correlated with children’s cognitive organization” (Watson, 2002, p. 50) and points out that parents’ categories, even the metalinguistic and paradigmatic features they use, become those of the child (Watson, 2002). Such categorizing is seen as foundational to the child’s developing an ability to process text. Parental talk about past and future occurrences and parental responsiveness and encouragement are also considered foundational (Watson, 2002).
Investigations of literacy acquisition in children with typical hearing generally conclude that the following are necessary: phonemic awareness in the form of understanding rhyming, oral language acquisition in the form of vocabulary knowledge (Whitehurst & Lonigan, 2001), print awareness (Clay, 1993), oral language development (Clay, 2005) and semantic knowledge (Smith, 2004). Frequent reading practice is judged to be essential, and writing practice is thought to assist in linking literacy to spoken language (Vygotsky, 1934/1992); this may be especially important for “low-readiness” children, for whom the active hypothesis-making about sounds and spelling inherent in invented spelling seems to enhance literacy acquisition (Clarke, 1988).
Emotional issues are important as well. An increasingly valued aspect of shared book reading is attunement (emotional “connectedness”) between parent and child; such attunement is seen as supporting emergent literacy skills in typically functioning children, as well as in those with language impairments. In this latter population, which tends to develop literacy skills at a slower pace than their peers (Marvin & Mirenda, 1993), shared book reading has been shown to offer extra support for literacy development (Boudreau & Hedberg, 1999; American Speech-Language-Hearing Association [ASHA], 2001, as cited in Kaderavek & Justice, 2002).
Attunement is a measure of how well one senses, interprets and reacts promptly and appropriately to another’s signals, allowing the other person in the interaction to feel “understood” (e.g., Ainsworth, Bell, & Stayton, 1974) and playing a facilitative role in enhancing feelings of self-efficacy and supporting the emergence of self (Stern, 1985) beginning in infancy (Stern, 1977). Attunement is a dynamic, relationship-level variable; in any particular situation the parent does not have complete control over attunement, although his or her behavior may affect it significantly. For example, the parent can aid in the child’s engagement in the task by allowing the child to turn pages, by matching the story to “real life,” by pausing for child commentary or by reading predictable narratives. A parent can also choose to become more aware of the child’s behaviors and more sensitive to the child’s signals; parents can provide opportunities that promote parent-child interaction (e.g., having books or toys around, playing games with the child), and they can choose to adjust their behavior to match the child’s capabilities (Beckwith & Rodning, 1996). Teacher-student attunement positively predicts students’ academic performance, motivation, interest and attention (Poulsen, 2001). Merlo and Barnett (2003) showed in a longitudinal analysis that greater parental nurturance and sensitivity predicted higher levels of young children’s literacy, with effects extending well beyond the child’s socioeconomic status, verbal scores, IQ and level of phonological awareness, as well as the influence of home and academic environments. 
Affective components in shared book reading also predict later academic performance and attitude toward the reading task (Bergin, 2001): the more affectionate and attuned the parent-child relationship, the more likely the child was to engage in the task, the less frustration the child displayed with the activity, and the earlier the child was able to read more words per minute, compared with children in less attuned dyads.
These associations between shared reading, retelling and literacy achievement in children with typical hearing raise important questions for parents, caregivers and teachers of children with hearing loss: Does this conventional wisdom apply to their children, and, if so, what might explain the efficacy of reading aloud with their children? Are they able to do with their children with hearing loss what they can do with their children with typical hearing? To what extent can children use assistive hearing devices in listening to book language?
Some claim that children with hearing loss cannot use this avenue to reading, pointing to data demonstrating that literacy achievement for individuals with hearing loss remains markedly low compared to that of individuals with typical hearing, with average skills reaching a plateau at about the fourth-grade level (e.g., Holt, 1994). Expectations for achievement among some observers also tend to be low (Stewart & Clarke, 2003). In all too many cases, guarded expectations may reflect the actual history of results, but literacy levels appear to increase in relation to the amount of listening the person with hearing loss learns to do and the level of spoken language the person acquires (Geers & Moog, 1989; Robertson, 2000). Indeed, the skills the child brings to school to begin with are highly predictive of achievement in later schooling; beginning school behind in spoken language competency and then catching up to one’s classmates appears to be almost impossible (Whitehurst & Lonigan, 2001). Edmiaston (1984) found receptive and expressive language abilities were directly related to reading comprehension in a sample of third graders. Preschool expressive language ability was found to be predictive of reading performance in second grade, even after controlling for IQ (Scarborough, 1989). If this is so for children with typical hearing, more than one-third of whom have trouble acquiring literacy (Whitehurst & Lonigan, 2001), then would not the phenomenon be accentuated for children with hearing loss?
In the present study, Analysis 1 investigated parental shared reading behaviors and children’s memory for explicit and implicit text content in a group of parents and their children with hearing loss who are acquiring spoken language through use of listening made possible by technology. It compared the shared reading experiences of children learning speech and language through the Auditory-Verbal approach with those of children with typical hearing in order to explore questions raised about literacy acquisition in the Auditory-Verbal population. This study’s hypothesis is that, once hearing is augmented, the child with hearing loss can listen and learn about language during book reading in ways similar to the child with typical hearing.
Analysis 2 addressed parent-child attunement behaviors in shared book reading and the extent to which they predicted the number of words the children with and without hearing loss remembered from the text after hearing the story. As in Analysis 1, memory performance variables included the total number of verbatim words and phrases repeated from the text. Stronger relationships between measures of attunement and memory performance (e.g., words, utterances and words per utterance) were predicted for children with hearing loss than for children with typical hearing. Since less sensitivity by the parent resulted in less participation by children with language impairment in a book-reading activity (Rabidoux & MacDonald, 2000), it was expected that time spent in joint attention, conversational balance and use of scaffolding techniques by the parent would correlate more strongly with word recall from the story for the children with hearing loss than for the children without hearing loss.
Children with hearing loss: Ten 3- to 6-year-old children (M = 4 years, 9 months; four girls) with moderate-to-profound hearing loss participated. All had used assistive technology for 1.6–4.8 years prior to participating in this study; seven wore hearing aids and three had received cochlear implants. Children were recruited from the Auditory-Verbal Clinic at the University of Akron Audiology and Speech Center.
Children without hearing loss: Eleven 3- to 5-year-old children (M = 4 years, 7 months; five girls) with typical hearing participated; children’s hearing status was verified through the consenting parent. Children were recruited from preschool and childcare programs in the Granville and Newark, Ohio, area. No participants were paid for their participation.
Four children’s books were used: Owen (464 words), Sheila Rae the Brave (481 words), A Weekend with Wendell (587 words) and Wemberley Worried (573 words), all by Kevin Henkes. For the first three weekly sessions of this study, each child had the same story read aloud; for the last three weeks, a new story was read each time. The order of presentation of the four stories was assigned to individual children at random with counterbalancing; over all children, each story was approximately equally likely to be presented in each session, and each story was equally likely to be presented during the first three weeks.
The week before the children’s first reading sessions, children were administered the Peabody Picture Vocabulary Test–Third Edition (PPVT-III) to assess receptive vocabulary. The Peabody Picture Vocabulary Test was chosen as a measure of word knowledge, which is especially relevant to the task of recognizing and understanding words heard during shared book reading. At each session, parents were asked to read to their children “as you would at home” and then to ask the child to tell the story back to the parent. The parent and child were permitted to look at the pictures in the books during the retelling part of each session. Sessions were held in a quiet room in the childcare center or clinic or, in some cases, in the child’s home. All sessions were videotaped for later analysis and were transcribed verbatim by two naive assistants. Each session transcript was then masked of identifying information and assigned a random four-digit number to permit blind coding by two of the authors (L.R. and G.A.D.).
Each transcript was coded phrase by phrase by two of the authors (L.R. and G.A.D.) and included every utterance by both parent and child. Transcripts were coded using codes developed for this study; coding reliability on a subset of 25% of all session transcripts ranged from 84% to 100% (M = 91.7%) on all codes. A subset of codes was used in Analysis 1; see Table 1. (Please see the Appendix for a complete list of coding categories; not all categories that were coded were used for the analyses discussed in this paper.) For Analysis 1, categories were selected to allow an assessment of parent elicitation of memory for the texts and children’s responses to these elicitations (i.e., codes were for parent or child elicitation of or response to questions about explicit or implicit text content during the recall phase of each session), as well as to allow an assessment of children’s memory for the words from the texts, both in terms of the actual number of words from the text used in the recall of the story and the mean number of phrases containing words from the text.
Results and Discussion
Although PPVT-III standard scores for all children were within or above the normal range, independent-samples t tests revealed a significant difference in receptive vocabulary (PPVT-III standard scores) between children without hearing loss (M = 115.27, S.D. = 13.58) and children with hearing loss (M = 95.00, S.D. = 18.33), t(19) = 2.90, p = 0.009; the mean score of children with typical hearing fell at the 75th percentile, and that of children with hearing loss at the 50th. Further, although t tests revealed no significant difference in mean age between groups (children without hearing loss: M = 55.27 months, S.D. = 8.58; children with hearing loss: M = 56.70 months, S.D. = 13.53; t(19) = −0.29, p = 0.77), because of the large age range in both groups, children’s age was used as a covariate in all analyses. In addition, PPVT-III standard scores were used as a second covariate in all analyses to partition out the variance associated with them (e.g., Howell, 1987), thus correcting for the significant mean difference on PPVT-III scores and allowing any differences uniquely attributable to hearing status to be revealed.
To determine whether the children in each group differed in their use of words from the books in their retelling of the stories, whether and how this changed over sessions, and whether and how parents in each group may have differed in eliciting retellings, a series of 2 (hearing status) × 6 (session) Analyses of Covariance (ANCOVAs) was performed, using as dependent variables (DVs) the number of occurrences of each relevant coded category per session (except for the DV “number of words from text”). With PPVT-III scores and age controlled for, as discussed above, results indicated no statistically significant differences between the groups; over all sessions, during retelling both groups produced equivalent numbers of total words (F(1, 16) = 3.55, n.s.) and phrases from the texts (F(1, 16) = 0.196, n.s.). In addition, performance of children with typical hearing and children with hearing loss did not vary significantly over the six sessions for either variable (F(5, 12) = 1.285, n.s., and F(5, 12) = 1.396, n.s., respectively); see Table 2.
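For readers unfamiliar with covariance adjustment, the logic of testing a group effect with covariates partialled out can be sketched as a comparison of nested regression models. This is a simplification that ignores the repeated-measures (session) factor, and all numbers below are simulated placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data for 21 children (10 with hearing loss, 11 without)
n = 21
group = np.array([1] * 10 + [0] * 11)          # 1 = hearing loss, 0 = typical hearing
age = rng.integers(40, 75, n).astype(float)     # age in months (covariate)
ppvt = rng.normal(105.0, 16.0, n)               # PPVT-III standard score (covariate)
words = rng.integers(5, 60, n).astype(float)    # words recalled from the text (DV)

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
X_full = np.column_stack([ones, age, ppvt, group])   # covariates + group
X_reduced = np.column_stack([ones, age, ppvt])       # covariates only

# F-test for the group effect after the covariates are accounted for
df_num = 1                        # one parameter (group) dropped in the reduced model
df_den = n - X_full.shape[1]      # residual df of the full model
F = ((rss(X_reduced, words) - rss(X_full, words)) / df_num) / (
    rss(X_full, words) / df_den
)
print(f"F({df_num}, {df_den}) = {F:.2f}")
```

A nonsignificant F here, as in the results above, means the groups' retelling scores do not differ once age and receptive vocabulary are taken into account.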
In eliciting retelling of the texts, parents of children with typical hearing did not differ in the number of questions asked about explicit text content, as compared to parents of children with hearing loss (F(1,16) = 0.899, n.s.), and children in both groups produced responses to these questions about explicit text content at equivalent levels (F(1,16) = 0.742, n.s.) (see Table 3). Parents of children in both groups produced questions about implicit text content at equivalent levels (F(1,16) = 1.67, n.s.), and children in both groups produced equal levels of responses to these questions (F(1,16) = 3.23, n.s.) (see Table 4). Ordinarily, the expectation would be that the children with hearing loss would produce less language heard from the book and answer fewer questions, whether explicit or implicit; lower expectations for children with hearing loss would, in turn, reduce the initiation of such verbal behaviors by their parents. In this study, both groups of parents and children exhibited behaviors usually associated with preliteracy stages of language learning in children with typical hearing.
Analysis 2 made use of several codes and combinations of codes developed for Analysis 1. Please see Table 1 for a listing of codes used in this analysis; note that these codes were tabulated based either on codes reliably assigned but not used for Analysis 1, on a subset of codes used in Analysis 1, or on ratios of the number of words spoken by parent and child. Because the former counts were tabulated mechanically using the word processing program’s word-find feature, and the latter using its word-count feature, interobserver reliability was not assessed.
Joint attention was calculated during reading time; conversational balance and scaffolding strategies (attunement) were recorded for child retellings of the story to the parent.
Joint attention is the amount of time the parent and child focus mutually on the reading activity (Tomasello & Farrar, 1986); the ratio of time spent in joint attention to total time of a reading was calculated. This included the amount of time the child’s eyes were on the book, the amount of time the child was talking if looking away and time spent in a joint gaze between parent and child. Joint attention was considered broken if either the child or the parent looked away silently, if the parent talked to the experimenter or if the child looked at the parent without a reciprocal glance. A parent’s gaze at the child during reading was counted in joint attention as engagement. The time examined began with the book’s title or the story’s first line and ended with the book’s last line. This measure was assessed with a stopwatch by one author (S.H.) after training by another author (G.A.D.). Reliability was established to within 5 seconds of agreement and was maintained at this level by two coders (S.L.H. and G.A.D.) independently timing 20% of the sessions.
Two variables measured conversational balance: the ratio of parent initiations to child responses and the ratio of child initiations to parent responses (see the lower half of Table 1 for a detailed description of the calculation of these ratios). As these were tabulated by using the word processing program’s word-find feature (for codes reliably assigned in Analysis 1), interobserver reliability was not assessed.
Scaffolding behaviors were used as gauges of parental sensitivity, as they seemed to promote participation in the activity by modeling elaboration and acceptable responses. Scaffolding included asking a question or making a comment that elicited and supported a specific answer. Cloze items, the provision of blanks to be filled in (Snow, 1983), and leading questions are examples of “scaffolds.” Scaffolding could be explicit or implicit, and similar to the conversational balance variable, was assessed via ratios of codes that had been reliably assigned in Analysis 1.
Dependent measures of memory performance were identical to those used in Analysis 1; these included the number of verbatim words and utterances recalled from the story.
Results and Discussion
Pearson’s correlations were conducted for each group to determine how age and PPVT-III standard score related to the attunement variables and children’s memory for words from the text. For the children with hearing loss, age correlated negatively with the ratio of total parent initiations to child responses (r = −0.73, p = 0.02, r2 = 0.53) and positively with the ratio of total child initiations to parent responses (r = 0.66, p = 0.04, r2 = 0.44). In addition to participating more equally in the conversation than younger children with hearing loss did, older children with hearing loss produced more words in general; age correlated positively with the number of words produced from the text (r = 0.80, p = 0.01, r2 = 0.64). For the children without hearing loss, age was not significantly correlated with any variable.
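The r2 values reported throughout these results are simply squared Pearson correlations, i.e., the proportion of variance the two measures share. A minimal sketch with simulated numbers (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in data: age in months and words recalled, for ten children
age = rng.integers(40, 75, 10).astype(float)
words = 2.0 * age + rng.normal(0.0, 30.0, 10)  # recall loosely increasing with age

# Pearson's r, and r^2 as the proportion of shared variance
r = np.corrcoef(age, words)[0, 1]
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```

So, for example, r = 0.80 between age and words produced corresponds to roughly 64% shared variance.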
For the children with hearing loss, no relationship emerged between joint attention and any of the memory performance variables, PPVT-III score or age. However, for the children without hearing loss, joint attention correlated positively with the number of words produced from the text (r = 0.61, p = 0.05, r2 = 0.37). In addition, for the children without hearing loss, PPVT-III standard score correlated positively with joint attention (r = 0.61, p = 0.05, r2 = 0.37): the better the child’s receptive vocabulary, the more joint attention in the activity.
For neither group were there significant correlations between measures of conversational balance and children’s memory for words from the text.
For children with hearing loss only, a positive relationship emerged between explicit scaffolding in the form of leading questions and the number of children’s verbatim utterances during recall (r = 0.63, p = 0.05, r2 = 0.39). Use of scaffolding techniques also correlated negatively with the ratio of child initiations to parent responses (r = −0.66, p = 0.04, r2 = 0.43) and positively with the ratio of parent initiations to child responses (r = 0.75, p = 0.01, r2 = 0.56); the more the parent controlled the conversation, the more often the parent used scaffolding techniques, presumably to stimulate the child’s involvement.
A series of 2 (group) × 6 (session) repeated measures ANCOVAs was performed using the attunement measures as dependent variables; as in Analysis 1, age and PPVT-III standard score served as covariates in all analyses, and Bonferroni corrections were made for any post hoc tests.
Three group × session interactions qualified main effects for the following variables related to conversational balance: ratio of parent to child total words, number of parent words per turn and ratio of the parent answering his/her own question to the total number of questions.
For the ratio of parent to child total words, an ANCOVA revealed a session × group interaction, F(1, 16) = 6.66, p = 0.02. Follow-up analyses indicated a significant difference at Session 1 only, t(19) = −2.83, p = 0.01: parents of children with hearing loss (M = 10.11, S.D. = 8.17) talked much more relative to their children than did parents of children without hearing loss (M = 3.20, S.D. = 1.64).
For average parent words per turn, a main effect of group was detected, F(1, 16) = 6.33, p = 0.02: parents of children with hearing loss (M = 13.28, S.E.M. = 1.12) used more words per turn than parents of children without hearing loss (M = 8.99, S.E.M. = 1.12). This main effect was qualified by a significant group × session interaction, F(1, 16) = 14.05, p = 0.002; follow-up tests revealed that parents of children with hearing loss used more words per turn (M = 16.35, S.D. = 6.75) than parents of children without hearing loss (M = 8.72, S.D. = 1.75) during Session 1 only, t(19) = −3.75, p = 0.001. No differences reached significance in the other sessions.
Finally, an interaction was detected between session and group for the variable of implicit scaffolding (F(1, 16) = 4.89, p = 0.04), although follow-up analyses were unable to elucidate the source of this interaction.
Thus, results of these analyses indicate that both groups performed equally in recalling words from the text, but the factors that correlated with word recall differed between the groups: for children without hearing loss, joint attention was related to remembering; for children with hearing loss, explicit scaffolding techniques correlated with memory for the text.
That the parents of children with hearing loss were more directive and controlling of the reading activity at Session 1 may be interpreted as evidence that the parents were responding sensitively and appropriately to their child’s ability and/or motivation at the start of a new task (Evans & Schmidt, 1991; Schneider & Hecht, 1995). These parents adjusted their verbal behavior to the needs of their children with hearing loss, demonstrating attunement with them.
Joint attention is generally predictive of word recall for children with typical hearing (Tomasello & Farrar, 1986), and this study replicated that pattern, but joint attention was not related to lexical acquisition in the children with hearing loss. Instead, for this group, factors such as conversational equality, syntactical ability and explicit scaffolding techniques correlated with learning words from the text.
This study finds a degree of balance between the parents and children with hearing loss, especially during Sessions 2 through 6. Although this is a bidirectional, dynamic relationship, the specific element of parental sensitivity appears important because the parent has the responsibility to promote the interaction. These results support the finding of Rabidoux and MacDonald (2000), which asserted that sensitivity toward the child promoted efficacy of word acquisition. Although in Session 1 the verbal control by the parents of children with hearing loss could be perceived as dominant or insensitive, these behaviors appear to be in line with Bornstein and Tamis-LeMonda’s (1989) definition of sensitivity: responding to a child’s behaviors with prompt, contingent and appropriate action.
The findings of few differences and many similarities between the ways parents and children with and without hearing loss read and talk together suggest that parents, caregivers and teachers of children with hearing loss can make good use of what is known about literacy acquisition in children with typical hearing. Although methods for helping children with hearing loss come to literacy need to be developed, they need not be based upon theories specially derived for them. The children with hearing loss included in the study have had intensive Auditory-Verbal therapy, an approach based on knowledge about language and speech acquisition in children with typical hearing. The insight that phonemic awareness is highly useful for children with typical hearing in learning to read prompts the observation that Auditory-Verbal therapy is, in part, phonemic awareness therapy. It is important to note that a principal focus of Auditory-Verbal therapy is that parents learn to intensify their children’s language experiences (Estabrooks, 1994).
The results from Analysis 1 indicate few differences in child or parent behaviors between the groups; the children and parents performed similarly in repeated readings of the same book as in reading new, different books. The finding that the children with hearing loss remembered and repeated verbatim words from the stories as often as the children with typical hearing suggests that they should be read to and talked with frequently so as to strengthen their knowledge and use of both the old and new words they encounter in the stories. Their ability to learn and use new vocabulary is intact; they simply need good instances of use in terms of presentation and opportunities for practice. Active reading and interaction around stories are what matter. Parents, caregivers and teachers can relax and just read with children with hearing loss, putting aside ideas about drilling the child on certain words and not worrying about whether they are reading the “right” story to the child. While one cannot measure or control what the reading experience means for the child, one can ensure that the child has frequent and meaningful reading interactions.
The parents in the two groups did not use different strategies in eliciting story retelling by their children. Parents in both groups used words from the stories with similar frequency (more at Session 6 than at Sessions 2 and 3), and both groups of parents asked questions about implicit and explicit story content at similar rates. Compared with children with hearing loss, children without hearing loss provided more responses to questions about implicit story content, but the two groups responded at similar levels when the questions concerned explicit content.
In this study, children with and without hearing loss spent similar amounts of time in joint attention, and it was predictive of word production from the text for the children without hearing loss, as has been demonstrated in other studies (Tomasello & Farrar, 1986). However, for children with hearing loss, joint attention was not predictive of word production from the text; it was also not predicted by receptive vocabulary score, although it was predicted by conversational balance. This may be due to the children’s extensive training in the Auditory-Verbal approach, which stresses careful attention and close listening to the other speaker.
In conclusion, we found that parents of children with hearing loss showed sensitivity during the first few sessions by responding more and by creating conversations in which they did more of the speaking.
Limitations and Future Directions
As is well known, failing to detect differences between groups is not evidence that such differences are not there—particularly with the relatively small samples in the current research. However, when members of one of these groups (children with hearing loss) have a history of significant challenges in literacy acquisition and when the differences that are not found are in an arena so central in emergent literacy as memory for words from a text, not finding differences between them and children without hearing loss seems noteworthy. Further, in Analysis 2, the variables that measured balance (ratio of parent to child words, ratio of parent initiations to child responses, and ratio of child initiations to parent responses) were in the form of ratios, which prevented observing the underlying verbal behaviors of the parent and child. Similarly, the “balance” variables did not measure contingency of response but simply measured the proportion of total initiations to total responses for both parent and child. Because of the sizable body of research showing that contingent utterances are predictive of language development in typically developing children (e.g., Barnes, Gutfreund, Satterly, & Wells, 1983), such contingency should be explored in more detail in future research.
Although the children with hearing loss in this study were operating with comparatively less language in this task, they appeared to be using the same processes as children with typical hearing. If these processes continue, they should become literate in the normal course of events.
Although Analysis 1 indicated that parents used similar strategies to elicit story retelling and children recalled words from the texts at similar levels, Analysis 2 suggested that the behaviors of parents of children with and without hearing loss that were predictive of task performance varied. For children without hearing loss, the PPVT-III standard score was positively correlated with joint attention, which in turn was positively correlated with the number of words produced in retelling. However, neither PPVT-III standard score nor joint attention correlated with any outcome variable for children with hearing loss. For these children, explicit scaffolding by parents correlated with memory for words from the text; in addition, as previously noted, the parent used more scaffolding in creating more “imbalanced” exchanges. This may be clear evidence of high sensitivity to child participation and comprehension in these parents.
On the basis of this study, parents and teachers of children with (and without) hearing loss can feel confident in reading with children often, retelling and talking about the stories, taking turns talking, using words from the stories in meaningful ways and using synonyms of words from the stories. Parents and teachers should listen carefully to what children say about a story and engage in authentic conversation about it. They should expand on stories and use them as starting points for making connections between what children already know and what is new to them. It is appropriate and useful for parents and teachers to ask children for clarification when they don’t understand what they have said. Designing both explicit and implicit questions is a good strategy, but the adult reader should be sensitive to the child’s willingness to entertain such questions, knowing that such willingness changes day-to-day, according to temperament and development.
The adult should use questions as communication acts, not drills. Waiting for the child to answer a question asked about a story and, if necessary, rephrasing the question in words the adult is aware the child knows will stimulate communication.
The adult reading with a child should be conscious of preparing the child to be a reader. Above all, the adult reader can rely on the evidence that even the child with hearing loss can listen to, understand and talk about the stories being read together.
Source: The Volta Review, 2006