Examples of recasts from
— Saxton, M. (2010). Child Language: Acquisition and Development. London: Sage Publications.
The Achievement Gap
Meaningful Differences in the Everyday Experience of Young American Children
Hart and Risley (1995)
Optimizing post-critical-period language learning
Catherine J. Doughty
University of Maryland Center for Advanced Study of Language
In: Granena, Gisela and Mike Long (eds.), Sensitive Periods, Language Aptitude, and Ultimate L2 Attainment. 2013. xv, 295 pp. (pp. 153–176)
This chapter reports on a new language aptitude test, the High-level Language Aptitude Battery (Hi-LAB), whose development was motivated by the need for an aptitude measure for more advanced L2 speakers.
Since many language learners begin as adults, critical-period constraints work against the desired outcome.
All may not be lost, however, given that some individuals attain high-level, if not native, proficiency, despite a late start.
We hypothesize that they possess language aptitude comprising inherent cognitive and perceptual abilities that compensate, at least in part, for the typical post-critical-period degradation in language-learning capacity.
While tests currently in use were designed to predict early rate of learning in instructed settings, Hi-LAB is conceptualized to predict successful ultimate attainment.
Aptitude is a measurable ceiling on language learning, all other factors held equal.
We discuss constructs and measures, reliability and validity evidence, and uses of Hi-LAB for selecting learners for language training and in aptitude-by-treatment interaction studies.
Prosody cues word order in 7-month-old bilingual infants
Nature Communications, Feb. 14, 2013, 4: 1490
A central problem in language acquisition is how children effortlessly acquire the grammar of their native language even though speech provides no direct information about underlying structure.
This learning problem is even more challenging for dual language learners, yet bilingual infants master their mother tongues as efficiently as monolinguals do.
Here we ask how bilingual infants succeed, investigating the particularly challenging task of learning two languages with conflicting word orders (English: eat an apple versus Japanese: ringo-wo taberu ‘apple.acc eat’).
We show that 7-month-old bilinguals use the characteristic prosodic cues (pitch and duration) associated with different word orders to solve this problem.
Thus, the complexity of bilingual acquisition is countered by bilinguals’ ability to exploit relevant cues.
Moreover, the finding that perceptually available cues like prosody can bootstrap grammatical structure adds to our understanding of how and why infants acquire grammar so early and effortlessly.
Speaking in Tones
By Diana Deutsch
Scientific American Mind, July/August 2010
the brain areas governing music and language/speech overlap
a person’s native tongue influences the way he or she perceives music
speakers of tonal languages such as Mandarin are much more likely than Westerners to have perfect pitch
nonverbal sounds such as music
some aspects of music engage the left hemisphere more than the right
the neural networks dedicated to speech and song significantly overlap.
This overlap makes sense, because language and music have a lot in common.
They are both governed by a grammar, in which basic elements are organized hierarchically into sequences according to established rules.
In language, words combine to form phrases, which join to form larger phrases, which in turn combine to make sentences.
Similarly, in music, notes combine to form phrases, which connect to form larger phrases, and so on.
Thus, to understand either language or music, listeners must infer the structure of the passages that they hear, using rules they have assimilated through experience.
In addition, speech has a natural melody called prosody.
Prosody encompasses overall pitch level and pitch range, pitch contour (the pattern of rises and falls in pitch), loudness variation, rhythm and tempo. Prosodic characteristics often reflect the speaker’s emotional state. When people are happy or excited, they frequently speak more rapidly, at higher pitches and in wider pitch ranges; when people are sad, they tend to talk more slowly, in a lower voice and with less pitch variation.
Prosody also helps us to understand the flow and meaning of speech.
Boundaries between phrases are generally marked by pauses, and the endings of phrases tend to be distinguished by lower pitches and slower speech. Moreover, important words are often spoken at higher pitches. Interestingly, some pitch and timing characteristics of spoken language also occur in music, which indicates that overlapping neural circuitries may be involved.
In 2009, medical anthropologist Kathleen Wermke of the University of Würzburg in Germany and her colleagues recorded the wails — which first rise and then fall in pitch — of newborn babies born into either French- or German-speaking families.
The researchers found that the cries of the French babies consisted mostly of the rising portion, whereas the descending segment predominated in the German babies’ cries. Rising pitches are particularly common in French speech, whereas falling pitches predominate in German. So the newborns in this study were incorporating into their cries some of the musical elements of the speech to which they had been exposed in the womb, showing that they had already learned to use some of the characteristics of their first language.
When parents speak to their babies, they use exaggerated speech patterns termed motherese that are characterized by high pitches, large pitch ranges, slow tempi, long pauses and short phrases.
These melodious exaggerations help babies who cannot yet comprehend word meanings grasp their mothers’ intentions. For example, mothers use falling pitch contours to soothe a distressed baby and rising pitch contours to attract the baby’s attention. To express approval or praise, they utter steep rising and falling pitch contours, as in “Go-o-o-d girl!” When they express disapproval, as in “Don’t do that!” they speak in a low, staccato voice.
the melody of the speech alone, apart from any content, conveys the message.
… but after the six months of instruction, the children who had taken music lessons outperformed the others. Musically trained children may thus be at an advantage in grasping the emotional content—and meaning—of speech.
music lessons can improve the ability to detect emotions conveyed in speech (presumably through a heightened awareness of prosody).
that the language we learn early in life (e.g., English or Vietnamese) provides a musical template that influences our perception of pitch.
pitch range of speech
Vietnamese and Mandarin are tone languages (words take on entirely different meanings depending on the tones with which they are spoken)
September 10, 2014
Enough With Baby Talk; Infants Learn From Lemur Screeches, Too
September 02, 2013
New research suggests that 3-month-old human babies can use lemur calls as teaching aids.
The findings hint at a deep biological connection between language and learning.
Babies begin learning as soon as they’re born.
They’re listening, too.
But researchers still don’t know exactly how the development of language and learning are linked: “How do language and concepts come together in the mind of the baby?” asks Sandy Waxman, a psychologist at Northwestern University in Evanston, Ill.
Waxman has devoted her career to answering that fundamental question.
She says the language-learning connection is clear in older children.
For example, a 2-year-old hears the word “dinosaur” when she sees many different kinds of dinosaurs.
She soon connects the word “dinosaur” to the dinosaur category, and she can more easily identify future dinosaurs when she sees them.
… But that’s not what the researchers found.
The backward speech didn’t help the babies to learn categories at all.
But the lemur shrieks did.
The study appears in this week’s issue of the Proceedings of the National Academy of Sciences.
Janet Werker studies the roots of language acquisition at the University of British Columbia in Canada.
Werker says the new study shows that there is something unique about the sounds we and our nearest animal relatives make.
Even if little babies can’t pick out the words, the sounds say, “Pay attention, you just might learn something!”
Whatever the effect, it doesn’t last for long.
By the time they were 6 months old, the babies had tuned out the lemur cries.
Only human speech played forward helped them to learn.
Nonhuman primate vocalizations support categorization in very young human infants.
Proc Natl Acad Sci U S A. 2013 Sep 3.
Ferry AL, Hespos SJ, Waxman SR.
Cognitive Neuroscience Sector, Scuola Internazionale Superiore di Studi Avanzati, 34136 Trieste, Italy.
Speak Parentese, Not Baby Talk
Early Language Acquisition: Cracking the speech code
Nature Reviews Neuroscience, 5, 831–843 (November 2004)
Patricia K. Kuhl
Cracking the speech code: Language and the Infant Brain
April 16, 2010
Foundations for a New Science of Learning [REVIEW]
Science, 17 July 2009, 325(5938): 284–288
Derivation and inflection
frequency-based theories of sentence comprehension
CLiPS (Computational Linguistics & Psycholinguistics) is a research center associated with the Linguistics Department of the Faculty of Arts of the University of Antwerp
The Virtual Linguistics Campus
hosted by Marburg University, Germany
what is “essential”?
Steven Pinker on his sense of style