1. Emotional introductions
2. Point to the sound you hear (17:30)
3. Letter race (19:21)
4. Cross the River (22:06) stepping stones
5. Run to the …
6. Hula hoops (24:21)
BLENDING AND SEGMENTING GAMES (26:00)
1. Help the puppet
2. Word race (32:21) flash cards, spelling out
3. Fix the mistake
4. Running word dictation
6 Key principles of teaching very young learners:
3. They benefit from meaning-focused activities and natural language use – not from explicit rules
songs, rhymes and chants
17:10 We need more contact hours. It’s as simple as that: quantity matters.
19:02 An age-appropriate methodology:
– multi-sensory learning
– songs, chants and rhymes
– learning to interact
– playing games
– learning to think
26:49 Strategy 2: Use rhythm and auditory sub-modalities to help children to remember difficult words
29:20 Strategy 3: Revise lexical sets with the help of memory games
30:08 What do you think is next?
Dr. Joan Kang Shin discusses how literacy activities can be introduced with learners as young as pre-school age and provide the building blocks for balanced literacy programming for young EFL learners.
conceptual abilities in L2
16:00 phonological awareness
17:16 phonemic awareness
Green Eggs and Ham
20:12 Chicka Chicka
23:56 Hot Potato Game
29:21 “BINGO”, a game for spelling words
30:05 Valuable dispositions of early literacy instruction
34:50 skills integration
38:53 Five helpful building blocks for an effective EFL literacy program
46:52 shared reading
Word decoding and phonics
formal phonics instruction … should not take up more than 25 percent of available reading instruction time. Students should be engaged in actual reading much more than they are engaged in discussing the act of reading (Allington, 2001).
White-Schwoch T, et al.
Auditory Processing in Noise: A Preschool Biomarker for Literacy.
PLoS Biol, July 14, 2015, 13(7): e1002196.
Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child’s future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3–14 y), we show brain–behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers’ performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.
Speaking In Tones
By Diana Deutsch
Scientific American Mind, July/August 2010
the brain areas governing music and language/speech overlap
a person’s native tongue influences the way he or she perceives music
speakers of tonal languages such as Mandarin are much more likely than Westerners to have perfect pitch
nonverbal sounds such as music
some aspects of music engage the left hemisphere more than the right
the neural networks dedicated to speech and song significantly overlap.
This overlap makes sense, because language and music have a lot in common.
They are both governed by a grammar, in which basic elements are organized hierarchically into sequences according to established rules.
In language, words combine to form phrases, which join to form larger phrases, which in turn combine to make sentences.
Similarly, in music, notes combine to form phrases, which connect to form larger phrases, and so on.
Thus, to understand either language or music, listeners must infer the structure of the passages that they hear, using rules they have assimilated through experience.
In addition, speech has a natural melody called prosody.
Prosody encompasses overall pitch level and pitch range, pitch contour (the pattern of rises and falls in pitch), loudness variation, rhythm and tempo. Prosodic characteristics often reflect the speaker’s emotional state. When people are happy or excited, they frequently speak more rapidly, at higher pitches and in wider pitch ranges; when people are sad, they tend to talk more slowly, in a lower voice and with less pitch variation.
Prosody also helps us to understand the flow and meaning of speech.
Boundaries between phrases are generally marked by pauses, and the endings of phrases tend to be distinguished by lower pitches and slower speech. Moreover, important words are often spoken at higher pitches. Interestingly, some pitch and timing characteristics of spoken language also occur in music, which indicates that overlapping neural circuitries may be involved.
In 2009 medical anthropologist Kathleen Wermke of the University of Würzburg in Germany and her colleagues recorded the wails of newborn babies—which first rise and then fall in pitch—who had been born into either French- or German-speaking families.
The researchers found that the cries of the French babies consisted mostly of the rising portion, whereas the descending segment predominated in the German babies’ cries. Rising pitches are particularly common in French speech, whereas falling pitches predominate in German. So the newborns in this study were incorporating into their cries some of the musical elements of the speech to which they had been exposed in the womb, showing that they had already learned to use some of the characteristics of their first language.
When parents speak to their babies, they use exaggerated speech patterns termed motherese that are characterized by high pitches, large pitch ranges, slow tempi, long pauses and short phrases.
These melodious exaggerations help babies who cannot yet comprehend word meanings grasp their mothers’ intentions. For example, mothers use falling pitch contours to soothe a distressed baby and rising pitch contours to attract the baby’s attention. To express approval or praise, they utter steep rising and falling pitch contours, as in “Go-o-o-d girl!” When they express disapproval, as in “Don’t do that!” they speak in a low, staccato voice.
the melody of the speech alone, apart from any content, conveys the message.
… but after the six months of instruction, the children who had taken music lessons outperformed the others. Musically trained children may thus be at an advantage in grasping the emotional content—and meaning—of speech.
music lessons can improve the ability to detect emotions conveyed in speech (presumably through a heightened awareness of prosody).
that the language we learn early in life (e.g., English, Vietnamese) provides a musical template that influences our perception of pitch.
pitch range of speech
Vietnamese and Mandarin are tone languages (words take on entirely different meanings depending on the tones with which they are spoken)
September 10, 2014
Dance your way to Phonemic Awareness – Part 1 (video)
Sentence segmentation: jumping over Frisbees
Part 1 of 5 – An Aural Skill…How to Integrate Phonemic Awareness
Movement and rhythm, blended with songs and chants, weave a tapestry of phonemic success as Phyllis connects brain-compatible learning, Phonemic Awareness and FUN!
Part 2 of 5 – Earl’s Too Cool For Me…a great book for phonemic awareness and movement.