This is why exposure to music is a primary element of the Snowdrop programme for brain-injured children. With thanks to 'Medical News Today.'
---------------------------------------------------------------------------
Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University's Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.
"Spoken language is a special type of music," said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. "Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music."
Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.
"Infants listen first to sounds of language and only later to its meaning," Brandt said. He noted that newborns' extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language - "the most musical aspects of speech."
The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish the phonemes, or basic distinctive units of speech sound, and such attributes as pitch, rhythm and timbre.
The authors define music as "creative play with sound." They said the term "music" implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as "an intentional and often repetitive vocal performance," Brandt said. "They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later."
Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. "We show that music and language develop along similar time lines," he said.
Infants initially don't distinguish well between their native language and the other languages of the world, Brandt said. Over the first year of life, they gradually home in on their native language. Similarly, infants initially don't distinguish well between their native musical traditions and those of other cultures; they begin to home in on their own musical culture at the same time that they home in on their native language, he said.
The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed - a feature of musical hearing that has often been overlooked, Brandt said.
"You can't distinguish between a piano and a trumpet if you can't process what you're hearing at the same speed that you listen for the difference between 'ba' and 'da,'" he said. "In this and many other ways, listening to music and speech overlap." The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.
"While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child's view," Brandt said. "We conclude that music merits a central place in our understanding of human development."
Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. "A lot of people with language deficits also have musical deficits," Brandt said.
More research could also shed light on rehabilitation for people who have suffered a stroke. "Music helps them reacquire language, because that may be how they acquired language in the first place," Brandt said.
Tuesday, 13 November 2012
Monday, 16 April 2012
Exposure to speech sounds is the basis of speech production.
This is why, in the Snowdrop programme, there are activities designed to give children the maximum exposure to speech sounds. As I say to every family, there is a link in the brain between exposure to language and language production.
-------------------------------------
Experience, as the old saying goes, is the best teacher. And experience seems to play an important early role in how infants learn to understand and produce language.
Using new technology that measures the magnetic field generated by the activation of neurons in the brain, researchers tracked what appears to be a link between the listening and speaking areas of the brain in newborn, 6-month-old and one-year-old infants, before infants can speak.
The study, which appears in this month's issue of the journal NeuroReport, shows that Broca's area, located in the front of the left hemisphere of the brain, is gradually activated during an infant's first year of life, according to Toshiaki Imada, lead author of the paper and a research professor at the University of Washington's Institute for Learning and Brain Sciences.
Broca's area has long been identified as the seat of speech production and, more recently, of social cognition; it is critical to language and reading, according to Patricia Kuhl, co-author of the study and co-director of the UW's Institute for Learning and Brain Sciences.
"Magnetoencephalography is perfectly non-invasive and measures the magnetic field generated by neurons in the brain responding to sensory information that then 'leaks' through the skull," said Imada, one of the world's experts in the uses of magnetoencephalography to study the brain.
Kuhl said there is a long history of a link in the adult brain between the areas responsible for understanding and those responsible for speaking language. The link allows children to mimic the speech patterns they hear when they are very young. That's why people from Brooklyn speak "Brooklynese," she said.
"We think the connection between perception and production of speech gets formed by experience, and we are trying to determine when and how babies do it," said Kuhl, who also is a professor of speech and hearing sciences.
The study involved 43 infants in Finland: 18 newborns, 17 six-month-olds and 8 one-year-olds. Special hardware and software developed for this study allowed the infants' brain activity to be monitored even if they moved, and captured brain activation with millisecond precision.
The babies were exposed to three kinds of sounds through earphones: pure tones that do not resemble speech, like notes played on a piano; a three-tone harmonic chord that resembles speech; and two Finnish syllables, "pa" and "ta." The researchers collected magnetic data only from the left hemisphere of the brain among the newborns, because they cannot sit up and the magnetoencephalography cap was too big to fit their heads securely.
At all three ages the infants showed activation in the superior temporal area of the brain that is responsible for listening to and understanding speech, showing they were able to detect sound changes for all three stimuli. But the pure perception of sound did not activate the areas of the brain responsible for speaking. However, researchers began seeing some activation in Broca's area when the 6-month-old infants heard the syllables or harmonic chords. By the time the infants were one year old, the speech stimuli activated Broca's area simultaneously with the auditory areas, indicating "cross-talk" between the area of the brain that hears language and the area that produces language, according to Kuhl.
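Two of the study's three stimulus categories, the pure tone and the three-tone harmonic chord, are simple enough to sketch in code. The sample rate, duration and frequencies below are illustrative assumptions rather than the study's actual parameters, and the Finnish syllables would be recorded speech rather than anything synthesized here.

```python
import numpy as np

SR = 16000   # sample rate in Hz (assumed)
DUR = 0.5    # stimulus duration in seconds (assumed)

def tone(freqs, dur=DUR, sr=SR):
    """Sum of equal-amplitude sinusoids at the given frequencies."""
    t = np.arange(int(sr * dur)) / sr
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

def spectral_components(signal, threshold=0.1):
    """Count frequency bins holding a substantial share of the energy."""
    spec = np.abs(np.fft.rfft(signal))
    return int(np.sum(spec > threshold * len(signal)))

pure_tone = tone([440])            # speech-unlike pure tone
chord = tone([440, 550, 660])      # three-tone harmonic chord

pure_peaks = spectral_components(pure_tone)
chord_peaks = spectral_components(chord)
print("pure tone components:", pure_peaks, " chord components:", chord_peaks)
```

The chord's extra spectral components are what make it more speech-like than the pure tone: speech, like a chord, spreads its energy across several frequencies at once.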
"We think that early in development babies need to play with sounds, just as they play with their hands. And that helps them map relationships between sounds with the movements of their mouth and tongue," she said. "To master a skill, babies have to play and practice just as they later will in learning how to throw a baseball or ride a bike. Babies form brain connections by listening to themselves and linking what they hear to what they did to cause the sounds. Eventually they will use this skill to mimic speakers in their environments."
This playing with language starts, Kuhl said, when babies begin cooing around 12 weeks of age and begin babbling around seven months of age.
"They are cooing and babbling before they know how to link their mouth and tongue movements. This brain connection between perception and production requires experience," she said.
Tuesday, 3 April 2012
Ability for Grammar 'Hardwired' into Humans.
This is not far from Chomsky's idea of a 'Language Acquisition Device' hardwired into the human brain, containing the roughly 250 speech sounds found across the world's languages. Exposure to a specific language then stimulates the retention of the speech sounds within that language, while the sounds the child is never exposed to are dropped. The fact that these Nicaraguan boys were not exposed to spoken language, and therefore did not develop any, supports this view. The fact that their sign system contained common grammatical components also supports Chomsky's notion of a 'deep structure' common to all languages. Food for thought.
---------------------------------
"Our findings suggest that certain fundamental characteristics of human language systems appear in gestural communication, even when the user has never been exposed to linguistic input and has not descended from previous generations of skilled communicative partners," says Elissa L. Newport, George Eastman Professor of Brain and Cognitive Sciences and Linguistics at the University of Rochester. "We examined a particular hallmark of known grammatical systems and found that these signers also used this same hallmark in their gestured sentences. They designed their own language and wound up with some of the same rules of grammar every other language uses."
For eight years, Newport and Marie Coppola, a post-doctoral student at the University of Chicago, studied three deaf Nicaraguan boys who had no exposure to any formal sign language. They were linguistically separated from spoken language by virtue of their complete deafness since birth; separated from Nicaraguan Sign Language because they had never had contact with another signer; and separated from written Spanish because they had little or no formal education. This isolation forced each of the three boys to develop his own gesture-based language, called a 'home sign system' in the field of sign language research. These three isolated languages gave Coppola and Newport a window into how the brain creates language.
The home signers watched 66 very short videos consisting of single actions, such as a woman walking or a man smelling flowers. Using their home sign, they explained what they had seen. All three home signers consistently used the grammatical construction of "subject" in the same form it is used throughout languages around the world.
The concept of "subject" is ubiquitous in language, but is complex and difficult to define. Language assigns concepts to symbols, but does so imperfectly--a noun is usually an object, but certainly not always, as the noun "liberty" demonstrates. A prominent example of this abstract property of language is the idea of subject. While grammar school teachers might explain that a subject is the person, place or thing that performs the action in the sentence, in fact subjects are not necessarily the one who produces or instigates an action.
For instance, in the sentence, "John opened the door," the subject is "John"; but in "The door opened," the word "door" has become the subject, and in "John got hit," the word "John" is the subject even though he is the recipient of the action. Despite having to essentially design their own languages without influence from any other speakers or signers of an established language, the home signers created a complex grammatical component and used it in the same way highly evolved languages do. That the idea of "subject" exists in these individuals and is used in the same manner, strongly suggests that this basic and somewhat arbitrary property of language is an innate tendency in humans as they develop any communication system.
"The notion of 'subject' does not appear to require either linguistic input or a lengthy history within a language to develop," says Newport. "We're starting to see that the grammatical concept of 'subject' is part of the bedrock on which languages form."
Newport is continuing her research into other aspects of linguistics to see what else may be innate in human language, and also how language input alters and expands these innate tendencies.
Saturday, 23 July 2011
Can how a baby cries predict his or her future language skills?
Thanks for this piece of wisdom, which reminds me of the first question on the Snowdrop language development profile. "Did your child have differentiated cries in response to his / her varying needs?"
------------------------------------------------------
According to a Japanese proverb: “A crying child thrives.” A recent study that examines the complexity of an infant’s cries in relation to his or her language development seems to offer a scientific basis for this folk wisdom.
For babies whose cries exhibited complex melodies by the age of two months, the study, published in The Cleft Palate-Craniofacial Journal, says the probability of a language delay greatly decreases. Those whose cries were less complex had a greater chance of a language delay by age two.
In addition, the study examined the language development in infants with cleft lip and cleft palate. The findings suggest distinguishing characteristics heard in the cries of those infants with a cleft and those without. This research is important because the findings may offer new treatments to help language development for infants with clefts.
The psychology of crying is nothing new. In study after study, scientists have documented the catharsis that only a good cry can bring. For infants, crying is the sole form of communication, and there are three distinct types: a “basic cry” is a rhythmic pattern consisting of a cry followed by silence; an “anger cry” is similar to a basic cry but louder, owing to the release of excess air through the infant’s vocal cords; and a “pain cry” is a loud cry followed by periods of breath-holding.
Infants also exhibit what is called a “simple cry melody” – a crying arc consisting of a single rise and then a fall. According to researchers, it is the segmentation of these melodies by momentary pauses and respiratory movement that leads to syllable production.
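The description above, melody arcs segmented by momentary pauses, can be sketched as a toy algorithm. The frame-by-frame pitch contour and the pitch values below are invented for illustration; real cry analysis would work from recorded audio.

```python
def segment_melody(pitch, pause_value=0):
    """Split a frame-by-frame pitch contour (in Hz; pause frames marked 0)
    into voiced segments separated by momentary pauses."""
    segments, current = [], []
    for p in pitch:
        if p == pause_value:
            if current:
                segments.append(current)
                current = []
        else:
            current.append(p)
    if current:
        segments.append(current)
    return segments

def is_rise_fall(seg):
    """A 'simple cry melody': pitch rises to a single peak, then falls."""
    peak = max(range(len(seg)), key=lambda i: seg[i])
    rises = all(seg[i] <= seg[i + 1] for i in range(peak))
    falls = all(seg[i] >= seg[i + 1] for i in range(peak, len(seg) - 1))
    return rises and falls

# An invented contour: two rise-and-fall arcs separated by a brief pause.
contour = [300, 340, 380, 360, 320, 0, 0, 310, 350, 330, 300]
arcs = segment_melody(contour)
print(len(arcs), [is_rise_fall(a) for a in arcs])
```

On this view, a "complex" cry melody is simply a contour that breaks into several such arcs rather than one, which is the property the study relates to later language development.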
-------------------------------------------
Tell us something we don't know!!