Music Is Language and Language Is Music
Music, Language and Literacy Connections
- Music is Language
- Research shows Connections Between Evolution, Music, Language, And Reading
- What is a Chant
- Chants Teach Reading
- Conversational Solfege
- Hearing loss
- Perfect Pitch and Tonal Languages
- Evolutionary Roots of Language
- Interspecies Soundscape of Language
- Animal Intelligence
- Nursery Rhymes Oral Tradition
Musical Play Boosts Language Understanding (2016)
In 2012,
scientists in the US
proposed that music was not so much a byproduct of language, but a
crucial foundation
on which babies' language skills are built. According to Anthony
Brandt and others at Rice University in Houston, when infants hear
someone speaking, they listen to the patterns made by the units of
speech, or phonemes, and the rhythm of the language. The meaning
of the words and their emotional content comes later. For that
reason, they concluded that music was central to understanding
human development.
Nine-month-old children who took part in regular musically-based
play sessions showed improved ability to process speech sounds and
rhythms. “The goal was to see whether music experience would train
a broader cognitive skill - pattern recognition - and the results
suggest that it does,” said Patricia Kuhl, who led the research at
the University of Washington in Seattle. “When you learn to
recognise auditory patterns, you can predict future sounds, and
that's helpful both to music and speech.”
“When we hear someone speak, or listen to music or even hear a
door slam, our cognitive pattern detectors know what's coming
next: each word gives a hint to the next one. Each note provides a
clue to the one coming next, and a door closing leads the brain to
expect footsteps,” said Kuhl. “Babies listening to music learned
the tempo of the waltz, and when that tempo was changed, they
noticed right away. We know the music-group babies became better at
patterns generally because they were better both at music and
speech,” she added. “Infants got better at detecting patterns and
predicting what's next. What could be better in such a complex
world?”
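Kuhl's description of cognitive pattern detectors, in which each sound hints at the one coming next, can be illustrated with a toy model. The following sketch is purely illustrative (it is not the study's method): it counts which "sound" tends to follow which in a repeated waltz-like pattern, then predicts the most likely follower.

```python
from collections import Counter, defaultdict

def learn_transitions(sequence):
    """Count how often each sound is followed by each other sound."""
    table = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        table[current][nxt] += 1
    return table

def predict_next(table, sound):
    """Predict the most frequently observed follower of `sound`."""
    if sound not in table:
        return None
    return table[sound].most_common(1)[0][0]

# A waltz-like 1-2-3 pattern: after repeated exposure, the model
# expects "2" after "1", "3" after "2", and "1" after "3".
waltz = ["1", "2", "3"] * 8
table = learn_transitions(waltz)
print(predict_next(table, "1"))  # "2"
```

Change the tempo pattern in the training sequence and the predictions change with it, which is the sense in which learned patterns let a listener anticipate what comes next.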
How music class can spark language development
12/23/14 medicalnewstoday.com/releases/287162.php
Music training may be one way to boost how the brain processes sound to remove the interference, said Kraus.
"Speech processing efficiency is closely linked to reading, since reading requires the ability to segment speech strings into individual sound units," said Kraus.
"A poor reader's brain often processes speech suboptimally."
"What we do and how we engage with sound has an effect on our nervous system," said Kraus. "Spending time learning to play a musical instrument can have a profound effect on how your nervous system works."
Music training has well-known benefits for the developing brain, especially for at-risk children. But youngsters who sit passively in a music class may be missing out, according to new Northwestern University research. In a study designed to test whether the level of engagement matters, researchers found that children who regularly attended music classes and actively participated showed larger improvements in how the brain processes speech, and in reading scores, than their less-involved peers after two years.
The research, which appears online on Dec. 16 in the open-access journal Frontiers in Psychology, also showed that the neural benefits stemming from participation occurred in the same areas of the brain that are traditionally weak in children from disadvantaged backgrounds. "Even in a group of highly motivated students, small variations in music engagement -- attendance and class participation -- predicted the strength of neural processing after music training," said study lead author Nina Kraus, the Hugh Knowles professor of communication sciences in the School of Communication and of neurobiology and physiology in the Weinberg College of Arts and Sciences at Northwestern. The type of music class may also be important, the researchers found: the neural processing of students who played instruments in class improved more than that of children who attended the music appreciation group.
Infants in bilingual environments use pitch and duration cues to discriminate between languages - such as English and Japanese - with opposite word orders. In English, a function word comes before a content word (the dog, his hat, with friends, for example) and the duration of the content word is longer, while in Japanese or Hindi the order is reversed and the pitch of the content word is higher.
"By as early as seven months, babies are sensitive to these differences and use these as cues to tell the languages apart," says UBC psychologist Janet Werker, co-author of the study. http://www.medicalnewstoday.com/releases/256436.php
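The cue pattern Werker describes can be turned into a toy rule: locate the prosodically prominent word (longer or higher-pitched), which marks the content word, and read off the word order from its position. This sketch is hypothetical, with invented numbers, and is not the study's actual analysis.

```python
def classify_order(phrase):
    """Guess phrase order from prosodic cues on a two-word phrase.

    phrase: list of (duration_ms, pitch_hz) tuples for the two words.
    Returns "function-first" (English-like) if the prominent word
    (longer or higher-pitched) comes second, else "content-first"
    (Japanese-like).
    """
    (d1, p1), (d2, p2) = phrase
    prominent_second = d2 > d1 or p2 > p1
    return "function-first" if prominent_second else "content-first"

# Invented example values, for illustration only.
print(classify_order([(120, 180), (260, 180)]))  # English-like phrase
print(classify_order([(260, 220), (120, 180)]))  # Japanese-like phrase
```

The point of the sketch is only that duration and pitch alone, with no knowledge of vocabulary, carry enough signal to separate the two orderings, which is roughly the discrimination the seven-month-olds are reported to make.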
How human language could have evolved from birdsong (2013)
Linguistics and biology researchers propose a new theory on the
deep roots of human speech.
"The sounds uttered by birds offer in several respects the nearest
analogy to language," Charles Darwin wrote in “The Descent of Man”
(1871), while contemplating how humans learned to speak. Language,
he speculated, might have had its origins in singing, which “might
have given rise to words expressive of various complex emotions.”
Researchers from MIT, along with a scholar from the University of
Tokyo, say that Darwin was on the right path. The balance of
evidence, they believe, suggests that human language is a grafting
of two communication forms found elsewhere in the animal kingdom:
first, the elaborate songs of birds, and second, the more
utilitarian, information-bearing types of expression seen in a
diversity of other animals. “It's this adventitious combination
that triggered human language,” says Shigeru Miyagawa, a professor
of linguistics in MIT's Department of Linguistics and Philosophy,
and co-author of a new paper published in the journal Frontiers in
Psychology.
The idea builds upon Miyagawa's conclusion, detailed in his
previous work, that there are two “layers” in all human languages:
an “expression” layer, which involves the changeable organization
of sentences, and a “lexical” layer, which relates to the core
content of a sentence. His conclusion is based on earlier work by
linguists including Noam Chomsky, Kenneth Hale and Samuel Jay
Keyser. Based on an analysis of animal communication, and using
Miyagawa's framework, the authors say that birdsong closely
resembles the expression layer of human sentences — whereas the
communicative waggles of bees, or the short, audible messages of
primates, are more like the lexical layer. At some point, between
50,000 and 80,000 years ago, humans may have merged these two
types of expression into a uniquely sophisticated form of
language. “There were these two pre-existing systems,” Miyagawa
says, “like apples and oranges that just happened to be put
together.”
These kinds of adaptations of existing structures are common in
natural history, notes Robert Berwick, a co-author of the paper,
who is a professor of computational linguistics in MIT's
Laboratory for Information and Decision Systems, in the Department
of Electrical Engineering and Computer Science. “When something
new evolves, it is often built out of old parts,” Berwick says.
“We see this over and over again in evolution. Old structures can
change just a little bit, and acquire radically new functions.”
A new chapter in the songbook
The new paper, “The Emergence of Hierarchical Structure in Human
Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya,
a biopsychologist at the University of Tokyo who is an expert on
animal communication. To consider the difference between the
expression layer and the lexical layer, take a simple sentence:
“Todd saw a condor.” We can easily create variations of this, such
as, “When did Todd see a condor?” This rearranging of elements
takes place in the expression layer and allows us to add
complexity and ask questions. But the lexical layer remains the
same, since it involves the same core elements: the subject,
“Todd,” the verb, “to see,” and the object, “condor.”
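The two-layer distinction can be sketched in code: the lexical layer is a fixed set of core elements, while the expression layer rearranges them into different surface forms. The data structure and function names below are my own illustration, not the paper's formalism.

```python
# Lexical layer: the same core elements in every variant.
# The verb is stored as a (base, past) pair since English
# question formation uses the base form after "did".
lexical = {"subject": "Todd", "verb": ("see", "saw"), "object": "a condor"}

# Expression layer: different arrangements of the same core.
def declarative(lex):
    base, past = lex["verb"]
    return f"{lex['subject']} {past} {lex['object']}."

def when_question(lex):
    base, past = lex["verb"]
    return f"When did {lex['subject']} {base} {lex['object']}?"

print(declarative(lexical))    # Todd saw a condor.
print(when_question(lexical))  # When did Todd see a condor?
```

Both outputs share one lexical core; only the expression-layer function differs, which is the sense in which rearrangement adds complexity without changing the core content.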
Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.
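Song of this looping kind is often modeled as a finite-state system: a fixed repertoire of notes with permitted transitions, including loops back to earlier material. The transition table below is invented for illustration and does not describe any real finch's song.

```python
import random

# Permitted note-to-note transitions; "C" can loop back to "A"
# (re-using earlier material) or end the song.
transitions = {
    "start": ["A"],
    "A": ["B"],
    "B": ["C"],
    "C": ["A", "end"],
}

def sing(seed=None):
    """Generate one song by walking the transition graph."""
    rng = random.Random(seed)
    song, state = [], "start"
    while True:
        state = rng.choice(transitions[state])
        if state == "end":
            return song
        song.append(state)

song = sing(seed=1)
# Every adjacent pair of notes obeys the transition table.
assert all(b in transitions[a] for a, b in zip(song, song[1:]))
```

A small graph like this can generate songs of many different lengths, which is the "greater variation" the looping structure buys, yet the whole song still carries only one holistic meaning.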
By contrast, other types of animals have bare-bones modes of
expression without the same melodic capacity. Bees communicate
visually, using precise waggles to indicate sources of foods to
their peers; other primates can make a range of sounds, comprising
warnings about predators and other messages. Humans, according to
Miyagawa, Berwick and Okanoya, fruitfully combined these systems.
We can communicate essential information, like bees or primates —
but like birds, we also have a melodic capacity and an ability to
recombine parts of our uttered language. For this reason, our
finite vocabularies can generate a seemingly infinite string of
words. Indeed, the researchers suggest that humans first had the
ability to sing, as Darwin conjectured, and then managed to
integrate specific lexical elements into those songs. “It's not a
very long step to say that what got joined together was the
ability to construct these complex patterns, like a song, but with
words,” Berwick says.
As they note in the paper, some of the
“striking parallels” between language acquisition in birds and
humans include the phase of life when each is best at picking up
languages, and the part of the brain used for language. Another
similarity, Berwick notes, relates to an insight of celebrated MIT
professor emeritus of linguistics Morris Halle, who, as Berwick
puts it, observed that “all human languages have a finite number
of stress patterns, a certain number of beat patterns. Well, in
birdsong, there is also this limited number of beat patterns.”
Birds and bees
The researchers acknowledge that further empirical studies on the
subject would be desirable. “It's just a hypothesis,” Berwick
says. “But it's a way to make explicit what Darwin was talking
about very vaguely, because we know more about language now.”
Miyagawa, for his part, asserts it is a viable idea in part
because it could be subject to more scrutiny, as the communication
patterns of other species are examined in further detail. “If this
is right, then human language has a precursor in nature, in
evolution, that we can actually test today,” he says, adding that
bees, birds and other primates could all be sources of further
research insight. MIT-based research in linguistics has largely
been characterized by the search for universal aspects of all
human languages. With this paper, Miyagawa, Berwick and Okanoya
hope to spur others to think of the universality of language in
evolutionary terms. It is not just a random cultural construct,
they say, but based in part on capacities humans share with other
species. At the same time, Miyagawa notes, human language is
unique, in that two independent systems in nature merged, in our
species, to allow us to generate unbounded linguistic
possibilities, albeit within a constrained system. “Human language
is not just freeform, but it is rule-based,” Miyagawa says. “If we
are right, human language has a very heavy constraint on what it
can and cannot do, based on its antecedents in nature.”
VIDEO Daniel Levitin on auditory cheesecake and Steven Pinker (parts 1, 2 and 3)
Sad Speech and the Descending Minor 3rd
Larry Sanger: "Reading is the ur-skill of education, arguably the most fundamental intellectual skill that schools develop. It is well known that children who are poor readers in the early elementary grades usually fall even farther behind in subsequent years. The failure is not just an inability to decode; it is also a failure to pick up basic vocabulary. Starting out behind, children end up getting discouraged; they learn to hate school and learning generally, so the cycle continues from generation to generation. If there were a way to teach them to read at an early age, both to decode and to comprehend grade-level books, they would be much less likely to fall behind."
Language evolves and changes with children, and gesture is an integral part of language.
VIDEO If you are interested in Arts Education, Children's Health, and Society, the National Children's Folksong Repository is a public folklore project that will preserve what is left of our oral culture.
Integrate Reading, Music and Language Module Curriculum
Toddlers don't listen to their own voice like adults do
When grown-ups and kids speak, they listen to the sound of their
voice and make corrections based on that auditory feedback. But
new evidence shows that toddlers don't respond to their own voice
in quite the same way, according to a report published online on
December 22 in
Current Biology,
a Cell Press publication. The findings suggest that very young
children must have some other strategy to control their speech
production, the researchers say.
"As they play music, violinists will listen to the notes they produce to ensure they are in tune," explained Ewen MacDonald of the Technical University of Denmark. "If they aren't, they will adjust the position of their fingers to bring the notes back in tune. When we speak, we do something very similar. We subconsciously listen to vowel and consonant sounds in our speech to ensure we are producing them correctly. If the acoustics of our speech are slightly different from what we intended, then, like the violinists, we will adjust the way we speak to correct for these slight errors. In our study, we found that four-year-olds monitor their own speech in the same way as adults. Surprisingly, two-year-olds do not."
That's despite the fact that infants readily detect small
deviations in the pronunciation of familiar words and babble in a
manner consistent with their native language. By the time they
turn two,
American children
have an average vocabulary of about 300 words and appear well on
their way to acquiring the sound structure of their native
language.
In the experiment, adults, four-year-olds, and two-year-olds said
the word "bed" repeatedly while simultaneously hearing themselves
say the word "bad." (To elicit those utterances from the young
children and toddlers, the researchers developed a video game in
which players help a robot cross a virtual playground by saying
the robot's 'magic' word "bed.") "If they repeat this several
times, adults spontaneously compensate, changing the way they say
the vowel," MacDonald said. "Instead of saying the word 'bed,'
they say something more like the word 'bid.'" Four-year-olds
adjusted their speech, too, the researchers show.
The two-year-olds, on the other hand, kept right on saying
"bed."
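The adult compensation pattern works like a simple negative-feedback loop: the speaker compares what they hear against the intended vowel and shifts production in the opposite direction from the perturbation. The toy model below uses arbitrary numbers on a one-dimensional "vowel axis" and is not the study's analysis; the gain parameter is my own device for contrasting the adult and toddler behavior.

```python
def compensate(target, perturbation, gain, steps):
    """Simulate auditory-feedback compensation on a 1-D vowel value.

    target:       intended vowel position (the vowel in "bed")
    perturbation: shift added by the altered feedback (toward "bad")
    gain:         correction strength (0 models no compensation)
    """
    produced = target
    trajectory = [produced]
    for _ in range(steps):
        heard = produced + perturbation
        error = heard - target
        produced -= gain * error  # shift away from the perturbation
        trajectory.append(produced)
    return trajectory

adult = compensate(target=0.0, perturbation=1.0, gain=0.5, steps=10)
toddler = compensate(target=0.0, perturbation=1.0, gain=0.0, steps=10)
# The adult drifts opposite the shift (toward "bid"); the
# zero-gain speaker keeps producing the original vowel.
```

With any positive gain the produced vowel converges to the point that cancels the perturbation, mirroring adults ending up nearer "bid"; with zero gain it never moves, mirroring the two-year-olds who kept saying "bed."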
MacDonald says the results suggest a need to reconsider
assumptions about
how children make use of auditory feedback
. It may be that two-year-olds depend on their parents or other
people to monitor their speech instead of relying on their own
voice. MacDonald notes that caregivers often do repeat or reflect
back to young children what they've heard them say.