Educational CyberPlayGround ®

Native languages influence the way people group non-language sounds into rhythms.

TRANSDISCIPLINARITY
Counters pedantic scholarship and supports cultural decolonization.
HUMAN SYNCHRONY
Improving literacy through arts education and advocacy by providing collaborative, interdisciplinary resources for understanding the universal laws of perception that underlie the rhythms of both speech and music, in world culture and in our national culture. Honoring the Unknown Culture Maker while bringing the arts to life on the Web, for all ages.

The sounds of our native languages affect how we hear music and other non-language sounds.

A team of American and Japanese researchers has found evidence that native languages influence the way people group non-language sounds into rhythms.
Source: American Institute of Physics
http://www.eurekalert.org/pub_releases/2006-11/aiop-nss113006.php

Exposure to certain patterns of speech can influence one's perceptions of musical rhythms.

People in different cultures perceive different rhythms in identical sequences of sound, according to Drs. John R. Iversen and Aniruddh D. Patel of The Neuroscience Institute in San Diego and Dr. Kengo Ohgushi of the Kyoto City University of Arts in Kyoto, Japan. This provides evidence that exposure to certain patterns of speech can influence one's perceptions of musical rhythms. In future work, they believe they may even be able to predict how people will hear rhythms based on the structures of their own languages.

Universal laws of perception, underlying the rhythms of both speech and music.

Researchers have traditionally tested how individuals group rhythms by playing simple sequences of tones. For example, listeners are presented with tones that alternate in loudness (...loud-soft-loud-soft...) or duration (...long-short-long-short...) and are asked to indicate their perceived grouping. Two principles established a century ago, and confirmed in numerous studies since, are widely accepted: a louder sound tends to mark the beginning of a group, and a lengthened sound tends to mark the end of a group. These principles have come to be viewed as universal laws of perception, underlying the rhythms of both speech and music. However, the cross-cultural data have come from a limited range of cultures, such as American, Dutch and French.
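The following is a minimal Python sketch, not the researchers' stimulus code, showing the kind of sequence used in such tests: an unbroken alternation of tones that differ only in duration, so any "short-long" or "long-short" grouping exists only in the listener's perception. The tone frequency, durations, and gap lengths are illustrative assumptions.

import numpy as np

SAMPLE_RATE = 44100  # samples per second

def tone(duration_s, freq_hz=440.0):
    # Return a sine tone of the given duration at the given frequency.
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def alternating_duration_sequence(n_pairs=8, short_s=0.2, long_s=0.4, gap_s=0.1):
    # Concatenate tones that alternate short, long, short, long ...
    # separated by equal silent gaps.  Listeners are asked whether they
    # hear the repeating pairs as "short-long" or "long-short".
    gap = np.zeros(int(SAMPLE_RATE * gap_s))
    parts = []
    for _ in range(n_pairs):
        parts.extend([tone(short_s), gap, tone(long_s), gap])
    return np.concatenate(parts)

stimulus = alternating_duration_sequence()
print(f"{len(stimulus) / SAMPLE_RATE:.1f} seconds of stimulus audio generated")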

This new study suggests one of those so-called "universal" principles, perceiving a longer sound at the end of a group, may be merely a byproduct of English and other Western languages. In the experiments Iversen, Patel and Ohgushi performed, native speakers of Japanese and native speakers of American English both followed the loudness principle, hearing repeating "loud-soft" groups. However, the listeners showed a sharp difference when it came to the duration principle. English-speaking listeners most often grouped alternating short and long tones as "short-long." Japanese-speaking listeners, albeit with more variability, were more likely to perceive the tones as "long-short." Since this finding was surprising and contradicted a common belief about perception, the researchers replicated and confirmed it with listeners from different parts of Japan.

Understanding how musical rhythms begin in the two cultures

To uncover why these differences exist, one clue may come from understanding how musical rhythms begin in the two cultures. For example, if most phrases in American music start with a short-long pattern, and most phrases in Japanese music start with a long-short pattern, then listeners might learn to use these patterns as cues for how to group them. To test this idea, the researchers examined phrases in American and Japanese children's songs. They examined 50 songs per culture, and for each beginning phrase they computed the duration ratio of the first note to the second note and counted how often phrases started with a short-long pattern versus other possible patterns such as long-short, or equal duration. They found American songs show no bias to start phrases with a short-long pattern. But Japanese songs show a bias to start phrases with a long-short pattern, consistent with their perceptual findings.
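A minimal Python sketch of that tally follows; it assumes a made-up representation (each phrase as a list of note durations in beats) rather than the authors' actual corpus or code, and the example phrases are hypothetical.

from collections import Counter

def opening_pattern(note_durations, tolerance=0.05):
    # Classify a phrase's opening from the duration ratio of its first
    # two notes: "short-long", "long-short", or (approximately) "equal".
    first, second = note_durations[0], note_durations[1]
    ratio = first / second
    if ratio < 1 - tolerance:
        return "short-long"
    if ratio > 1 + tolerance:
        return "long-short"
    return "equal"

# Hypothetical phrases, each written as a list of note durations in beats.
american_phrases = [[0.5, 1.0, 1.0], [1.0, 0.5, 1.0], [1.0, 1.0, 0.5]]  # mixed openings
japanese_phrases = [[1.0, 0.5, 0.5], [1.5, 0.5, 1.0], [2.0, 1.0, 1.0]]  # long-short openings

for name, phrases in [("American", american_phrases), ("Japanese", japanese_phrases)]:
    counts = Counter(opening_pattern(p) for p in phrases)
    print(name, dict(counts))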

One basic difference between English and Japanese is word order. In English, short grammatical, or "function," words such as "the," "a," and "to" come at the beginning of phrases and combine with longer meaningful, or "content," words such as nouns or verbs. Function words are typically reduced in speech, having short duration and low stress. This creates frequent linguistic chunks that start with a short element and end with a long one, such as "to eat" and "a big desk." This fact about English has long been exploited by poets in creating the English language's most common verse form, iambic pentameter.

Japanese, in contrast, places function words at the ends of phrases. Common function words in Japanese include "case markers," or short sounds which can indicate whether a noun is a subject, direct object, indirect object, etc. For example, in the sentence "John-san-ga Mari-san-ni hon-wo agemashita," ("John gave a book to Mari") the suffixes "ga," "ni" and "wo" are case markers indicating that John is the subject, Mari is the indirect object and "hon" (book) is the direct object. Placing function words at the ends of phrases creates frequent chunks that start with a long element and end with a short one, which is just the opposite of the rhythm of short phrases in English.

Link between Language and Music

In addition to potentially uncovering a new link between language and music, the researchers' work demonstrates there is a need for cross-cultural research when it comes to testing general principles of auditory perception.

Babies seem to have a keen eye for speech: they can distinguish between different languages simply by reading your lips.

Speaking like a Chinese native is in the genes
Nora Schultz, New Scientist, June 2007, issue 2606, page 15.
ENQUIRE in Chinese after the health of someone's mother and you could well receive an answer about the well-being of their horse. Subtle pronunciation differences in tonal languages such as Chinese change the meaning of words, which is one reason why they are so hard for speakers of non-tonal languages like English to learn.
Babies of all backgrounds can grow up speaking any language, so there is no such thing as "a gene for Chinese". There may, however, be something in our genes that affects how easily we can learn certain languages. So say Dan Dediu and Robert Ladd of the University of Edinburgh, UK, who have discovered the first clear correlation between language and genetic variation.
Using statistical analysis, the pair show that people in parts of the world where non-tonal languages are spoken are more likely to carry different, more recently evolved forms of two brain development genes, ASPM and microcephalin, than people in tonal regions (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.0610848104).
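As a toy illustration only, the following Python sketch correlates a binary non-tonal indicator with made-up derived-allele frequencies across hypothetical populations; the published analysis was far more involved, controlling for geography and shared history, which this sketch does not attempt.

import numpy as np

# 1 = population speaks a non-tonal language, 0 = tonal (hypothetical labels)
non_tonal = np.array([1, 1, 1, 0, 0, 0, 1, 0])

# Made-up frequencies of the derived (more recently evolved) allele
# of one of the two genes in each population.
derived_allele_freq = np.array([0.60, 0.70, 0.55, 0.20, 0.25, 0.30, 0.65, 0.15])

r = np.corrcoef(non_tonal, derived_allele_freq)[0, 1]
print(f"Correlation between non-tonal language and derived-allele frequency: r = {r:.2f}")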
"This is exciting because most genes and language features that vary at the population level are either not correlated or have a correlation that can be explained by geography or history," says Ladd. In ASPM and microcephalin, neither geography nor history can
account for the correlation.
Since both genes have a function in brain development, Dediu and Ladd propose that they may have subtle effects on the organisation of the cerebral cortex, including the areas that process language. Brain anatomy differs between English speakers who are good at learning tonal languages and those who find it harder, says Ladd, so now he wants to see whether similar learning differences can be found in carriers of the ASPM and microcephalin variant genes.
A remaining puzzle is the role of natural selection. The newer gene variants that are common in non-tonal regions must have been positively selected (New Scientist, 11 March 2006, p 30), but nobody has been able to show how they might provide a selective advantage.
Dediu and Ladd don't think their proposed linguistic effect could be the answer. "There is absolutely no reason to think that non-tonal languages are in any way more fit for purpose than tonal languages," says Ladd.
Bernard Crespi of Simon Fraser University, Burnaby, in British Columbia, Canada, has an explanation for the older genes, however. "Tonal languages may have some similarities to 'motherese' [baby talk]," which apparently helps infants learn language, he says.