Say My Name Your Way: Reflections on a Multilingual Life


The film is trying to explore those blurry overlaps of our experiences, which is what most of my work is about. I think about movement as an articulation of otherness. Movement is communicability. So our collaboration has always been about that interplay between movement and language, and different forms of storytelling. It is about being in a constant conversation with every aspect of my environment, reacting physically to all parts of my surroundings.

And more generally, do you identify yourself as radical, and in what way? I would prefer to think of what I do as being an interlocutor, a medium, an echo, or a go-between. I never had a fixed identity to begin with, so there was nothing to let go of. Language is tricky. As humans, we are sort of beholden to it, with our limited senses—we still depend on verbal communication in ways that other species do not. I always turn to the writings of Octavia Butler when I need a little consolation about how limited it feels to be human.

Duilian, for example, is a work inspired by the Chinese revolutionary Qiu Jin, an anti-Qing heroine whose personal life is normally overshadowed by her monumental figure. In your work, you instead address her fictive personal, affective space. Intimacy can also be spatial-temporal, in the sense of creating an intimate environment or moment.

A designer might quibble that the English on the right side unnecessarily falls into the gutter between the pages, and it might have looked nicer to horizontally center the block of poetry.

But designers always find something to quibble about. A lot of white space means that the poems get space to breathe. This might focus the eye of some readers, myself included, on what the translator did differently. Facing-page translation works really well with poems that can fit on a single page, like these. It gets more complicated when the poems span several pages, as would be the case with The Odyssey: differences in line lengths can mount up, and keeping the two texts synchronized can become complicated.

Because of this, it is sometimes difficult to think in a programming language, i.e. to translate one's thoughts directly into code.

The good news is that many programming languages use similar concepts and structures, since they are all based on the principles of computation. This means that it is often quite easy to learn a second programming language after learning the first. Learning a second natural language can take much more effort. One thing is clear - it is becoming increasingly important to learn both kinds of language.

The children who learned to use computers without teachers. Levinson, S. C. Pragmatics as the origin of recursion. In Lowenthal, F., & Lefebvre, L. (Eds.), Language and Recursion. Berlin: Springer.

Some programming languages certainly look a lot like natural languages. However, some other programming languages are much less readable. Compare, for example, a short Python program with Scheme code that does the same thing; both are sketched below. So, how similar or different are natural languages and programming languages really? To answer these questions, we need to understand some central terms that linguists use to describe the structure of languages, rather than just looking at the surface of what looks similar or not.
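Here is a short Python program, as a minimal illustrative sketch (the particular snippets are assumptions, chosen only to make the comparison concrete), that adds up the numbers from 1 to 10 and prints the total:

    # Add up the numbers from 1 to 10 and print the result.
    total = 0
    for number in range(1, 11):
        total = total + number
    print(total)

Even a reader who has never programmed can roughly follow what this does. Here is Scheme code for the same computation; for the untrained eye, the nested parentheses make it much harder to read:

    ; Add up the numbers from 1 to 10 and print the result.
    (display
      (let loop ((number 1) (total 0))
        (if (> number 10)
            total
            (loop (+ number 1) (+ total number)))))
    (newline)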

Two of the most central concepts in linguistics are the concepts of semantics and syntax. In short, semantics is the linguistic term for meaning, but a more precise explanation is that semantics contains the information connected to concepts. Syntax, on the other hand, is the structure of how words of different kinds (e.g. nouns and verbs) can be combined. So, semantics and syntax have rules, but semantics relates to meaning and syntax relates to how words can be combined.

In programming languages, the coder has an intention of what the code should do. That could be called the semantics or the meaning of the code. The examples in Python and Scheme above have the same semantics, while the syntax of the two programming languages differs. We have described many parallels between the basic structure of natural languages and programming languages, but how far does the analogy go?

After all, natural languages are shaped by communicative needs and the functional constraints of human brains. Programming languages, on the other hand, are designed to have the capacities of a Turing machine, i.e. a machine that can in principle carry out any computation. It is necessary for programming languages to be fixed and closed, while natural languages are open-ended and allow blends. Code allows long lists of input data to be read in, stored and rapidly parsed by shuffling around data in many steps, to finally arrive at some output data.
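As a minimal sketch of this read-in, transform, and output pattern (the data and processing steps here are invented purely for illustration):

    # Read in a list of measurements, filter and rearrange them in a few
    # steps, and arrive at a single output value. Run on the same input,
    # the program always produces exactly the same output.
    measurements = [3.2, 4.8, 1.5, 7.1]                   # input data
    cleaned = sorted(m for m in measurements if m > 2.0)  # store and shuffle
    average = sum(cleaned) / len(cleaned)                 # output data
    print(average)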

The point is that this is done in a rigorous way. Natural languages on the other hand must allow their speakers to greet each other, make promises, give vague answers and tell lies. New meanings and new syntax constantly appear in natural languages, and there is a gradual change of, for example, pronunciation, word meanings and grammatical constructions. A sentence from a spoken language can have several possible meanings. People use context and their knowledge of the world to tell the difference between these meanings. Natural languages thus depend on an ever changing culture, creating nuances and blends of meanings, for different people in different cultures and contexts.

In programming languages, a line of code has a single meaning, so that the output can be reproduced with high fidelity.

Languages organize their nouns into classes in various different ways. Some languages with two noun classes differentiate between masculine and feminine nouns, e.g. French. While Western European languages might give the impression that grammatical gender is always a matter of masculine and feminine, other languages classify their nouns along quite different lines. The reasons for these differences between languages remain mysterious. Given that the gender system of a language permeates all sentences, one might wonder whether it goes further and also influences how people think in general.

On the face of it, this appears unlikely. A grammatical gender system is just a set of rules for how words change when combined. Nonetheless, a series of experiments have come up with surprising results. In one well-known line of research, children acquiring Hebrew, a language that marks gender pervasively, have been reported to work out whether they themselves are a boy or a girl slightly earlier than children acquiring languages with little or no gender marking. It is as if the gender distinction found in Hebrew nouns gave these children a hint about a similar gender distinction in the natural world. Roberto Cubelli and colleagues asked people to judge whether two objects were of the same category. When the grammatical gender of the objects matched, people were faster in their judgements than when there was a mismatch.

Edward Segel and Lera Boroditsky found the influence of grammatical gender even outside the laboratory - in an encyclopedia of classical paintings. They looked at all the gendered depictions of naturally asexual concepts like love, justice, and time. They noticed that these asexual entities were most often painted as male or female in line with the grammatical gender of the corresponding noun in the painter's native language. On top of that, this effect was consistent even when only looking at those concepts with different grammatical genders in the studied languages. These and similar studies have powerfully shown how a grammatical classification system for nouns affects the view language speakers have of the world.

By forcing people to think in certain categories, general thinking habits appear to be affected. This illustrates quite nicely that thought is influenced by what you must say - rather than by what you can say. The grammatical gender effect on cognition highlights the fact that language is not an isolated skill but instead a central part of how the mind works. Segel, E., & Boroditsky, L. Grammar in art. Frontiers in Psychology, 1, 1.

Most people who try to learn a second language, or who interact with non-native speakers, notice that the way non-native speakers speak their second language is influenced by their native language.

They are likely to have a foreign accent, and they might use inappropriate words or an incorrect grammatical structure, because those words or that structure are used that way in their native language. A lesser known yet common phenomenon is the influence of a foreign language we learn on our native language. People who start using a foreign language regularly (for example, after moving to a different country) often find themselves struggling to recall words when using their native language.

Other common influences are the borrowing of words or collocations (two or more words that often go together). For example, Dutch non-native speakers of English might insert English words for which there is no literal translation, such as native, into a Dutch conversation. Studies from the past couple of decades show that this influence occurs at all linguistic levels - as described above, people may borrow words or expressions from their second language, but they might also borrow grammatical structures or develop a non-native accent in their own native language.

In general, research has shown that all the languages we speak are always co-activated. This means that when a Dutch person speaks German, not only his German but also his Dutch, as well as any other language that person speaks, are automatically activated at the same time. This co-activation likely promotes cross-linguistic influence. So will learning a foreign language necessarily influence one's native language at all linguistic levels?

To a degree, but there are large individual differences. The influence is larger the more dominant the use of the foreign language is, and in particular if it is regularly used with native speakers of that language as when moving to a foreign country. The influence also increases with time, so immigrants, for example, are likely to show more influence after 20 years abroad than after 2, although there is also a burst of influence in the initial period of using a foreign language regularly. Some studies also suggest that differences among people in certain cognitive abilities, like the ability to suppress irrelevant information, affect the magnitude of the influence of the second language on the native language.

It is important to note though that some of these influences are relatively minor, and might not even be detectable in ordinary communication. Cook, V. (Ed.). Effects of the second language on the first. Clevedon: Multilingual Matters.

No language has a spelling system or orthography which absolutely and completely represents the sounds of that language, but some are definitely better than others. Italian has a shallow orthography, which means that the spelling of words represents the sounds of Italian quite well (although Sicilian, Sardinian, and Neapolitan Italian speakers may disagree), while English has a deep orthography, which means that spelling and pronunciation don't match so well.

Italian is consistent for two main reasons. Firstly, the Accademia della Crusca was established in 1583 and has spent several centuries since regulating the Italian language; the existence of such an academy has enabled wide-ranging and effective spelling consistency. Secondly, Standard Italian only has five vowels: a, i, u, e, and o, which makes it much easier to distinguish between them on paper.

Other examples of languages with five-vowel systems are Spanish and Japanese, both of which also have shallow orthographies. Japanese is an interesting case; some words are written using the Japanese kana characters, which accurately represent the sounds of the words, but other words are written with adapted Chinese characters (kanji), which represent the meaning of the words and don't represent the sound at all.


French has a deep orthography, but only in one direction; while one sound can be written in several different ways, there tends to be one specific way of pronouncing a particular vowel or combination of vowels. For example, the sound [o] can be written au, eau, or o, as in haut, oiseau, and mot; however, the spelling eau can only be pronounced as [o]. English, meanwhile, has a very deep orthography, and has happily resisted spelling reform for centuries (interestingly enough, this is not the case in the USA; Noah Webster's American Dictionary of the English Language introduced a successful modern spelling reform programme… or program).

One obvious reason is the lack of a formal academy for the English language - English speakers are rather laissez-faire (certainly laissez-faire enough to use French to describe English speakers' attitudes towards English) - but there are several other reasons too. Formed out of a melting pot of European languages - a dab of Latin and Greek here, a pinch of Celtic and French there, a fair old chunk of German, and a few handfuls of Norse - English has a long and complicated history.

Some spelling irregularities in English reflect the original etymology of the words. The unpronounced b in doubt and debt harks back to their Latin roots, dubitare and debitum, while the pronunciation of ce- as "se-" in centre, certain, and celebrity is due to the influence of French (and send and sell are not "cend" and "cell" because they are Germanic in origin).

All languages change over time, but English had a particularly dramatic set of changes to the sound of its vowels in the Middle Ages, known as the Great Vowel Shift. The early and middle phases of the Great Vowel Shift coincided with the invention of the printing press, which helped to freeze the English spelling system at that point; the sounds then changed but the spellings didn't, meaning that Modern English spells many words the way they were pronounced centuries ago.

This means that Shakespeare's plays were originally pronounced very differently from modern English, even though the spelling is almost exactly the same. Moreover, the challenge of making the sounds of English match the spelling of English is harder because of the sheer number of vowels.

Depending on their dialect, English speakers can have as many as 22 separate vowel sounds, but only the letters a, i, u, e, o, and y to represent them; it's no wonder that so many competing combinations of letters were created. Deep orthography makes learning to read more difficult, both for native speakers and for second language learners. Despite this, many people are resistant to spelling reform because the benefits may not make up for the loss of linguistic history. The English may love regularity when it comes to queuing and tea, but not when it comes to orthography.

Children usually start babbling at the age of two or three months - first they babble vowels, later consonants, and finally, between the ages of seven and eleven months, they produce word-like sounds. Babbling is basically used by children to explore how their speech apparatus works and how they can produce different sounds. Along with the production of word-like sounds comes the ability to extract words from a speech input.

Grammar is said to have developed by the age of four or five years, and by then children are basically considered linguistic adults. The age at which children acquire these skills may vary strongly from one infant to another, and the order may also vary depending on the linguistic environment in which the children grow up.

But by the age of four or five, all healthy children will have acquired language. The development of language correlates with different processes in the brain, such as the formation of connective pathways, the increase of metabolic activity in different areas of the brain, and myelination (the production of myelin sheaths that form a layer around the axon of a neuron and are essential for proper functioning of the nervous system).

Segalowitz (Eds.). Amsterdam: Elsevier.

Homophones are words that sound the same but have two or more distinct meanings. This phenomenon occurs in all spoken languages. Words such as write and right sound the same, even though they differ in several letters when written down (and are therefore called heterographic homophones).

Other homophones are spelled the same way, like bank (the bank of a river versus the bank that holds your money); such words are sometimes called homographic homophones. Words with very similar sounds but different meanings also exist between languages. One might think that homophones would create serious problems for the listener. How can one possibly know what a speaker means when she says a sentence like "I hate the mouse"? Indeed, many studies have shown that listeners are a little slower to understand ambiguous words than unambiguous ones. However, in most cases, it is evident from the context what the intended meaning is.

The above sentence might, for example, appear in the contexts of "I don't mind most of my daughter's pets, but I hate the mouse" or "I love my new computer, but I hate the mouse". People normally figure out the intended meaning so quickly that they don't even perceive the alternative. Why do homophones exist? It seems much less confusing to have separate sounds for separate concepts. Linguists point to sound change as an important factor that can lead to the existence of homophones. Language contact also creates homophones.


Some changes over time thus create new homophones, whereas other changes undo the homophonic status of a word. Finally, a particularly nice characteristic of homophones is that they are often used in puns or as stylistic elements in literary texts. By David Peeters and Antje S.

Cutler, A. Voornaam is not really a homophone: Lexical prosody and lexical access in Dutch. Language and Speech, 44(2). Rodd, J. Making sense of semantic ambiguity: Semantic competition in lexical access. Journal of Memory and Language, 46(2). Tabossi, P. Accessing lexical ambiguity in different types of sentential contexts. Journal of Memory and Language, 27(3).

Dyslexia was first described in the late 19th century and referred to as 'congenital word blindness', because it was thought to result from problems with the processing of visual symbols. Over the years it has become clear that visual deficits are not the core feature for most people with dyslexia. In many cases, it seems that subtle underlying difficulties with aspects of language could be contributing. To learn to read, a child needs to understand the way that words are made up of their individual units (phonemes), and must become adept at matching those phonemes to arbitrary written symbols (graphemes).

Although the overall language proficiency of people with dyslexia usually appears normal, they often perform poorly on tests that involve manipulations of phonemes and processing of phonology, even when this does not involve any reading or writing. Since dyslexia is defined as a failure to read, without being explained by an obvious known cause, it is possible that this is not one single syndrome, but instead represents a cluster of different disorders, involving distinct mechanisms. However, it has proved hard to clearly separate dyslexia out into subtypes.

Studies have uncovered quite a few convincing behavioural markers (not only phonological deficits) that tend to be associated with the reading problems, and there is a lot of debate about how these features fit together into a coherent account. To give just one example, many people with dyslexia are less accurate when asked to rapidly name a visually presented series of objects or colours. Some researchers now believe that dyslexia results from the convergence of several different cognitive deficits, co-occurring in the same person.

It is well established that dyslexia clusters in families and that inherited factors must play a substantial role in susceptibility. Nevertheless, there is no doubt that the genetic basis is complex and heterogeneous, involving multiple different genes of small effect size, interacting with the environment. The neurobiological mechanisms that go awry in dyslexia are largely unknown.


A prominent theory posits disruptions of a process in early development, a process in which brain cells move towards their final locations, known as neuronal migration. Indirect supporting evidence for this hypothesis comes from studies of post-mortem brain material in humans and investigations of the functions of some candidate genes in rats. But there are still many open questions that need to be answered before we can fully understand the causal mechanisms that lead to this elusive syndrome.

Carrion-Castillo, A. Molecular genetics of dyslexia: an overview. Dyslexia, 19. Demonet, J. Developmental dyslexia. Lancet, 63: link. Fisher, S. Genes, cognition and dyslexia: learning to read the genome. Trends in Cognitive Sciences, 10.

When we tell people we investigate the sign languages of deaf people, or when people see us signing, they often ask us whether sign language is universal. The answer is that nearly every country is home to at least one national sign language which does not follow the structure of the dominant spoken language used in that country. Chinese Sign Language and Sign Language of the Netherlands, for example, also have distinct vocabularies, deploy different fingerspelling systems, and have their own set of grammatical rules.

At the same time, a Chinese and a Dutch deaf person who do not have any shared language manage to bridge this language gap with relative ease when meeting for the first time. This kind of ad hoc communication is also known as cross-signing. In collaboration with the International Institute for Sign Languages and Deaf Studies (iSLanDS), we are conducting a study of how cross-signing emerges among signers from different countries who are meeting for the first time. The recordings include signers from countries such as South Korea, Uzbekistan, and Indonesia.

This linguistic creativity often capitalizes on the depictive properties of visual signs (e.g. signs that iconically depict objects and actions). Cross-signing is distinct from International Sign, which is used at international deaf meetings such as the World Federation of the Deaf (WFD) congress or the Deaflympics. International Sign is strongly influenced by signs from American Sign Language and is usually used to present in front of international deaf audiences who are familiar with its lexicon. Cross-signing, on the other hand, emerges in interaction among signers without knowledge of each other's native sign languages.

Information on differences and commonalities between different sign languages, and between spoken and signed languages, by the World Federation of the Deaf: link. Mesch, J. Perspectives on the Concept and Definition of International Sign. World Federation of the Deaf. Supalla, T. The grammar of International Sign: A new look at pidgin languages. In K. Emmorey & J. Reilly (Eds.).


Our bodies constantly communicate in various ways. In the context of social interactions, our body expresses attitudes and emotions influenced by the dynamics of the interaction, interpersonal relations and personality (see also the answer to the question "What is body language?"). These bodily messages are often considered to be transmitted unwittingly. Because of this, it would be difficult to teach a universal shorthand suitable for expressing the kind of things considered to be body language; however, at least within one culture, there seems to be a great deal of commonality in how individuals express attitudes and emotions through their body.

Another form of bodily communication is the use of co-speech gesture. Co-speech gestures are movements of the hands, arms, and occasionally other body parts that interlocutors produce while talking. Because speech and gesture are so tightly intertwined, co-speech gestures are only very rarely fully interpretable in the absence of speech. As such, co-speech gestures do not help communication much if interlocutors do not speak the same language.

What people often tend to resort to when trying to communicate without a shared language are pantomimic gestures, or pantomimes. These gestures are highly iconic in nature (like some iconic co-speech gestures are), meaning that they map onto structures in the world around us. Even when produced while speaking, these gestures are designed to be understandable in the absence of speech. Without a shared spoken language, they are therefore more informative than co-speech gestures. An important distinction has to be made between these pantomimic gestures that can communicate information in the absence of speech and sign languages.

In contrast to pantomimes, sign languages of deaf communities are fully-fledged languages consisting of conventionalised meanings of individual manual forms and movements, which equate to the components that constitute spoken language. There is not one universal sign language: different communities have different sign languages (Dutch, German, British, French and Turkish sign languages being just a few examples).

Kendon, A. Gesture: Visible action as utterance. McNeill, D. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.

Language appears to be unique in the natural world, a defining feature of the human condition. Although other species have complex communication systems of their own, even our closest living primate relatives do not speak, in part because they lack sufficient voluntary control of their vocalizations.

After years of intensive tuition, some chimpanzees and bonobos have been able to acquire a rudimentary sign language. But still the skills of these exceptional cases have not come close to those of a typical human toddler, who will spontaneously use the generative power of language to express thoughts and ideas about present, past and future. It is certain that genes are important for explaining this enigma. But, there is actually no such thing as a "language gene" or "gene for language", as in a special gene with the designated job of providing us with the unique skills in question.

Genes do not specify cognitive or behavioural outputs; they contain the information for building proteins which carry out functions inside cells of the body. Some of these proteins have significant effects on the properties of brain cells, for example by influencing how they divide, grow and make connections with other brain cells that in turn are responsible for how the brain operates, including producing and understanding language.

So, it is feasible that evolutionary changes in certain genes had impacts on the wiring of human brain circuits, and thereby played roles in the emergence of spoken language. Crucially, this might have depended on alterations in multiple genes, not just a single magic bullet, and there is no reason to think that the genes themselves should have appeared "out of the blue" in our species. There is strong biological evidence that human linguistic capacities rely on modifications of genetic pathways that have a much deeper evolutionary history. A compelling argument comes from studies of FOXP2, a gene that has often been misrepresented in the media as the mythical "language gene".

It is true that FOXP2 is relevant for language - its role in human language was originally discovered because rare mutations that disrupt it cause a severe speech and language disorder. But FOXP2 is not unique to humans. Quite the opposite: versions of this gene are found in remarkably similar forms in a great many vertebrate species (including primates, rodents, birds, reptiles and fish), and it seems to be active in corresponding parts of the brain in these different animals.

For example, songbirds have their own version of FOXP2 which helps them learn to sing. In-depth studies of versions of the gene in multiple species indicate it plays roles in the ways that brain cells wire together. Intriguingly, while it has been around for many millions of years in evolutionary history, without changing very much, there have been at least two small but interesting alterations of FOXP2 that occurred on the branch that led to humans, after we split off from chimpanzees and bonobos.



Scientists are now studying those changes to find out how they might have impacted the development of human brain circuits, as one piece of the jigsaw of our language origins. Revisiting FOXP2 and the origins of language: link. Fisher, S. Nature Reviews Genetics, 7. Culture, genes, and the human revolution. Science.

Learning a new language is not easy, largely because of the heavy burden on memory. The learning process becomes more efficient when the translation step is removed and the new words are directly linked to the actual objects and actions.

Many highly skilled second language speakers frequently run into words whose exact translations do not even exist in their native language, demonstrating that those words were not learned by translation, but from context in the new language. The idea is to mimic how a child learns a new language.

Another way to build a vocabulary more quickly is by grouping things that are conceptually related and practicing them at the same time - for example, naming things and events related to transportation as one is getting home from work, or naming objects on the dinner table. At a more advanced stage of building a vocabulary, one can use a dictionary in the target language, such as an English-English dictionary or thesaurus, to find the meaning of new words, rather than a language-to-language dictionary.

Spaced Learning is a timed routine in which new material (such as a set of new words in the language being studied) is introduced, reviewed, and practiced in three timed blocks separated by two 10-minute breaks. It is important that distractor activities that are completely unrelated to the studied material, such as physical exercises, are performed during those breaks. It has been demonstrated in laboratory experiments that such repeated stimuli, separated by timed breaks, can initiate long-term connections between neurons in the brain and result in long-term memory encoding.
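To make the routine concrete, here is a minimal sketch of such a schedule written as a Python timer (the 20-minute block length and the exercise suggestion are assumptions added for illustration; the text itself only fixes the three blocks and the two 10-minute breaks):

    import time

    # Three timed study blocks separated by two 10-minute distractor breaks.
    BLOCKS = ["introduce the new words", "review the new words", "practice recalling the new words"]
    BLOCK_MINUTES = 20   # assumed block length
    BREAK_MINUTES = 10   # break length stated in the text

    for i, activity in enumerate(BLOCKS, start=1):
        print(f"Block {i}: {activity} for about {BLOCK_MINUTES} minutes.")
        time.sleep(BLOCK_MINUTES * 60)
        if i < len(BLOCKS):
            print(f"Break: {BREAK_MINUTES} minutes of an unrelated activity, e.g. light exercise.")
            time.sleep(BREAK_MINUTES * 60)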

These processes occur in minutes, and have been observed not only in humans, but also in other species. It is inevitable to forget when we are learning new things and so is making mistakes. The more you use the words that you are learning, the better you will remember them.