https://www.proz.com/forum/linguistics/8159-ever_think_of_machine_translation_language_acquisition_is_so_complex.html

Ever think of machine translation? Language acquisition is so complex...
Thread poster: Jacek Krankowski (X)
English to Polish + ...
Jan 31, 2003

How do we explain children's inevitable and early mastery of language? See, e.g.:



Language Acquisition

Steven Pinker

Massachusetts Institute of Technology

Chapter to appear in L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.), An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.

NONFINAL VERSION: PLEASE DO NOT QUOTE.



[excerpts]



Learnability Theory



What is language acquisition, in principle? A branch of theoretical computer science called Learnability Theory attempts to answer this question (Gold, 1967; Osherson, Stob, & Weinstein, 1985; Pinker, 1979). Learnability theory has defined learning as a scenario involving four parts (the theory embraces all forms of learning, but I will use language as the example):



A class of languages. One of them is the "target" language, to be attained by the learner, but the learner does not, of course, know which it is. In the case of children, the class of languages would consist of the existing and possible human languages; the target language is the one spoken in their community.



An environment. This is the information in the world that the learner has to go on in trying to acquire the language. In the case of children, it might include the sentences parents utter, the context in which they utter them, feedback to the child (verbal or nonverbal) in response to the child's own speech, and so on. Parental utterances can be a random sample of the language, or they might have some special properties: they might be ordered in certain ways, sentences might be repeated or only uttered once, and so on.



A learning strategy. The learner, using information in the environment, tries out "hypotheses" about the target language. The learning strategy is the algorithm that creates the hypotheses and determines whether they are consistent with the input information from the environment. For children, it is the "grammar-forming" mechanism in their brains; their "language acquisition device."



A success criterion. If we want to say that "learning" occurs, presumably it is because the learners' hypotheses are not random, but that by some time the hypotheses are related in some systematic way to the target language. Learners may arrive at a hypothesis identical to the target language after some fixed period of time; they may arrive at an approximation to it; they may waver among a set of hypotheses, one of which is correct.
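
A rough Python sketch of this four-part scenario may help make it concrete. The miniature "languages" (tiny finite sets of sentences), the random-sampling environment, and the smallest-consistent-language strategy below are invented stand-ins for illustration, not part of the theory itself:

import random

# 1. A class of languages (toy, hypothetical), one of which is the target.
LANGUAGE_CLASS = {
    "L1": {"we went", "we broke it"},
    "L2": {"we went", "we broke it", "he went"},
    "L3": {"we goed", "we breaked it"},
}
TARGET = "L2"  # the learner does not know this

# 2. An environment: an endless stream of positive examples from the target.
def environment(target_name):
    sentences = sorted(LANGUAGE_CLASS[target_name])
    while True:
        yield random.choice(sentences)

# 3. A learning strategy: hypothesize the smallest language in the class
#    that contains everything heard so far.
def learner(observed):
    candidates = [name for name, lang in LANGUAGE_CLASS.items() if observed <= lang]
    return min(candidates, key=lambda name: len(LANGUAGE_CLASS[name]))

# 4. A success criterion: the hypothesis matches the target after some time.
observed = set()
stream = environment(TARGET)
for _ in range(20):
    observed.add(next(stream))
    hypothesis = learner(observed)

print("final hypothesis:", hypothesis, "| success:", hypothesis == TARGET)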



Theorems in learnability theory show how assumptions about any three of the components impose logical constraints on the fourth. It is not hard to show why learning a language, on logical grounds alone, is so hard. As with all "induction problems" (uncertain generalizations from instances), there are an infinite number of hypotheses consistent with any finite sample of environmental information. Learnability theory shows which induction problems are solvable and which are not.
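
To illustrate the induction problem in miniature, here is a hypothetical toy construction (the sample sentences and the "very ... good" family of languages are invented purely for illustration) showing that any finite positive sample is contained in arbitrarily many distinct candidate languages:

observed = {"we went", "he went"}  # a finite positive sample

def candidate_language(n):
    """The n-th member of an infinite family of languages, each of which
    contains the observed sample; positive evidence alone cannot choose
    among them."""
    return observed | {"very " * k + "good" for k in range(1, n + 1)}

for n in (1, 2, 3):
    print(sorted(candidate_language(n)))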



A key factor is the role of negative evidence, or information about which strings of words are not sentences in the language to be acquired. Human children might get such information by being corrected every time they speak ungrammatically. If they aren't -- and as we shall see, they probably aren't -- the acquisition problem is all the harder. Consider Figure 1, where languages are depicted as circles corresponding to sets of word strings, and all the logical possibilities for how the child's language could differ from the adult language are depicted. There are four possibilities. (a) The child's hypothesis language (H) is disjoint from the language to be acquired (the "target language," T). That would correspond to the state of a child learning English who cannot say a single well-formed English sentence. For example, the child might be able only to say things like we breaked it and we goed, never we broke it or we went. (b) The child's hypothesis and the target language intersect. Here the child would be able to utter some English sentences, like he went. However, he or she also uses strings of words that are not English, such as we breaked it; and some sentences of English, such as we broke it, would still be outside his or her abilities. (c) The child's hypothesis language is a subset of the target language. That would mean that the child would have mastered some of English, but not all of it, and that everything the child had mastered would be part of English. The child might not be able to say we broke it, but he or she would be able to say some grammatical sentences, such as we went; no errors such as she breaked it or we goed would occur. The final logical possibility is (d), where the child's hypothesis language is a superset of the target language. That would occur, for example, if the child could say we broke it, we went, we breaked it, and we goed.



In cases (a-c), the child can realize that the hypothesis is incorrect by hearing sentences from parental "positive evidence" (indicated by the "+" symbol) that are in the target language but not the hypothesized one: sentences such as we broke it. This is impossible in case (d); negative evidence (such as corrections of the child's ungrammatical sentences by his or her parents) would be needed. In other words, without negative evidence, if a child guesses too large a language, the world can never tell him he's wrong.
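
One way to see concretely why case (d) is the troublesome one is to model the four possibilities with tiny finite sets of strings. The sets below reuse the example sentences from the text and are purely illustrative (real languages are infinite); the point is only that the superset hypothesis is the one never contradicted by positive evidence alone:

TARGET = {"we went", "we broke it", "he went"}  # T: the adult language

HYPOTHESES = {
    "(a) disjoint":  {"we goed", "we breaked it"},
    "(b) intersect": {"he went", "we breaked it"},
    "(c) subset":    {"we went", "he went"},
    "(d) superset":  {"we went", "we broke it", "he went", "we goed", "we breaked it"},
}

for label, H in HYPOTHESES.items():
    # Positive evidence that can expose the error: sentences of T missing from H.
    missing = TARGET - H
    # Overgenerations that only negative evidence could flag: sentences of H outside T.
    extra = H - TARGET
    print(label,
          "| correctable by positive evidence:", bool(missing),
          "| needs negative evidence for:", sorted(extra))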



This has several consequences. For one thing, the most general learning algorithm one might conceive of -- one that is capable of hypothesizing any grammar, or any computer program capable of generating a language -- is in trouble. Without negative evidence (and even in many cases with it), there is no general-purpose, all-powerful learning machine; a machine must in some sense "know" something about the constraints in the domain in which it is learning.



More concretely, if children don't receive negative evidence (see Section ) we have a lot of explaining to do, because overly large hypotheses are very easy for the child to make. For example, children actually do go through stages in which they use two or more past tense forms for a given verb, such as broke and breaked -- this case is discussed in detail in my other chapter in this volume. They derive transitive verbs from intransitives too freely: where an adult might say both The ice melted and I melted the ice, children also can say The girl giggled and Don't giggle me! (Bowerman, 1982b; Pinker, 1989). In each case they are in situation (d) in Figure 1, and unless their parents slip them some signal in every case that lets them know they are not speaking properly, it is puzzling that they eventually stop. That is, we would need to explain how they grow into adults who are more restrictive in their speech -- or, another way of putting it, it is puzzling that the English language doesn't allow don't giggle me and she eated given that children are tempted to grow up talking that way. If the world isn't telling children to stop, something in their brains is, and we have to find out who or what is causing the change.



Let's now examine language acquisition in the human species by breaking it down into the four elements that give a precise definition to learning: the target of learning, the input, the degree of success, and the learning strategy.



What is Learned



To understand how X is learned, you first have to understand what X is. Linguistic theory is thus an essential part of the study of language acquisition (see the Chapter by Lasnik). Linguistic research tries to do three things. First, it must characterize the facts of English and of all the other languages whose acquisition we are interested in explaining. Second, since children are not predisposed to learn English as opposed to any other language, linguistics has to examine the structure of other languages. In particular, linguists characterize which aspects of grammar are universal, prevalent, rare, and nonexistent across languages. Contrary to early suspicions, languages do not vary arbitrarily and without limit; there is by now a large catalogue of language universals, properties shared exactly, or in a small number of variations, by all languages (see Comrie, 1981; Greenberg, 1978; Shopen, 1985). This obviously bears on what children's language acquisition mechanisms find easy or hard to learn.



And one must go beyond a mere list of universals. Many universal properties of language are not specific to language but are simply reflections of universals of human experience. All languages have words for "water" and "foot" because all people need to refer to water and feet; no language has a word a million syllables long because no person would have time to say it. But others might be specific to the innate design of language itself. For example, if a language has both derivational suffixes (which create new words from old ones, like -ism) and inflectional suffixes (which modify a word to fit its role in the sentence, like plural -s), then the derivational suffixes are always closer to the word stem than the inflectional ones. For example, in English one can say Darwinisms (derivational -ism closer to the stem than inflectional -s) but not Darwinsism. It is hard to think of a reason why this law would fit into any universal law of thought or memory: why should the concept of two ideologies based on one Darwin be thinkable, but the concept of one ideology based on two Darwins (say, Charles and Erasmus) not be thinkable (unless one reasons in a circle and declares that the mind must find -ism to be more cognitively basic than the plural, because that's the order we see in language)? Universals like this, which are specifically linguistic, should be captured in a theory of Universal Grammar (UG) (Chomsky, 1965, 1981, 1991). UG specifies the allowable mental representations and operations that all languages are confined to use. The theory of universal grammar is closely tied to the theory of the mental mechanisms children use in acquiring language; their hypotheses about language must be couched in structures sanctioned by UG. (...)
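
A tiny, hypothetical sketch of this ordering universal as a checkable constraint may be useful; the suffix inventory here is obviously a toy, limited to -ism and plural -s:

DERIVATIONAL = {"ism"}  # create new words from old ones
INFLECTIONAL = {"s"}    # fit a word to its role in the sentence

def obeys_ordering(suffixes):
    """Reject any suffix sequence in which an inflectional suffix
    appears closer to the stem than a derivational one."""
    seen_inflection = False
    for suffix in suffixes:
        if suffix in INFLECTIONAL:
            seen_inflection = True
        elif suffix in DERIVATIONAL and seen_inflection:
            return False
    return True

print(obeys_ordering(["ism", "s"]))  # Darwin-ism-s ("Darwinisms") -> True
print(obeys_ordering(["s", "ism"]))  # Darwin-s-ism ("Darwinsism") -> False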



Conclusion



The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.



Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science. (...)



The most ambitious attempts to synthesize large amounts of data on language development into a cohesive framework are Brown (1973), Pinker (1984), and Slobin (1985b). Clark (1993) reviews the acquisition of words. Locke (1993) covers the earliest stages of acquisition, with a focus on speech input and output. Morgan & Demuth (in press) contains papers on children's perception of input speech and its interaction with their language development.

http://www.ecs.soton.ac.uk/~harnad/Papers/Py104/pinker.langacq.html






