One Sunday, at one of our weekly salsa sessions, my friend Frank brought along a Danish guest. I knew Frank spoke Danish well, since his mother was Danish, and he, as a child, had lived in Denmark. As for his friend, her English was fluent, as is standard for Scandinavians. However, to my surprise, during the evening’s chitchat it emerged that the two friends habitually exchanged emails using Google Translate. Frank would write a message in English, then run it through Google Translate to produce a new text in Danish; conversely, she would write a message in Danish, then let Google Translate anglicize it. How odd! Why would two intelligent people, each of whom spoke the other’s language well, do this? My own experiences with machine-translation software had always led me to be highly skeptical about it. But my skepticism was clearly not shared by these two. Indeed, many thoughtful people are quite enamored of translation programs, finding little to criticize in them. This baffles me.
As a language lover and an impassioned translator, as a cognitive scientist and a lifelong admirer of the human mind’s subtlety, I have followed the attempts to mechanize translation for decades. When I first got interested in the subject, in the mid-1970s, I ran across a letter written in 1947 by the mathematician Warren Weaver, an early machine-translation advocate, to Norbert Wiener, a key figure in cybernetics, in which Weaver made this curious claim, today quite famous:
When I look at an article in Russian, I say, “This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.”
Some years later he offered a different viewpoint: “No reasonable person thinks that a machine translation can ever achieve elegance and style. Pushkin need not shudder.” Whew! Having devoted one unforgettably intense year of my life to translating Alexander Pushkin’s sparkling novel in verse Eugene Onegin into my native tongue (that is, having radically reworked that great Russian work into an English-language novel in verse), I find this remark of Weaver’s far more congenial than his earlier remark, which reveals a strangely simplistic view of language. Nonetheless, his 1947 view of translation-as-decoding became a credo that has long driven the field of machine translation.
Since those days, “translation engines” have gradually improved, and recently the use of so-called “deep neural nets” has even suggested to some observers (see “The Great AI Awakening” by Gideon Lewis-Kraus in The New York Times Magazine, and “Machine Translation: Beyond Babel” by Lane Greene in The Economist) that human translators may be an endangered species. In this scenario, human translators would become, within a few years, mere quality controllers and glitch fixers, rather than producers of fresh new text.
Such a development would cause a soul-shattering upheaval in my mental life. Although I fully understand the fascination of trying to get machines to translate well, I am not in the least eager to see human translators replaced by inanimate machines. Indeed, the idea frightens and revolts me. To my mind, translation is an incredibly subtle art that draws constantly on one’s many years of experience in life, and on one’s creative imagination. If, some “fine” day, human translators were to become relics of the past, my respect for the human mind would be profoundly shaken, and the shock would leave me reeling with terrible confusion and immense, permanent sadness.
Each time I read an article claiming that the guild of human translators will soon be forced to bow down before the terrible swift sword of some new technology, I feel the need to check the claims out myself, partly out of a sense of terror that this nightmare just might be around the corner, more hopefully out of a desire to reassure myself that it’s not just around the corner, and finally, out of my longstanding belief that it’s important to combat exaggerated claims about artificial intelligence. And so, after reading about how the old idea of artificial neural networks, recently adopted by a branch of Google called Google Brain, and now enhanced by “deep learning,” has resulted in a new kind of software that has allegedly revolutionized machine translation, I decided I had to check out the latest incarnation of Google Translate. Was it a game changer, as Deep Blue and AlphaGo were for the venerable games of chess and Go?
I learned that although the older version of Google Translate can handle a very large repertoire of languages, at the time its new deep-learning incarnation worked for just nine languages. (It has since expanded to 96.)* Accordingly, I limited my explorations to English, French, German, and Chinese.
Before showing my findings, though, I should point out that an ambiguity in the adjective “deep” is being exploited here. When one hears that Google bought a company called DeepMind whose products have “deep neural networks” enhanced by “deep learning,” one cannot help taking the word “deep” to mean “profound,” and thus “powerful,” “insightful,” “wise.” And yet, the meaning of “deep” in this context comes simply from the fact that these neural networks have more layers (12, say) than do older networks, which might have only two or three. But does that sort of depth imply that whatever such a network does must be profound? Hardly. This is verbal spinmeistery.
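To make the point concrete: the "depth" in question is nothing but a layer count. The sketch below (a generic toy feedforward network, not anything resembling Google's actual translation system) shows that a "deep" network differs from a "shallow" one only in how many layers are chained together — the names, sizes, and structure here are illustrative assumptions.

```python
import random

def make_layer(n_in, n_out):
    # One fully connected layer: a weight matrix plus a bias vector.
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def apply_layer(layer, x):
    weights, biases = layer
    # Weighted sum of inputs, followed by a simple ReLU nonlinearity.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def make_network(layer_sizes):
    # "Depth" is just the number of layers chained end to end:
    # [4, 8, 8, 2] is "deeper" than [4, 8, 2] -- nothing more profound.
    return [make_layer(a, b) for a, b in zip(layer_sizes, layer_sizes[1:])]

def forward(network, x):
    for layer in network:
        x = apply_layer(layer, x)
    return x

shallow = make_network([4, 8, 2])           # 2 layers: "shallow"
deep = make_network([4] + [8] * 11 + [2])   # 12 layers: "deep"

print(len(shallow), len(deep))  # prints: 2 12
```

Both networks transform a 4-number input into a 2-number output; the "deep" one simply passes the numbers through more intermediate stages. Whether those extra stages yield anything one would call insight is exactly the question the word "deep" quietly begs.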
I am very wary of Google Translate, especially given all the hype surrounding it. But despite my distaste, I recognize some astonishing facts about this bête noire of mine. It is accessible for free to anyone on earth, and will convert text in any of roughly 100 languages into text in any of the others. That is humbling. If I am proud to call myself “pi-lingual” (meaning the sum of all my fractional languages is a bit over 3, which is my lighthearted way of answering the question “How many languages do you speak?”), then how much prouder should Google Translate be, since it could call itself “bai-lingual” (“bai” being Mandarin for 100). To a mere pi-lingual, bai-lingualism is most impressive. Moreover, if I copy and paste a page of text in Language A into Google Translate, only moments will elapse before I get back a page filled with words in Language B. And this is happening all the time on screens all over the planet, in dozens of languages.
The practical utility of Google Translate and similar technologies is undeniable, and probably it’s a good thing overall, but there is still something deeply lacking in the approach, which is conveyed by a single word: understanding. Machine translation has never focused on understanding language. Instead, the field has always tried to “decode”—to get away without worrying about what understanding and meaning are. Could it in fact be that understanding isn’t needed in order to translate well? Could an entity, human or machine, do high-quality translation without paying attention to what language is all about? To shed some light on this question, I turn now to the experiments I performed.
I began my explorations very humbly, using the following short remark, which, in a human mind, evokes a clear scenario:
In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers.
The translation challenge seems straightforward, but in French (and other Romance languages), the words for “his” and “her” don’t agree in gender with the possessor, but with the item possessed. So here’s what Google Translate gave me:
Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes.
The program fell into my trap, not realizing, as any human reader would, that I was describing a couple, stressing that for each item he had, she had a similar one. For example, the deep-learning engine used the word “sa” for both “his car” and “her car,” so you can’t tell anything about either car-owner’s gender. Likewise, it used the genderless plural “ses” both for “his towels” and “her towels,” and in the last case of the two libraries, his and hers, it got thrown by the final “s” in “hers” and somehow decided that that “s” represented a plural (“les siennes”). Google Translate’s French sentence missed the whole point.
Next I translated the challenge phrase into French myself, in a way that did preserve the intended meaning. Here’s my French version:
Chez eux, ils ont tout en double. Il y a sa voiture à elle et sa voiture à lui, ses serviettes à elle et ses serviettes à lui, sa bibliothèque à elle et sa bibliothèque à lui.
The phrase “sa voiture à elle” spells out the idea “her car,” and similarly, “sa voiture à lui” can only be heard as meaning “his car.” At this point, I figured it would be trivial for Google Translate to carry my French translation back into English and get the English right on the money, but I was dead wrong. Here’s what it gave me:
At home, they have everything in double. There is his own car and his own car, his own towels and his own towels, his own library and his own library.
What?! Even with the input sentence screaming out the owners’ genders as loudly as possible, the translating machine ignored the screams and made everything masculine. Why did it throw the sentence’s most crucial information away?
We humans know all sorts of things about couples, houses, personal possessions, pride, rivalry, jealousy, privacy, and many other intangibles that lead to such quirks as a married couple having towels embroidered “his” and “hers.” Google Translate isn’t familiar with such situations. Google Translate isn’t familiar with situations, period. It’s familiar solely with strings composed of words composed of letters. It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things. Let me hasten to say that a computer program certainly could, in principle, know what language is for, and could have ideas and memories and experiences, and could put them to use, but that’s not what Google Translate was designed to do. Such an ambition wasn’t even on its designers’ radar screens.
Well, I chuckled at these poor shows, relieved to see that we aren’t, after all, so close to replacing human translators by automata. But I still felt I should check the engine out more closely. After all, one swallow does not thirst quench.
Indeed, what about this freshly coined phrase “One swallow does not thirst quench” (alluding, of course, to “One swallow does not a summer make”)? I couldn’t resist trying it out; here’s what Google Translate flipped back at me: “Une hirondelle n’aspire pas la soif.” This is a grammatical French sentence, but it’s pretty hard to fathom. First it names a certain bird (“une hirondelle”—a swallow), then it says this bird is not inhaling or not sucking (“n’aspire pas”), and finally reveals that the neither-inhaled-nor-sucked item is thirst (“la soif”). Clearly Google Translate didn’t catch my meaning; it merely came out with a heap of bull. “Il sortait simplement avec un tas de taureau.” “He just went out with a pile of bulls.” “Il vient de sortir avec un tas de taureaux.” Please pardon my French—or rather, Google Translate’s pseudo-French.