ABRATES 8th International Translation and Interpretation Conference is coming up, and you should definitely attend it!
ABRATES is the Brazilian Association of Translators and Interpreters, a non-profit association managed by volunteer translators and interpreters. It promotes courses and events and encourages the exchange of knowledge and networking among the industry’s professionals, institutions, and agencies.
Going beyond the obvious of any good industry event, attending ABRATES can give you a fantastic first-hand view of how big and important Brazil’s (and Latin America’s) market is. The networking opportunities are endless, and if you live in North America, Europe or Oceania, the exchange rate is your friend.
This post lists all the good reasons why you should attend it. Check it out here:
Translate at City is the fourth literary translation summer school to be held at City, University of London. Organised in conjunction with the Translators Association of the Society of Authors, it offers the opportunity to translate texts across the literary genres into English, working with leading professional translators. Groups will be limited to a maximum of 15 students to allow for individual attention, and places will be allocated on a strictly “first come, first served” basis.
Mornings will be spent working on a piece of fiction on a continuous basis and the afternoons will be dedicated to translating short pieces in a variety of genres. There will be plenty of opportunities for networking with publishers, City staff and one another, particularly at our lunchtime and evening events, which include:
- A French Translation Slam, with Ros Schwartz and Frank Wynne, chaired by Professor Amanda Hopkinson
- A Keynote Lecture, Who Dares Wins, by Professor Gabriel Josipovici
- Author and translator Daniel Hahn speaking on Translation and Children’s Books
- Buffet supper at a local gastro pub sponsored by the European Commission following a talk from Paul Kaye, Language Officer at Europe House, London
- The launch of a literary translation competition, open to all participants, sponsored by prize-winning Comma Press
- Short lunchtime talks on topics related to developing your skills and getting published as a literary translator.
For more information, see http://www.city.ac.uk/courses/short-courses/translate-summer
The Man Booker International Prize, which recognizes works of fiction translated into English and published in the UK, announced its longlist of contenders on March 15th. Novels and collections of short stories are eligible, as long as they have been translated into English and published in the UK. The prize is £50,000, split equally between the winning author and translator, and all shortlisted authors and translators will receive £1,000 each. As of 2016, the Man Booker International Prize is awarded annually. This year’s longlist is:
Author (nationality), Translator, Title (imprint)
- Mathias Enard (France), Charlotte Mandell, Compass (Fitzcarraldo Editions)
- Wioletta Greg (Poland), Eliza Marciniak, Swallowing Mercury (Portobello Books)
- David Grossman (Israel), Jessica Cohen, A Horse Walks Into a Bar (Jonathan Cape)
- Stefan Hertmans (Belgium), David McKay, War and Turpentine (Harvill Secker)
- Roy Jacobsen (Norway), Don Bartlett, Don Shaw, The Unseen (Maclehose)
- Ismail Kadare (Albania), John Hodgson, The Traitor’s Niche (Harvill Secker)
- Jon Kalman Stefansson (Iceland), Phil Roughton, Fish Have No Feet (Maclehose)
- Yan Lianke (China), Carlos Rojas, The Explosion Chronicles (Chatto & Windus)
- Alain Mabanckou (France), Helen Stevenson, Black Moses (Serpent’s Tail)
- Clemens Meyer (Germany), Katy Derbyshire, Bricks and Mortar (Fitzcarraldo Editions)
- Dorthe Nors (Denmark), Misha Hoekstra, Mirror, Shoulder, Signal (Pushkin Press)
- Amos Oz (Israel), Nicholas de Lange, Judas (Chatto & Windus)
- Samanta Schweblin (Argentina), Megan McDowell, Fever Dream (Oneworld)
The shortlist is scheduled to be announced on April 20th, and the winner will be announced on June 14th.
See more >>
Story flagged by:
A couple of years ago, when there was talk of politician ND Tiwari and the result of a certain DNA test, a Kannada newspaper reporting the story found itself unable to come up with a term for “biological son”. It did what writers, translators and students dealing with Kannada-related linguistic crises have done now for decades: it asked Professor G Venkatasubbiah, a man whose name has become synonymous with Kannada usage and lexicography. There wasn’t a precise equivalent, he said, and then went on to suggest a phrase the newspaper could use instead.
Now in his 104th year, GV – known by his initials as teachers often are – is a towering figure in the world of Kannada letters (and, as it happens, words). He’s had a distinguished working life as a college teacher and principal, as an editor, as a translator who has made works by Kabir, Shankaracharya, RL Stevenson and J Krishnamurthi available in Kannada, and as author of a large shelf’s worth of literary history and criticism. His monumental achievement, though, remains the stewardship of the 54-year-long project which brought into being the Kannada Sahitya Parishat’s Nighanṭu – an eight-volume, 9,000-page monolingual dictionary.
“It happened this way,” he began, at his austerely appointed home in south Bengaluru. Writers and researchers had long been feeling the need for an authoritative and comprehensive Kannada-Kannada dictionary when the matter came up for discussion in December 1941 at the annual meeting of the Kannada Sahitya Parishat. The Parishat, a non-profit that serves to promote Kannada, resolved to create such a dictionary. Their model was to be the Oxford English Dictionary, in part because the “historical principles” approach, where the evolution of word meanings is traced, was appropriate for a language as old as Kannada. Also because, GV said, “It was the best dictionary. It is still the best dictionary.”
“Unfortunately, no linguistic survey had been done in Kannada,” GV said. Words would have to be gathered from written sources. The editors identified 903 (later expanded to 1,750) works of literature from different periods – the 10th-century Pampa and Ranna, the 15th-century Kumaravyasa, the 17th-century Lakshmisha. They chose works of contemporary stalwarts such as KV Puttappa, Shivaram Karanth and others, making sure that different parts of the state were represented: “We wanted to collect words from Udupi, from Raichur, from Mysore, from Madikeri.” Then, there were words from nearly 10,000 Kannada inscriptions dating from the 4th century to the 18th century, and of course, words from all previously existing Kannada dictionaries.
It took around 10 years to collect words and another three to arrange them in alphabetical order before the writing of the dictionary could begin in earnest.
Read the full article >>
Wikitongues is a non-profit dedicated to language preservation and learning:
Wikitongues collects video oral histories from each of the world’s more than 7,000 language communities, preserving our common cultural heritage and amplifying stories from around the world. We publish our videos under a creative commons license to facilitate free educational use and raise awareness about the vast sum of human experience.
We compile word lists, phrasebooks, and dictionaries, a crucial step toward ensuring that every language is well documented, preserving it for future generations. We work to guarantee that students always have access, academics always have data, and activists always have resources to sustain and defend their cultures.
After a Kickstarter campaign in 2016, Wikitongues is moving forward with Poly, a tool designed to streamline the process of creating and sharing dictionaries between any two languages. Speakers of languages without a written standard, including the world’s more than 200 sign languages, are supported by native video functionality. Poly is an open source and open data platform.
See more at Wikitongues and Poly.
Protemos, the translation management system, has released its version 1.18, which features an integration with SmartCAT.
See more >>
Netflix has introduced a translator screening test for its original content called Hermes. Via Hermes, translators take a test to qualify to translate Netflix content. The test is scored on a scale of 1 to 100, with 80 being the minimum score to be eligible. Hermes is the result of efforts to improve Netflix’s translated content.
Enter the Hermes platform here >>
Rates for translating at Netflix are published here (pdf)
Igor Mordatch is working to build machines that can carry on a conversation. That’s something so many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically reach for language. He’s a roboticist who began his career as an animator. The 31-year-old is now a visiting researcher at OpenAI, the artificial intelligence lab started by Tesla founder Elon Musk and Y Combinator president Sam Altman. There, Mordatch is exploring a new path to machines that can not only converse with humans, but with each other. He’s building virtual worlds where software bots learn to create their own language out of necessity.
As detailed in a research paper published by OpenAI this week, Mordatch and his collaborators created a world where bots are charged with completing certain tasks, like moving themselves to a particular landmark. The world is simple, just a big white square—all of two dimensions—and the bots are colored shapes: a green, red, or blue circle. But the point of this universe is more complex. The world allows the bots to create their own language as a way of collaborating, helping each other complete those tasks.
All this happens through what’s called reinforcement learning, the same fundamental technique that underpinned AlphaGo, the machine from Google’s DeepMind AI lab that cracked the ancient game of Go. Basically, the bots navigate their world through extreme trial and error, carefully keeping track of what works and what doesn’t as they reach for a reward, like arriving at a landmark. If a particular action helps them achieve that reward, they know to keep doing it. In this same way, they learn to build their own language. Telling each other where to go helps them all get places more quickly.
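That trial-and-error loop can be made concrete with a toy sketch. The code below is tabular Q-learning on a one-dimensional strip of cells, an illustration of the general reinforcement-learning technique rather than OpenAI's actual multi-agent setup: a bot starts at cell 0, is rewarded only when it reaches the landmark at cell 4, and gradually learns which action pays off.

```python
import random

def train_q_learning(goal=4, size=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning on a one-dimensional strip of cells 0..size-1.

    The bot starts at cell 0 and receives a reward only on reaching the
    landmark at `goal`; everything it learns comes from trial and error.
    """
    random.seed(0)
    actions = (-1, +1)                          # step left / step right
    q = {(s, a): 0.0 for s in range(size) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != goal:
            if random.random() < epsilon:       # explore occasionally...
                a = random.choice(actions)
            else:                               # ...otherwise exploit (random tie-break)
                a = max(actions, key=lambda act: (q[(s, act)], random.random()))
            s2 = min(max(s + a, 0), size - 1)   # walls at both ends of the strip
            r = 1.0 if s2 == goal else 0.0      # the only reward: reaching the landmark
            best_next = 0.0 if s2 == goal else max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train_q_learning()
# The learned policy: with the settings above, every non-goal cell
# converges to +1, i.e. "step right toward the landmark".
policy = {s: max((-1, +1), key=lambda a: q[(s, a)]) for s in range(4)}
print(policy)
```

The key point the article describes is visible in the update rule: the reward at the landmark propagates backward through the Q-values, so actions that merely lead toward the reward also become valuable.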
As Mordatch says: “We can reduce the success of dialogue to: Did you end up getting to the green can or not?”
To build their language, the bots assign random abstract characters to simple concepts they learn as they navigate their virtual world. They assign characters to each other, to locations or objects in the virtual world, and to actions like “go to” or “look at.” Mordatch and his colleagues hope that as these bot languages become more complex, related techniques can then translate them into languages like English. That is a long way off—at least as a practical piece of software—but another OpenAI researcher is already working on this kind of “translator bot.”
Ultimately, Mordatch says, these methods can give machines a deeper grasp of language, actually show them why language exists—and that provides a springboard to real conversation, a computer interface that computer scientists have long dreamed of but never actually pulled off.
These methods are a significant departure from most of the latest AI research related to language. Today, top researchers are typically exploring methods that seek to mimic human language, not create a new language. One example is work centered on deep neural networks. In recent years, deep neural nets—complex mathematical systems that can learn tasks by finding patterns in vast amounts of data—have proven to be an enormously effective way of recognizing objects in photos, identifying commands spoken into smartphones, and more. Now, researchers at places like Google, Facebook, and Microsoft are applying similar methods to language understanding, looking to identify patterns in English conversation, so far with limited success.
Mordatch and his collaborators, including OpenAI researcher and University of California, Berkeley professor Pieter Abbeel, question whether that approach can ever work, so they’re starting from a completely different place. “For agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient,” their paper reads. “An agent possesses an understanding of language when it can use language (along with other tools such as non-verbal communication or physical acts) to accomplish goals in its environment.”
In the end, success will likely come from a combination of techniques, not just one. And Mordatch is proposing yet another technique—one where bots don’t just learn to chat. They learn to chat in a language of their own making. As humans have shown, that is a powerful idea.
Read more >>
A group of dairy drivers argued that they deserved overtime pay for certain tasks they had completed. The company said they did not. An appeals court sided with the drivers, saying that the guidelines themselves were made too ambiguous by, you guessed it, a lack of an Oxford comma. This is what the law says about activities that do NOT merit overtime pay. Pay attention to the first sentence:
The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of:
(1) Agricultural produce;
(2) Meat and fish products; and
(3) Perishable foods.
That’s a lot of things! But if we’re getting picky, is packing for shipment its own activity, or does it only apply to the rest of that clause, i.e. the distribution of agricultural produce, etc.? See, all of this could be solved if there were an Oxford comma, clearly separating “packing for shipment” and “distribution” as separate things! According to court documents, the drivers distribute perishable food, but they don’t pack it.
Yes, this is the real argument they made. And they really won.
“Specifically, if that [list of exemptions] used a serial comma to mark off the last of the activities that it lists, then the exemption would clearly encompass an activity that the drivers perform,” the circuit judge wrote. It did not, and since the judge observed that labor laws, when ambiguous, are designed to benefit the laborers, the case was settled. “For want of a comma, we have this case,” the judge wrote.
The irony in this ruling is that there are actual state guidelines on how Maine lawmakers draw up their documents. And they do NOT include Oxford commas! The humanity! To be fair, there is also guidance on how to avoid unclear language that could, say, help an impressively pedantic group of drivers get what they were owed.
Ben Zimmer, language columnist for The Wall Street Journal, returns to discuss how dictionaries are making themselves relevant again through social media and other digital tools. Merriam-Webster has recently experienced a surge in popularity on social media in response to its tweets about politics and “alternative facts.” As Jesse Sheidlower said in a recent New York Times article, “In times of stress, people will go to things that will provide answers. The Bible, the dictionary or alcohol.”
Hear the interview on the Leonard Lopate Show >>
How much energy and brain power do we devote to learning how to spell? Language evolves over time, and with it the way we spell — is it worth it to spend so much time memorizing rules that are filled with endless exceptions? Literary scholar Karina Galperin suggests that it may be time for an update in the way we think about and record language.
View the TED talk (in Spanish with English subtitles) >>
The Google Translate app already lets you snap a photo of Japanese text and get a translation for it in English. But it’s a whole lot more convenient if you can just point your camera and instantly translate text on the go. With Word Lens, you just need to fire up the Translate app, point your camera at the Japanese text, and the English translations will appear overlaid on your screen—even if you don’t have an Internet or data connection.
Read more >>
Net-Translators, a leading provider of website translation services, announced today that it has partnered with WPML, the leading multilingual plugin for creating websites in more than one language.
The recent partnership with WPML, a product of OnTheGoSystems, allows users of WordPress, the most downloaded website and blog content management system (CMS) available, to author content and easily translate it into different languages without any coding. Once the plugin is installed, anyone on WordPress can connect directly with Net-Translators to start a website translation project. With thousands of professional translators, proofreaders and editors from around the globe, Net-Translators offers translation services into more than 60 languages. The plugin also includes advanced features for translation management and an interface for our translators.
“We decided to team up with WPML because we want to provide the millions of WordPress users with an easy, seamless way to get their websites translated,” notes Shy Avni, CEO and co-founder of Net-Translators. He continues: “The plugin can be installed by anyone in order to turn their website into a multilingual version. This revolutionary new way of translating websites is in line with our ongoing commitment to develop and offer the most efficient localization tools and technologies to our customers.”
“We are excited to work with Net-Translators and offer their service to WPML clients,” said Amir Helzer, OnTheGoSystems’ Founder and CEO. “Net-Translators offers expertise and quality that our clients need. This partnership allows each of us to focus on our expertise and provide complete value to clients.”
Additional information including step-by-step instructions on how to get started with WPML and Net-Translators is available by visiting: https://wpml.org/translation-service/net-translators.
Accuracy in the Life Sciences field is one of the most challenging areas for professional translators and LSPs. And there are several reasons why.
A translation error in a medical device or other medical-related materials can literally mean the difference between life and death. As a result, Life Sciences translation and localization are as regulated as they are specialized.
What are the implications of regulation in translation? How strictly is it enforced? What does it take to become a professional translator in Life Sciences? Who actually qualifies and who doesn’t?
These are just a few of the questions Renato Beninatto and Michael Stevens discuss with Jeff Gerhardt, this week’s guest on Globally Speaking. With nearly 20 years of experience in the Life Sciences space, Jeff Gerhardt is the founder and principal of Centix Life Technologies, and was formerly a director of Global Labeling at Edwards Life Sciences.
Topics covered include:
- What Life Sciences and medical device companies look for—and require—from LSPs
- The need for tightly monitored processes that minimize translation mistakes and catch errors before a medical product actually gets released
- The costs of retranslating or even making slight grammatical changes after a medical device is already on the market
- How strategic translation and labeling decisions can help prevent inventory bottlenecks
Listen to the podcast here >>
Poet and scholar Esther Schor joins us to discuss her book, Bridge of Words: Esperanto and the Dream of a Universal Language, which details the history of a constructed language called Esperanto. She tells the story of Ludwig Lazarus Zamenhof, a Polish Jew, who in 1887 had the utopian dream of creating a universal language that would end political and ethnic conflict, and enable everyone to communicate.
Listen to the interview on the Leonard Lopate Show >>
A Video Remote Interpreting (VRI) pilot project is underway for courts in the US state of California, and is set for its trial run of six months starting in July 2017. From the California Courts website:
Video Remote Interpreting uses videoconferencing technology to provide court users with a qualified interpreter, when an onsite interpreter is not readily available. In June 2016, the Judicial Council approved a VRI pilot project to evaluate and test VRI technology in the courts, pursuant to recommendations in the Judicial Council’s Strategic Plan for Language Access in the California Courts (the Language Access Plan, or LAP). This pilot project aims to expand language access within the California courts by testing different VRI equipment solutions. The pilot will include input from the public and court stakeholders to help the branch evaluate how and when VRI may be appropriate for different types of case events (short matters). On an individual basis, the court will determine if each case event is appropriate for VRI. For a quick review of VRI, download the Video Remote Interpreting Fact Sheet.
Potential Benefits of VRI include:
- Increased access to qualified (certified and registered) interpreters, especially in languages of lesser diffusion.
- Allowing court users to see and talk to an interpreter in their language without extended delay, despite not being in the same room, or even the same city.
- Allowing court users to resolve short, non-evidentiary, non-complex and uncontested hearings, even when on-site interpreters are unavailable, lowering the need to reschedule court visits.
- Private and confidential VRI conversations, similar to in-person interpreting.
See the project outline >>
Last year Microsoft announced the release of its Neural Network based translation system for 10 languages: Arabic, Chinese, English, French, German, Italian, Japanese, Portuguese, Russian, and Spanish. Today, Korean is being added to the list.
At a high level, Neural Network translation works in two stages:
- The first stage models the word that needs to be translated based on the context of this word (and its possible translations) within the full sentence, whether the sentence is 5 words or 20 words long.
- The second stage then translates this word model (not the word itself but the model the neural network has built), within the context of the sentence, into the other language.
Neural Network translation uses models of word translations based on what it knows from both languages about a word and the sentence context to find the most appropriate word as well as the most suitable position for this translated word in the sentence.
One way to think about neural network-based translation is to think of a fluent English and French speaker who reads the word “dog” in the sentence “The dog is happy”. This would create in his or her brain the image of a dog. This image would be associated with “le chien” in French. The Neural Network would intrinsically know that the word “chien” is masculine in French (“le” not “la”). But, if the sentence were “the dog just gave birth to six puppies”, it would picture the same dog with puppies nursing and then automatically use “la chienne” (the female form of “le chien”) when translating the sentence.
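A rule-based toy in the same spirit shows how surrounding words can flip the chosen translation. This is purely illustrative: a real neural model learns such cues from data, whereas here the cue words are hand-coded.

```python
def translate_dog(sentence):
    """Toy stand-in for context-sensitive word choice: pick the French
    noun (and its article) for "dog" from cues elsewhere in the sentence.
    A neural network learns these cues from data; here they are hand-coded."""
    female_cues = {"puppies", "birth", "nursing"}
    words = set(sentence.lower().rstrip(".").split())
    return "la chienne" if words & female_cues else "le chien"

print(translate_dog("The dog is happy"))                        # le chien
print(translate_dog("The dog just gave birth to six puppies"))  # la chienne
```

The point of the sketch is only that the output for one and the same word depends on the whole sentence, which is exactly what the statistical phrase-based systems described below tend to get wrong.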
Here’s an example of the benefits of this new technology, applied to the following sentence (one of those randomly proposed on our try-and-compare site, http://translate.ai):
M277dw에 종이 문서를 올려놓고, 스마트폰으로 스캔 명령을 내린 뒤 해당 파일을 스마트폰에 즉시 저장할 수 있다.
Traditional Statistical Machine Translation would offer this translation:
“M277dw, point to the document, the paper off the file scan command Smartphone smartphones can store immediately.”
Neural Network translation, in comparison, generates this clear and fluent sentence:
“You can place a paper document on M277DW, and then save the file to your smartphone immediately after the scan command.”
Read more >>
Have you ever been called mardy, been mithered, complained of someone being nesh, labelled them a numpty or had people look at you blankly because a word you have used since childhood does not form part of their vocabulary?
If any of the above sounds familiar then congratulations: you are living proof that the death of dialect is greatly exaggerated.
Dialect has been mourned for a while now. It is well over 20 years since the term “estuary English” was first coined, while a more recent report concluded that “talking to machines and listening to Americans” could spell the death of regional accents and much-cherished dialect words within the next 50 years.
This fear does not, however, extend to the British Library where linguists continue to chronicle words used in different places and, where possible, preserve them by recording people using them.
Jonnie Robinson, lead curator of spoken English at the British Library and the author of the Evolving English WordBank, says the exercise – which saw ordinary people across the country “donate” words in special recording booths between 2010 and 2011 – proves that dialect words are far from being extinct.
“A lot of people feel dialect is dwindling but actually, although it’s changing … you can find examples of continuity,” Robinson says. The Evolving English WordBank contains 1,500 contributions to date, many of which are dialect words.
Some have shown incredible longevity. Robinson points to the word “puggle”, donated by a woman in Birmingham in 2010, who defined it as having “a poke about” or having “a bit of a look” for something.
“I don’t know where it comes from,” the well-spoken woman in her early 30s said in her contribution. “I always thought it was a real word and it turns out it’s not.”
Yet when Robinson looked into it, he found puggle in the 19th-century English Dialect Dictionary, one of two major linguistic projects examining how geography and social class affect vocabulary (the other is The Survey of English Dialects, a collection of more than 1,300 words from 300 locations across England in the 1950s).
“The word puggle has been used in the home counties for at least 100 years,” Robinson says, “and here it is being used today, somewhat self-consciously, but used nonetheless by a middle-class young female in the south of England.”
Other submissions are instantly recognisable, either because they are still commonly used or because they have been popularised, or both. “Mardy” (meaning moody or irritable), a word chronicled more than a century ago, is still widely used in the north and Midlands of England. Its further popularisation through the Arctic Monkeys song Mardy Bum helped make it one of the most commonly donated words to the WordBank.
The collection also captures once common words that now survive in just a few geographical pockets. For example “owt” (meaning anything) was widespread in Old English. Now it only persists in certain areas in the north and Midlands, including Yorkshire.
Dialect words can be a way of establishing a person’s shared roots and the basis for unusual social bonds: one woman told the story of a work colleague who, on finding out she was from Grimsby, immediately asked if she knew what “spoggy” meant (chewing gum).
However, words are not necessarily unique to one place: the same dialect terms often turn up in several locations. A common example is that words and phrases that originated in Scotland often appear in Northern Ireland because of the strong historical connections between the two places.
So children are still being called “thrawn” (difficult or contrary) in Northern Ireland more than 500 years after its first documented use in the Oxford English Dictionary, while the same child might be told to “hold your whisht” (be quiet) over 200 years after Robert Burns used the line in verse.
Of course words do die. The distinguished linguist David Crystal has produced a book and website chronicling disappearing words, while Bradwell Books’ county series of dialect glossaries features many old word forms that are no longer with us.
Robinson is not blind to the evolution of language, but he does not believe that dialect is doomed simply because younger generations no longer use the words their parents or grandparents did.
“It’s very easy to pick up a dialect glossary of the 1950s, give it to a group of teenagers and say: ‘How many of these words from your town do you know?’ Many teenagers might not know them but that would have been the case if you had carried out the same exercise in the 1960s. Language is constantly changing.”
Research shows people are most likely to use dialect in their formative, playground years and again in their later years once they have left the professional sphere. This is partly because, in the work environment, people tend to gravitate to a “very mainstream vocabulary” to ensure they are understood.
He says the growing tendency for people to grow up in one area, then move to another for educational purposes and somewhere different for work also has an impact. “The fact is that people now encounter different social groups and we operate across those dialectal boundaries,” he says.
“But go to a pub where a group of people who all grew up in that town are out and having a non-self-conscious conversation among themselves [and] you’ll capture dialect,” he says, adding that this in itself is evidence that helps unpick the “urban myth that we are all beginning to sound the same”.
Read more >>
Alon Lavie, who heads up the Amazon Machine Translation Research and Development Group, said in a recent Globally Speaking podcast that neural machine translation “makes very, very strange types of mistakes … because it’s not a direct matching between the source language and the target language in terms of the words and the sequences of words…” Thing is, the strangeness can get masked by the smoothness.
Recently I found myself feeding this line of text through Google Translate.
For these products, please use 視覚化 not 可視化 based on the definition at the following URL:
It was aimed at linguists, instructing them to use the term shikakuka instead of kashika depending on context. Both words mean “visualization” but have slightly different nuances. And the results were so humorous I thought it would be a shame not to share them.
The neural cogs went whizzing and immediately gave me this:
You might expect that the two Japanese terms 視覚化 and 可視化 in the source would make it through to the target text. After all, there’s no need to translate them. But no. Instead, it produced 視覚障害, a big red flag. Just hit the reverse translate button (always a good idea to do that) to see what it means…
Okay, is this even close to what I wanted to say? Avoid visual impairment? Did I want to discriminate against someone? Of course not. Problem is, the Japanese text is so fluent that it reads like I really mean to be really mean.
Alon was absolutely right. Very strange. What is going on inside those neural networks of theirs? Can I pre-edit the source to help the MT to produce a better output? Maybe that unnatural colon at the end of the sentence is wreaking some sort of unexpected havoc? Let’s change it to a period.
Nope. It’s still giving us that problematic 視覚障害 but followed by some different wording.
Uh, yeah. So changing two dots ( : ) to one ( . ) takes us from “avoid visual impairment” to “confirm that there is no visual impairment”. Help!
Read the full article >>
While some argue that the infiltration of American English is constantly speeding up, Lynne Murphy, an American linguist living in the UK and a reader in linguistics at the University of Sussex, says that in fact the great era of American English as the language of the world was the 20th century, and it’s over.
“American culture (and words) could easily spread in the 20th century because it was hard to produce and distribute recorded entertainment, but the US had the capacity and the economy and the marketing savvy to do so,” Murphy wrote in a recent blog. What’s changed in the 21st century, she suggests, is that the internet has re-formed our relationship with media, making audiences less purely receptive, and more able to seek out the content that interests them. Ultimately, she argues, there’s more “exchange of words between people, rather than just reception of words from the media.”
“[P]opular songs are less universally popular, because people have more access to more different kinds of music on download,” she writes. “Instead of two or three or four choices on television, there are hundreds. And if you don’t like what you’re seeing you can go on YouTube or SoundCloud…and find all sorts of people doing all sorts of things.”
Geopolitics is also involved. During the last century, two world wars and the Cold War saw Americans posted all over the world, “using their slang in the presence of young recruits from other countries,” she writes. American manufactured goods were widely exported and advertised. With the election of Donald Trump as US president, however, the country’s rhetoric has become decidedly more isolationist. Murphy asks both whether its words and its culture will flow so freely abroad as before, and whether the rest of the world will be as receptive to them.
Not everyone agrees. Matthew Engel, author of the forthcoming book That’s the Way it Crumbles—The Americanization of British English, argues that America’s global cultural and technological strength makes it hard for other languages—including French, German, and Italian, as well as British English—not to metamorphose under its weight.
Stay informed on what is happening in the industry, by sharing and discussing translation industry news stories.