Translation industry news

Adult language learning comes closer to native speech than previously thought

Source: University of California
Story flagged by: Jared Tabor

A new study reveals that adults are capable of learning and processing a new language in a way that resembles native speaker language use.

“Learning a second language as an adult is a difficult task,” said UC Riverside affiliate psychology professor Elenora Rossi, who was on the research team. “For years, scientists have believed that only the brains of very young children were pliable enough to allow for successful learning of a second language, while that was thought to be impossible for adults.”

In the past two decades, advances in testing methodologies and revolutionary neuroimaging methods have allowed language processing to be studied in real time and in a non-invasive way, opening the door to a better understanding of how our brains process linguistic information in two languages.

In the study, the team looked at how native English speakers who had learned Spanish as adults understood Spanish sentences containing subtle aspects of Spanish grammar that do not exist in English. Participants in the study were advanced speakers of Spanish, but not native speakers. The goal was to test them on aspects of Spanish that are typically difficult to learn because they have no counterpart in the structure of English grammar. Errors were purposely introduced, and participants were asked whether they could detect them.

“Counter to the long-standing assumption that learning a second language and becoming bilingual past early childhood is impossible, we found that English speakers who learned Spanish as adults were able to understand these special aspects of Spanish,” said Judith Kroll, a UC Riverside psychology professor who was also on the research team. “The results suggest that adults are capable of learning and processing a new language in a way that resembles native speaker language use.”

The research team also included Pennsylvania State University faculty members Michele Diaz, psychology professor, and Paola Dussia, professor of Spanish, Italian and Portuguese.

The authors of the paper, published in Frontiers in Psychology, are part of a larger research effort between UC Riverside and Penn State to study the bilingual mind and brain. The research is a collaborative effort supported by a National Science Foundation Partnerships for International Research and Education grant. Future research by the team will target understanding how an intensive but short period of new language learning may shape adult minds.

Localizers in the automotive industry face new challenges in delivering for connected cars

Source: Common Sense Advisory
Story flagged by: Jared Tabor

The automotive landscape remains in constant flux as ride-sharing services implement autonomous driving platforms and driverless cars and trucks appear on the roads. Billions of dollars and euros are flooding the sector as chip companies (Intel and Qualcomm) buy vehicle systems companies (Mobileye and NXP Semiconductors), traditional car manufacturers (Daimler, Ford, and GM) put money into driverless taxis (Lyft and Uber), and ride-hailing services (Uber) purchase self-driving truck technology (Ottomotto).

In the process, vehicles have morphed into computers – if not supercomputers – on wheels. Software now controls the engines as well as the dashboards. BlackBerry’s QNX operating system and middleware run in more than 60 million vehicles worldwide, while Apple and Google continue developing their own underlying software platforms. That means user experience design is just as important as body or parts design was in the past. At the same time, vehicle ownership continues to be a rite of passage in many countries as people enter the middle class and aspire to continue moving up. These customers expect the same personal attention they see in every other market. Language, of course, enables a more intimate level of experience.

However, drivers can’t be distracted by Google Translate or stumped by poor translation when they are lost at night on dark streets or traveling at 140 kph on Beijing’s 6th Ring Road. Dashboards must look familiar, resembling the screens on their phones; they must be accurate and responsive; and, for some drivers, they must integrate with a preferred wearable or digital personal assistant. These requirements mean that localizers in the automotive industry face new challenges as they adjust to delivering what are, in essence, very large mobile devices:

  • Design focus has shifted from autobodies to software and connectivity. Car manufacturers now compete against well-funded and experienced software companies such as Apple and Google. Dashboard design, and the software that runs it, have become top criteria for many buyers, whose expectations come from their everyday use of smartphones. No one wants to learn a new interface, especially if it’s clunky or diverts attention from driving. In this context, localization quality becomes a critical issue. Getting internationalization right for these components is essential.
  • Infotainment screens are just one of several components to be localized. Software now runs drivetrains, tires, and various engine components. Embedded sensors report data via the internet to help technicians focus only on what they need to review, fix, or replace in order to speed up service delivery times. As a result, documentation for technicians must evolve as their functions change.
  • Vehicles integrate more deeply with the world around them. Anyone who has purchased a new car within the last 24 months is driving a device that is connected to the internet: AT&T alone reported 11.8 million connected cars as of Q4 2016, up from eight million in Q1. This connectivity serves multiple purposes: 1) providing vehicle-to-vehicle communication; 2) enabling vehicles to connect to infrastructure, such as when Audis talk to traffic signals; 3) facilitating telematics to help track vehicles; 4) supporting personal digital assistant, smart home, entertainment, and security apps; 5) delivering over-the-air updates; and 6) creating built-in hotspots. These new scenarios have profound implications for localization, including increased demand for multilingual speech integration, manipulation of multimedia formats, adaptation for local regulations, terminology rationalization, and software testing.
  • Automotive content continues to iterate at faster rates and in smaller pieces. Customers want intelligent cars now, at affordable prices, and in their local languages. Auto manufacturers have been scrambling to get the design right for their infotainment screens. Connected vehicles raise the possibility of continuous upgrades and improvements post-sale. Not all brands are quite there yet, but their focus should now turn to iterating their enhancements faster in all languages. When they do, localization teams must be ready to support Agile workflows.

Fortunately, the localization managers in charge of multilingual content and code production don’t have to reinvent the wheel. They can pick up the baton from colleagues who have already figured out how to localize for the small screen. They can also benchmark themselves against competitors such as Apple, BlackBerry, Google, and Intel by applying the same CMMI-based benchmarking methodology used by those companies: CSA Research’s Localization Maturity Model™.

Language – whether expressed as text, speech, or gesture – will only become more essential for enhancing customer experience for drivers worldwide. As software, hardware, and user data are more tightly integrated through vehicular connections to the Internet of Things (IoT), localizers in other industries may be able to learn a thing or two from the automotive sector over the next few years.

Common Sense Advisory >>

Interview with the linguist behind the film Arrival

By: Jared Tabor

In the 2016 film Arrival – an adaptation of Ted Chiang’s short story, Story of Your Life – Earth is visited by extraterrestrials, known by humans as “heptapods”. They appear in huge, black spacecraft and, although they don’t attack mankind, various leaders of the world view them as a threat. With no one able to communicate with the aliens, Dr Banks, a linguistics professor, is employed by the US Army to translate their language into English.

Jessica Coon, an associate professor in the Department of Linguistics at McGill University, Montreal, acted as a consultant on Arrival, helping director Denis Villeneuve and actor Amy Adams accurately bring Dr Banks to life. As well as providing pointers on what the character’s office would look like, Coon looked over the film’s script, discussing with the filmmakers how a linguist – a person who studies linguistics, defined as “the scientific study of human language” – would go about communicating with an alien life form.

“There were a lot of things the film got really right when it comes to doing fieldwork,” Coon says. “Earlier on in the film, she’s the first person to take off her helmet and really try to interact with the heptapods in a meaningful way. As linguists, we’re interested in the more abstract properties of languages, but you can’t get at those directly. You have to interact with speakers of those languages, whether that be human language or alien languages.”

Another prominent point the filmmakers get right is how Banks asks simple questions at first, rather than complex. “You have to understand the smaller parts first because there’s so much room for miscommunication and certainly – in this case – the stakes are very high. You want to make sure you understand what’s being communicated, and what the possible ambiguities are.”

In many ways, Coon explains, the way Banks translates the alien language is similar to how we would translate another human language into our own. First, you have to establish that both parties are trying to communicate with each other. One starting point is then looking at common objects and attempting to interpret how each group communicates what that thing is. For instance, the scientists in Arrival name the two heptapods Abbott and Costello. After learning how the aliens express these names, Banks can act out walking and elicit the sentence “Costello is walking” from them. By taking away the known word for “Costello”, the scientists can work out the word for the action itself.
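
That subtraction step is simple enough to sketch in code. Purely as an illustration (the glyphs, and the assumption that an utterance can be treated as a set of glyphs, are invented here, not taken from the film), the same logic looks like this in Python:

```python
# Invented example: treat each recorded heptapod utterance as a set of glyphs.
costello = {"◇"}                    # the learned sign for "Costello"
costello_is_walking = {"◇", "♒"}    # utterance elicited by acting out walking

# Removing the known name isolates the glyph for the action itself.
walking = costello_is_walking - costello
print(walking)  # {'♒'}
```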

While building from simple to complex sentences is a tactic used when communicating between unknown languages, when it comes to human languages we have a huge head start. “Human languages share certain things in common,” Coon says. “We know how to find certain patterns, and when we find one common property we are able to find others. Human language seems to be very directly linked to other more general aspects of human cognition.”

“Humans are born ready to learn human languages and humans can do this effortlessly. When it comes to alien languages, we do not have this luxury. It would be very surprising, actually, if they were similar to human languages because, really, human languages are directly tied to our genes – to our humanness – and so we can expect alien languages to differ hugely from our own.”

Read the full article >>

New film shows the rich history of African American Speech

Source: mental_floss
Story flagged by: Jared Tabor

The Language and Life Project of North Carolina State University has promoted research and education about the languages and dialects of North Carolina and the United States for more than 20 years. They’ve produced wonderful films on the “hoi toide” dialect of the Outer Banks (The Carolina Brogue), the Cherokee community’s fight to save their language (First Language), and the language of southern Appalachia (Mountain Talk), among others. Their new film, Talking Black in America, is an in-depth look at one of the most politically charged and misunderstood varieties of American English.

Executive producer Walt Wolfram, a linguist who has studied the subject for more than 50 years, says “there has never been a documentary devoted exclusively to African American speech, even though it’s the most researched—and controversial—collection of dialects in the United States and has contributed more than any other variety to American English.” The film aims to address important issues like linguistic profiling and discrimination while also showing that “understanding African-American speech is absolutely critical to understanding the way we talk today.”

Talking Black in America will premiere at 7 p.m. on March 23 at the James B. Hunt Jr. Library on North Carolina State’s Centennial Campus. Admission is free and open to the public. There will be public showings at other campuses through the spring.

Trailer for Talking Black in America >>

Machine learning search engine makes it easier for patients to find medical providers

Source: Fortune
Story flagged by: Jared Tabor

Zocdoc, the online doctor-locating and medical appointments platform, has launched a new feature on desktop and mobile devices that it dubs the “Patient-Powered Search.” The firm describes this new engine as a “more intuitive search experience, built specifically to bridge the gap between healthcare industry and human speak.”

The logic behind Patient-Powered Search, which harnesses AI and machine learning capabilities, is that people don’t think in terms of a medical textbook when they’re looking for a doctor to treat their ailments. But many doctor search services require that sort of precise terminology to find the appropriate physician.

Instead, Patient-Powered Search is able to decipher what a patient is actually seeking. It forgives common spelling errors and gauges a user’s actual intent.

For instance, “gyno” would be understood to be OB-GYN and the misspelled “hemroids” would be mapped to hemorrhoids; searching for “anxiety” or “depression” will bring up the variety of medical professionals who may be able to help a patient with those mental health conditions. And by constantly learning from the types of real-world medical searches that people conduct, the engine can continually adjust to shifting trends.
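
Zocdoc hasn’t published how the engine works, but the spelling-forgiveness piece is easy to picture with ordinary fuzzy string matching. Below is a minimal sketch in Python using the standard difflib module and an invented mini-vocabulary; the real system reportedly relies on AI and machine learning models trained on actual patient searches, not a hand-written table like this:

```python
import difflib

# Invented mini-vocabulary mapping patient phrasing to provider types.
# A production system would learn such mappings from real search logs.
CANONICAL = {
    "gyno": "OB-GYN",
    "hemorrhoids": "Proctologist",
    "anxiety": "Psychiatrist / Psychologist",
    "depression": "Psychiatrist / Psychologist",
}

def interpret(query: str) -> str:
    """Map a raw query to a provider type, forgiving common misspellings."""
    q = query.strip().lower()
    if q in CANONICAL:
        return CANONICAL[q]
    # Fuzzy-match misspellings such as "hemroids" against known terms.
    close = difflib.get_close_matches(q, list(CANONICAL), n=1, cutoff=0.75)
    return CANONICAL[close[0]] if close else "fall back to full-text search"

print(interpret("hemroids"))  # Proctologist
print(interpret("gyno"))      # OB-GYN
```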

See more >>

Indian Sign Language dictionary aims to help bridge the communication gap

Source: The Economic Times
Story flagged by: Jared Tabor

The Indian Sign Language (ISL) dictionary, which is being developed by the Indian Sign Language Research and Training Centre (ISLRTC), has so far compiled 6,032 Hindi and English words and their corresponding graphic representation of the signs which are used in daily life. The dictionary is being developed in both print and video format.

“A comprehensive Indian Sign Language Dictionary is the need of the hour to facilitate communication between the hearing and speech impaired and create a basic database for further policy making,” Union Social Justice and Empowerment Minister Thaawarchand Gehlot said today.

“Presently, the sign languages in a diverse country like India vary from region to region. Because of this, people from a region face difficulty in communicating with those in the other region,” he said at the inauguration of a two-day national conference titled ‘Empowering Deaf through Indian Sign Language’.

This dictionary will help bridge the communication gap, Gehlot said.

EU Parliament’s Irish interpreter jobs not being filled

Source: The Irish Times
Story flagged by: Jared Tabor

The majority of full-time positions advertised for Irish-language interpreters in the European Parliament have not been filled, Fine Gael MEP Deirdre Clune has disclosed.

Ms Clune has pointed to a report on the online news website Politico.eu, which disclosed that 23 vacancies exist in the Irish translation unit at the parliament. She said it was a “shame” the posts were not being filled.

However, the leading Irish academic institution for training translators, NUI Galway, has said that a number of factors were responsible for positions not being filled, including the relative unfamiliarity of Irish applicants with the EU’s stringent “Concours” test and with psychometric tests.

Dónall Ó Braonáin, chief executive of Acadamh na hOllscolaíochta at NUI Galway, said 2020 was the deadline for filling most vacancies, to meet the needs of more EU Parliament business being conducted in Irish and more documents being translated. He said he was confident that NUIG and other Irish third-level institutions would be able to supply suitable graduates.

A competition for Irish-language interpreters was conducted by the European Parliament Selection Office between April and July last year. In all, 26 positions were available but many were not filled.

At present, there are some 14 freelance interpreters working on Irish translation in the parliament, with smaller numbers available to the commission and council.

Irish won recognition as an official EU language in 2007, but a derogation currently in place means it is not yet a requirement that all documents be translated. That derogation will end in 2022, at which time Irish will become a full working language.

See more >>

The Man Booker International Prize longlist announced

By: Jared Tabor

The Man Booker International Prize, which recognizes works of fiction translated into English and published in the UK, announced its longlist of contenders on March 15th. Both novels and collections of short stories are eligible, as long as they have been translated into English and published in the UK. The prize is £50,000, to be split equally between the winning author and translator, and all shortlisted authors and translators will receive £1,000 each. As of 2016, the Man Booker International Prize is awarded annually. This year’s longlist is:

Author (nationality), Translator, Title (imprint)

The shortlist is scheduled to be announced on April 20th, and the winner will be announced on June 14th.

See more >>

The man behind a 9,000-page, eight-volume Kannada dictionary that took 54 years to write

Source: Scroll.in
Story flagged by: Jared Tabor

A couple of years ago, when there was talk of politician ND Tiwari and the result of a certain DNA test, a Kannada newspaper reporting the story found itself unable to come up with a term for “biological son”. It did what writers, translators and students dealing with Kannada-related linguistic crises have done now for decades: it asked Professor G Venkatasubbiah, a man whose name has become synonymous with Kannada usage and lexicography. There wasn’t a precise equivalent, he said, and then went on to suggest a phrase the newspaper could use instead.

Now in his 104th year, GV – known by his initials as teachers often are – is a towering figure in the world of Kannada letters (and, as it happens, words). He’s had a distinguished working life as a college teacher and principal, as an editor, as a translator who has made works by Kabir, Shankaracharya, RL Stevenson and J Krishnamurthi available in Kannada, and as the author of a large shelf’s worth of literary history and criticism. His monumental achievement, though, remains the stewardship of the 54-year-long project which brought into being the Kannada Sahitya Parishat’s Nighanu – an eight-volume, 9,000-page monolingual dictionary.

“It happened this way,” he began, at his austerely appointed home in south Bengaluru. Writers and researchers had long been feeling the need for an authoritative and comprehensive Kannada-Kannada dictionary when the matter came up for discussion in December 1941 at the annual meeting of the Kannada Sahitya Parishat. The Parishat, a non-profit that serves to promote Kannada, resolved to create such a dictionary. Their model was to be the Oxford English Dictionary, in part because the “historical principles” approach, where the evolution of word meanings is traced, was appropriate for a language as old as Kannada. Also because, GV said, “It was the best dictionary. It is still the best dictionary.”

“Unfortunately, no linguistic survey had been done in Kannada,” GV said. Words would have to be gathered from written sources. The editors identified 903 (later expanded to 1,750) works of literature from different periods – the 10th century Pampa and Ranna, the 15th century Kumaravyasa, the 17th century Lakshmisha. They chose works of contemporary stalwarts such as KV Puttappa, Shivaram Karanth and others, making sure that different parts of the state were represented: “We wanted to collect words from Udupi, from Raichur, from Mysore, from Madikeri.” Then, there were words from nearly 10,000 Kannada inscriptions dating from the 4th century to the 18th century, and of course, words from all previously existing Kannada dictionaries.

It took around 10 years to collect words and another three to arrange them in alphabetical order before the writing of the dictionary could begin in earnest.

Read the full article >>

Wikitongues moves forward with Poly, a tool to create and share language pair dictionaries

By: Jared Tabor

Wikitongues is a non-profit dedicated to language preservation and learning:

Wikitongues collects video oral histories from each of the world’s more than 7,000 language communities, preserving our common cultural heritage and amplifying stories from around the world. We publish our videos under a creative commons license to facilitate free educational use and raise awareness about the vast sum of human experience.

We compile word lists, phrasebooks, and dictionaries, a crucial step toward ensuring that every language is well documented, preserving it for future generations. We work to guarantee that students always have access, academics always have data, and activists always have resources to sustain and defend their cultures.

After a Kickstarter campaign in 2016, Wikitongues is moving forward with Poly, a tool designed to streamline the process of creating and sharing dictionaries between any two languages. Speakers of languages without a written standard, including the world’s more than 200 sign languages, are supported by native video functionality. Poly is an open source and open data platform.

See more at Wikitongues and Poly.

Netflix introduces Hermes, a platform for screening its translators

By: Jared Tabor

Netflix has introduced Hermes, a screening test for translators of its original content. Via Hermes, translators can take a test to be rated as qualified to translate Netflix content. The test is scored on a scale of 1 to 100, with 80 being the minimum score for eligibility. Hermes is the result of efforts to improve Netflix’s translated content.

Enter the Hermes platform here >>

Rates for translating at Netflix are published here (pdf)

Building bots that can learn to chat in their own language

Source: Wired
Story flagged by: Jared Tabor

Igor Mordatch is working to build machines that can carry on a conversation. That’s something so many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically reach for language. He’s a roboticist who began his career as an animator. The 31-year-old is now a visiting researcher at OpenAI, the artificial intelligence lab started by Tesla founder Elon Musk and Y Combinator president Sam Altman. There, Mordatch is exploring a new path to machines that can converse not only with humans, but also with each other. He’s building virtual worlds where software bots learn to create their own language out of necessity.

As detailed in a research paper published by OpenAI this week, Mordatch and his collaborators created a world where bots are charged with completing certain tasks, like moving themselves to a particular landmark. The world is simple, just a big white square—all of two dimensions—and the bots are colored shapes: a green, red, or blue circle. But the point of this universe is more complex. The world allows the bots to create their own language as a way of collaborating, helping each other complete those tasks.

All this happens through what’s called reinforcement learning, the same fundamental technique that underpinned AlphaGo, the machine from Google’s DeepMind AI lab that cracked the ancient game of Go. Basically, the bots navigate their world through extreme trial and error, carefully keeping track of what works and what doesn’t as they reach for a reward, like arriving at a landmark. If a particular action helps them achieve that reward, they know to keep doing it. In this same way, they learn to build their own language. Telling each other where to go helps them all get places more quickly.

As Mordatch says: “We can reduce the success of dialogue to: Did you end up getting to the green can or not?”

To build their language, the bots assign random abstract characters to simple concepts they learn as they navigate their virtual world. They assign characters to each other, to locations or objects in the virtual world, and to actions like “go to” or “look at.” Mordatch and his colleagues hope that as these bot languages become more complex, related techniques can then translate them into languages like English. That is a long way off—at least as a practical piece of software—but another OpenAI researcher is already working on this kind of “translator bot.”
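
To make the idea concrete, here is a heavily simplified signaling-game sketch. It is not the paper’s method (OpenAI’s bots use gradient-based reinforcement learning in a continuous 2-D world); the landmarks, symbols, and tabular reward-tracking below are invented stand-ins that show how a shared symbol-to-meaning mapping can emerge from a reward signal alone:

```python
import random

LANDMARKS = ["red", "green", "blue"]   # invented meanings
SYMBOLS = ["#", "@", "%"]              # invented abstract "words"
EPSILON = 0.1                          # exploration rate

# Each bot keeps a simple table of how well each choice has paid off.
speaker_q = {(m, s): 0.0 for m in LANDMARKS for s in SYMBOLS}
listener_q = {(s, m): 0.0 for s in SYMBOLS for m in LANDMARKS}

def choose(table, context, options):
    """Epsilon-greedy choice: mostly exploit, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(options)
    return max(options, key=lambda o: table[(context, o)])

for _ in range(5000):
    target = random.choice(LANDMARKS)               # speaker sees the goal
    symbol = choose(speaker_q, target, SYMBOLS)     # speaker "utters" a word
    guess = choose(listener_q, symbol, LANDMARKS)   # listener acts on it
    reward = 1.0 if guess == target else 0.0        # shared success signal
    speaker_q[(target, symbol)] += 0.1 * (reward - speaker_q[(target, symbol)])
    listener_q[(symbol, guess)] += 0.1 * (reward - listener_q[(symbol, guess)])

# After training, the bots usually settle on a distinct symbol per landmark
# (a run can occasionally land in a partial mapping; this is only a sketch).
for m in LANDMARKS:
    print(m, "->", max(SYMBOLS, key=lambda s: speaker_q[(m, s)]))
```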

Ultimately, Mordatch says, these methods can give machines a deeper grasp of language, actually show them why language exists—and that provides a springboard to real conversation, a computer interface that computer scientists have long dreamed of but never actually pulled off.

These methods are a significant departure from most of the latest AI research related to language. Today, top researchers typically explore methods that seek to mimic human language, not create a new language. One example is work centered on deep neural networks. In recent years, deep neural nets—complex mathematical systems that can learn tasks by finding patterns in vast amounts of data—have proven to be an enormously effective way of recognizing objects in photos, identifying commands spoken into smartphones, and more. Now, researchers at places like Google, Facebook, and Microsoft are applying similar methods to language understanding, looking to identify patterns in English conversation, so far with limited success.

Mordatch and his collaborators, including OpenAI researcher and University of California, Berkeley professor Pieter Abbeel, question whether that approach can ever work, so they’re starting from a completely different place. “For agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient,” their paper reads. “An agent possesses an understanding of language when it can use language (along with other tools such as non-verbal communication or physical acts) to accomplish goals in its environment.”

In the end, success will likely come from a combination of techniques, not just one. And Mordatch is proposing yet another technique—one where bots don’t just learn to chat. They learn to chat in a language of their own making. As humans have shown, that is a powerful idea.

Read more >>

An Oxford comma changed this court case completely

Source: CNN
Story flagged by: Jared Tabor

A group of dairy drivers argued that they deserved overtime pay for certain tasks they had completed. The company said they did not. An appeals court sided with the drivers, saying that the guidelines themselves were made too ambiguous by, you guessed it, a lack of an Oxford comma. This is what the law says about activities that do NOT merit overtime pay. Pay attention to the first sentence:

The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of:
(1) Agricultural produce;
(2) Meat and fish products; and
(3) Perishable foods.

That’s a lot of things! But if we’re getting picky: is “distribution” its own exempt activity, or is it only part of the “packing” clause, i.e. packing for the shipment or distribution of agricultural produce, etc.? See, all of this could be solved if there were an Oxford comma, clearly separating “packing for shipment” and “distribution” as separate things! According to court documents, the drivers distribute perishable food, but they don’t pack it.

Yes, this is the real argument they made. And they really won.

“Specifically, if that [list of exemptions] used a serial comma to mark off the last of the activities that it lists, then the exemption would clearly encompass an activity that the drivers perform,” the circuit judge wrote. It did not, and since the judge observed that labor laws, when ambiguous, are designed to benefit the laborers, the drivers prevailed. “For want of a comma, we have this case,” the judge wrote.

The irony in this ruling is that there are actual state guidelines on how Maine lawmakers draw up their documents. And they do NOT include Oxford commas! The humanity! To be fair, there is also guidance on how to avoid the kind of unclear language that could, say, help an impressively pedantic group of drivers get what they were owed.

Dictionaries making a “comeback”

Source: WNYC
Story flagged by: Jared Tabor

Ben Zimmer, language columnist for The Wall Street Journal, returns to discuss how dictionaries are making themselves relevant again through social media and other digital tools. Merriam-Webster has recently experienced a surge in popularity on social media in response to its tweets about politics and “alternative facts.” As Jesse Sheidlower said in a recent New York Times article, “In times of stress, people will go to things that will provide answers. The Bible, the dictionary or alcohol.”

Hear the interview on the Leonard Lopate Show >>

Google Translate with Word Lens allows you to point and translate from Japanese with your phone

Source: Google
Story flagged by: Jared Tabor

The Google Translate app already lets you snap a photo of Japanese text and get a translation for it in English. But it’s a whole lot more convenient if you can just point your camera and instantly translate text on the go. With Word Lens, you just need to fire up the Translate app, point your camera at the Japanese text, and the English translations will appear overlaid on your screen—even if you don’t have an Internet or data connection.

Read more >>

Net-Translators announces partnership with WPML

Source: prweb
Story flagged by: Jared Tabor

Net-Translators, a leading provider of website translation services, announced today that it has partnered with WPML, the leading plugin for creating WordPress websites in more than one language.

The recent partnership with WPML, a product of OnTheGoSystems, allows users of WordPress, the most downloaded website and blog content management system (CMS) available, to author content and easily translate it into different languages without any coding. Once the plugin is installed, anyone on WordPress can connect directly with Net-Translators to start a website translation project. With thousands of professional translators, proofreaders and editors from around the globe, Net-Translators offers translation services into more than 60 languages. The plugin also includes advanced features for translation management and an interface for translators.

“We decided to team up with WPML because we want to provide the millions of WordPress users with an easy, seamless way to get their websites translated,” notes Shy Avni, CEO and co-founder of Net-Translators. He continues: “The plugin can be installed by anyone in order to turn their website into a multilingual version. This revolutionary new way of translating websites is in line with our ongoing commitment to develop and offer the most efficient localization tools and technologies to our customers.”

“We are excited to work with Net-Translators and offer their service to WPML clients,” said Amir Helzer, OnTheGoSystems’ Founder and CEO. “Net-Translators offers expertise and quality that our clients need. This partnership allows each of us to focus on our expertise and provide complete value to clients.”

Additional information including step-by-step instructions on how to get started with WPML and Net-Translators is available by visiting: https://wpml.org/translation-service/net-translators.

Regulation, Process and Profit: A Look at Localization in Life Sciences [Podcast]

Source: Moravia
Story flagged by: Jared Tabor

Accuracy in Life Sciences translation is one of the most challenging demands facing professional translators and LSPs. And there are several reasons why.

A translation error in a medical device or other medical-related materials can literally mean the difference between life and death. As a result, Life Sciences translation and localization are as regulated as they are specialized.

What are the implications of regulation in translation? How strictly is it enforced? What does it take to become a professional translator in Life Sciences? Who actually qualifies and who doesn’t?

These are just a few of the questions Renato Beninatto and Michael Stevens discuss with Jeff Gerhardt, this week’s guest on Globally Speaking. With nearly 20 years of experience in the Life Sciences space, Jeff Gerhardt is the founder and principal of Centix Life Technologies, and was formerly a director of Global Labeling at Edwards Life Sciences.

Topics covered include:

  • What Life Sciences and medical device companies look for—and require—from LSPs
  • The need for tightly monitored processes that minimize translation mistakes and catch errors before a medical product actually gets released
  • The costs of retranslating or even making slight grammatical changes after a medical device is already on the market
  • How strategic translation and labeling decisions can help prevent inventory bottlenecks

Listen to the podcast here >>

Esther Schor on the history of Esperanto

Source: WNYC
Story flagged by: Jared Tabor

Poet and scholar Esther Schor joins us to discuss her book, Bridge of Words: Esperanto and the Dream of a Universal Language, which details the history of a constructed language called Esperanto. She tells the story of Ludwig Lazarus Zamenhof, a Polish Jew, who in 1887 had the utopian dream of creating a universal language that would end political and ethnic conflict, and enable everyone to communicate.

Listen to the interview on the Leonard Lopate Show >>

Video remote interpreting pilot project in California courts

Source: California Courts website
Story flagged by: Jared Tabor

A Video Remote Interpreting (VRI) pilot project is underway for courts in the US state of California, and is set for a six-month trial run starting in July 2017. From the California Courts website:

Video Remote Interpreting uses videoconferencing technology to provide court users with a qualified interpreter, when an onsite interpreter is not readily available. In June 2016, the Judicial Council approved a VRI pilot project to evaluate and test VRI technology in the courts, pursuant to recommendations in the Judicial Council’s Strategic Plan for Language Access in the California Courts (the Language Access Plan, or LAP). This pilot project aims to expand language access within the California courts by testing different VRI equipment solutions. The pilot will include input from the public and court stakeholders to help the branch evaluate how and when VRI may be appropriate for different types of case events (short matters). On an individual basis, the court will determine if each case event is appropriate for VRI. For a quick review of VRI, download the Video Remote Interpreting Fact Sheet.

Potential Benefits of VRI include:

  • Increased access to qualified (certified and registered) interpreters, especially in languages of lesser diffusion.
  • Allowing court users to see and talk to an interpreter in their language without extended delay, despite not being in the same room, or even the same city.
  • Allowing court users to resolve short, non-evidentiary, non-complex and uncontested hearings, even when on-site interpreters are unavailable, lowering the need to reschedule court visits.
  • Private and confidential VRI conversations, similar to in-person interpreting.

See the project outline >>

Korean becomes Microsoft Translator’s 11th neural network translation language

Source: Microsoft
Story flagged by: Jared Tabor

Last year Microsoft announced the release of its Neural Network based translation system for 10 languages: Arabic, Chinese, English, French, German, Italian, Japanese, Portuguese, Russian, and Spanish. Today, Korean is being added to the list.

At a high level, Neural Network translation works in two stages:

  1. The first stage models the word that needs to be translated based on the context of this word (and its possible translations) within the full sentence, whether the sentence is 5 words or 20 words long.
  2. The second stage then translates this word model (not the word itself but the model the neural network has built), within the context of the sentence, into the other language.

Neural Network translation uses models of word translations based on what it knows from both languages about a word and the sentence context to find the most appropriate word as well as the most suitable position for this translated word in the sentence.
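
As a toy illustration of those two stages, the sketch below uses an attention-style weighting to stand in for stage one (modeling the word within its sentence context) and a nearest-vector lookup to stand in for stage two (rendering that model in the target language). Everything here is invented: the vocabularies and vectors are random, so the printed pairings are arbitrary; only the mechanics mirror the description above, not Microsoft's actual system:

```python
import numpy as np

np.random.seed(0)
DIM = 8
sentence = ["the", "dog", "is", "happy"]
# Random stand-ins for learned word embeddings (illustration only).
embed = {w: np.random.randn(DIM) for w in sentence}
french = {w: np.random.randn(DIM) for w in ["le", "chien", "est", "heureux"]}

def model_word_in_context(i: int) -> np.ndarray:
    """Stage 1: build a model of word i using the whole sentence as context."""
    query = embed[sentence[i]]
    keys = np.stack([embed[w] for w in sentence])
    scores = keys @ query                            # similarity to each word
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention
    return weights @ keys                            # context-aware word model

def translate_model(model: np.ndarray) -> str:
    """Stage 2: map the word model to the best-fitting target-language word."""
    return max(french, key=lambda w: french[w] @ model)

for i, word in enumerate(sentence):
    print(word, "->", translate_model(model_word_in_context(i)))
```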

One way to think about neural network-based translation is to think of a fluent English and French speaker who reads the word “dog” in the sentence “The dog is happy”. This would create in his or her brain the image of a dog, which would be associated with “le chien” in French. The neural network would intrinsically know that the word “chien” is masculine in French (“le”, not “la”). But if the sentence were “the dog just gave birth to six puppies”, it would picture the same dog with puppies nursing and then automatically use “la chienne” (the feminine form of “le chien”) when translating the sentence.

Here’s an example of the benefits of this new technology, using the following sentence (one of those randomly proposed on our try-and-compare site, http://translate.ai):

M277dw에 종이 문서를 올려놓고, 스마트폰으로 스캔 명령을 내린 뒤 해당 파일을 스마트폰에 즉시 저장할 수 있다.

Traditional Statistical Machine Translation would offer this translation:

“M277dw, point to the document, the paper off the file scan command Smartphone smartphones can store immediately.”

Neural Network translation, in comparison, generates this clear and fluent sentence:

“You can place a paper document on M277DW, and then save the file to your smartphone immediately after the scan command.”

Read more >>


