A new online magazine called Connections has released its first issue. Connections collects interviews, articles and other contributions from translation professionals, and is a product of the members of the Standing Out Mastermind group.
This first issue of Connections contains contributions from, among others, the following ProZ.com members:
- Pharmacist & Writer, Naturally Translated!
- 16 years' experience, MA in education
- Psychology, Finance and Law
- A network of professional translators
You can read the full magazine by clicking on the link or image below:
Read online >>
Story flagged by:
The Consumer Technology Association (CTA) has announced self-driving vehicle terminology designed to enable a common lexicon among the technology industry and better explain to consumers the terms and concepts of this rapidly innovating sector.
The definitions were developed and approved by CTA’s recently formed Self-Driving Vehicles Working Group – chaired by Daimler North America and Waymo, and composed of 34 companies – which also supports driverless vehicle consumer research and policy advocacy.
Among the terms and concepts addressed within the self-driving vehicle terminology:
- Advanced Driver Assistance Systems (ADAS) or “Driver-Assist” Features: Onboard systems developed to improve safety and performance – examples include lane departure warnings, collision avoidance, adaptive cruise control and automatic braking
- Aftermarket Technology: Technology services or upgrades provided by companies – unaffiliated with the vehicle manufacturer – added after a vehicle is sold or leased
- Driving Environment Sensing: The capturing, processing and analysis of sensor data (e.g., cameras, radar, LIDAR) to enhance or replace what a human driver senses
- MaaS (Mobility as a Service): The shift from personal ownership of transportation modes to shared transportation systems and services
- Platooning: Synchronous operation of multiple vehicles, often in a convoy, to increase road capacity and efficiency
- Self-Driving Vehicle: A vehicle capable of fully modeling its environment through an array of sensors, maps and other data in order to navigate and drive without human interaction
- Urban Mobility: The ability for people in urban and suburban areas to access all modes and forms of transportation.
Read more >>
William McComas’ first doctoral students at the University of Arkansas and at the University of Southern California where he formerly taught were both from Saudi Arabia. So, it’s perhaps no surprise that the idea for one of his recent books was suggested by these Saudi contacts.
“My colleagues in Saudi Arabia wondered if there was a resource that would allow them better access to the literature of science education. They had encountered terms that didn’t translate in the way they were being used in science education. You could look them up in a dictionary but that definition didn’t make sense in the science education context. Essentially, they asked me to write a book to fill this very special need,” said McComas, who holds the Parks Family Professorship in Science Education in the College of Education and Health Professions at the U of A.
This conversation encouraged McComas to produce The Language of Science Education: An Expanded Glossary of Key Terms in Science Teaching and Learning, first published in English in 2014 by Sense Publishers. Now King Saud University Press has brought out a new edition that presents the text side-by-side in English and Arabic.
“Every discipline uses words in a context-specific fashion,” McComas said. “For instance, the term ‘informal science learning’ could be confusing because it has a unique meaning in our discipline. Even terms such as ‘laboratory’ and ‘inquiry learning’ could require explanation.”
His Saudi collaborators suggested a list of terms used specifically in science education, and McComas sent these to other science educators for review and to make suggestions for additions. Then, he worked with a team of graduate students and together they researched primary sources to create definitions. Each term in the book has a simple, one- or two-sentence definition followed by a more in-depth discussion of its origin and use in the context of science education.
The original book has been well-received and is cited frequently, McComas said, so he decided to expand on the idea. Now, he and Conra Gist, an assistant professor of curriculum and instruction, are working on one that will be a glossary of the special language of curriculum studies.
See more >>
The European Language Industry Association (Elia) has launched a new membership initiative for language service companies and independent professionals to connect and join forces with the ultimate goal of better serving end clients, strengthening the language industry in the process.
Companies and individuals who believe in the power of positive working practices are invited to sign up as Founding Members before 31 May 2017 and help shape the development of Elia Engage and its activities.
Building on the aims of Elia’s annual Together conference, which provides the venue for both parties to come together in person for open discussion and constructive dialogue, Elia Engage is the interactive forum for companies to meet and develop connections with skilled independent language professionals and, together, establish best practices for enduring, long-term, mutually beneficial partnerships.
Elia Engage will be led by a committee, with independent professionals and language service companies equally represented. Founding Members will have numerous opportunities to influence the initiative and will be able to join various working groups that will focus on developing key aspects to help both parties achieve success and fulfilment in their businesses. This is an unheard-of opportunity to be part of something from the ground up and to contribute directly to creating a legacy for the language industry built on respect and positivity.
All Elia Engage members will get a profile on the Elia Engage website to promote their services, the opportunity to communicate across borders, access to resources and services, and more. In addition, individuals will receive a 10% discount to attend future Together events.
Elia Full Member companies receive access to Elia Engage as part of their Elia membership and simply need to sign up and commit to the aims of Elia Engage and actively support positive working relationships. The membership fee for independent professionals such as translators, interpreters, localisers, consultants and project managers is €110 for 12 months. Register as a Founding Member by 31 May 2017 and receive the prestige of being an early adopter and the chance to influence the industry; independent language professionals who sign up as Founding Members will benefit from extended membership until 31 December 2018 for a special rate of €90.
See more >>
In the past few months free online translators have suddenly got much better. This may come as a surprise to those who have tried to make use of them in the past. But in November Google unveiled a new version of Translate. The old version, called “phrase-based” machine translation, worked on chunks of a sentence separately, with an output that was usually choppy and often inaccurate.
The new system still makes mistakes, but these are now relatively rare, where once they were ubiquitous. It uses an artificial neural network, linking digital “neurons” in several layers, each one feeding its output to the next layer, in an approach that is loosely modeled on the human brain. Neural-translation systems, like the phrase-based systems before them, are first “trained” by huge volumes of text translated by humans. But the neural version takes each word, and uses the surrounding context to turn it into a kind of abstract digital representation. It then tries to find the closest matching representation in the target language, based on what it has learned before. Neural translation handles long sentences much better than previous versions did.
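The “abstract digital representation” described above can be pictured as a vector for each word, with translation reduced to finding the closest vector on the target side. A minimal, purely illustrative sketch in Python (the two-dimensional “embeddings” and tiny vocabularies here are invented for the example; real systems learn embeddings with hundreds of dimensions from huge corpora, and operate on whole sentences, not single words):

```python
import math

# Toy, hand-made "embeddings" -- real systems learn these from data.
source_vecs = {"chat": (0.9, 0.1), "chien": (0.1, 0.9)}   # French words
target_vecs = {"cat": (0.88, 0.12), "dog": (0.08, 0.92)}  # English words

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_target(word):
    # Find the target-language word whose vector best matches the source word's.
    vec = source_vecs[word]
    return max(target_vecs, key=lambda w: cosine(vec, target_vecs[w]))

print(nearest_target("chat"))   # cat
print(nearest_target("chien"))  # dog
```

The same closest-match idea, applied to context-aware sentence representations rather than isolated words, is what lets neural systems handle long sentences so much better than phrase-based ones.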
The new Google Translate began by translating eight languages to and from English, most of them European. It is much easier for machines (and humans) to translate between closely related languages. But Google has also extended its neural engine to languages like Chinese (included in the first batch) and, more recently, to Arabic, Hebrew, Russian and Vietnamese, an exciting leap forward for these languages that are both important and difficult. On April 25th Google extended neural translation to nine Indian languages. Microsoft also has a neural system for several hard languages.
Google Translate does still occasionally garble sentences. The introduction to a Haaretz story in Hebrew had text that Google translated as: “According to the results of the truth in the first round of the presidential elections, Macaron and Le Pen went to the second round on May 7. In third place are Francois Peyon of the Right and Jean-Luc of Lanschon on the far left.” If you don’t know what this is about, it is nigh on useless. But if you know that it is about the French election, you can see that the engine has badly translated “samples of the official results” as “results of the truth”. It has also given odd transliterations for (Emmanuel) Macron and (François) Fillon (P and F can be the same letter in Hebrew). And it has done something particularly funny with Jean-Luc Mélenchon’s surname. “Me-” can mean “of” in Hebrew. The system is “dumb”, having no way of knowing that Mr Mélenchon is a French politician. It has merely been trained on lots of text previously translated from Hebrew to English.
Such fairly predictable errors should gradually be winnowed out as the programmers improve the system. But some “mistakes” from neural-translation systems can seem mysterious. Users have found that typing in random characters in languages such as Thai, for example, results in Google producing oddly surreal “translations” like: “There are six sparks in the sky, each with six spheres. The sphere of the sphere is the sphere of the sphere.”
Although this might put a few postmodern poets out of work, neural-translation systems aren’t ready to replace humans any time soon. Literature requires far too supple an understanding of the author’s intentions and culture for machines to do the job. And for critical work—technical, financial or legal, say—small mistakes (of which even the best systems still produce plenty) are unacceptable; a human will at the very least have to be at the wheel to vet and edit the output of automatic systems.
Online translating is of great benefit to the globally curious. Many people long to see what other cultures are reading and talking about, but have no time to learn the languages. Though still finding its feet, the new generation of translation software dangles the promise of being able to do just that.
We’re on the doorstep of May already, if you can believe it. Here are some of the highlights in Translation News for the month of April 2017:
Translation / Interpreting
From the blog…
Myria can’t remember exactly when she found out about Final Fantasy’s number problem—it was either 1996 or 1997—but she does recall seeing an advertisement for Final Fantasy VII. “We’re like, ‘Huh, seven?’” she said, echoing the thoughts of RPG fans across the United States. Just a few years earlier, in 1994, Squaresoft had released Final Fantasy III on the Super Nintendo. How’d they get from three to seven?
As it turns out, Square was holding out on North America. The venerable publisher had passed on localizing both Final Fantasy II and Final Fantasy III on the Nintendo Entertainment System, so when it came time to bring Final Fantasy IV to the west, they called it Final Fantasy II. Then, Square decided to skip Final Fantasy V, although they briefly considered releasing it here with a different name, according to their head localizer, Ted Woolsey. When they brought over Final Fantasy VI, they called it Final Fantasy III.
As Myria started to research Square’s weird localization choices, she started thinking about getting involved with unofficial fan projects. She’d always been obsessed with RPGs, and she’d noticed that Final Fantasy IV (II)’s script was particularly messy, full of clunky sentences and awkward word choices. “I wanted to redo that game,” Myria said. “It was a horrible mess in terms of its translation.”
While browsing the internet one day in the late 90s, Myria stumbled upon a group of likeminded geeks that called themselves RPGe. Hanging out in an IRC channel, they’d talk about their favorite Japanese role-playing games and make ambitious plans to write English translations for the ones that never made it west. When she found them, they were talking about localizing Final Fantasy V, which they’d do by cracking open a Japanese version of the game’s ROM file and translating the script to English. Myria was intrigued, putting aside her hopes of redoing FFIV. Final Fantasy V sounded way cooler. (A group called J2E would later retranslate FFIV to subpar results, as documented by Clyde Mandelin on his Legends of Localization website.)
Unlike the two NES games that we’d missed out on, Final Fantasy V was by all accounts excellent. People lucky enough to understand FFV in Japanese reported that it was a blast to play, with a solid story and an elaborate class-changing system that allowed players to customize their party in creative ways. It could be difficult, which was one of the reasons Square hadn’t brought it west, but RPG fans wanted to check it out nonetheless.
Problem was, RPGe’s methods were flawed. Nobody had done anything like this before, so there was no institutional knowledge about how to handle fan translations. The RPGe crew had dug up a Japanese ROM of Final Fantasy V, then cracked it open and started editing the text files, directly translating chunks of the game from Japanese to English. But these files were finicky and tough to handle. When you changed a line of Japanese to English in the ROM, it wouldn’t display neatly in the game, because Japanese characters were rendered so differently from English ones. Japanese characters are bigger than English letters, and one sentence that takes 12 characters in English (“how are you?”) might take just three characters in Japanese (“元気?”). Final Fantasy V capped each line of dialogue at 16 characters, which looked fine in Japanese but would make an English translation garbled and hard to read.
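The width mismatch can be sketched with a crude “display cells” model. Everything below is an assumption for illustration only: the 0x2E80 code-point cutoff and the two-cells-per-CJK-character rule roughly mimic a fixed-grid game font, not FFV’s actual rendering code:

```python
def display_cells(text):
    # Crude width model (an illustrative assumption, not FFV's real font code):
    # CJK characters occupy two cells, everything else one.
    return sum(2 if ord(ch) >= 0x2E80 else 1 for ch in text)

LINE_CAP = 16  # FFV's per-line character cap, per the article

jp = "元気?"          # 3 characters, but visually wide
en = "how are you?"   # 12 characters carrying the same meaning

print(display_cells(jp), display_cells(en))  # 5 12
print(len(en) <= LINE_CAP)                   # True, but longer sentences overflow
```

The point of the sketch: 16 Japanese characters can hold a whole sentence, while 16 English characters barely fit a short clause, which is why the patch had to change the code that measures and wraps dialogue lines, not just the text itself.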
What they needed to do, Myria realized, was edit not just the text files but also the code that Final Fantasy V used to handle those text files. “I really felt they had the wrong approach,” she said. “That was really my big insight to the ROM hacking community, that you can’t just modify the data of the game to make an effective translation—you have to modify the code as well.”
In order to localize a Japanese game in English and make it readable, Myria decided they would need to reprogram the game. Their version of Final Fantasy V would need to understand that English letters, unlike Japanese characters, have different sizes. They’d need to teach the game that each dialogue box should allow more English characters (including those pesky spaces) than it does Japanese kanji or kana.
Myria (who at the time went by the internet handle Barubary; both names are references to Breath of Fire) started talking with SoM2freak, a Japanese-English translator she met online, about splitting off from the rest of RPGe. By mid-1997, they were making plans to start their own translation of Final Fantasy V, done properly instead of hacked together. “I ignored those people who I felt didn’t know what they were doing,” she said. “We started our own sub group within [RPGe] because I felt they were not able to do this.”
As SoM2freak translated lines of Final Fantasy V’s Japanese dialogue to English, Myria tried to figure out the best way to implement them into the game. She downloaded a disassembler to break down Final Fantasy V’s code, turning it into a file so massive, she needed a special text management program called XTree Gold just to parse it. Then she started changing variables, using trial and error to discern what each line of code actually did. “There were no references on most of this stuff at all,” Myria said. “I just kind of figured out what to do.”
Perhaps the most controversial of the team’s translation decisions was the main character’s name. If you ask Square Enix, they’ll tell you that the star of Final Fantasy V is a man named Bartz. But if you played the fan translation, you’d see a different name: Butz.
It’s a name that’s elicited plenty of snickers over the years, but by all accounts it was the most accurate translation, and Myria stands by it. The direct transliteration of the Japanese name, バッツ, is Battsu, or Butz for short. “There were documents in Japan, for a strategy guide for example, and also these little silver statue things that had Butz the way we’d written it,” she said. “We used those kind of things as reference for intended translation.”
On October 17, 1997, Myria and crew released “v0.96,” the first public version of FFV’s fan translation. It went viral, making its way across IRC channels and message boards as RPG fans began to discover that there was a cool new Final Fantasy that none of them had gotten to play. Although SNES emulators were nascent and rough, it wasn’t too tough to get your hands on one. It was also simple for the average gamer to get a copy of Final Fantasy V and the English patch, which you could apply by following a set of simple instructions in the Readme file. “[The patch] really just spread on its own,” said Myria. “It quickly got news in the emulation community and people started playing it at that point. We didn’t have to market it at all.”
Final Fantasy publisher Squaresoft never contacted RPGe about their translation, according to Myria, even though their U.S. offices were in Costa Mesa, just a few miles away from her parents’ house. But in September of 1999, an official English version of Final Fantasy V finally made its way to North America. This version, bundled with Final Fantasy VI in a PS1 compilation called Final Fantasy Anthology, was a mess.
In the PS1 translation of Final Fantasy V, main character Faris insisted on speaking like a pirate for the entire game.
“We were laughing so hard,” said Myria, “because the translation was absolutely awful. We were like, ‘OK, a couple kids in high school over four months did a better job than Square.’ It probably took them at least a year. We were just laughing so hard.”
It wasn’t until the 2006 Game Boy Advance port—Final Fantasy V: Advance—that Square would finally release a decently localized version of the mistreated role-playing game, although the main character’s name remained Bartz. “When the Game Boy Advance version came out, I was like, ‘Oh my god, they finally beat us,’” said Myria. “It took them eight years, but they finally did a better translation than ours.”
Read the full article >>
KPMG and Google published a report on April 25, 2017 detailing the Indian language ecosystem. The report, “Indian Languages – Defining India’s Internet,” states that nine out of every 10 new internet users in India over the next five years will likely be Indian language users.
Defined as ‘Indian language literates who prefer their primary language over English to read, write and converse with each other,’ this user base is forecast to grow to 536 million by 2021 at 18% CAGR from 234 million in 2016. In comparison, the English internet user base is likely to grow to only 199 million at 3% CAGR for the same forecast period.
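The report’s projections are plain compound-annual-growth arithmetic, which makes them easy to sanity-check against the figures quoted above:

```python
def project(base, cagr, years):
    # Compound annual growth: base * (1 + cagr)^years
    return base * (1 + cagr) ** years

# Indian-language users: 234 million in 2016, 18% CAGR over 5 years.
print(round(project(234, 0.18, 5)))  # 535 -- matches the reported 536 million within rounding

# English users: the 199 million forecast for 2021 at 3% CAGR implies the 2016 base.
print(round(199 / 1.03 ** 5))        # 172 (million) -- an implied figure, not stated in the report
```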
With this growth, the report predicts a corresponding rise in user-generated original content online, an increase in the time spent by Indian language internet users on different internet platforms, and what it calls the “hyper local consumption of local content.”
In turn, this will drive increased investment by businesses in digital Indian channels and a rise in digital advertisements in local languages, and spur the enablement of more Indian languages on digital payment interfaces and platforms, such as mobile-compatible content for applications and websites.
It is interesting to note that the KPMG-Google report included “content localization companies” and freelance translators in its Indian language ecosystem chart, recognizing their role in the grand scheme of things. A chart shows freelancers and LSPs at the intersection between content creators and content developers. Recognizing the need for human translation and service providers is remarkable in a report co-authored by the world’s most ardent proponent of machine translation.
Overall, the report confirms what Slator reported in January: with 22 official languages, the world’s second most populous nation may be a tough nut to crack, but it is arguably the next frontier in language services.
Read more >>
With its Creators Update for Windows 10, Microsoft promised that users would have the option to postpone future updates for a limited period of time, and many rejoiced. But now that the update has started rolling out, it’s become apparent that there are still some stability issues, and performing a manual installation isn’t recommended right now.
In a blog post, Microsoft’s director of program management explained that the latest update has been rolling out slowly because there are known issues that could be a problem for anyone who isn’t an advanced user. The post doesn’t go in depth on what those issues are, but it appears that not all the bugs have been ironed out for certain devices. For instance, PCs that use a certain type of Broadcom radio were having connectivity problems with Bluetooth devices.
If you aren’t the type to manually install updates, this probably isn’t your problem. Windows 10 has automatically pushed updates to users since it debuted. The Creators Update has a lot of cool little features, but the most useful one is that it offers a simple way to pause installing updates for up to seven days. Updates are good for security but Windows has had an insidious way of suddenly deciding it’s time to install that latest patch and restart right when you’re in the middle of something important.
Microsoft is still automatically updating users this time around and if you encounter problems, you can find instructions for rolling back the update here. If you’re the cavalier type who doesn’t care about warnings and just wants to start making 3D dogs in MS Paint, you can manually download the update here.
A new voice-transcription service called Trint can listen to an audio recording or a video of two or more speakers engaged in a natural conversation, then provide a written transcript of what each person said.
Trint’s technology is still nascent, but it could eventually give new life to vast swaths of non-text-based media on the internet, like videos and podcasts, by making them readable to both humans and search engines. People could read podcasts they lack the time or ability to listen to. YouTube videos could be indexed with a time-coded transcript, then searched for text terms. There are other applications too: Filmmakers could index their footage for better organization, and journalists, researchers, and lawyers could save the many hours it takes to transcribe long interviews.
As machine learning and automation technologies continue to transform the 21st century, voice recognition remains a pesky speed bump. Transcription in particular is a technology that some have spent decades pursuing and others deemed outright impossible in our lifetimes. While news organizations and social media outlets alike have invested heavily in video content, the ability to optimize those clips for search engines remains elusive. And with younger readers still preferring print to video anyway, the value of transcribed text remains high.
Based in London and launched in autumn 2016, Trint is a web app built on two separate but entwined elements. The company’s transcription algorithm feeds text into a browser interface for editing, which links the words in a transcript directly to the corresponding points in the recording. While the accuracy is hardly perfect (as Trint’s founders are the first to admit), the system almost always produces a transcript that’s clean enough for searching and editing. At roughly 25 cents per minute (or $15 per hour), Trint’s software-as-a-service costs a quarter of the $1 per minute rate offered by competitors. There’s a reason Trint is so cheap: Those other services, like Casting Words and 3Play, use humans to clean up automated transcripts or to do the actual transcribing. Trint is all machines.
Microsoft has released voice recognition toolkits for programmers to experiment with, and Google just last week added multi-voice recognition to its Google Home smart speaker. But Trint’s software was the first public-facing commercial product to serve this space.
According to lead engineer Simon Turvey, Trint users report an error rate of between five and 10 percent for cleanly recorded audio. Though this is close to the eight percent industry standard estimated last year by veteran Microsoft scientist Xuedong Huang, the Trint founders consider the product’s editing function their real competitive edge. Trint’s time-coded transcript and web-based editor allow users to quickly find and work on the quotes they need.
Trint can currently understand 13 languages, including several varieties of English. Since it’s a cloud-based application, Trint’s voice transcription algorithm can be updated frequently to add new languages, new accents (Cuban-accented English is tough), and fresh batches of proper nouns.
Read the full article >>
On April 19–20, 2017, Necip Fazil Ayan, Engineering Manager at Facebook, gave a 20-minute update at the F8 Developer Conference about the current state of the art of machine translation at the social networking giant.
Slator reported in June 2016 on Facebook’s big expectations for NMT. Then, Alan Packer, Engineering Director and head of the Language Technology team at Facebook, predicted that “statistical or phrase-based MT has kind of reached the end of its natural life” and the way to go was NMT.
Ten months on, Facebook says it is halfway there. The company claims that more than 50% of machine translations across the company’s three platforms — Facebook, Instagram, and Workplace — are powered by NMT today.
Facebook says it started exploring migrating from phrase-based MT to neural MT two years ago and deployed the first system (German to English) using the neural net architecture in June 2016.
Since then, Ayan said 15 systems (from high-traffic language pairs like English to Spanish, English to French, and Turkish to English) have been deployed.
No tech presentation would be complete without a healthy dose of very large numbers. Ayan said Facebook now supports translation in more than 45 languages (2,000 language combinations), generates two billion “translation impressions” per day, serves translations to 500 million people daily and 1.3 billion monthly (that is, everyone, basically).
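Those figures are at least internally consistent: with 45 supported languages, counting each ordered source-to-target pair as one “combination” gets you to roughly 2,000:

```python
n = 45                    # languages supported
directions = n * (n - 1)  # ordered pairs with source != target
print(directions)         # 1980, which rounds up to the quoted "2,000 combinations"
```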
Ayan admitted that translation continues to be a very hard problem. He pointed to informal language as being one of the biggest obstacles, highlighting odd spellings, hashtags, urban slang, dialects, hybrid words, and emoticons as issues that can throw language identification and machine translation systems off balance.
Another key challenge for Facebook: low-resource languages. Ayan admitted Facebook has very limited resources for the majority of the languages it translates.
“For most of these languages, we don’t have enough data,” he said — parallel data or high quality translation corpora, that is. What is available even for many low resource languages are large corpora of monolingual data.
Read the full article >>
What do a vice-presidential debate, the discovery of Richard III’s bones or the 9/11 attacks have in common? According to Peter Sokolowski, editor for Merriam-Webster, these can be considered ‘vocabulary events’ that make readers run to their dictionaries.
In 1996, the company that had published the largest and most popular college dictionary decided to make some of its content available online. Since then, Merriam-Webster Inc. has been monitoring what words readers search for and has discovered that searches for specific words increase during major news events.
This started after the death of Princess Diana. According to Sokolowski, “the royal tragedy triggered searches on the Merriam-Webster website for ‘paparazzi’ and ‘cortege’”. Another example is the word ‘admonish’, which became the most looked-up word after the White House said it would ‘admonish’ Representative Joe Wilson for interrupting a speech by President Obama.
Certainly none of this tracking would be possible without the transition from print to the digital era. Some leading publishers, such as Macmillan Education, have already announced that they will no longer make printed dictionaries, and others are looking for partnerships with Amazon or Apple. This means that, whether you are using your computer, e-book reader, tablet or smartphone, any dictionary is just a click away.
And what is the purpose of monitoring dictionary searches?
Every time you look up a word on the Merriam-Webster website, you give lexicographers valuable information about terms that could be added to their dictionary or that need to be updated. The most looked-up words also provide data about the public’s strongest interests. This approach can also be found in other online dictionaries that are open to receiving suggestions on new words or new usages of old words, much as James Murray and his team did with the first Oxford English Dictionary in the 19th century.
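Mechanically, this kind of monitoring amounts to counting look-ups per word over time and flagging sudden jumps. A minimal sketch (the counts and the 10x threshold are invented for illustration; the words are the ones mentioned above):

```python
from collections import Counter

# Hypothetical look-up logs for two days.
yesterday = Counter({"paparazzi": 40, "cortege": 12, "admonish": 30})
today = Counter({"paparazzi": 4000, "cortege": 900, "admonish": 35})

def spiking(before, after, factor=10):
    # A word "spikes" when today's count is at least `factor` times
    # yesterday's (words never seen before count as one look-up).
    return sorted(w for w in after if after[w] >= factor * before.get(w, 1))

print(spiking(yesterday, today))  # ['cortege', 'paparazzi']
```

A spike like this tells lexicographers which entries a news event just sent readers to, which is exactly the “vocabulary event” signal Sokolowski describes.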
In other words, it is ‘crowdsourcing’ applied to lexicography.
Even though there are many advantages to using online dictionaries, some will still miss the feeling of leafing through the pages of a printed version or happening upon a random word. However, the digital era makes it possible to update information progressively as needed. A similar attitude is found in proactive terminology, which encourages terminologists to identify the topics that are likely to come up so they can provide translators with the terminology that will be needed.
So, the answer is yes! Somehow our dictionaries are reading us.
See original article >>
From the website:
The global languages industry is evolving apace and there’s huge opportunity for candidates aiming to build a career in this space. But the question arises… with whom?
Adaptive Globalization engages with hundreds of applicants working within the localization and translation industry every week and provides them with advice and information on prospective employers.
We manage a global job-seeker community of over 30,000 translation, localization and language technology professionals, together with a constant influx of new ‘out-of-industry’ talent and entry-level professionals. Our candidates are always keen to learn which employers may offer them the most progression and fulfilment in their careers, as well as the best employee benefits, compensation and rewards.
Why not enter your company for a BELA 2017 − an opportunity to gain widespread industry recognition as a leading employer and attract the best talent for your business?
To select the BELA 2017 winners we will analyze information provided by every LSP that submits data to us, choosing one winner in each of five categories:
- Best Language Service Provider for Employee Well-being
- Best Language Service Provider for Employee Retention
- Best Language Service Provider for Career Progression
- Best Language Service Provider for Employee Benefits
- Best Client-side Localization Employer
See more and enter >>
Story flagged by:
Last year, Google Translate introduced neural machine translation, which uses deep neural networks to translate entire sentences, rather than just phrases, in order to produce the most relevant translation. Since then, we’ve been gradually making these improvements available in Chrome’s built-in translation for select language pairs. The result is higher-quality, full-page translations that are more accurate and easier to read.
Today, this neural machine translation improvement is coming to Translate in Chrome for nine more language pairs. Neural machine translation will be used for most pages translated to and from English for Indonesian and eight Indian languages: Bengali, Gujarati, Kannada, Malayalam, Marathi, Punjabi, Tamil and Telugu. This means higher quality translations on pages containing everything from song lyrics to news articles to cricket discussions.
From left: A webpage in Indonesian; the page translated into English without neural machine translation; the page translated into English with neural machine translation. As you can see, the translations after neural machine translation are more fluid and natural.
The addition of these nine languages brings the total number of languages enabled with neural machine translation in Chrome to more than 20. You can already translate to and from English for Chinese, French, German, Hebrew, Hindi, Japanese, Korean, Portuguese, Thai, Turkish, Vietnamese, and one-way from Spanish to English.
Every year since 2013, the ProZ.com Community Choice Awards have been held to recognize language professionals who are active, influential or otherwise outstanding in various media throughout the industry. Nominees and winners are decided entirely by the ProZ.com community.
Nominations are now open for the 2017 edition of the awards. You can see how the awards work and submit your nominations here >>
Story flagged by:
Why might some languages be easier to identify than others? Are some languages more often confused for others? Researchers sought to investigate these questions by analyzing data from The Great Language Game, a popular online game where players listen to an audio speech sample and guess which language they think they are hearing, selecting from two or more options.
It turned out that cultural and linguistic factors influenced whether a language was identified correctly. The researchers found that participants were better able to distinguish between languages that were geographically farther apart and had different associated sounds. Additionally, if the language was the official language in more countries, had a name associated with its geographical location, and was spoken by many people, then it was more likely to be identified correctly.
“We didn’t expect these results,” says first author Hedvig Skirgård, “but we found that people were probably listening for distinctive sounds, and perhaps they were hearing something in these languages that linguists have yet to discover.”
While the current game only contains 78 languages, mostly from European countries, it does provide insight into why some languages might be confused for others. In their future research, Skirgård and colleagues hope to expand their analysis to lesser-known languages.
Read the paper here >>
Story flagged by:
The new app Wemogee uses emoji to help people with aphasia, a language-processing disorder that makes it difficult to read, write or talk.
Created by Samsung Electronics Italia (the company’s Italian subsidiary) and speech therapist Francesca Polini, Wemogee replaces text phrases with emoji combinations and can be used as a messaging app or in face-to-face interactions. It supports English and Italian and will be available for Android on April 28, with an iOS version slated for future release.
The developers of Wemogee claim that it is “the first emoji-based chat application designed to enable people with aphasia to communicate.” The app has two modes: visual and textual. An aphasic user sees emoji, which are arranged to convey more than 140 phrases that have been organized into six categories. Wemogee translates the emoji combinations into text for non-aphasic users, and then translates their responses back into emoji.
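Wemogee's internal design has not been published, but the two-way translation it describes can be pictured as a bidirectional lookup between emoji sequences and a fixed set of phrases. The sketch below is purely illustrative (the emoji, phrases and function names are hypothetical, not taken from the app):

```python
# Hypothetical sketch of an emoji <-> phrase lookup, illustrating the kind of
# two-way mapping Wemogee describes. The actual app's data and code are not public.

# Emoji sequences (tuples) map to canonical phrases for the non-aphasic user.
EMOJI_TO_PHRASE = {
    ("🍽️", "❓"): "Are you hungry?",
    ("💧", "🙏"): "I would like some water.",
    ("😴",): "I am tired.",
}

# Reverse map: a non-aphasic user's reply is rendered back as an emoji sequence.
PHRASE_TO_EMOJI = {phrase: emoji for emoji, phrase in EMOJI_TO_PHRASE.items()}

def emoji_to_text(sequence):
    """Translate a sequence of emoji into a phrase for the non-aphasic user."""
    return EMOJI_TO_PHRASE.get(tuple(sequence), "(unknown phrase)")

def text_to_emoji(phrase):
    """Translate a text reply back into an emoji sequence for the aphasic user."""
    return PHRASE_TO_EMOJI.get(phrase, ())
```

A real implementation would need a far richer phrase inventory (the article mentions over 140 phrases in six categories), but the core idea of pairing fixed emoji combinations with fixed phrases is the same.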
Read more >>
Story flagged by:
Language I/O has released LinguistNow Chat, enabling companies to provide real-time, multilingual chat support inside several major platforms, including Salesforce.com and Oracle Service Cloud.
The LinguistNow product suite works within the Oracle and Salesforce customer relationship management (CRM) systems. It enables companies to provide customer support in any language over any support channel. Using a hybrid of machine and human translation services, LinguistNow lets monolingual agents provide support in any language simply by clicking a button.
“With LinguistNow, companies can receive outstanding translations for self-help articles, ticket/email responses, and chat,” said Kaarina Kvaavik, co-founder of Language I/O, in a statement. “Our customers are already seeing tremendous cost reductions by using our existing products. Some of them have seen a more than 40 percent reduction in customer support costs.
“We use a unique combination of human and machine translation, which is why our translations are both fast and accurate,” Kvaavik continued. “We help companies improve their quality of customer support while also reducing their costs. First, we allow customers to answer their own questions by providing outstanding article translations. Second, we allow agents to accurately and quickly respond to emails and chat in the customer’s native language.”
Story flagged by:
The National Council on Interpreting in Health Care (NCIHC) is proud to announce the results of its 2017 Board of Directors elections. The newly elected Board Members are as follows:
They will join current members:
Enrica J. Ardemagni, PhD, President
Lisa Morris, Treasurer
Allison Squires, PhD
NCIHC is a multidisciplinary organization whose mission is to promote and enhance language access in health care in the United States. The newly elected Board Directors will join the other directors to round out a national group of experts in the language services industry.
Story flagged by:
A Spanish rail company has lost its appeal against a decision of the European Union Intellectual Property Office (EUIPO) concerning its logo because it did not lodge its initial application in English. EURACTIV Spain reports.
The European Court of Justice (ECJ) dismissed Renfe’s appeal on April 5th, citing the company’s failure to submit its application in English to the EUIPO, the Court said in a statement.
On 4 June 2010, the EUIPO registered the ‘AVE’ logo, which also included a bird motif, but a German businessman, Stephen Hahn, filed an application for cancellation of the logo as used on methods of transportation.
The EUIPO upheld Hahn’s request.
Renfe filed its appeal against the decision in Spanish, but the body informed the rail company that, under EU law, the appeal had to be lodged in the language of the original case, i.e. English.
EUIPO informed Renfe that it had one month to submit a translated version of its appeal. But the Spanish operator failed to do so, and the property office ruled its case inadmissible.
Read more >>
Stay informed about what is happening in the industry by sharing and discussing translation industry news stories.
ProZ.com Translation News daily digest is an e-mail I always look forward to receiving and enjoy reading!
The translation news daily digest is my daily 'signal' to stop work and find out what's going on in the world of translation before heading back into the world at large! It provides a great overview that I could never get on my own.
I receive the daily digest, some interesting and many times useful articles!