With its Creators Update for Windows 10, Microsoft promised that users would have the option to postpone future updates for a limited period of time, and many rejoiced. But now that the update has started rolling out, it has become apparent that there are still some stability issues, and a manual installation isn't recommended right now.
In a blog post, Microsoft’s director of program management explained that the latest update has been rolling out slowly because of known issues that could be a problem for anyone who isn’t an advanced user. The post doesn’t go in depth on what those issues are, but it appears that not all the bugs have been ironed out for certain devices. For instance, PCs that use a certain type of Broadcom radio were having connectivity problems with Bluetooth devices.
If you aren’t the type to manually install updates, this probably isn’t your problem. Windows 10 has automatically pushed updates to users since it debuted. The Creators Update has a lot of cool little features, but the most useful one is that it offers a simple way to pause installing updates for up to seven days. Updates are good for security, but Windows has had an insidious way of suddenly deciding it’s time to install the latest patch and restart right when you’re in the middle of something important.
Microsoft is still automatically updating users this time around and if you encounter problems, you can find instructions for rolling back the update here. If you’re the cavalier type who doesn’t care about warnings and just wants to start making 3D dogs in MS Paint, you can manually download the update here.
A new voice-transcription software, named Trint, can listen to an audio recording or a video of two or more speakers engaged in a natural conversation, then provide a written transcript of what each person said.
Trint’s technology is still nascent, but it could eventually give new life to vast swaths of non-text-based media on the internet, like videos and podcasts, by making them readable to both humans and search engines. People could read podcasts they lack the time or ability to listen to. YouTube videos could be indexed with a time-coded transcript, then searched for text terms. There are other applications too: Filmmakers could index their footage for better organization, and journalists, researchers, and lawyers could save the many hours it takes to transcribe long interviews.
As machine learning and automation technologies continue to transform the 21st century, voice recognition remains a pesky speed bump. Transcription in particular is a technology that some have spent decades pursuing and others deemed outright impossible in our lifetimes. While news organizations and social media outlets alike have invested heavily in video content, the ability to optimize those clips for search engines remains elusive. And with younger readers still preferring print to video anyway, the value of transcribed text remains high.
Based in London and launched in autumn 2016, Trint is a web app built on two separate but entwined elements. The company’s transcription algorithm feeds text into a browser interface for editing, which links the words in a transcript directly to the corresponding points in the recording. While the accuracy is hardly perfect (as Trint’s founders are the first to admit), the system almost always produces a transcript that’s clean enough for searching and editing. At roughly 25 cents per minute (or $15 per hour), Trint’s software-as-a-service costs a quarter of the $1 per minute rate offered by competitors. There’s a reason Trint is so cheap: Those other services, like Casting Words and 3Play, use humans to clean up automated transcripts or to do the actual transcribing. Trint is all machines.
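The quoted rates work out as follows. This is a trivial sketch of the arithmetic only; the function and rate names are illustrative, not part of any Trint API:

```python
# Illustrative cost comparison based on the per-minute rates quoted above.
def transcription_cost(minutes, rate_per_minute):
    """Total cost in dollars for a recording of the given length."""
    return minutes * rate_per_minute

TRINT_RATE = 0.25           # $0.25 per minute, i.e. $15 per hour
HUMAN_ASSISTED_RATE = 1.00  # roughly $1.00 per minute at competing services

hour = 60
print(transcription_cost(hour, TRINT_RATE))           # 15.0
print(transcription_cost(hour, HUMAN_ASSISTED_RATE))  # 60.0
```

A one-hour interview thus costs $15 with Trint versus about $60 with the human-assisted services it undercuts.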
Microsoft has released voice recognition toolkits for programmers to experiment with, and Google just last week added multi-voice recognition to its Google Home smart speaker. But Trint’s software was the first public-facing commercial product to serve this space.
According to lead engineer Simon Turvey, Trint users report an error rate of between five and 10 percent for cleanly recorded audio. Though this is close to the eight percent industry standard estimated last year by veteran Microsoft scientist Xuedong Huang, the Trint founders consider their product’s editing function their strongest competitive edge. Trint’s time-coded transcript and web-based editor allow users to quickly find and edit the quotes they need.
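Error rates like these are conventionally measured as word error rate (WER): the word-level edit distance between the machine transcript and a human reference transcript, divided by the length of the reference. A minimal sketch of that computation (my own illustration, not Trint's code):

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six: WER of about 16.7 percent.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

A five to 10 percent WER means roughly one word in every ten to twenty needs correction, which is why a fast editor matters as much as raw accuracy.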
Trint can currently understand 13 languages, including several varieties of English accents. Since it’s a cloud-based application, Trint’s voice transcription algorithm can be updated frequently to add new languages, new accents (Cuban-accented English is tough), and fresh batches of proper nouns.
Read the full article >>
On April 19–20, 2017, Necip Fazil Ayan, Engineering Manager at Facebook, gave a 20-minute update at the F8 Developer Conference about the current state of the art of machine translation at the social networking giant.
Slator reported in June 2016 on Facebook’s big expectations for NMT. Then, Alan Packer, Engineering Director and head of the Language Technology team at Facebook, predicted that “statistical or phrase-based MT has kind of reached the end of its natural life” and the way to go was NMT.
Ten months on and Facebook says it is halfway there. The company claims that more than 50% of machine translations across the company’s three platforms — Facebook, Instagram, and Workplace — are powered by NMT today.
Facebook says it started exploring migrating from phrase-based MT to neural MT two years ago and deployed the first system (German to English) using the neural net architecture in June 2016.
Since then, Ayan said 15 systems (from high-traffic language pairs like English to Spanish, English to French, and Turkish to English) have been deployed.
No tech presentation would be complete without a healthy dose of very large numbers. Ayan said Facebook now supports translation in more than 45 languages (about 2,000 language combinations), generates two billion “translation impressions” per day, and serves translations to 500 million people daily and 1.3 billion monthly (that is, basically everyone).
Ayan admitted that translation continues to be a very hard problem. He pointed to informal language as being one of the biggest obstacles, highlighting odd spellings, hashtags, urban slang, dialects, hybrid words, and emoticons as issues that can throw language identification and machine translation systems off balance.
Another key challenge for Facebook: low-resource languages. Ayan admitted Facebook has very limited resources for the majority of the languages it translates.
“For most of these languages, we don’t have enough data,” he said — parallel data or high-quality translation corpora, that is. What is available even for many low-resource languages are large corpora of monolingual data.
Read the full article >>
What do a vice-presidential debate, the discovery of Richard III’s bones and the 9/11 attacks have in common? According to Peter Sokolowski, an editor at Merriam-Webster, these can be considered ‘vocabulary events’ that make readers run to their dictionaries.
In 1996, the company that had published the largest and most popular college dictionary decided to make some of its content available online. Since then, Merriam-Webster Inc. has been monitoring which words readers search for and has discovered that searches for specific words spike during major news events.
This started after the death of Princess Diana. According to Sokolowski, “the royal tragedy triggered searches on the Merriam-Webster website for ‘paparazzi’ and ‘cortege’”. Another example is the word ‘admonish’, which became the most looked-up word after the White House said it would ‘admonish’ Representative Joe Wilson for interrupting a speech by President Obama.
Certainly, none of this tracking would be possible without the transition from the print to the digital era. Some leading publishers, such as Macmillan Education, have already announced that they will no longer make printed dictionaries, and others are seeking partnerships with Amazon or Apple. This means that, whether you are using your computer, e-book, tablet or smartphone, any dictionary is just a click away.
And what is the purpose of monitoring dictionary searches?
Every time you look up a word on the Merriam-Webster website, you give lexicographers valuable information about terms that could be added to their dictionary or that need to be updated. The most looked-up words also provide data about the public’s strongest interests. The same approach can be found in other online dictionaries that are open to receiving suggestions for new words or new usages of old words, much as James Murray and his team did with the first Oxford English Dictionary in the 19th century.
In other words, it is ‘crowdsourcing’ applied to lexicography.
Even though online dictionaries have many advantages, some will still miss the feel of leafing through the pages of a printed volume or stumbling on a random word. However, the digital era gives us the ability to update information progressively, as needed. A similar attitude is found in proactive terminology work, which encourages terminologists to identify the topics that are likely to come up so they can provide translators with the terminology they will need.
So, the answer is yes! Somehow our dictionaries are reading us.
See original article >>
From the website:
The global languages industry is evolving apace and there’s huge opportunity for candidates aiming to build a career in this space. But the question arises… with whom?
Adaptive Globalization engages with hundreds of applicants working within the localization and translation industry every week and provides them with advice and information on prospective employers.
We manage a global job-seeker community of over 30,000 translation, localization and language technology professionals, together with a constant influx of new ‘out-of-industry’ talent and entry-level professionals. Our candidates are always keen to learn which employers may offer them the most progression and fulfilment in their careers, as well as the best employee benefits, compensation and rewards.
Why not enter your company for a BELA 2017: an opportunity to gain widespread industry recognition as a leading employer and attract the best talent for your business?
To select the BELA 2017 winners we will analyze information provided by every LSP that submits data to us, choosing one winner in each of five categories:
- Best Language Service Provider for Employee Well-being
- Best Language Service Provider for Employee Retention
- Best Language Service Provider for Career Progression
- Best Language Service Provider for Employee Benefits
- Best Client-side Localization Employer
See more and enter >>
Last year, Google Translate introduced neural machine translation, which uses deep neural networks to translate entire sentences, rather than just phrases, to figure out the most relevant translation. Since then we’ve been gradually making these improvements available for Chrome’s built-in translation for select language pairs. The result is higher-quality, full-page translations that are more accurate and easier to read.
Today, neural machine translation is coming to Translate in Chrome for nine more languages. It will be used for most pages translated to and from English for Indonesian and eight Indian languages: Bengali, Gujarati, Kannada, Malayalam, Marathi, Punjabi, Tamil and Telugu. This means higher-quality translations on pages containing everything from song lyrics to news articles to cricket discussions.
From left: A webpage in Indonesian; the page translated into English without neural machine translation; the page translated into English with neural machine translation. As you can see, the translations after neural machine translation are more fluid and natural.
The addition of these nine languages brings the total number of languages enabled with neural machine translation in Chrome to more than 20. You can already translate to and from English for Chinese, French, German, Hebrew, Hindi, Japanese, Korean, Portuguese, Thai, Turkish, Vietnamese, and one-way from Spanish to English.
Every year since 2013, the ProZ.com Community Choice Awards have been held to recognize language professionals who are active, influential or otherwise outstanding in various media throughout the industry. Nominees and winners are decided entirely by the ProZ.com community.
Nominations are now open for the 2017 edition of the awards. You can see how the awards work and submit your nominations here >>
Why might some languages be easier to identify than others? Are some languages more often confused for others? Researchers sought to investigate these questions by analyzing data from The Great Language Game, a popular online game in which players listen to an audio speech sample and guess which language they are hearing, selecting from two or more options.
It turned out that cultural and linguistic factors influenced whether a language was identified correctly. The researchers found that participants were better able to distinguish between languages that were geographically farther apart and had different associated sounds. Additionally, if the language was the official language in more countries, had a name associated with its geographical location, and was spoken by many people, then it was more likely to be identified correctly.
“We didn’t expect these results,” says first author Hedvig Skirgård, “but we found that people were probably listening for distinctive sounds, and perhaps they were hearing something in these languages that linguists have yet to discover.”
While the current game only contains 78 languages, mostly from European countries, it does provide insight into why some languages might be confused for others. In their future research, Skirgård and colleagues hope to expand their analysis to lesser-known languages.
Read the paper here >>
A new app, Wemogee, uses emoji to help people with aphasia, a language-processing disorder that makes it difficult to read, write or talk.
Created by Samsung Electronics Italia (the company’s Italian subsidiary) and speech therapist Francesca Polini, Wemogee replaces text phrases with emoji combinations and can be used as a messaging app or in face-to-face interactions. It supports English and Italian and will be available for Android on April 28, with an iOS version slated for future release.
The developers of Wemogee claim that it is “the first emoji-based chat application designed to enable people with aphasia to communicate.” The app has two modes: visual and textual. An aphasic user sees emoji combinations, arranged to convey more than 140 phrases organized into six categories. Wemogee translates the emoji combinations into text for non-aphasic users, and then translates their responses back into emoji.
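At its core, the described design is a two-way lookup between a fixed phrase bank and emoji combinations. A rough sketch, with the caveat that the phrases and emoji below are invented for illustration and are not Wemogee's actual data:

```python
# Hypothetical phrase bank; Wemogee's real bank covers 140+ phrases
# in six categories, and its emoji choices will differ.
PHRASES = {
    "I am hungry": "🙂🍽️",
    "I am thirsty": "🙂🥤",
    "I am happy": "😀👍",
}
EMOJI_TO_TEXT = {emojis: phrase for phrase, emojis in PHRASES.items()}

def to_emoji(phrase):
    """Visual mode: show the aphasic user an emoji combination."""
    return PHRASES.get(phrase)

def to_text(emoji_sequence):
    """Textual mode: render the combination as text for a non-aphasic user."""
    return EMOJI_TO_TEXT.get(emoji_sequence)

print(to_text(to_emoji("I am hungry")))  # I am hungry
```

Restricting communication to a closed phrase bank is what makes the round trip lossless in both directions, unlike free-form machine translation.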
Read more >>
Language I/O has released LinguistNow Chat, enabling companies to provide real-time, multilingual chat support inside several major platforms, including Salesforce.com and Oracle Service Cloud.
The LinguistNow product suite works within the Oracle and Salesforce customer relationship management (CRM) systems. It enables companies to provide customer support in any language over any support channel. Using a hybrid of machine and human translation services, LinguistNow lets monolingual agents provide support in any language simply by clicking a button.
“With LinguistNow, companies can receive outstanding translations for self-help articles, ticket/email responses, and chat,” said Kaarina Kvaavik, co-founder of Language I/O, in a statement. “Our customers are already seeing tremendous cost reductions by using our existing products. Some of them have seen a more than 40 percent reduction on customer support costs.
“We use a unique combination of human and machine translation, which is why our translations are both fast and accurate,” Kvaavik continued. “We help companies improve their quality of customer support while also reducing their costs. First, we allow customers to answer their own questions by providing outstanding article translations. Second, we allow agents to accurately and quickly respond to emails and chat in the customer’s native language.”
The National Council on Interpreting in Health Care (NCIHC) is proud to announce the results of its 2017 Board of Director elections. The newly elected Board Members are as follows:
They will join current members:
Enrica J. Ardemagni, PhD, President
Lisa Morris, Treasurer
Allison Squires, Ph.D.
NCIHC is a multidisciplinary organization whose mission is to promote and enhance language access in health care in the United States. The newly elected Board Directors will join the other directors to round out a national group of experts in the language services industry.
A Spanish rail company has failed in its appeal against a decision of the European Union Intellectual Property Office (EUIPO) related to its logo because it failed to lodge its initial application in English. EURACTIV Spain reports.
The European Court of Justice (ECJ) dismissed Renfe’s appeal on April 5, citing the company’s failure to submit its application to the EUIPO in English, the Court said in a statement.
On 4 June 2010, the EUIPO registered the ‘AVE’ logo, which also included a bird motif, but a German businessman, Stephen Hahn, filed an application to cancel the logo as used on means of transport.
The EUIPO upheld Hahn’s request.
Renfe filed its appeal against the decision in Spanish, but the body informed the rail company that, under EU law, the appeal should be lodged in the language of the original case, i.e. English.
EUIPO informed Renfe that it had one month to submit a translated version of its appeal. But the Spanish outfit failed to do so and the property office decided its case was inadmissible.
Read more >>
The Sideways Dictionary uses analogies and metaphors to explain tricky technology jargon. Try it out with something like “2 Factor Authentication”:
Some tech terms are supported by multiple analogies and metaphors. You can scroll to read them all and get the general idea of what the whole thingamajig is all about. For instance, in the case of “2 Factor Authentication,” I like the second analogy more than the first. Go further and try something like “phishing” or “doxing”.
The Sideways Dictionary is meant to be a crowdsourced project. Google Jigsaw and the Washington Post started out with 75 words but are now inviting contributors to add more analogies.
Log in with your Facebook, Google, or Twitter credentials and see if you have the clarity to explain complicated technobabble in simple words. You can submit analogies and upvote or downvote the ones you like. All submissions are moderated by editors.
See more >>
Visit the Sideways Dictionary >>
The Icelandic language, seen by many as a source of identity and pride, is being undermined by the widespread use of English, both in mass tourism and in the voice-controlled artificial intelligence devices coming into vogue.
Linguistics experts, studying the future of a language spoken by fewer than 400,000 people in an increasingly globalized world, wonder if this is the beginning of the end for the Icelandic tongue.
Teachers are already sensing a change among students in the scope of their Icelandic vocabulary and reading comprehension.
Anna Jonsdottir, a teaching consultant, said she often hears teenagers speak English among themselves when she visits schools in Reykjavik, the capital.
She said 15-year-old students are no longer assigned a volume from the Sagas of Icelanders, the medieval literature chronicling the early settlers of Iceland. Icelanders have long prided themselves on being able to fluently read the epic tales originally penned on calfskin.
Most high schools are also waiting until senior year to read author Halldor Laxness, the 1955 winner of the Nobel Prize in literature, who rests in a small cemetery near his farm in West Iceland.
A number of factors combine to make the future of the Icelandic language uncertain. Tourism has exploded in recent years, becoming the country’s single biggest employer, and analysts at Arion Bank say one in two new jobs is being filled by foreign labor.
That is increasing the use of English as a universal communicator and diminishing the role of Icelandic, experts say.
The problem is compounded because many new computer devices are designed to recognize English but they do not understand Icelandic.
Icelandic ranks among the weakest and least-supported languages in terms of digital technology — along with Irish Gaelic, Latvian, Maltese and Lithuanian — according to a report by the Multilingual Europe Technology Alliance assessing 30 European languages.
Iceland’s Ministry of Education estimates about 1 billion Icelandic krona, or $8.8 million, is needed for seed funding for an open-access database to help tech developers adapt Icelandic as a language option.
Read full article >>
The folks over at SDL, along with some well-known experts in translation and the translation business, have put together a set of free resources for language professionals. Check out the collection of webinars and articles at http://www.translationzone.com/landing/translator/grow-your-freelance-translation-business.html
Decades ago, when David Costa first started to unravel the mystery of Myaamia, the language of the Miami tribe, it felt like hunting for an invisible iceberg. There were no sound recordings, no speakers of the language, no fellow linguists engaged in the same search—in short, nothing that could attract his attention in an obvious way, like a tall tower of ice poking out of the water. But with some hunting, he discovered astonishing remnants hidden below the surface: written documents spanning thousands of pages and hundreds of years.
For Daryl Baldwin, a member of the tribe that lost all native speakers, the language wasn’t an elusive iceberg; it was a gaping void. Baldwin grew up with knowledge of his cultural heritage and some ancestral names, but nothing more linguistically substantial. “I felt that knowing my language would deepen my experience and knowledge of this heritage that I claim, Myaamia,” Baldwin says. So in the early 1990s Baldwin went back to school for linguistics so he could better understand the challenge facing him. His search was fortuitously timed—Costa’s PhD dissertation on the language was published in 1994.
United by their work on the disappearing language, Costa and Baldwin are now well into the task of resurrecting it. So far Costa, a linguist and the program director for the Language Research Office at the Myaamia Center, has spent 30 years of his life on it. He anticipates it’ll be another 30 or 40 before the puzzle is complete and all the historical records of the language are translated, digitally assembled, and made available to members of the tribe.
Costa and Baldwin’s work is itself one part of a much larger puzzle: 90 percent of the 175 Native American languages that managed to survive the European invasion have no child speakers. Globally, linguists estimate that up to 90 percent of the planet’s 6,000 languages will go extinct or become severely endangered within a century.
“Most linguistic work is still field work with speakers,” Costa says. “When I first started, projects like mine [that draw exclusively on written materials] were pretty rare. Sadly, they’re going to become more and more common as the languages start losing their speakers.”
Despite the threat of language extinction, despite the brutal history of genocide and forced removals, this is a story of hope. It’s about reversing time and making that which has sunk below the surface visible once more. This is the story of how a disappearing language came back to life—and how it’s bringing other lost languages with it.
The Miami people traditionally lived in parts of Indiana, Illinois, Ohio, Michigan and Wisconsin. The language they spoke when French Jesuit missionaries first came to the region and documented it in the mid-1600s was one of several dialects that belong to the Miami-Illinois language (called Myaamia in the language itself, which is also the name for the Miami tribe—the plural form is Myaamiaki). Miami-Illinois belongs to a larger group of indigenous languages spoken across North America called Algonquian. Algonquian languages include everything from Ojibwe to Cheyenne to Narragansett.
Think of languages as the spoken equivalent of the taxonomic hierarchy. Just as all living things have common ancestors, moving from domain down to species, languages evolve in relation to one another. Algonquian is the genus, Miami-Illinois is the species, and it was once spoken by members of multiple tribes, who had their own dialects—something like a sub-species of Miami-Illinois. Today only one dialect of the language is studied, and it is generally referred to as Miami, or Myaamia.
Like cognates between English and Spanish (which are due in part to their common descent from the Indo-European language family), there are similarities between Miami and other Algonquian languages. These likenesses would prove invaluable to Baldwin and Costa’s reconstruction efforts.
But before we get to that, a quick recap of how the Miami people ended up unable to speak their own language. It’s a familiar narrative, but its commonness shouldn’t diminish the pain felt by those who lived through it.
The Miami tribe signed 13 treaties with the U.S. government, which led to the loss of the majority of their homelands. In 1840, the Treaty of the Forks of the Wabash required them to give up 500,000 acres (almost 800 square miles) in north-central Indiana in exchange for a reservation of equal size in the Unorganized Indian Territory, in what was soon to become Kansas. The last members of the tribe were forcibly removed in 1846, just eight years before the Kansas-Nebraska Act sent white settlers running for the territory. By 1867 the Miami people were sent on another forced migration, this time to Oklahoma, where a number of other small tribes, whose members spoke different languages, had been relocated. As the tribe shifted to English with each new migration, their language withered into disuse. By the 1960s there were no more speakers among the 10,000 individuals who could claim Miami heritage (members are spread across the country, but the main population centers are Oklahoma, Kansas and Indiana). When Costa first visited the tribe in Oklahoma in 1989, the discovery came as a shock.
“Most languages of tribes that got removed to Oklahoma did still have some speakers in the late 80s,” Costa says. “Now it’s an epidemic. Native languages of Oklahoma are severely endangered everywhere, but at that time, Miami was worse than most.”
When Baldwin came to the decision to learn more of the Miami language in order to share it with his children, there was little to draw on. Most of what existed was word lists he’d found through the tribe in Oklahoma and in his family’s personal collection. Baldwin’s interest coincided with a growing interest in the language among members of the Miami Tribe of Oklahoma, which produced its first unpublished Myaamia phrase book in 1997. Baldwin had lists of words taped around the home to help his kids engage with the language, teaching them animal names and basic greetings, but he struggled with pronunciation and grammar. That’s where Costa’s work came in.
“David can really be credited with discovering the vast amount of materials that we work with,” Baldwin says. “I began to realize that there were other community members who also wanted to learn [from them].”
Together, the men assembled resources for other Miami people to learn their language, with the assistance of tribal leadership in Oklahoma and Miami University in southern Ohio. In 2001 the university (which owes its name to the tribe) collaborated with the tribe to start the Myaamia Project, which took on a larger staff and a new title (the Myaamia Center) in 2013.
When Baldwin first started as director of the Myaamia Center in 2001, following completion of his Master’s degree in linguistics, he had an office just big enough for a desk and two chairs. “I found myself on campus thinking, ok, now what?” But it didn’t take him long to get his bearings. Soon he organized a summer youth program with a specific curriculum that could be taught in Oklahoma and Indiana, and he implemented a program at Miami University for tribal students to take classes together that focus on the language, cultural history and issues for Native Americans in the modern world. Baldwin’s children all speak the language and teach it at summer camps. He’s even heard them talk in their sleep using Myaamia.
To emphasize the importance of indigenous languages, Baldwin and others researched the health impact of speaking a native language. They found that among indigenous bands in British Columbia, those with at least 50 percent of the population fluent in the language saw one-sixth the rate of youth suicide compared with those with lower rates of spoken language. In the Southwestern U.S., tribes where the native language was widely spoken had smoking rates of only around 14 percent, compared with 50 percent in the Northern Plains tribes, where language usage is much lower. Then there are the results they saw at Miami University: while graduation rates for tribal students were 44 percent in the 1990s, since the implementation of the language study program that rate has jumped to 77 percent.
“When we speak Myaamia we’re connecting to each other in a really unique way that strengthens our identity. At the very core of our educational philosophy is the fact that we as Myaamia people are kin,” Baldwin says.
While Baldwin worked on sharing the language with members of his generation, and the younger generation, Costa focused on the technical side of the language: dissecting the grammar, syntax and pronunciation. While the grammar is fairly alien to English speakers—word order is unimportant to give a sentence meaning, and subjects and objects are reflected by changes to the verbs—the pronunciation was really the more complicated problem. How do you speak a language when no one knows what it should sound like? All the people who recorded the language in writing, from French missionaries to an amateur linguist from Indiana, had varying levels of skill and knowledge about linguistics. Some of their notes reflect pronunciation accurately, but the majority of what’s written is haphazard and inconsistent.
This is where knowledge of other Algonquian languages comes into play, Costa says. Knowing the rules Algonquian languages have about long versus short vowels and aspiration (making an h-sound) means they can apply some of that knowledge to Miami. But it would be an overstatement to say all the languages are the same; just because Spanish and Italian share similarities doesn’t mean they’re the same language.
“One of the slight hazards of extensively using comparative data is you run the risk of overstating how similar that language is,” Costa says. “You have to be especially careful to detect what the real differences are.”
Read more >>
Localization editor Connor Krammer has released a website that analyzes localization errors in the English version of the popular role-playing video game Persona 5. You can check out the site here: http://www.personaproblems.com/ (you can toggle the theme to save your eyes a bit).
Can you think of other translations that deserve their own website reviews like this?
Omniscien Technologies (formerly Asia Online) has announced the release of its new version of Language Studio™ with next-generation hybrid Neural Machine Translation (NMT) technology.
With this latest release of Language Studio™, Omniscien Technologies has combined both Statistical Machine Translation (SMT) and next-generation, machine-learning-based Neural Machine Translation technology in a single platform across all 548 supported language pairs.
“By offering a choice of technologies at the same price point in our secure Cloud, customers are free to choose the solution that best fits their specific use cases and requirements, guided by Omniscien Technologies’ experts where needed. We don’t believe in merely releasing the latest technology in support of the most recent development trends. We prefer to focus on quality, choice, compatibility, value and expert advice to ensure that our customers can achieve their goals,” says Andrew Rufener, CEO of Omniscien Technologies.
Prof. Philipp Koehn, Omniscien Technologies’ Chief Scientist, adds: “Neural Machine Translation is an evolving technology. In many cases NMT does very well. However, there are still a number of limitations with a pure NMT-only solution. With that in mind, during the development of the new version of Language Studio, our R&D teams focused on the inherent weaknesses in the existing NMT technologies that had not yet been solved by academia or commercial NMT solutions. While we will continue to make significant progress in the future, we have now solved the most significant challenges. In doing so, we have developed a unique hybrid NMT, SMT, Syntax and Rules based solution that provides unprecedented translation quality and control, and the new system is ready for production grade deployment now.”
See full press release >>
From the Microsoft Dynamics site:
The purpose of the forum is to give our partners and users the opportunity to give feedback on our existing terminology and translations for future products.
Our professional translators have defined the list you will see in the forum.
Participation is completely voluntary. You may participate as much or as little as you wish, and you may stop participating at any time.
- Follow the easy registration steps, then review the glossary and vote for the suggestions listed, or give your own suggestions.
- Don’t forget to come back and vote again! Suggestions may be added after your first visit, so check back before the program closes to vote on the latest ones.
- April 20–27: suggestions and voting accepted at any time during these dates.
The site works as a discussion forum where you can vote for the suggestions submitted, submit your own suggestion, or comment on other participants’ suggestions.
We have included a proposed translation for each of the source term. [sic]
See more >>
On March 22–24, 2017, fifty people came together in a former clandestine church in Amsterdam to rack their brains over how the translation industry will have changed by 2022. The story that came out can be read as an ordinary battle between man and machine, with a victory for the latter. But at a deeper layer, there is a fascinating intrigue with many threads about game-changing technologies and trends, and an outcome that is perplexing even for those of us who think we are behind the wheel today. Be careful what you wish for.
The translation companies of today will not be the same in 2022. We’ll see a split between translation tech and the creative networks, the data factories and the storytelling, the platforms and the boutiques, perhaps sometimes still operating under the same umbrella, but clearly separated in function. Does this story sound familiar? Perhaps you are thinking of the paradigm shift in the advertising and marketing industry. Once thought to be so creative, it had its own unique place in an environment of factory and office automation. But now, after a few decades of data storms, the business of the prestigious advertising agencies has changed fundamentally.
Marketing is automated and driven by data and clicks. The incredibly rapid rise of online ads, razor-sharp targeted marketing, and pay-per-view through companies like Google and Facebook has turned the landscape upside down. Legendary names like Saatchi & Saatchi, McCann Erickson and J. Walter Thompson give us sweet memories of the days of Mad Men, but the creative directors now all report to giant holding companies acting under dull names like Omnicom, WPP, Interpublic and Publicis.
Similar mergers and acquisitions are likely to happen in the relatively small translation industry in the coming five years, and a convergence with that other creative sector that has fallen victim to data storms – the advertising and marketing industry – would make a lot of sense.
But before we get there, let’s look at the story that developed in Amsterdam just a few weeks ago. The story is broken down into ten chapters, all interconnected, like in every good novel.
Read the full article >>
Stay informed about what is happening in the industry by sharing and discussing translation industry news stories.
I read the daily digest of ProZ.com translation news to get the essential part of what happens out there!
The translation news daily digest is my daily 'signal' to stop work and find out what's going on in the world of translation before heading back into the world at large! It provides a great overview that I could never get on my own.
I receive the daily digest, some interesting and many times useful articles!