OmegaT and Asian languages
Thread poster: Pierret Adrien

Pierret Adrien  Identity Verified
China
Local time: 11:58
Chinese to French
+ ...
Mar 20, 2013

Hello everyone,

So I just got started with OmegaT and Chinese (Simplified), and quickly discovered that Fuzzy Matches and Glossary don't seem to work as they should.

Nothing ever shows up in either pane, even after adding entries to the glossary.
I created a test file with sentences of varying similarity and length to test the software's behaviour, and it seems that only 100% matches are correctly detected and replaced.

I tried running the software with AppLocale (the Windows launcher for non-Unicode applications), with no change. I checked the project files; the TMX files and the glossary file are correctly created and filled. So I don't know where the problem could be. Maybe a character display issue?


 

Didier Briel  Identity Verified
France
Local time: 05:58
Member (2007)
English to French
+ ...
Use a tokenizer Mar 20, 2013

Pierret Adrien wrote:
So I just got started with OmegaT and Chinese (Simplified), and quickly discovered that Fuzzy Matches and Glossary don't seem to work as they should.

Nothing ever shows up in either pane, even after adding entries to the glossary.
I created a test file with sentences of varying similarity and length to test the software's behaviour, and it seems that only 100% matches are correctly detected and replaced.

I tried running the software with AppLocale (the Windows launcher for non-Unicode applications), with no change. I checked the project files; the TMX files and the glossary file are correctly created and filled. So I don't know where the problem could be. Maybe a character display issue?

For glossaries, it could be an encoding issue. You could test with an English or French source document and glossary to check that everything is working as it should.
(As long as you correctly use UTF-8 glossaries, not system-encoded ones, everything should be fine.)
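If you want to rule encoding out, the round trip is easy to check outside OmegaT. The sketch below (file name and entries are invented for illustration) relies on the glossary format being plain tab-separated text, with files carrying the .utf8 extension read as UTF-8:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class GlossaryCheck {
    public static void main(String[] args) throws Exception {
        // An OmegaT glossary entry is plain tab-separated text:
        // source term TAB target term [TAB comment]
        String entry = "你好\tbonjour\tgreeting\n";

        // Writing with an explicit UTF-8 charset avoids the
        // system-encoding pitfall described above. The file name
        // here is made up for the example.
        Path glossary = Paths.get("glossary.utf8");
        Files.write(glossary, entry.getBytes(StandardCharsets.UTF_8));

        // Reading it back as UTF-8 round-trips the Chinese intact:
        String readBack = new String(Files.readAllBytes(glossary),
                StandardCharsets.UTF_8);
        System.out.println(readBack.startsWith("你好")); // true
    }
}
```

If the terms look garbled when read back this way, the file was written in a system encoding rather than UTF-8.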

For fuzzy matches, it's very unlikely.

By default, OmegaT uses the Java tokenizer, which can only detect words when they are separated by spaces. Of course, that doesn't work for CJK languages.

That's why we also provide dedicated tokenizers:
http://www.omegat.org/en/howtos/tokenizer.php
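A quick way to see why the default space-based approach fails on Chinese (the sentences are chosen purely for illustration): a whitespace split gives a sensible word count for English, but treats an entire Chinese sentence as a single token, so no partial matches are ever possible:

```java
public class WhitespaceDemo {
    public static void main(String[] args) {
        // English words are separated by spaces, so a whitespace
        // split finds the individual words:
        String english = "the quick brown fox";
        System.out.println(english.split("\\s+").length); // 4 tokens

        // Chinese is written without spaces between words, so the
        // same split returns the whole sentence as one "token":
        String chinese = "我刚开始用这个软件";
        System.out.println(chinese.split("\\s+").length); // 1 token
    }
}
```

This is why a language-aware tokenizer (such as the Lucene-based ones linked above) is needed for CJK source text.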

For Chinese, LuceneSmartChineseTokenizer seems to be the best one.
Don't forget to also set a target tokenizer, so that you don't run into spellchecking issues with European target languages.
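For OmegaT versions of that era, the tokenizers were selected with launch parameters. A hypothetical invocation might look like the following; the exact flag names and tokenizer class names depend on your OmegaT and plugin versions, so check the howto page linked above before copying this:

```shell
# Assumed example only -- verify class names against the tokenizer howto:
java -jar OmegaT.jar \
    --ITokenizer=org.omegat.plugins.tokenizer.LuceneSmartChineseTokenizer \
    --ITokenizerTarget=org.omegat.plugins.tokenizer.LuceneFrenchTokenizer
```

The first flag sets the source-language (Chinese) tokenizer; the second sets the target-language one, which is what keeps spellchecking working in the European target language.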

Didier


 

Pierret Adrien  Identity Verified
China
Local time: 11:58
Chinese to French
+ ...
TOPIC STARTER
You were right Mar 20, 2013

I must have missed something the first time. I set up the tokenizer launcher again according to the instructions, and now both fuzzy matches and the glossary work.

Thank you, and sorry for the trouble.


 

