
MT, Voice Recognition and the New Net
Thread poster: Ursula Peter-Czichi

Apr 20, 2002

With an eye on everything new in the area of machine translation, I found this comment. Maybe it is of interest to some other ProZ members as well. Some "idea people" are getting restless!



Is there any effort on the part of translators to be creative and maybe even profit from the development (not the use) of the new technology?

Any suggestions?



Here is the article:



http://forums.zdnet.com/group/zd.Tech.Update/it/itupdatetb.tpt/@thread@1198@F@1@D-,D@ALL/@article@1533?EXP=ALL&VWM=hr&ROS=1



\"Your article touches a very interesting topic. Speech recognition is failing not due to digital sound processing techniques and CPU power, but due to content understanding.



The following

"When something almost works, developers continue with the same thinking that got them to "almost," rather than starting over with new ideas. We are now stuck in a blind alley."

couldn't have been stated any better. We are using the same formula to try and fix the old problem. The problem with SR, translators and the like is the programming technique, not the CPU power.



You need an AI (artificial intelligence) to do translation or speech recognition. Why? Speech and written language are expressions of our minds, and they are not bound by rules or theorems. You can use rules to model them, but eventually they (our minds) will break beyond the model. That is what you perceive when you "train" voice recognition software: you try to couple the model (the software's variables) to reality (your speech). It will hold true for you, or for someone similar to you. As you go beyond the limits of the simulation, it becomes harder for the model to follow your speech.
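The single-speaker limit Gerardo describes can be caricatured with a toy sketch (entirely hypothetical, nothing like a real recognizer): a "model" built only from one speaker's examples matches that speaker perfectly but has nothing to say about a pronunciation it never saw.

```python
def train(samples):
    """Build a lookup 'model' from (pronunciation, word) training pairs."""
    return {pron: word for pron, word in samples}

def recognize(model, pron):
    """Return the word for a known pronunciation, or None if it is
    outside the model's training data."""
    return model.get(pron)

# Trained on speaker A's pronunciations only.
speaker_a = [("toh-MAY-toh", "tomato"), ("poh-TAY-toh", "potato")]
model = train(speaker_a)

print(recognize(model, "toh-MAY-toh"))  # speaker A: recognized
print(recognize(model, "toh-MAH-toh"))  # speaker B: outside the model
```

Real systems interpolate rather than do exact lookup, but the failure mode is the same in kind: beyond the data the model was coupled to, recognition degrades.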



Obviously, as the models become better, they will be able to understand more people. Still, the models should hit a wall trying to understand people from one geographical region to another. That even happens to us: not all English is easily understood by everyone everywhere. To break beyond the single-user approach, the program must interact with more than one user. It must interact with the whole world. The base system should be programmed to learn and to develop its own rules as it goes along. What am I talking about?



With all the JDC, Sun Open Net Environment, .NET, SOAP, WSDL, UDDI and so on, we should focus on developing applications which harness the real virtues of interconnectivity. We are being spoon-fed the same old apps all over again, only now we can use them anywhere. That is great, no doubt about it, but there are more possibilities. Imagine speech recognition or translation software which is given basic rules to learn and then develops its own criteria for translating by interacting on the internet. By "listening" to millions of users, it finally develops the criteria to do it right. The key here is no strict rules and an open, dynamic model which is always learning. It would break the limitations of shrink-wrapped versions.
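The "open dynamic model" idea could be sketched minimally like this (a hypothetical illustration, not any real system): instead of shipping fixed rules, keep per-phrase vote counts and let a stream of user observations decide the current consensus translation.

```python
from collections import Counter, defaultdict

class OnlineTranslator:
    """Toy sketch of a model with no strict rules: it only accumulates
    observations from users and reports the current consensus."""

    def __init__(self):
        # phrase -> vote counts over candidate translations
        self.votes = defaultdict(Counter)

    def observe(self, phrase, translation):
        """One user interaction nudges the model."""
        self.votes[phrase][translation] += 1

    def translate(self, phrase):
        """Return the current consensus translation, or None if unknown."""
        if not self.votes[phrase]:
            return None
        return self.votes[phrase].most_common(1)[0][0]

mt = OnlineTranslator()
mt.observe("guten Tag", "good day")
mt.observe("guten Tag", "hello")
mt.observe("guten Tag", "good day")
print(mt.translate("guten Tag"))  # consensus so far: "good day"
```

The point of the sketch is only that the model is never finished: every interaction shifts the counts, so the "rules" are whatever the aggregate of users has taught it so far.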



You mention Google and its problems. I agree with you that it has flaws. It is a great tool for translating pages, and the folks at Google should be proud, very proud; it was a smart idea. Yet it doesn't get feedback. We, the users, can't tell Google how to get better. Imagine if we could: what a quick learning curve they would have.
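The missing feedback channel could be pictured like this (again a hypothetical toy, not Google's actual design): a shipped base dictionary plus a report mechanism whose corrections override the baseline.

```python
class FeedbackTranslator:
    """Toy sketch: a fixed base dictionary plus user-reported corrections
    that take precedence over it."""

    def __init__(self, base):
        self.base = dict(base)   # the shrink-wrapped dictionary
        self.fixes = {}          # user-reported corrections

    def translate(self, phrase):
        """Corrections win over the shipped entry."""
        return self.fixes.get(phrase, self.base.get(phrase))

    def report(self, phrase, better):
        """The feedback channel: a user tells the system a better rendering."""
        self.fixes[phrase] = better

# German "Gift" means "poison"; the shipped dictionary gets it wrong.
t = FeedbackTranslator({"Gift": "gift"})
t.report("Gift", "poison")
print(t.translate("Gift"))  # after feedback: "poison"
```

A production system would aggregate and vet reports rather than trust each one, but the sketch shows the learning loop the comment says is absent.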



The two titans clashing for net dominance, Java and C#, are (I believe, and correct me if I'm wrong) targeted at developing the same type of programs, but in a distributed manner. We need a programming approach that will allow us to develop cognitive programs, or we'll be stuck with an internet full of apps which we will have trouble finding, or trouble determining which one is useful. We will relive the days when search engines were born out of the need to sort out the data on the internet. As a matter of fact, we haven't solved this issue for information on the net, much less for apps on the net.



The bottom line is that we need to start thinking of ways to solve these problems (speech recognition and translation) by new means. I think the internet allows for a great many opportunities in terms of AI and learning programs. I think the people involved in these areas should look to take advantage of this.



Regards,

Gerardo"



Hope this is of interest to you,

Ursula

