The concept of fully automated machine translation was originally conceived in the 17th century but was only pioneered in the 1950s. Whereas centuries ago it was merely a philosophical idea, over the last few decades much progress has been made in turning it into reality and in underpinning its practical application.

The practical motivation is plain enough: the desire for less human involvement in translating texts is precisely what triggered the introduction of machine translation as an option. Human translation can be summarized in the following two steps:

  1. “Decoding” the meaning of the source text;
  2. Rendering the contents into the target language.
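The two steps above can be sketched as a pipeline of two hypothetical functions (the helpers and their toy internals are purely illustrative, not a real MT system):

```python
def decode(source_text: str) -> dict:
    """Step 1: 'decode' the source text into some representation of
    its meaning. Here a toy placeholder stands in for the deep
    linguistic analysis a human translator performs."""
    return {"tokens": source_text.split()}

def render(meaning: dict, target_lang: str) -> str:
    """Step 2: render the decoded meaning into the target language.
    A real system would apply grammar, semantics and idiom here."""
    return f"[{target_lang}] " + " ".join(meaning["tokens"])

def translate(source_text: str, target_lang: str) -> str:
    # The whole procedure is the composition of the two steps.
    return render(decode(source_text), target_lang)
```

The point of the sketch is structural: everything hard about translation hides inside `decode` and `render`.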

This seemingly simple procedure actually conceals a profound and complicated cognitive process. To adequately decode and transfer the contents, one needs in-depth knowledge of grammar, semantics, syntax, idiomatic expressions, etc. This, of course, is a stumbling block for machine translation, as computers cannot process information as well as the human brain. The main challenge for people developing machine translation is to build a system that processes information well enough that no one could tell its output apart from a human translation. With modern technology this remains out of reach: it is quite difficult to produce a machine translation that comes even remotely close to the quality of a human one. Nonetheless, various approaches deliver promising results.

  1. Rule-based machine translation. This approach works best for very similar language pairs, e.g. Bulgarian/Serbian, Swedish/Norwegian, Finnish/Estonian. It produces relatively adequate translations by exploiting the matching linguistic and grammatical rules and forms of the two languages.
  2. Interlingua method. Perhaps the most commonly used method: the source text is analyzed and transferred into an “interlingua” – an intermediate representation built from the grammatical and other rules shared by the greatest number of languages. The target text is then generated from this intermediate representation.
  3. Dictionary-based machine translation. No deeper processing of the information takes place: rather than determining the contextual meaning of a given word, we simply substitute the meaning listed in the dictionary.
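The dictionary-based method is the simplest to illustrate. Below is a minimal sketch: each word is replaced with whatever the bilingual dictionary lists, with no context analysis at all (the word list is made up for illustration):

```python
# Toy English-to-German word dictionary; entries are illustrative only.
DICTIONARY = {
    "good": "gut",
    "morning": "Morgen",
    "friend": "Freund",
}

def dictionary_translate(sentence: str) -> str:
    """Translate word by word from the dictionary.
    Unknown words are passed through unchanged, a common fallback."""
    words = sentence.lower().split()
    return " ".join(DICTIONARY.get(w, w) for w in words)
```

The weakness is immediately visible: a word with several senses always gets the single dictionary entry, which is exactly the ambiguity problem discussed above.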

There are also other approaches, for instance the statistical and the hybrid approach, but they are less widespread. Different methods prove useful for different types of translation. Take the statistical approach: it works best for documents and legal terminology as a whole, as it preserves the formal register such texts require.
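At its simplest, the statistical idea can be sketched like this: count how often each source word is paired with each target word in an aligned corpus, then translate by picking the most frequent pairing. The tiny corpus below is invented for illustration; real systems use millions of sentence pairs and far richer models:

```python
from collections import Counter

# Hypothetical word-aligned corpus (source word, target word).
ALIGNED_PAIRS = [
    ("contract", "Vertrag"),
    ("contract", "Vertrag"),
    ("contract", "Kontakt"),   # a noisy alignment
    ("court", "Gericht"),
]

def train(pairs):
    """Count target-word frequencies for each source word."""
    counts: dict[str, Counter] = {}
    for src, tgt in pairs:
        counts.setdefault(src, Counter())[tgt] += 1
    return counts

def translate_word(model, word):
    """Pick the most frequently aligned target word;
    fall back to the source word if it was never seen."""
    if word not in model:
        return word
    return model[word].most_common(1)[0][0]
```

Because the model simply reflects the corpus it was trained on, feeding it formal legal texts yields formal output, which is why the approach suits that domain.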

Yet another challenge is the translation of non-standard language. Translating dialect and slang words and phrases can go badly wrong unless everything possible is done to avoid ambiguity. None of these measures, however, is enough to make machine translation sound like human translation, no matter how faithfully the correct meaning is preserved.

Despite the apparent superiority of human intellectual work over machine writing and translation, the successful application of MT opens up new technological horizons. Easing this work demands new ideas in programming and in understanding language as a whole – something machines cannot yet do. When we work that out, we will be one step closer to genuine artificial intelligence.