As global interaction increases, the translation market is growing rapidly. Over the decade from 2008 to 2018, estimated GDP growth was 2.1% per year, while the translation market was estimated to grow at 4.7% per year. The language services market as a whole rose from $23.9 billion in 2009 to $38.2 billion in 2015, a steady growth rate of 8% per year. Combined with the spread of the internet, human translation capacity alone struggles to meet the market’s needs, hence the rise of machine translation, or automatic translation: as of April 2016, Google Translate was translating up to one hundred billion words per day. Machine translation has since evolved from early rule-based systems to statistical machine translation; in 2007, Google Translate replaced its older rule-based engine with its own statistical system, opening the era of statistical machine translation. Statistical machine translation can a) automatically learn translation rules from bilingual parallel corpora and b) automatically adjust parameters to fit the dataset. However, if errors occur during the preparatory sentence-alignment step and mismatched bilingual sentence pairs are fed in, engine quality degrades. Furthermore, automatic parameter adjustment still depends on how well the engine fits the test set. The core of both a) and b) is thus assessing how well a target text fits its source text, in other words, automatic evaluation of translation.

To date, the standard method of automatic translation evaluation is Bilingual Evaluation Understudy (BLEU). BLEU considers only the n-gram precision of the text, so the BLEU score of a correctly translated sentence can be extremely low, or even zero, if it differs lexically from the reference sentence; conversely, a completely wrong, ungrammatical sentence can score relatively high if it contains a few correct keywords. BLEU’s evaluation method therefore cannot accurately assess translation quality.
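To make this failure mode concrete, the following self-contained sketch computes the clipped (modified) n-gram precision that BLEU aggregates over n = 1..4, comparing two invented hypothesis sentences against a single reference. The sentences are illustrative only, not drawn from our experiments:

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: the quantity BLEU aggregates over n = 1..4."""
    cand_counts = Counter(ngrams(candidate, n))
    ref_counts = Counter(ngrams(reference, n))
    clipped = sum(min(count, ref_counts[g]) for g, count in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

reference = "the cat sat on the mat".split()
paraphrase = "a feline rested upon the rug".split()  # right meaning, different words
word_salad = "the cat the mat sat mat on".split()    # wrong, but shares keywords

for name, cand in [("paraphrase", paraphrase), ("word salad", word_salad)]:
    p1 = modified_precision(cand, reference, 1)
    p2 = modified_precision(cand, reference, 2)
    print(f"{name}: 1-gram precision {p1:.2f}, 2-gram precision {p2:.2f}")
# The semantically correct paraphrase scores far lower (0.17 / 0.00) than the
# garbled word salad (0.86 / 0.33), exactly the weakness described above.
```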
Hence, the first aim of this thesis is to propose a semantics-oriented automatic translation evaluation method based on artificial neural networks, one that can evaluate translations while accommodating lexical differences. The thesis proposes an original training method: we build a dataset that combines human translations with sentences generated by a stronger translation engine and by a weaker one, and train an artificial neural network (ANN) on this dataset as a quality classifier. Experiments show that the resulting system indeed evaluates translations on semantics rather than lexis.
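Below is a minimal sketch of this training setup, assuming each source sentence and candidate translation has already been mapped to a fixed-size semantic embedding by some upstream encoder (not shown). The network shape, dimensions, and PyTorch framing are illustrative assumptions, not the exact implementation used in the thesis:

```python
import torch
import torch.nn as nn

EMB_DIM = 256  # assumed embedding size; not fixed by the thesis here

class QualityClassifier(nn.Module):
    """Small feed-forward network scoring a (source, candidate) pair."""
    def __init__(self, emb_dim=EMB_DIM, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, src_emb, cand_emb):
        # Concatenate the two sentence embeddings and emit a quality logit.
        return self.net(torch.cat([src_emb, cand_emb], dim=-1)).squeeze(-1)

model = QualityClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for embedded (source, candidate) pairs. Label 1:
# human translation or strong-engine output; label 0: weak-engine output.
src = torch.randn(32, EMB_DIM)
cand = torch.randn(32, EMB_DIM)
labels = torch.randint(0, 2, (32,)).float()

optimizer.zero_grad()
loss = loss_fn(model(src, cand), labels)
loss.backward()
optimizer.step()
```

Because the positive class mixes human translations with strong-engine output, the classifier is pushed to separate candidates by adequacy rather than by surface overlap with any single reference.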
Secondly, since machine translation often falls short of human quality, machine output usually requires human post-editing; when the machine output is very poor, post-editing may take as long as, or longer than, pure human translation. This thesis therefore proposes an Interactive Machine Translation process in which human and machine co-create the target text, reducing or even eliminating the time required for post-editing. Our method first has the engine generate N-best translations of the source sentence and presents them to the user in succinct form; once the user makes a change to the chosen target sentence, the system regenerates the rest of the sentence by looking up N-best hypotheses consistent with that change, as in the first sketch below.

This thesis also proposes a new approach to domain-specific translation. The same English word can carry different meanings in different domains: in general corpora, “movable” typically means “capable of being moved,” but in the legal domain it means “property or possessions not including land or buildings.” Although a domain-specific engine can be trained on a domain-specific parallel corpus, an individual domain may offer no more than three million sentences, too few to train a mature engine. Our proposed method trains translation models on a large general-domain corpus, then extracts bilingual terminology databases from the smaller in-domain corpus; during preprocessing we replace technical terms with special tags, as in the second sketch below. Thus, even when the specific domain provides insufficient corpora, the engine can still produce satisfactory domain-specific translations. Our experiment demonstrates that the system renders “movable” differently when the target domain changes.

Finally, this research constructs a working commercial translation web platform that integrates the methods above.
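The first sketch illustrates the N-best lookup step of the interactive process: when the user edits the beginning of the chosen translation, the system promotes the highest-ranked hypothesis consistent with the edited prefix. Function names and sentences are hypothetical; a production system would also back off to re-decoding under the prefix constraint when no hypothesis matches.

```python
def complete_from_nbest(nbest, user_prefix):
    """Return the highest-ranked N-best hypothesis consistent with the
    user's edited prefix; fall back to the prefix itself if none matches."""
    for hyp in nbest:  # nbest is assumed sorted best-first by model score
        if hyp.startswith(user_prefix):
            return hyp
    return user_prefix

# Hypothetical N-best list for one source sentence, best first.
nbest = [
    "the contract takes effect on signature",
    "the contract takes effect upon signing",
    "the agreement takes effect upon signing",
]

# The user keeps only "the agreement"; the system completes the rest from
# the highest-ranked compatible hypothesis.
print(complete_from_nbest(nbest, "the agreement"))
# -> "the agreement takes effect upon signing"
```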
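The second sketch illustrates the term-tagging step of the domain-specific method: known terms from the extracted bilingual terminology database are replaced with placeholder tags before decoding by the general-domain engine, then restored afterwards. The term table, tag format, and the Chinese legal gloss for “movable” are illustrative assumptions:

```python
import re

# Hypothetical terminology extracted from a small law-domain corpus:
# source term -> preferred in-domain target translation.
LAW_TERMS = {
    "movable": "动产",  # legal sense: property not including land or buildings
}

def tag_terms(sentence, term_table):
    """Replace known domain terms with placeholder tags before decoding,
    returning the tagged sentence and a tag -> target-term mapping."""
    mapping = {}
    for i, (src_term, tgt_term) in enumerate(term_table.items()):
        pattern = r"\b" + re.escape(src_term) + r"\b"
        if re.search(pattern, sentence):
            tag = f"<TERM{i}>"
            sentence = re.sub(pattern, tag, sentence)
            mapping[tag] = tgt_term
    return sentence, mapping

def untag_terms(translation, mapping):
    """Substitute each tag in the decoded output with its in-domain term."""
    for tag, tgt_term in mapping.items():
        translation = translation.replace(tag, tgt_term)
    return translation

tagged, mapping = tag_terms("the movable shall be transferred", LAW_TERMS)
print(tagged)  # -> "the <TERM0> shall be transferred"
# The tag passes through the general-domain engine unchanged; after decoding,
# untag_terms(decoded_output, mapping) restores the in-domain term.
```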