Machine Translation: Past, Present and Future

Preface

Now is the time to analyze what has happened in the fifty years since machine translation began, to review the present situation, and to speculate on what the future may bring. Progress in the basic processes of computerized translation has not been as striking as developments in computer technology and software. There is still much scope for improving the linguistic quality of machine translation output, which developments in both rule-based and corpus-based methods may, it is hoped, bring about. A greater impact on the future machine translation scenario will probably come from the expected huge increase in demand for on-line real-time communication in many languages, where quality may be less important than accessibility and usability.

Machine Translation: The First 40 Years, 1949-1989

About fifty years ago, Warren Weaver, a former director of the Division of Natural Sciences at the Rockefeller Foundation (1932-55), wrote his famous memorandum which launched research on machine translation, at first primarily in the United States but, before the end of the 1950s, throughout the world.

In those early days and for many years afterwards, computers were quite different from those we have today. They were very expensive machines housed in large rooms with reinforced flooring and ventilation systems to reduce excess heat. They required a large team of maintenance engineers and a dedicated staff of operators and programmers. Most of the work they did was in fact mathematical, either directly for military institutions or for university departments of physics and applied mathematics with strong links to the armed forces. It was perhaps natural in these circumstances that much of the earliest work on machine translation was supported, directly or indirectly, by military or intelligence funds, and was intended for use by such organizations – hence the emphasis in the United States on Russian-to-English translation, and in the Soviet Union on English-to-Russian translation.

Although machine translation attracted a great deal of funding in the 1950s and 1960s, particularly once the arms and space races began in earnest after the launch of the first satellite in 1957 and the first space flight by Gagarin in 1961, the results of this period of activity were disappointing. In the United States, funding was all but withdrawn after the publication of the shattering ALPAC (Automatic Language Processing Advisory Committee) report (1966), which concluded that the United States had no need of machine translation even if the prospect of reasonable translations were realistic – which then seemed unlikely. The authors of the report had compared the quality of the output produced by current systems unfavourably with the artificially high quality of the first public demonstration of machine translation in 1954 – the Russian-English program developed jointly by IBM and Georgetown University. The linguistic problems encountered by machine translation researchers had proved to be much greater than anticipated, and progress had been painfully slow. It should be mentioned that just over five years earlier Joshua Bar-Hillel, one of the first enthusiasts for machine translation who had since become disillusioned with it, had published his critical review of machine translation research in which he rejected the implicit aim of fully automatic high-quality translation (FAHQT); indeed, he provided a proof of its "non-feasibility". The writers of the ALPAC report agreed with this diagnosis and recommended that research on fully automatic systems should stop and that attention should be directed instead to lower-level aids for translators.

For some years after ALPAC, research continued on much-reduced funding. By the mid-1970s, some success could be shown: in 1970 the US Air Force began to use the Systran system for Russian-English translations; in 1976 the Canadians began public use of weather reports translated by the Meteo sublanguage machine translation system; and the Commission of the European Communities adopted the English-French version of Systran to help with its heavy translation burden – soon followed by the development of systems for other European languages. In the 1980s, machine translation emerged from its post-ALPAC depression: activity began again all over the world – most notably in Japan – with new ideas for research (particularly on knowledge-based and interlingua-based systems), new sources of financial support (the European Union, computer companies), and, in particular, the appearance of the first commercial machine translation systems on the market. Initially, however, the renewed activity was still focused almost exclusively on automatic translation with human assistance, both before (pre-editing), during (interactive resolution of problems) and after (post-editing) the translation process itself. The development of computer-based aids or tools for use by human translators remained relatively neglected – despite the explicit requests of translators.

Nearly all research activity in the 1980s was devoted to exploring methods of linguistic analysis and generation within the traditional rule-based transfer and interlingua approaches (AI-type knowledge-based systems representing the more innovative tendency). The needs of translators were left to commercial interests: software for terminology management became available, and ALPNET produced a series of translator tools during the 1980s – among them, it may be noted, an early version of a "translation memory" (a bilingual database of previously translated segments).

Machine Translation in the 1990s

The real emergence of translator aids came in the early 1990s with the "translator workstation" – programs such as the Trados Translator Workbench, IBM Translation Manager 2, STAR Transit and Eurolang Optimizer – which combined sophisticated text processing and publishing software, terminology management and translation memories.
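The translation memory at the heart of these workstations is, in essence, a bilingual database of previously translated segments queried by fuzzy matching. The sketch below is a minimal illustration of that idea only – it reflects none of the products named above, and the stored segment pair and the 0.7 similarity threshold are invented for the example.

```python
# Minimal sketch of a translation memory: a bilingual database of
# previously translated segments, queried by fuzzy matching.
# The stored segment and the 0.7 threshold are illustrative only.
from difflib import SequenceMatcher


class TranslationMemory:
    def __init__(self):
        self.segments = []  # list of (source, target) pairs

    def add(self, source, target):
        """Store a previously translated segment pair."""
        self.segments.append((source, target))

    def lookup(self, query, threshold=0.7):
        """Return the stored translation whose source segment best matches
        the query, provided the similarity reaches the threshold."""
        best_score, best_pair = 0.0, None
        for source, target in self.segments:
            score = SequenceMatcher(None, query.lower(), source.lower()).ratio()
            if score > best_score:
                best_score, best_pair = score, (source, target)
        if best_pair is not None and best_score >= threshold:
            return {"source": best_pair[0], "translation": best_pair[1],
                    "score": best_score}
        return None


tm = TranslationMemory()
tm.add("Press the red button to stop the machine.",
       "Appuyez sur le bouton rouge pour arrêter la machine.")

hit = tm.lookup("Press the red button to start the machine.")
if hit is not None:
    print(f"{hit['score']:.0%} match: {hit['translation']}")
```

The translator then revises the proposed translation rather than starting from scratch, which is why such memories pay off most on repetitive technical documentation.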

In the early 1990s, research on machine translation was reinforced by the arrival of corpus-based methods, especially by the introduction of statistical methods ("IBM Candide") and of example-based translation. Statistical (stochastic) techniques have brought release from the increasingly evident limitations and inadequacies of previous, exclusively rule-based (often syntax-oriented) approaches. Problems of disambiguation, avoidance of repetition and more idiomatic generation have become more tractable with corpus-based techniques. On their own, statistical methods are no more the complete answer than rule-based methods were, but there are now prospects of improved output quality which did not seem attainable fifteen years ago. As many observers have indicated, the most promising approaches will probably integrate rule-based and corpus-based methods. Even outside research environments integration is already evident: many commercial machine translation systems now incorporate translation memories, and many translation memory systems are being enriched by machine translation methods.
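The statistical approach pioneered by the IBM Candide group rests on the noisy-channel formulation: given a source sentence f, choose the target sentence e that is both fluent (scored by a language model estimated from monolingual target-language text) and faithful to the source (scored by a translation model estimated from a bilingual corpus). In the standard notation of that literature (the symbols here follow that convention rather than anything in this article):

```latex
% Noisy-channel decision rule: P(f) does not depend on e, so it can be dropped.
\hat{e} \;=\; \arg\max_{e} P(e \mid f)
        \;=\; \arg\max_{e} \; \underbrace{P(e)}_{\text{language model}}
                           \, \underbrace{P(f \mid e)}_{\text{translation model}}
```

One natural way of integrating the two traditions, as suggested above, is to let corpus-derived scores of this kind rank the alternative readings that a rule-based analyser cannot resolve on its own.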

The main feature of the 1990s has been the rapid increase in the use of machine translation and translation tools. The globalization of commerce and information is placing ever-increasing demands on the provision of translations. This means continuing (perhaps even accelerating) growth in the use, by multinational companies and translation services, of systems to assist in the production of good-quality documentation in many languages – whether by machine translation and translation memory systems, by multilingual document authoring systems, or by combinations of both. Until recently, the production of translations was seen as an essentially self-contained activity. For large users, the appearance of translation systems has stimulated the integration of translation and documentation (technical writing and publishing) processes. Translation is now seen as one stage in the processes of communication and information transfer. Future products of this kind will not be separate, independent machine translation systems, translator workstations or translation tools, but multilingual documentation software complexes combining document creation, translation and revision, document archiving, and information analysis, retrieval and extraction, and so on, in order to satisfy the specific needs of companies.

