Human Interpreter Beats Translation Programs Hands Down

  • By Park Keon-hyung

    February 22, 2017 13:22

A face-off between auto-translation programs and professional human translators on Tuesday ended in a resounding victory for the human race.

Proto-artificial intelligence -- essentially computer programs capable of "learning" -- has emerged victorious in contests against humans in chess, current affairs trivia, Go and poker. But AI has still not evolved enough to surpass the linguistic and emotional intelligence of humans.

    The contest, which was sponsored by Sejong Cyber University and the International Interpretation and Translation Association of Korea, aimed to gauge just how close auto-translate programs that learn from their errors have come to human translation skills.

    Four professional translators faced off against Google Translate, Systran's translation program and Naver's papago app.

The challenge was not enormous. The Korean-to-English test consisted of an excerpt from the novel "Mothers and Daughters" and a newspaper column, while the English-to-Korean test consisted of a newspaper column on Apple founder Steve Jobs and an article from Fox News.

    Kwak Joong-chol at Hankuk University of Foreign Studies, who handled the selection and judged the results, said, "We chose pieces of writing that had never been translated before and focused on assessing how smoothly the programs were able to comprehend the feel and metaphoric meaning of sentences."

    Translators were given 50 minutes for each piece. Organizers claimed the pieces contained specialized terminology, which perhaps made conditions more favorable for AI.

    But the computers flunked the test. While the humans scored on average 24 out of 30 points in the Korean-to-English challenge, the programs scored just eight to 13. And humans scored on average 25 points in English-to-Korean and the machines only nine to 15.

    "We noticed a common failure of the computer programs to put the words in the proper order. They just lined them in linear fashion." 

    However, when it came to business writing, which includes a lot of numbers and stock phrases, these programs occasionally got a sentence perfectly right.

    There were also big gaps between different programs. Google Translate was far more accurate than Systran and papago. Kwak said the reason is that vastly more people use Google, so the data input it can learn from is much bigger.

    Auto-translation programs have come under the spotlight recently. Shin Suk-hwan, vice president of Saltlux, which produces computer-assisted translation programs, said, "We were able to confirm that it will take more time to develop AI capable of understanding human languages. Words and expressions encompass emotions and feelings, so it's uncertain whether computers will ever reach a level of completely comprehending them."

    • Copyright © Chosunilbo & Chosun.com