Please hand in your report by April 30th, 11:59pm

The task for homework 6 is to use the sample sentences you chose for homework 1 and translate them with the interactive translation prediction tool Lilt.

## Preparing the source data and signing up for Lilt (20 points)

  1. Copy and paste each of the source sentences you used in HW1 into a text file with one sentence per line and save the file in UTF-8 encoding. Try not to look at the Microsoft machine translations or your edited English translations, so that they do not influence your translation here.
  2. Sign up for a 14-day trial of Lilt using your Georgetown Google account.
  3. If your source language was Japanese, please contact Lilt support and ask them to enable Japanese for your account.
  4. Create a new project in Lilt and upload your source text. Make sure that on import you are indeed getting 20 segments; if not, use the segment editing tool to ensure one sentence per segment. (The optional sketch after this list can help you check the file before uploading.)
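
If you would like to sanity-check the source file programmatically before uploading it, a minimal Python sketch is shown below. It is not required for the assignment; the filename `hw1_source.txt` is a placeholder for whatever you named your file.

```python
# Optional sketch: check the HW1 source file before uploading it to Lilt.
# "hw1_source.txt" is a placeholder filename.

path = "hw1_source.txt"

# Reading with encoding="utf-8" raises UnicodeDecodeError if the file was
# accidentally saved in a different encoding.
with open(path, encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

print(f"{len(sentences)} non-empty lines found in {path}")
if len(sentences) != 20:
    print("Expected 20 sentences (one per line) - fix the file, "
          "or use Lilt's segment editing tool after import.")
```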

## Interactive Translation Prediction (20 sentences at 3 points each = 60 points)

Without looking at the machine translations and edited translations from HW1, use Lilt to create translations for each of the 20 sentences. Make sure that you look at the results from Quality Assurance and address the issues it raises (where appropriate).

In your report include the following data and analysis:

  1. The source text
  2. The target text created with Lilt (your final edited translation)
  3. The target text you created in HW1
  4. An analysis of the differences between 2. and 3. Point out any errors you notice during the analysis in either of the translations (referring to the MQM categories, if possible), and clearly identify which translation you are referring to. Of course, the two translations can differ and still both be completely correct/appropriate. (The optional sketch after this list can help line up the three versions sentence by sentence.)
  5. Your judgement of which translation you prefer: Lilt (2.), PEMT (3.), or "No preference"
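
If it helps to keep the three versions aligned while writing the analysis, a small sketch like the one below can print them side by side. All three filenames are assumptions; each file is expected to hold the 20 sentences in the same order, one per line.

```python
# Optional sketch: print source, Lilt, and HW1 translations side by side
# to make the per-sentence comparison easier to write up.
# All three filenames are placeholders.

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

source = read_lines("hw1_source.txt")
lilt = read_lines("lilt_translations.txt")
hw1 = read_lines("hw1_translations.txt")

for i, (src, tgt_lilt, tgt_hw1) in enumerate(zip(source, lilt, hw1), start=1):
    print(f"--- Sentence {i} ---")
    print(f"Source: {src}")
    print(f"Lilt:   {tgt_lilt}")
    print(f"HW1:    {tgt_hw1}")
    print()
```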

## Summary (20 points)

Provide a summary of your impressions of translating with Lilt vs. post-editing the machine translation manually in HW1 (keeping in mind that we didn't use any CAT tool in HW1). Did using the interactive translation prediction tool allow you to produce better translations? Why? Were you more efficient? Why? Provide some pros and cons for each approach. How could research into machine translation help to improve either post-editing or interactive translation prediction? Is there perhaps a third approach that could help humans work with MT to produce high-quality translations?