A fluency error categorization scheme to guide automated machine translation evaluation

D Elliott, A Hartley, E Atwell - Machine Translation: From Real Users to …, 2004 - Springer
Existing automated MT evaluation methods often require expert human translations. These
are produced for every language pair evaluated and, due to this expense, subsequent …

Rationale for a multilingual corpus for machine translation evaluation

D Elliott, A Hartley, ES Atwell - Proceedings of CL2003 …, 2003 - eprints.whiterose.ac.uk
An overview of research to date in human and automated machine translation (MT)
evaluation (Elliott 2002) points to a growing interest in the investigation of new automated …

[PDF] Automatic Ranking of MT Systems

M Rajman, A Hartley - LREC, 2002 - academia.edu
In earlier work, we succeeded in automatically predicting the relative rankings of MT systems
derived from human judgments on the Fluency, Adequacy or Informativeness of their output …

A fine-grained evaluation framework for machine translation system development

N Correa - Proceedings of Machine Translation Summit IX: Papers, 2003 - aclanthology.org
Intelligibility and fidelity are the two key notions in machine translation system evaluation,
but do not always provide enough information for system development. Detailed information …

System description: A highly interactive speech-to-speech translation system

M Dillinger, M Seligman - Conference of the Association for Machine …, 2004 - Springer
Spoken Translation, Inc. (STI) of Berkeley, CA has developed a commercial system
for interactive speech-to-speech machine translation designed for both high accuracy and …

[PDF] Work-in-Progress project report: CESTA-Machine Translation Evaluation Campaign

A Hartley, A Popescu-Belis - access.archive-ouverte.unige.ch
CESTA, the first European Campaign dedicated to MT Evaluation, is a project labelled by
the French Technolangue action. CESTA provides an evaluation of six commercial and …

A statistical analysis of automated MT evaluation metrics for assessments in task-based MT evaluation

CR Tate - Proceedings of the 8th Conference of the Association …, 2008 - aclanthology.org
This paper applies nonparametric statistical techniques to Machine Translation (MT)
Evaluation using data from a large scale task-based study. In particular, the relationship …

Work-in-progress project report: CESTA-machine translation evaluation campaign

W Mustafa El Hadi, M Dabbadie, I Timimi… - Proceedings of the …, 2004 - dl.acm.org
CESTA, the first European Campaign dedicated to MT Evaluation, is a project labelled by
the French Technolangue action. CESTA provides an evaluation of six commercial and …

[PDF] Corpus linguistics, machine learning and evaluation: Views from Leeds

E Atwell, BA Shawar, B Babych, D Elliott… - … OF LEEDS SCHOOL …, 2003 - academia.edu
This collection of short papers is a bird's eye view of current research in Corpus Linguistics,
Machine Learning and Evaluation at Leeds University. The papers are extended abstracts …

Is Arabic Machine Translation a Dream or a Reality? A Quality Assessment of Three Arabic Systems

YH Hannouna - Translation Quarterly, 2010 - search.ebscohost.com
The present study investigates the overall quality of three currently available English-into-
Arabic machine translation (MT) systems. The evaluation deals with selected quality …