Assessing inter-annotator agreement for translation error annotation
MTE: Workshop on Automatic and Manual Metrics for Operational …, 2014 • mte2014.github.io
■ no single objectively correct translation of a given text
■ no single correct error type for a number of translation errors
⇒ inter-annotator agreement (IAA)
* this work: error classification …
Only high-quality translations were annotated, in order to minimise the effects of overlapping errors: …
Typical ways of using human knowledge for assessing machine translation output:
■ generating reference translations
■ rating MT output based on quality
■ post-editing MT output (implicit error markup)
■ error classification (explicit error markup)
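For explicit error markup, inter-annotator agreement over the assigned error categories is commonly quantified with a chance-corrected coefficient such as Cohen's kappa. The snippet does not say which coefficient this work uses, so the following is only a minimal illustrative sketch: two hypothetical annotators assign an error type (or "ok") to the same ten MT segments, and kappa corrects their raw agreement for agreement expected by chance. The labels and label set are invented for the example.

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement: product of each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical error-type labels for ten segments (not data from the paper).
a = ["lexis", "syntax", "lexis", "ok", "ok", "syntax", "lexis", "ok", "syntax", "lexis"]
b = ["lexis", "syntax", "ok",    "ok", "ok", "syntax", "lexis", "ok", "lexis",  "lexis"]
print(round(cohen_kappa(a, b), 3))  # → 0.697
```

Here the annotators agree on 8 of 10 segments (raw agreement 0.8), but because a few labels dominate, chance agreement is 0.34, giving kappa ≈ 0.70. Overlapping errors, which the authors tried to minimise by annotating only high-quality output, would complicate this picture, since a segment could then carry several defensible labels at once.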