Publication:
MT Evaluation: human-like vs. human acceptable

Date
2006-07-17
Access rights
info:eu-repo/semantics/openAccess
Abstract
We present a comparative study of Machine Translation Evaluation according to two different criteria: Human Likeness and Human Acceptability. We provide empirical evidence that there is a relationship between these two kinds of evaluation: Human Likeness implies Human Acceptability, but the reverse is not true. From the point of view of automatic evaluation, this implies that metrics based on Human Likeness are more reliable for system tuning. Our results also show that current evaluation metrics are not always able to distinguish between automatic and human translations. In order to improve the descriptive power of current metrics, we propose the use of additional syntax-based metrics and metric combinations inside the QARLA Framework.
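The metric combinations mentioned in the abstract can be illustrated with QARLA's QUEEN measure, which scores a candidate translation by the probability that, for every metric in a combined set, the candidate is at least as similar to a human reference as two other human references are to each other. The Python sketch below is only illustrative and rests on assumptions not in this record: the function names (queen, unigram_overlap) and the toy token-overlap similarity are hypothetical stand-ins for the lexical and syntax-based metrics the paper actually combines.

    from itertools import permutations

    def queen(candidate, references, metrics):
        # QUEEN-style estimate: fraction of ordered reference triples (r1, r2, r3)
        # for which the candidate is at least as close to r1 as r2 is to r3,
        # simultaneously under every metric in the combined set.
        triples = list(permutations(references, 3))
        if not triples:
            return 0.0
        hits = sum(
            1
            for r1, r2, r3 in triples
            if all(m(candidate, r1) >= m(r2, r3) for m in metrics)
        )
        return hits / len(triples)

    def unigram_overlap(a, b):
        # Hypothetical toy similarity (Jaccard overlap of tokens); a real run
        # would plug in actual MT evaluation metrics here.
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / max(len(ta | tb), 1)

    if __name__ == "__main__":
        refs = [
            "the cat sat on the mat",
            "a cat was sitting on the mat",
            "the cat is on the mat",
        ]
        # Higher values mean the candidate behaves more like a human reference
        # under every metric in the set at once.
        print(queen("the cat sat on a mat", refs, [unigram_overlap]))

Combining several metrics in this way only rewards candidates that look human-like under all of them, which is why metric combination can sharpen the distinction between automatic and human translations.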
Center
E.T.S. de Ingeniería Informática
Department
Lenguajes y Sistemas Informáticos