Publication:
MT Evaluation: human-like vs. human acceptable

dc.contributor.author: Giménez, Jesús
dc.contributor.author: Màrquez, Lluís
dc.contributor.author: Amigo Cabrera, Enrique
dc.contributor.author: Gonzalo Arroyo, Julio Antonio
dc.date.accessioned: 2024-05-21T13:03:36Z
dc.date.available: 2024-05-21T13:03:36Z
dc.date.issued: 2006-07-17
dc.description.abstract [es]: We present a comparative study on Machine Translation Evaluation according to two different criteria: Human Likeness and Human Acceptability. We provide empirical evidence that there is a relationship between these two kinds of evaluation: Human Likeness implies Human Acceptability, but the reverse is not true. From the point of view of automatic evaluation, this implies that metrics based on Human Likeness are more reliable for system tuning. Our results also show that current evaluation metrics are not always able to distinguish between automatic and human translations. In order to improve the descriptive power of current metrics, we propose the use of additional syntax-based metrics and metric combinations inside the QARLA Framework.
dc.description.version: published version
dc.identifier.uri: https://hdl.handle.net/20.500.14468/19991
dc.language.iso: en
dc.relation.center: E.T.S. de Ingeniería Informática
dc.relation.department: Lenguajes y Sistemas Informáticos
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0
dc.title [es]: MT Evaluation: human-like vs. human acceptable
dc.type [es]: actas de congreso
dc.type [en]: conference proceedings
dspace.entity.type: Publication
relation.isAuthorOfPublication: f96c6e59-3a7a-4b0c-9b10-deec22f8c06b
relation.isAuthorOfPublication: 0e0d6c85-2d8e-4fb3-9640-8ad17e875fcc
relation.isAuthorOfPublication.latestForDiscovery: f96c6e59-3a7a-4b0c-9b10-deec22f8c06b
Files
Original bundle
Name: Documento.pdf
Size: 174.76 KB
Format: Adobe Portable Document Format