Publicación: Intrinsic Semantic Spaces for the representation of documents and semantic annotated data
Files
Date
2014-09-29
Authors
Access rights
info:eu-repo/semantics/openAccess
Publisher
Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Lenguajes y Sistemas Informáticos
Abstract
This thesis introduces two novel semantic representation spaces for text documents and semantically annotated data, both based on an intrinsic geometry approach, together with other results, among them: (1) a novel ontology-based semantic distance that we call the weighted Jiang-Conrath distance, and (2) a generalized normal distribution on differential manifolds, called the geodesic normal distribution, which leads us to the definition of the geodesic Mahalanobis distance. Finally, we prove that any Bayes classifier on a manifold defines a dual Voronoi diagram on it. The ontology-based IR model looks promising, but it has not yet been evaluated experimentally. On the other hand, the text document classifier yielded a first discouraging result due to difficulties in training the model. The common thread of our research is the use of notions of intrinsic differential geometry and geometric invariance as a means to bridge some gaps found in the literature. Both the ontology-based IR model and the text classifier proposed in this thesis are inspired by a geometric approach whose core idea is the integration of the geometric structures of the problem into the semantic representation spaces of the information. In summary, our approach attempts to build better models of semantic spaces by incorporating the properties and constraints of the mathematical objects involved in their definition. The first part of the thesis introduces a novel ontology-based IR model based on a structure-preserving embedding of a populated ontology into a metric space, which we call Intrinsic Ontological Spaces. The second part of the thesis introduces a novel text classifier, called the Intrinsic Bayes-Voronoi classifier, which is based on the representation of document vectors by a manifold-based generative model whose distribution function is defined on the unit hypersphere instead of the ambient Euclidean space.
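For context, the classical Jiang-Conrath distance, on which the weighted variant introduced in the thesis builds, measures the semantic distance between two taxonomy concepts through their information content (IC); the weighted variant itself is defined in the thesis and is not reproduced here:

```latex
d_{JC}(c_1, c_2) = IC(c_1) + IC(c_2) - 2\, IC\bigl(LCS(c_1, c_2)\bigr),
\qquad IC(c) = -\log p(c)
```

where LCS(c_1, c_2) is the lowest common subsumer of the two concepts in the taxonomy and p(c) is the probability of encountering an instance of concept c in a reference corpus.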
Intrinsic Ontological Spaces introduce a novel theoretical IR model that looks promising, although, as noted above, it has not yet been evaluated experimentally. The proposed IR model is described in depth and validated against our design axioms. The motivation behind our model is the finding of a set of geometric inconsistencies in some ontology-based IR models in the literature, which derive from certain overlooked properties in their adaptations of the Vector Space Model (VSM). In essence, our model refutes the unreflective use of the VSM in the fields of natural language processing (NLP) and information retrieval (IR). Although the theoretical approach is interesting in itself, our main hypothesis is that the structure-preserving approach proposed by our model should improve ranking quality, as well as the precision and recall of semantic information retrieval systems. Intrinsic Ontological Spaces are, to the best of our knowledge, the first ontology-based IR model to build a complete ontology-based, structure-preserving representation for any sort of semantically annotated data in a populated ontology. In our model, every component has been designed with the aim of preserving the intrinsic geometry of the base ontology. The intrinsic geometry of any ontology is defined by three algebraic structures: (1) the order relation of the taxonomy, (2) the set inclusion relation, and (3) its intrinsic semantic metric. In this way, the methods for the representation of queries, information units, weighting, ranking, and retrieval have been designed from geometrically principled axioms, with the aim of capturing all the semantic knowledge encoded in the base ontology. Using the language of category theory, our model builds a natural equivalence, or morphism, between the input populated ontology and the representation space of the indexed information units.
Finally, the Bayes-Voronoi classifier, introduced in the second part of the thesis, uses a manifold-based generative model to represent documents, defined by a vector normal distribution on the unit hypersphere that we call the geodesic normal distribution. The distribution is defined on the unit hypersphere, considered as a manifold, instead of on the ambient space. The core idea is the observation that normalized vectors lie on the unit hypersphere, rather than in the whole Euclidean ambient space, and the proposed model explicitly integrates this constraint. The model removes one dimension from the normalized vectors, corresponding to the projection of the data vectors onto the unit hypersphere (normalization). The geodesic normal distribution also leads us to the definition of the Mahalanobis distance on a differential manifold, a distance that we call the geodesic Mahalanobis distance. We also prove that any Bayes classifier on a manifold defines a dual Voronoi diagram on it.
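The core observation above, that normalized document vectors lie on the unit hypersphere, so distances should be measured along the sphere rather than through the ambient space, can be sketched in a few lines. This minimal illustration uses the plain great-circle distance between unit vectors; it is not the thesis's geodesic Mahalanobis distance, which additionally accounts for the covariance of the generative model:

```python
import math

def normalize(v):
    """Project a vector onto the unit hypersphere (L2 normalization)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def geodesic_distance(u, v):
    """Geodesic (great-circle) distance between two unit vectors:
    the arc length arccos(<u, v>), rather than the Euclidean chord."""
    dot = sum(a * b for a, b in zip(u, v))
    dot = max(-1.0, min(1.0, dot))  # guard against floating-point drift
    return math.acos(dot)

u = normalize([1.0, 0.0, 0.0])
v = normalize([0.0, 1.0, 0.0])
print(geodesic_distance(u, v))  # orthogonal unit vectors: pi/2
```

Note that on the sphere the geodesic distance between orthogonal unit vectors is pi/2, whereas the Euclidean chord length in the ambient space would be sqrt(2); measuring intrinsically is exactly the constraint the generative model integrates.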
Keywords
ontology-based IR models, ontology-based semantic distances, semantic information retrieval, taxonomic semantic spaces, vector semantic spaces, semantic distances, Jiang-Conrath distance, valuation metrics, geodesic Mahalanobis distance, Hausdorff distance, semantic metric spaces, manifold-based distribution, text classifier
Center
Facultades y escuelas::E.T.S. de Ingeniería Informática
Department
Lenguajes y Sistemas Informáticos