Person:
Martín Gutiérrez, Sergio

ORCID
0000-0002-4118-0234
Surname
Martín Gutiérrez
Given name
Sergio

Search results

Showing 1 - 2 of 2
  • Publication
    Human stability assessment and fall detection based on dynamic descriptors
    (Wiley, 2023-06-14) Gutiérrez, Jesús; Martín Gutiérrez, Sergio; Rodriguez, Victor
    Fall detection systems rely on a number of different technologies, contributing in this way to better living conditions for the elderly community. Artificial vision is one of these technologies, and it has gained momentum over the last few years thanks to the incorporation of artificial neural networks (ANNs). These ANNs share a common characteristic: they extract descriptors from images and video clips that, properly processed, determine whether a fall has taken place. However, these descriptors, which capture kinematic features associated with the fall, are inferred from datasets recorded by young volunteers or actors who simulate falls. Given the well-documented differences between simulated and real falls, concerns arise about system performance in real-world, out-of-laboratory environments. This work implements an alternative to the classical use of kinematic descriptors: for the first time, to the best of our knowledge, we introduce human dynamic stability descriptors used in other fields to determine whether a fall has taken place. These descriptors characterize the human body in terms of balance and stability, so the differences between real and simulated falls become irrelevant, as every fall is a direct result of a failure in the body's continuous effort to keep its balance, regardless of other considerations. The descriptors are computed from the output of a neural network that estimates the body center of mass and the projections of the feet onto the ground plane, as well as the feet contact status. The theory behind this new approach and its validity are studied in this article with very promising results: it matches or exceeds the performance of previous systems based on kinematic descriptors under laboratory conditions and, since the approach does not depend on whether the fall is real or simulated, it has the potential to behave better in the real world than classic systems.
    (A minimal illustrative sketch of the stability-margin idea follows the publication list.)
  • Publication
    Fall Detection in Low-illumination Environments from Far-infrared Images Using Pose Detection and Dynamic Descriptors
    (IEEE Xplore, 2024-03-18) Martín Gutiérrez, Sergio; Rodriguez, Victor; Albiol, Sergio; Plaza Agudo, Inmaculada; Medrano, Carlos; Martinez, Javier
    In an increasingly aging world, automating tasks associated with the care of elderly dependent individuals becomes ever more relevant if quality care is to be provided at sustainable cost. One of the tasks amenable to automation in this field is the automatic detection of falls. The research effort devoted to automatic fall detection has been substantial and has produced reliable systems. However, the individuals who could benefit from these systems only consider using them in certain scenarios. A relevant one involves semi-supervised patients who wake up and get out of bed during the night, usually disoriented and feeling an urgent need to go to the toilet. Under these circumstances the person is usually unsupervised, and a fall could go unnoticed until the next morning, delaying the arrival of urgently needed assistance. In this nighttime scenario the patient prioritizes comfort, so the body-worn sensors typical of wearable systems are not a good option. Environmental systems, particularly vision-based ones with cameras deployed in the patient's environment, could be the ideal option, but the low-light conditions require working with far-infrared (FIR) images. This work develops and implements, for the first time, a fall detection system that works with FIR imagery. The system integrates the output of a human pose estimation neural network with a detection methodology that uses the relative movement of the body's most important joints to determine whether a fall has taken place. The pose estimation networks used represent the most relevant architectures in this field and were trained on the first large public labeled FIR dataset. As a result, we have developed the first vision-based fall detection system operating on FIR imagery, able to work in conditions of absolute darkness, with performance indices equivalent to those of comparable systems working on conventional RGB images.
    (A minimal illustrative sketch of a joint-motion fall check is included after the publication list.)
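The stability-descriptor idea in the first publication (center of mass versus base of support) can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not the system described in the paper: it assumes the per-frame center-of-mass and foot projections are already available (for example from the estimation network the abstract mentions), and all function names, the simplified two-point support model, and the thresholds are assumptions.

```python
# Hypothetical sketch (not the authors' implementation): a stability check based
# on the projection of the body's center of mass (CoM) onto the ground plane
# relative to the base of support defined by the two feet. Names, thresholds and
# the simplified support model are assumptions for illustration only.

import numpy as np

def stability_margin(com_xy: np.ndarray,
                     left_foot_xy: np.ndarray,
                     right_foot_xy: np.ndarray) -> float:
    """Margin (meters) between the CoM ground projection and the support
    region around the foot-to-foot segment; negative means the CoM has
    left the base of support."""
    a, b = left_foot_xy, right_foot_xy
    ab = b - a
    # Project the CoM onto the foot-to-foot segment and clamp to its ends.
    t = np.clip(np.dot(com_xy - a, ab) / (np.dot(ab, ab) + 1e-9), 0.0, 1.0)
    closest = a + t * ab
    dist = np.linalg.norm(com_xy - closest)
    support_half_width = 0.15  # assumed half-width of the support band, meters
    return support_half_width - dist

def is_unstable(frames, margin_threshold=0.0, min_consecutive=5):
    """Flag a potential fall when the margin stays below the threshold for a
    run of consecutive frames (frame-rate dependent)."""
    run = 0
    for com, lf, rf in frames:   # per-frame (CoM, left foot, right foot) 2-D points
        run = run + 1 if stability_margin(com, lf, rf) < margin_threshold else 0
        if run >= min_consecutive:
            return True
    return False
```

The point of the sketch is only that, once balance is framed this way, the descriptor is agnostic to whether the loss of balance was acted or real, which is the property the abstract highlights.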
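For the second publication, a comparable sketch shows how a rule on the relative vertical movement of key joints, estimated by a pose network on FIR frames, could flag a fall. This is not the paper's detector: the keypoint indices, thresholds and the two-stage rule below are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's methodology): a rule-based fall check on
# pose keypoints extracted from far-infrared frames. Keypoint indices and
# thresholds are assumed, not taken from the publication.

import numpy as np

LEFT_HIP, RIGHT_HIP = 11, 12   # assumed keypoint indices in the pose output

def fall_from_joint_motion(keypoints_seq: np.ndarray,
                           fps: float = 15.0,
                           drop_speed_thresh: float = 1.2,
                           low_height_thresh: float = 0.35) -> bool:
    """keypoints_seq: (T, K, 2) array of (x, y) image coordinates per frame,
    with y normalized to [0, 1] and increasing downwards.
    Returns True when the hip midpoint drops faster than drop_speed_thresh
    (normalized units per second) and then stays near the floor region."""
    hips = (keypoints_seq[:, LEFT_HIP, 1] + keypoints_seq[:, RIGHT_HIP, 1]) / 2.0
    # Downward velocity of the hip midpoint (positive = moving down in the image).
    vel = np.diff(hips) * fps
    fast_drop = np.where(vel > drop_speed_thresh)[0]
    if fast_drop.size == 0:
        return False
    # After the rapid drop, require the hips to remain in the lower part of the frame.
    after = hips[fast_drop[0]:]
    return bool(np.mean(after > (1.0 - low_height_thresh)) > 0.8)
```

A detector of this kind works directly on pose output, so it applies unchanged whether the keypoints come from RGB or FIR frames, which is what makes the low-light scenario in the abstract tractable.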