Person: Castillo Cara, José Manuel
ORCID
0000-0002-2990-7090
Surname
Castillo Cara
First name
José Manuel
Search results
Showing 1 - 10 of 12
Publication: On the relevance of the metadata used in the semantic segmentation of indoor image spaces (Elsevier, 2021)
Authors: Vasquez Espinoza, Luis; Orozco Barbosa, Luis; Castillo Cara, José Manuel
The study of artificial learning processes in the context of computer vision has mainly focused on achieving a fixed output target rather than on identifying the underlying processes as a means to develop solutions capable of performing as well as or better than the human brain. This work reviews the well-known segmentation efforts in computer vision. However, our primary focus is on the quantitative evaluation of the amount of contextual information provided to the neural network, in particular the information used to mimic the tacit information that a human is capable of using, such as a sense of unambiguous order and the capability of improving an estimation by complementing already learned information. Our results show that, after a set of pre- and post-processing methods applied to both the training data and the neural network architecture, the predictions made were drastically closer to the expected output compared with the cases where no contextual additions were provided. Our results provide evidence that learning systems strongly rely on contextual information for the identification task.

Publication: A novel deep learning approach using blurring image techniques for Bluetooth-based indoor localisation (Elsevier, 2022-10-17)
Authors: Talla Chumpitaz, Reewos; Orozco Barbosa, Luis; García Castro, Raúl; Castillo Cara, José Manuel
The growing interest in the use of IoT technologies has driven the development of numerous and diverse applications. Many of the services provided by these applications rely on knowledge of the localisation and profile of the end user. Thus, the present work aims to develop a system for indoor localisation prediction using Bluetooth-based fingerprinting with Convolutional Neural Networks (CNNs).
For this purpose, a novel technique was developed that simulates the diffusion behaviour of the wireless signal by transforming tidy data into images. For this transformation, we implemented the technique used in painting known as blurring, simulating the diffusion of the signal spectrum. Our proposal also includes the use and comparative analysis of two dimensionality-reduction algorithms, PCA and t-SNE. Finally, an evolutionary algorithm was implemented to configure and optimise our solution with the combination of different transmission power levels. The results reported in this work show an accuracy close to 94%, which clearly demonstrates the great potential of this novel technique for developing more accurate indoor localisation systems.

Publication: BeeGOns!: A Wireless Sensor Node for Fog Computing in Smart City Applications (Institute of Electrical and Electronics Engineers, 2024-01)
Authors: Vera Panez, Michael; Cuadros Claro, Kewin; Orozco Barbosa, Luis; Castillo Cara, José Manuel
The widespread deployment of sensors interconnected by wireless links and the management and exploitation of the data collected have given rise to the Internet of Things (IoT) concept. In this article, we undertake the design and implementation of a wireless multisensor platform following the fog computing paradigm. Our main contributions are the integration of various off-the-shelf sensors smartly packaged into an air-flow module and the evaluation of the communications services implemented on top of two low-power radio communications technologies. Our study is complemented by evaluating the communications services over a wired link.
Our results show the superiority of LoRaWAN over ZigBee in terms of power consumption, despite its slightly higher computational requirements, and provide an estimation of the gap between the resource usage of the wired link and the two wireless radio technologies.

Publication: An Analysis of Computational Resources of Event-Driven Streaming Data Flow for Internet of Things: A Case Study (Oxford University Press / BCS, The Chartered Institute for IT, 2021-10-06)
Authors: Tenorio Trigoso, Alonso; Mondragón Ruiz, Giovanny; Carrión, Carmen; Caminero, Blanca; Castillo Cara, José Manuel
The information and communication technologies backbone of a smart city is an Internet of Things (IoT) application that combines technologies such as low-power IoT networks, device management, analytics and event stream processing. Hence, designing an efficient IoT architecture for real-time IoT applications brings technical challenges that include the integration of application network protocols and data processing. In this context, the system scalability of two architectures has been analysed: the first architecture, named POST architecture, integrates the Hypertext Transfer Protocol with an Extract-Transform-Load technique and is used as a baseline; the second architecture, named MQTT-CEP, is based on a publish-subscribe protocol, i.e. Message Queue Telemetry Transport, and a complex event processing engine. In this analysis, SAVIA, a smart city citizen security application, has been deployed following both architectural approaches. Results show that the design of the network protocol and the data analytics layer has a strong impact on the Quality of Service experienced by the final IoT users. The experiments show that the integrated MQTT-CEP architecture scales properly and keeps energy consumption limited, thereby promoting the development of a distributed IoT architecture based on constrained resources.
The drawback is an increase in latency, mainly caused by the loosely coupled communication pattern of MQTT, but within reasonable levels that stabilise with increasing workloads.

Publication: Street images classification according to COVID-19 risk in Lima, Peru: a convolutional neural networks feasibility analysis (BMJ Publishing Group, 2022-09-19)
Authors: Carrillo Larco, Rodrigo M.; Hernández Santa Cruz, José Francisco; Castillo Cara, José Manuel
Objectives: During the COVID-19 pandemic, convolutional neural networks (CNNs) have been used in clinical medicine (e.g., X-ray classification). Whether CNNs could inform the epidemiology of COVID-19 by classifying street images according to COVID-19 risk is unknown, yet it could pinpoint high-risk places and relevant features of the built environment. In a feasibility study, we trained CNNs to classify the area surrounding bus stops (Lima, Peru) into moderate or extreme COVID-19 risk.
Design: CNN analysis based on images from bus stops and the surrounding area. We used transfer learning and updated the output layer of five CNNs: NASNetLarge, InceptionResNetV2, Xception, ResNet152V2 and ResNet101V2. We chose the best-performing CNN, which was further tuned. We used Grad-CAM to understand the classification process.
Setting: Bus stops from Lima, Peru. We used five images per bus stop.
Primary and secondary outcome measures: Bus stop images were classified according to COVID-19 risk into two labels: moderate or extreme.
Results: NASNetLarge outperformed the other CNNs except in the recall metric for the moderate label and in the precision metric for the extreme label; ResNet152V2 performed better in these two metrics (85% vs 76% and 63% vs 60%, respectively). NASNetLarge was further tuned. The best recall (75%) and F1 score (65%) for the extreme label were reached with data augmentation techniques. Areas close to buildings or with people were often classified as extreme risk.
Conclusions: This feasibility study showed that CNNs have the potential to classify street images according to levels of COVID-19 risk. In addition to applications in clinical medicine, CNNs and street images could advance the epidemiology of COVID-19 at the population level.

Publication: Development, validation, and application of a machine learning model to estimate salt consumption in 54 countries (eLife Sciences Publications, 2022-01-25)
Authors: Guzman Vilca, Wilmer Cristobal; Carrillo Larco, Rodrigo M.; Castillo Cara, José Manuel
Global targets to reduce salt intake have been proposed, but their monitoring is challenged by the lack of population-based data on salt consumption. We developed a machine learning (ML) model to predict salt consumption at the population level based on simple predictors and applied this model to national surveys in 54 countries. We used 21 surveys with spot urine samples for the ML model derivation and validation; we developed a supervised ML regression model based on sex, age, weight, height, and systolic and diastolic blood pressure. We applied the ML model to 54 new surveys to quantify the mean salt consumption in the population. The pooled dataset in which we developed the ML model included 49,776 people. Overall, there were no substantial differences between the observed and ML-predicted mean salt intake (p<0.001). The pooled dataset to which we applied the ML model included 166,677 people; the predicted mean salt consumption ranged from 6.8 g/day (95% CI: 6.8–6.8 g/day) in Eritrea to 10.0 g/day (95% CI: 9.9–10.0 g/day) in American Samoa. The countries with the highest predicted mean salt intake were in the Western Pacific; the lowest predicted intake was found in Africa. The country-specific predicted mean salt intake was within reasonable difference from the best available evidence. An ML model based on readily available predictors estimated daily salt consumption with good accuracy.
This model could be used to predict mean salt consumption in the general population where urine samples are not available.

Publication: Clusters of people with type 2 diabetes in the general population: unsupervised machine learning approach using national surveys in Latin America and the Caribbean (BMJ Publishing Group, 2021-01-29)
Authors: Carrillo Larco, Rodrigo M.; Anza Ramírez, Cecilia; Bernabé Ortiz, Antonio; Castillo Cara, José Manuel
We aimed to identify clusters of people with type 2 diabetes mellitus (T2DM) and to assess whether the frequency of these clusters was consistent across selected countries in Latin America and the Caribbean (LAC).
Research design and methods: We analyzed 13 population-based national surveys in nine countries (n=8361). We used k-means to develop a clustering model; predictors were age, sex, body mass index (BMI), waist circumference (WC), systolic/diastolic blood pressure (SBP/DBP), and T2DM family history. The training dataset included all surveys, and the clusters were then predicted in each country-year dataset. We used Euclidean distance, elbow and silhouette plots to select the optimal number of clusters, and described each cluster according to the underlying predictors (means and proportions).
Results: The optimal number of clusters was 4. Cluster 0 grouped more men and those with the highest mean SBP/DBP. Cluster 1 had the highest mean BMI and WC, as well as the largest proportion of T2DM family history. We observed the smallest values of all predictors in cluster 2. Cluster 3 had the highest mean age. When we reflected the four clusters in each country-year dataset, a different distribution was observed. For example, cluster 3 was the most frequent in the training dataset, and so it was in 7 out of 13 other country-year datasets.
Conclusions: Using unsupervised machine learning algorithms, it was possible to cluster people with T2DM from the general population in LAC; clusters showed unique profiles that could be used to identify the underlying characteristics of the T2DM population in LAC.

Publication: An experimental study of fog and cloud computing in CEP-based real-time IoT applications (SpringerOpen, 2021-06-07)
Authors: Mondragón Ruiz, Giovanny; Tenorio Trigoso, Alonso; Caminero, Blanca; Carrión, Carmen; Castillo Cara, José Manuel
The Internet of Things (IoT) has posed new requirements on the underlying processing architecture, especially for real-time applications such as event-detection services. Complex Event Processing (CEP) engines provide a powerful tool to implement these services. Fog computing has arisen as a solution to support IoT real-time applications, in contrast to the cloud-based approach. This work analyses a CEP-based fog architecture for real-time IoT applications that uses a publish-subscribe protocol. A testbed has been developed with low-cost, local resources to verify the suitability of CEP engines for low-cost computing resources. To assess performance, we have analysed the effectiveness and cost of the proposal in terms of latency and resource usage, respectively. Results show that the fog computing architecture reduces event-detection latencies by up to 35%, while the available computing resources are used more efficiently, compared with a cloud deployment. Performance evaluation also identifies the communication between the CEP engine and the final users as the most time-consuming component of latency.
Moreover, the latency analysis concludes that the time required by the CEP engine is related to the compute resources, but depends nonlinearly on the number of things connected.

Publication: From cloud and fog computing to federated-fog computing: A comparative analysis of computational resources in real-time IoT applications based on semantic interoperability (Elsevier, 2024-05-10)
Authors: Huaranga, Edgar; González Gerpe, Salvador; Castillo Cara, José Manuel; Cimmino, Andrea; García Castro, Raúl
ORCIDs: https://orcid.org/0000-0002-8087-0940; https://orcid.org/0000-0003-1550-0430; https://orcid.org/0000-0002-1823-4484; https://orcid.org/0000-0002-0421-452X
In contemporary computing paradigms, the evolution from cloud computing to fog computing and the recent emergence of federated-fog computing have introduced new challenges pertaining to semantic interoperability, particularly in the context of real-time applications. Fog computing, by shifting computational processes closer to the network edge at the local area network level, aims to mitigate latency and enhance efficiency by minimising data transfers to the cloud. Building upon this, federated-fog computing extends the paradigm by distributing computing resources across diverse organisations and locations, while maintaining centralised management and control. This research article addresses the inherent difficulties in achieving semantic interoperability within the evolving architectures of cloud computing, fog computing, and federated-fog computing. Experimental investigations are conducted on a diverse node-based testbed, simulating various end-user devices, to emphasise the critical role of semantic interoperability in facilitating seamless data exchange and integration. Furthermore, the efficacy of federated-fog computing is rigorously evaluated in comparison to traditional fog and cloud computing frameworks.
Specifically, the assessment focuses on critical factors such as latency and computational resource utilisation while processing real-time data streams generated by Internet of Things (IoT) devices. The findings of this study underscore the advantages of federated-fog computing over conventional cloud and fog computing paradigms, particularly for real-time IoT applications demanding high performance (lowering CPU usage to 20%) and low latency (with peaks of up to 300 ms). The research contributes valuable insights into the optimisation of processing architectures for contemporary computing paradigms, offering implications for the advancement of semantic interoperability in the context of emerging federated-fog computing for IoT applications.

Publication: Phenotypes of non-alcoholic fatty liver disease (NAFLD) and all-cause mortality: unsupervised machine learning analysis of NHANES III (BMJ Publishing Group, 2022-11-23)
Authors: Carrillo Larco, Rodrigo M.; Guzman Vilca, Wilmer Cristobal; Alvizuri Gómez, Claudia; Alqahtani, Saleh; Garcia Larsen, Vanessa; Castillo Cara, José Manuel
Objectives: Non-alcoholic fatty liver disease (NAFLD) is a non-communicable disease with a rising prevalence worldwide and a large burden for patients and health systems. To date, the presence of unique phenotypes in patients with NAFLD has not been studied, and their identification could inform precision medicine and public health, with pragmatic implications for personalised management and care for patients with NAFLD.
Design: Cross-sectional and prospective (up to 31 December 2019) analysis of the National Health and Nutrition Examination Survey III (1988–1994).
Primary and secondary outcome measures: NAFLD diagnosis was based on liver ultrasound.
The following predictors informed an unsupervised machine learning algorithm (k-means): body mass index, waist circumference, systolic blood pressure (SBP), plasma glucose, total cholesterol, triglycerides, and the liver enzymes alanine aminotransferase, aspartate aminotransferase and gamma-glutamyl transferase. We summarised (means) and compared the predictors across clusters. We used Cox proportional hazards models to quantify the all-cause mortality risk associated with each cluster.
Results: 1652 patients with NAFLD (mean age 47.2 years; 51.5% women) were grouped into 3 clusters: anthro-SBP-glucose (6.36%; highest levels of anthropometrics, SBP and glucose), lipid-liver (10.35%; highest levels of lipids and liver enzymes) and average (83.29%; predictors at average levels). Compared with the average phenotype, the anthro-SBP-glucose phenotype had a higher all-cause mortality risk (aHR=2.88; 95% CI: 2.26 to 3.67); the lipid-liver phenotype was not associated with a higher all-cause mortality risk (aHR=1.11; 95% CI: 0.86 to 1.42).
Conclusions: There is heterogeneity in patients with NAFLD, who can be divided into three phenotypes with different mortality risks. These phenotypes could guide specific interventions and management plans, thus advancing precision medicine and public health for patients with NAFLD.
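Several of the studies listed above (the T2DM clustering work and the NAFLD phenotype analysis) rely on k-means clustering, choosing the number of clusters from elbow diagnostics on the within-cluster inertia. The following is a minimal, hypothetical sketch of that idea in plain NumPy on synthetic data; it is not the authors' pipeline, and the predictors are stand-ins for the standardised survey variables they describe.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's k-means: returns (centroids, labels, inertia)."""
    rng = np.random.default_rng(seed)
    # Initialise centroids from k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties out.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    inertia = float(((X - centroids[labels]) ** 2).sum())
    return centroids, labels, inertia

# Synthetic stand-in for standardised predictors (e.g. age, BMI, SBP):
# four well-separated groups of 50 points in 3 dimensions.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 3))
               for c in ([0, 0, 0], [4, 4, 0], [0, 4, 4], [4, 0, 4])])

# "Elbow" diagnostic: inertia drops as k grows; the bend (here near k=4)
# suggests the number of clusters.
inertias = {k: kmeans(X, k)[2] for k in range(1, 7)}
```

In the papers this selection is complemented by silhouette plots, which score how well each point sits inside its assigned cluster versus the next-nearest one; the same assignment step above is all that silhouette computation needs as input.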