Impact of nutritional factors in blood glucose prediction in type 1 diabetes through machine learning

G Annuzzi, A Apicella, P Arpaia, L Bozzetto… - IEEE …, 2023 - ieeexplore.ieee.org
Type 1 Diabetes (T1D) is an autoimmune disease that affects millions of people worldwide.
A critical issue for T1D patients is the management of the Postprandial Glucose Response (PGR) …

Toward the application of XAI methods in EEG-based systems

A Apicella, F Isgrò, A Pollastro, R Prevete - arXiv preprint arXiv …, 2022 - arxiv.org
An interesting case of the well-known Dataset Shift Problem is the classification of
Electroencephalogram (EEG) signals in the context of Brain-Computer Interface (BCI). The …
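
As a rough illustration of the dataset-shift problem mentioned in this entry, the sketch below
compares a random split with a subject-wise split (scikit-learn's GroupKFold) on synthetic
"EEG features" that carry a per-subject offset; the subject-wise score is typically the one that
drops. The data, the classifier, and all names are assumptions for illustration only, not the
setup studied in the paper.

# Illustrative only: synthetic "EEG features" with a per-subject offset to mimic
# cross-subject dataset shift; not the authors' data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, GroupKFold, KFold

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 8, 50, 16

X, y, groups = [], [], []
for s in range(n_subjects):
    offset = rng.normal(scale=2.0, size=n_features)   # subject-specific shift
    for _ in range(trials_per_subject):
        label = rng.integers(2)
        X.append(offset + label * 0.5 + rng.normal(size=n_features))
        y.append(label)
        groups.append(s)
X, y, groups = np.array(X), np.array(y), np.array(groups)

clf = LogisticRegression(max_iter=1000)
mixed = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
cross_subject = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=4))
print("random split accuracy:       ", mixed.mean())
print("subject-wise split accuracy: ", cross_subject.mean())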

Exploring nutritional influence on blood glucose forecasting for Type 1 diabetes using explainable AI

G Annuzzi, A Apicella, P Arpaia… - IEEE Journal of …, 2023 - ieeexplore.ieee.org
Type 1 diabetes mellitus (T1DM) is characterized by insulin deficiency and blood sugar
control issues. The state-of-the-art solution is the artificial pancreas (AP), which integrates …

Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems

A Apicella, S Giugliano, F Isgrò, R Prevete - Knowledge-Based Systems, 2022 - Elsevier
A central issue addressed by the rapidly growing research area of eXplainable Artificial
Intelligence (XAI) is to provide methods to give explanations for the behaviours of Machine …
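
As a rough illustration of one of the two kinds of middle-level features the title mentions, the
sketch below scores SLIC superpixels of an image by how much occluding each one changes a
classifier's output for a target class. The placeholder classifier, the zero-masking baseline,
and the scoring rule are assumptions for illustration; they are not the method proposed in
the paper.

# Sketch: occlusion-style relevance of image segments (superpixels).
# `predict_proba` stands in for any image classifier returning class probabilities.
import numpy as np
from skimage.segmentation import slic

def predict_proba(image):
    # Placeholder classifier: replace with the real model's forward pass.
    return np.array([image.mean(), 1.0 - image.mean()])

def segment_relevance(image, target_class, n_segments=50):
    segments = slic(image, n_segments=n_segments, channel_axis=-1)
    base = predict_proba(image)[target_class]
    scores = {}
    for seg_id in np.unique(segments):
        occluded = image.copy()
        occluded[segments == seg_id] = 0.0        # mask the segment (assumed baseline)
        scores[seg_id] = base - predict_proba(occluded)[target_class]
    return segments, scores

image = np.random.rand(64, 64, 3)                  # stand-in RGB image
segments, scores = segment_relevance(image, target_class=0)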

Predicting and monitoring blood glucose through nutritional factors in type 1 diabetes by artificial neural networks

G Annuzzi, L Bozzetto, A Cataldo, S Criscuolo… - Acta IMEKO, 2023 - acta.imeko.org
The monitoring and management of Postprandial Glucose Response (PGR), by
administering an insulin bolus before meals, is a crucial issue in Type 1 Diabetes (T1D) …
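
A minimal sketch of the kind of model this title describes: a small feed-forward network
regressing postprandial glucose from meal-related inputs. The feature list, units, synthetic
data, and network size are assumptions for illustration; the actual inputs, targets, and
architecture are the ones reported in the paper.

# Illustrative regression of postprandial glucose from nutritional factors.
# Feature names and data are made up; only the overall setup (ANN regressor) is shown.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# hypothetical inputs: carbohydrates (g), fat (g), protein (g), fibre (g),
# pre-meal glucose (mg/dl), insulin bolus (U)
X = rng.uniform([10, 0, 0, 0, 70, 0], [120, 60, 60, 20, 250, 15], size=(n, 6))
y = 0.8 * X[:, 0] - 4.0 * X[:, 5] + 0.5 * X[:, 4] + rng.normal(scale=10, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                   random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out meals:", model.score(X_test, y_test))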

Middle-level features for the explanation of classification systems by sparse dictionary methods

A Apicella, F Isgrò, R Prevete… - International Journal of …, 2020 - World Scientific
Machine learning (ML) systems are affected by a pervasive lack of transparency. The
eXplainable Artificial Intelligence (XAI) research area addresses this problem and the …
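
A minimal sketch of the sparse-dictionary ingredient named in this title: learn a dictionary,
encode an input as a sparse combination of atoms, and treat the few active atoms as the
middle-level features to which relevance can later be attributed. Data and hyperparameters
are assumptions for illustration; the relevance attribution itself is the contribution of the
paper and is not shown here.

# Sketch: sparse dictionary encoding of an input; the non-zero atoms act as
# middle-level features. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.random((200, 64))                      # stand-in data (e.g. image patches)

dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes = dico.fit_transform(X)                  # sparse codes, one row per sample

x_code = codes[0]
active = np.flatnonzero(x_code)                # atoms actually used by this input
print("middle-level features (active atoms):", active)
print("reconstruction error:", np.linalg.norm(X[0] - x_code @ dico.components_))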

Strategies to exploit XAI to improve classification systems

A Apicella, L Di Lorenzo, F Isgrò, A Pollastro… - World Conference on …, 2023 - Springer
Explainable Artificial Intelligence (XAI) aims to provide insights into the decision-
making process of AI models, allowing users to understand their results beyond their …

Contrastive explanations to classification systems using sparse dictionaries

A Apicella, F Isgrò, R Prevete, G Tamburrini - Image Analysis and …, 2019 - Springer
Providing algorithmic explanations for the decisions of machine learning systems to end
users, data protection officers, and other stakeholders in the design, production …

SHAP-based Explanations to Improve Classification Systems

A Apicella, S Giugliano, F Isgrò, R Prevete - XAI.it @ AI*IA, 2023 - ceur-ws.org
Explainable Artificial Intelligence (XAI) is a field usually dedicated to offering
insights into the decision-making mechanisms of AI models. Its purpose is to enable users to …
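
A hedged sketch of the basic ingredient this title names: SHAP values for a trained classifier,
aggregated into a global per-feature relevance and then used in a naive way (dropping
low-relevance inputs before retraining). The dataset, the gradient-boosting model, the
assumption that shap_values returns a single 2-D array for a binary single-output model, and
the median-threshold selection rule are all illustrative; the improvement strategy actually
evaluated is the one described in the paper.

# Sketch: per-feature SHAP relevance for a classifier, then a naive use of it
# (keeping only the most relevant features). Data and threshold are illustrative.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)        # assumed shape: (n_samples, n_features)
relevance = np.abs(shap_values).mean(axis=0)       # global per-feature relevance

keep = relevance >= np.median(relevance)           # assumed selection rule
clf_small = GradientBoostingClassifier(random_state=0).fit(X_train[:, keep], y_train)
print("full model accuracy:   ", clf.score(X_test, y_test))
print("reduced model accuracy:", clf_small.score(X_test[:, keep], y_test))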

A general approach to compute the relevance of middle-level input features

A Apicella, S Giugliano, F Isgrò, R Prevete - Pattern Recognition. ICPR …, 2021 - Springer
This work proposes a novel general framework, in the context of eXplainable Artificial
Intelligence (XAI), to construct explanations for the behaviour of Machine Learning (ML) …