Federated learning-based AI approaches in smart healthcare: concepts, taxonomies, challenges and open issues

A Rahman, MS Hossain, G Muhammad, D Kundu… - Cluster computing, 2023 - Springer
Abstract Federated Learning (FL), Artificial Intelligence (AI), and Explainable Artificial
Intelligence (XAI) are among the most trending and exciting technologies in the intelligent healthcare …

Logic explained networks

G Ciravegna, P Barbiero, F Giannini, M Gori, P Lió… - Artificial Intelligence, 2023 - Elsevier
The large and still increasing popularity of deep learning clashes with a major limitation of neural
network architectures: their lack of capability in providing human …

Human uncertainty in concept-based ai systems

KM Collins, M Barker, M Espinosa Zarlenga… - Proceedings of the …, 2023 - dl.acm.org
Placing a human in the loop may help abate the risks of deploying AI systems in safety-
critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …

Attention-based interpretability with concept transformers

M Rigotti, C Miksovic, I Giurgiu, T Gschwind… - International …, 2021 - openreview.net
Attention is a mechanism that has been instrumental in driving remarkable performance
gains of deep neural network models in a host of visual, NLP, and multimodal tasks. One …

Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision

S Yan, Z Yu, X Zhang, D Mahapatra… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks have demonstrated promising performance on image recognition
tasks. However, they may heavily rely on confounding factors, using irrelevant artifacts or …

Interpretable neural-symbolic concept reasoning

P Barbiero, G Ciravegna, F Giannini… - International …, 2023 - proceedings.mlr.press
Deep learning methods are highly accurate, yet their opaque decision process prevents
them from earning full human trust. Concept-based models aim to address this issue by …

Global concept-based interpretability for graph neural networks via neuron analysis

H Xuanyuan, P Barbiero, D Georgiev… - Proceedings of the …, 2023 - ojs.aaai.org
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks;
however, they lack interpretability and transparency. Current explainability approaches are …

Federated learning for the internet-of-medical-things: A survey

VK Prasad, P Bhattacharya, D Maru, S Tanwar… - Mathematics, 2022 - mdpi.com
Recently, healthcare organizations have collected real-time data from connected or
implantable sensors, layered protocol stacks, lightweight communication frameworks, and …

Concept-based explainable artificial intelligence: A survey

E Poeta, G Ciravegna, E Pastor, T Cerquitelli… - arXiv preprint arXiv …, 2023 - arxiv.org
The field of explainable artificial intelligence emerged in response to the growing need for
more transparent and reliable models. However, using raw features to provide explanations …

Encoding concepts in graph neural networks

LC Magister, P Barbiero, D Kazhdan, F Siciliano… - arXiv preprint arXiv …, 2022 - arxiv.org
The opaque reasoning of Graph Neural Networks induces a lack of human trust. Existing
graph network explainers attempt to address this issue by providing post-hoc explanations …