Federated learning-based AI approaches in smart healthcare: concepts, taxonomies, challenges and open issues
Abstract: Federated Learning (FL), Artificial Intelligence (AI), and Explainable Artificial
Intelligence (XAI) are among the most trending and exciting technologies in intelligent healthcare …
Logic explained networks
The large and still increasing popularity of deep learning clashes with a major limitation of neural
network architectures: their lack of capability to provide human …
Human uncertainty in concept-based ai systems
Placing a human in the loop may help abate the risks of deploying AI systems in safety-
critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …
Attention-based interpretability with concept transformers
Attention is a mechanism that has been instrumental in driving remarkable performance
gains of deep neural network models in a host of visual, NLP and multimodal tasks. One …
Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision
Deep neural networks have demonstrated promising performance on image recognition
tasks. However, they may heavily rely on confounding factors, using irrelevant artifacts or …
Interpretable neural-symbolic concept reasoning
Deep learning methods are highly accurate, yet their opaque decision process prevents
them from earning full human trust. Concept-based models aim to address this issue by …
Global concept-based interpretability for graph neural networks via neuron analysis
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks;
however, they lack interpretability and transparency. Current explainability approaches are …
Federated learning for the internet-of-medical-things: A survey
VK Prasad, P Bhattacharya, D Maru, S Tanwar… - Mathematics, 2022 - mdpi.com
Recently, in healthcare organizations, real-time data have been collected from connected or
implantable sensors, layered protocol stacks, lightweight communication frameworks, and …
Concept-based explainable artificial intelligence: A survey
The field of explainable artificial intelligence emerged in response to the growing need for
more transparent and reliable models. However, using raw features to provide explanations …
Encoding concepts in graph neural networks
The opaque reasoning of Graph Neural Networks induces a lack of human trust. Existing
graph network explainers attempt to address this issue by providing post-hoc explanations …