Interpretability of deep neural networks: A review of methods, classification and hardware

T Antamis, A Drosou, T Vafeiadis, A Nizamis… - Neurocomputing, 2024 - Elsevier
Artificial intelligence, and especially deep neural networks, has evolved substantially in
recent years, infiltrating numerous application domains, often greatly impactful to …

[HTML][HTML] Fuzzy decision-making framework for explainable golden multi-machine learning models for real-time adversarial attack detection in Vehicular Ad-hoc …

AS Albahri, RA Hamid, AR Abdulnabi, OS Albahri… - Information …, 2024 - Elsevier
This paper addresses various issues in the literature concerning adversarial attack detection
in Vehicular Ad-hoc Networks (VANETs). These issues include the failure to consider both …

B-LIME: An improvement of LIME for interpretable deep learning classification of cardiac arrhythmia from ECG signals

TAA Abdullah, MSM Zahid, W Ali, SU Hassan - Processes, 2023 - mdpi.com
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque
technique that is regarded as a black box. To ensure the validity of the model's prediction, it …

Assessing XAI: unveiling evaluation metrics for local explanation, taxonomies, key concepts, and practical applications

MA Kadir, A Mosavi, D Sonntag - 2023 - engrxiv.org
Within the past few years, the accuracy of deep learning and machine learning models has
been improving significantly while less attention has been paid to their responsibility …

[HTML][HTML] Interpreting convolutional neural network by joint evaluation of multiple feature maps and an improved NSGA-II algorithm

Z Wang, Y Zhou, M Han, Y Guo - Expert Systems with Applications, 2024 - Elsevier
The 'black box' characteristics of Convolutional Neural Networks (CNNs) present significant
risks to their application scenarios, such as reliability, security, and division of …

A posture-based measurement adjustment method for improving the accuracy of beef cattle body size measurement based on point cloud data

J Li, W Ma, Q Bai, D Tulpan, M Gong, Y Sun, X Xue… - Biosystems …, 2023 - Elsevier
Highlights: • Body size automatic measurement based on beef cattle point clouds was
achieved. • Twelve micro-pose features were defined to describe beef cattle postures. • The …

M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models

X Li, M Du, J Chen, Y Chai… - Advances in Neural …, 2023 - proceedings.neurips.cc
While Explainable Artificial Intelligence (XAI) techniques have been widely studied
to explain predictions made by deep neural networks, the way to evaluate the faithfulness of …

Knowledge features enhanced intelligent fault detection with progressive adaptive sparse attention learning for high-power diesel engine

H Li, F Liu, X Kong, J Zhang, Z Jiang… - … Science and Technology, 2023 - iopscience.iop.org
High-power diesel engines are core power equipment in some key fields, and fault
diagnosis is of great significance for improving their long-term operational reliability and …

[PDF][PDF] M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models.

X Li, M Du, J Chen, Y Chai, H Lakkaraju, H Xiong - NeurIPS, 2023 - cyk1337.github.io
M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models …

Improving semantic segmentation under hazy weather for autonomous vehicles using explainable artificial intelligence and adaptive dehazing approach

VS Saravanarajan, RC Chen, CH Hsieh… - IEEE Access, 2023 - ieeexplore.ieee.org
Haze-level discriminators are crucial for autonomous vehicles to handle segmentation tasks
successfully in hazy and foggy outdoor environments. Deep learning (DL) networks trained …