Disentangled explanations of neural network predictions by finding relevant subspaces

P Chormai, J Herrmann, KR Müller… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Explainable AI aims to overcome the black-box nature of complex ML models like neural
networks by generating explanations for their predictions. Explanations often take the form of …

Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning

B Zhao, H Tu, C Wei, J Mei, C Xie - arXiv preprint arXiv:2312.11420, 2023 - arxiv.org
This paper introduces an efficient strategy to transform Large Language Models (LLMs) into
Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a …
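A minimal sketch of the general idea named in the title, assuming a PyTorch-style model: freeze every parameter except those of LayerNorm modules so only the normalization weights and biases are updated during finetuning. The helper name and the toy Transformer encoder are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

def freeze_all_but_layernorm(model: nn.Module) -> None:
    """Freeze all parameters except those belonging to nn.LayerNorm modules.

    Illustrative sketch of 'tune only LayerNorm'; the paper's exact set of
    tuned modules may differ.
    """
    # Freeze everything first.
    for p in model.parameters():
        p.requires_grad = False
    # Re-enable gradients only for LayerNorm parameters.
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            for p in module.parameters():
                p.requires_grad = True

# Usage: only LayerNorm weights/biases receive gradient updates.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4), num_layers=2
)
freeze_all_but_layernorm(model)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```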

Less is more: Fewer interpretable region via submodular subset selection

R Chen, H Zhang, S Liang, J Li, X Cao - arXiv preprint arXiv:2402.09164, 2024 - arxiv.org
Image attribution algorithms aim to identify important regions that are highly relevant to
model decisions. Although existing attribution solutions can effectively assign importance to …
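For context on the title's submodular subset selection, here is a generic greedy maximization sketch over candidate image regions. The toy coverage-style score and the region identifiers are assumptions for illustration; the paper's actual objective and region definition are not reproduced here.

```python
def greedy_submodular_selection(regions, score_fn, k):
    """Standard greedy maximization of a monotone submodular set score.

    regions: candidate region ids; score_fn(subset): score of a subset;
    k: number of regions to keep. Illustrative sketch only.
    """
    selected = []
    for _ in range(k):
        gains = [(score_fn(selected + [r]) - score_fn(selected), r)
                 for r in regions if r not in selected]
        if not gains:
            break
        best_gain, best_region = max(gains, key=lambda g: g[0])
        if best_gain <= 0:
            break
        selected.append(best_region)
    return selected

# Toy score: weighted pixel coverage of the chosen regions (submodular).
pixel_weights = {0: 0.5, 1: 0.3, 2: 0.2}
region_pixels = {"A": {0, 1}, "B": {1, 2}, "C": {2}}

def coverage_score(subset):
    covered = set().union(*(region_pixels[r] for r in subset)) if subset else set()
    return sum(pixel_weights[p] for p in covered)

print(greedy_submodular_selection(list(region_pixels), coverage_score, k=2))
```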

Poisoned forgery face: Towards backdoor attacks on face forgery detection

J Liang, S Liang, A Liu, X Jia, J Kuang… - arXiv preprint arXiv …, 2024 - arxiv.org
The proliferation of face forgery techniques has raised significant concerns within society,
thereby motivating the development of face forgery detection methods. These methods aim …

Prediction with Visual Evidence: Sketch Classification Explanation via Stroke-Level Attributions

S Liu, J Li, H Zhang, L Xu, X Cao - IEEE Transactions on Image …, 2023 - ieeexplore.ieee.org
Sketch classification models have been extensively investigated by designing a task-driven
deep neural network. Despite their successful performance, few works have attempted to …

Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks

X Wang, Z Wang, H Weng, H Guo… - Proceedings of the …, 2023 - openaccess.thecvf.com
Explaining deep models in a human-understandable way has been explored by many works
that mostly explain why an input causes a corresponding prediction (i.e., Why P?). However …

Object Detectors in the Open Environment: Challenges, Solutions, and Outlook

S Liang, W Wang, R Chen, A Liu, B Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
With the emergence of foundation models, deep learning-based object detectors have
shown practical usability in closed set scenarios. However, for real-world tasks, object …

Explainable assessment of financial experts' credibility by classifying social media forecasts and checking the predictions with actual market data

S García-Méndez, F de Arriba-Pérez… - Expert Systems with …, 2024 - Elsevier
Social media include diverse interaction metrics related to user popularity, the most evident
example being the number of user followers. The latter has raised concerns about the …

Interpreting Object-level Foundation Models via Visual Precision Search

R Chen, S Liang, J Li, S Liu, M Li, Z Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
Advances in multimodal pre-training have propelled object-level foundation models, such as
Grounding DINO and Florence-2, in tasks like visual grounding and object detection …

IG2: Integrated Gradient on Iterative Gradient Path for Feature Attribution

Y Zhuo, Z Ge - arXiv preprint arXiv:2406.10852, 2024 - arxiv.org
Feature attribution explains Artificial Intelligence (AI) at the instance level by providing
importance scores of input features' contributions to model prediction. Integrated Gradients …
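The snippet references Integrated Gradients, the baseline that IG2 builds on. For context, the standard Integrated Gradients attribution (Sundararajan et al., 2017) for feature i, given model F, input x, and baseline x', is shown below; IG2's iterative gradient path variant is not reproduced here.

```latex
% Standard Integrated Gradients attribution along the straight-line path
% from baseline x' to input x; IG2 replaces this path with an iterative
% gradient path (not shown).
\[
\mathrm{IG}_i(x) \;=\; (x_i - x'_i)\int_{0}^{1}
\frac{\partial F\bigl(x' + \alpha\,(x - x')\bigr)}{\partial x_i}\, d\alpha
\]
```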