Disentangled explanations of neural network predictions by finding relevant subspaces
Explainable AI aims to overcome the black-box nature of complex ML models like neural
networks by generating explanations for their predictions. Explanations often take the form of …
Tuning LayerNorm in Attention: Towards efficient multi-modal LLM finetuning
This paper introduces an efficient strategy to transform Large Language Models (LLMs) into
Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a …
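The recipe named in the title, finetuning only the LayerNorm parameters of a pretrained LLM while freezing everything else, can be sketched as below; the helper name and the choice to tune every LayerNorm module (rather than only the norms inside attention blocks) are illustrative assumptions, not the paper's exact procedure.

```python
import torch.nn as nn

def enable_layernorm_tuning(model: nn.Module) -> nn.Module:
    """Freeze all parameters, then re-enable gradients only on LayerNorm.

    A minimal sketch of LayerNorm-only finetuning; which norm layers the
    paper actually tunes (e.g. only those in attention blocks) may differ,
    and models using RMSNorm would need a different isinstance check.
    """
    for param in model.parameters():
        param.requires_grad = False
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            for param in module.parameters():
                param.requires_grad = True
    return model
```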
Less is more: Fewer interpretable region via submodular subset selection
Image attribution algorithms aim to identify important regions that are highly relevant to
model decisions. Although existing attribution solutions can effectively assign importance to …
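As a rough illustration of the submodular subset selection idea in the title, a generic greedy selection loop is sketched below; `regions` and the scoring function `score_fn` are hypothetical placeholders (e.g. model confidence on an image masked to the selected regions), not the paper's formulation.

```python
import numpy as np

def greedy_region_selection(regions, score_fn, k):
    """Greedy maximization of an (assumed monotone submodular) set function:
    repeatedly add the candidate region with the largest marginal gain."""
    selected, remaining = [], list(regions)
    for _ in range(min(k, len(remaining))):
        base = score_fn(selected)
        gains = [score_fn(selected + [r]) - base for r in remaining]
        selected.append(remaining.pop(int(np.argmax(gains))))
    return selected
```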
Poisoned forgery face: Towards backdoor attacks on face forgery detection
The proliferation of face forgery techniques has raised significant concerns within society,
thereby motivating the development of face forgery detection methods. These methods aim …
Prediction with Visual Evidence: Sketch Classification Explanation via Stroke-Level Attributions
Sketch classification models have been extensively investigated by designing task-driven
deep neural networks. Despite their successful performance, few works have attempted to …
Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks
Explaining deep models in a human-understandable way has been explored by many works
that mostly explain why an input causes a corresponding prediction (i.e., Why P?). However …
Object Detectors in the Open Environment: Challenges, Solutions, and Outlook
With the emergence of foundation models, deep learning-based object detectors have
shown practical usability in closed set scenarios. However, for real-world tasks, object …
Explainable assessment of financial experts' credibility by classifying social media forecasts and checking the predictions with actual market data
S García-Méndez, F de Arriba-Pérez… - Expert Systems with …, 2024 - Elsevier
Social media include diverse interaction metrics related to user popularity, the most evident
example being the number of user followers. The latter has raised concerns about the …
Interpreting Object-level Foundation Models via Visual Precision Search
Advances in multimodal pre-training have propelled object-level foundation models, such as
Grounding DINO and Florence-2, in tasks like visual grounding and object detection …
IG2: Integrated Gradient on Iterative Gradient Path for Feature Attribution
Feature attribution explains Artificial Intelligence (AI) at the instance level by providing
importance scores of input features' contributions to model prediction. Integrated Gradients …
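For reference, IG2 builds on Integrated Gradients, whose standard form (the original method, not IG2's iterative gradient path) attributes feature i of an input x, relative to a baseline x', as

\[
\mathrm{IG}_i(x) \;=\; (x_i - x'_i)\int_0^1 \frac{\partial F\big(x' + \alpha\,(x - x')\big)}{\partial x_i}\, d\alpha ,
\]

where F is the model output being explained.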