What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods

J Colin, T Fel, R Cadène… - Advances in neural …, 2022 - proceedings.neurips.cc
A multitude of explainability methods has been described to try to help users better
understand how modern AI systems make decisions. However, most performance metrics …

Human-in-the-loop mixup

KM Collins, U Bhatt, W Liu, V Piratla… - Uncertainty in …, 2023 - proceedings.mlr.press
Aligning model representations to humans has been found to improve robustness and
generalization. However, such methods often focus on standard observational data …

Learning human-like representations to enable learning human values

AH Wynn - 2024 - search.proquest.com
How can we build AI systems that can learn any set of individual human values both quickly
and safely, avoiding causing harm or violating societal standards for acceptable behavior …

Measuring representational robustness of neural networks through shared invariances

V Nanda, T Speicher, C Kolling… - International …, 2022 - proceedings.mlr.press
A major challenge in studying robustness in deep learning is defining the set of
“meaningless” perturbations to which a given Neural Network (NN) should be invariant. Most …

Graph-Based Similarity of Deep Neural Networks

Z Chen, Y Lu, JX Hu, Q Xuan, Z Wang, X Yang - Neurocomputing, 2025 - Elsevier
Understanding the enigmatic black-box representations within Deep Neural Networks
(DNNs) is an essential problem in the community of deep learning. An initial step towards …

A Human-factors Approach for Evaluating AI-generated Images

K Combs, TJ Bihl, A Gadre… - Proceedings of the 2024 …, 2024 - dl.acm.org
As generative artificial intelligence (AI) becomes more common in day-to-day life, AI-
generated content (AIGC) needs to be accurate, relevant, and comprehensive. These …

Measuring Human-CLIP Alignment at Different Abstraction Levels

P Hernández-Cámara, J Vila-Tomás, J Malo… - ICLR 2024 Workshop on … - openreview.net
Measuring the human alignment of trained models is gaining traction because it is not clear
to which extent artificial image representations are proper models of the visual brain …

Improving Machine Learning Systems by Eliciting and Incorporating Additional Human Knowledge

KM Collins, U Bhatt - mlmi.eng.cam.ac.uk
Data has powered incredible advances in machine learning (ML). Yet, the kinds of data
used for training are often hard labels aggregated over humans' annotations, which fail to …