What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods
A multitude of explainability methods has been proposed to help users better
understand how modern AI systems make decisions. However, most performance metrics …
Human-in-the-loop mixup
Aligning model representations to humans has been found to improve robustness and
generalization. However, such methods often focus on standard observational data …
Learning human-like representations to enable learning human values
AH Wynn - 2024 - search.proquest.com
How can we build AI systems that can learn any set of individual human values both quickly
and safely, without causing harm or violating societal standards for acceptable behavior …
Measuring representational robustness of neural networks through shared invariances
A major challenge in studying robustness in deep learning is defining the set of
“meaningless” perturbations to which a given Neural Network (NN) should be invariant. Most …
Graph-Based Similarity of Deep Neural Networks
Understanding the black-box representations within Deep Neural Networks
(DNNs) is an essential problem in the deep learning community. An initial step towards …
A Human-factors Approach for Evaluating AI-generated Images
As generative artificial intelligence (AI) becomes more common in day-to-day life,
AI-generated content (AIGC) needs to be accurate, relevant, and comprehensive. These …
Measuring Human-CLIP Alignment at Different Abstraction Levels
Measuring the human alignment of trained models is gaining traction because it is not clear
to what extent artificial image representations are proper models of the visual brain …
Improving Machine Learning Systems by Eliciting and Incorporating Additional Human Knowledge
KM Collins, U Bhatt - mlmi.eng.cam.ac.uk
Data has powered incredible advances in machine learning (ML). Yet, the kinds of data
used for training are often hard labels aggregated over humans' annotations, which fail to …