Large-scale text-to-image generation models for visual artists' creative works

HK Ko, G Park, H Jeon, J Jo, J Kim, J Seo - Proceedings of the 28th …, 2023 - dl.acm.org
Large-scale Text-to-image Generation Models (LTGMs) (e.g., DALL-E), self-supervised deep
learning models trained on a huge dataset, have demonstrated the capacity for generating …

Generative AI and human–robot interaction: implications and future agenda for business, society and ethics

B Obrenovic, X Gu, G Wang, D Godinic, I Jakhongirov - AI & SOCIETY, 2024 - Springer
The revolution of artificial intelligence (AI), particularly generative AI, and its implications for
human–robot interaction (HRI) opened up the debate on crucial regulatory, business …

Investigating explainability of generative AI for code through scenario-based design

J Sun, QV Liao, M Muller, M Agarwal, S Houde… - Proceedings of the 27th …, 2022 - dl.acm.org
What does it mean for a generative AI model to be explainable? The emergent discipline of
explainable AI (XAI) has made great strides in helping people understand discriminative …

Cells, generators, and lenses: Design framework for object-oriented interaction with large language models

TS Kim, Y Lee, M Chang, J Kim - Proceedings of the 36th Annual ACM …, 2023 - dl.acm.org
Large Language Models (LLMs) have become the backbone of numerous writing interfaces
with the goal of supporting end-users across diverse writing tasks. While LLMs reduce the …

Better together? An evaluation of AI-supported code translation

JD Weisz, M Muller, SI Ross, F Martinez… - Proceedings of the 27th …, 2022 - dl.acm.org
Generative machine learning models have recently been applied to source code, for use
cases including translating code between programming languages, creating documentation …

Towards human-centered explainable AI: A survey of user studies for model explanations

Y Rong, T Leemann, TT Nguyen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A
better understanding of the needs of XAI users, as well as human-centered evaluations of …

On selective, mutable and dialogic XAI: A review of what users say about different types of interactive explanations

A Bertrand, T Viard, R Belloum, JR Eagan… - Proceedings of the …, 2023 - dl.acm.org
Explainability (XAI) has matured in recent years to provide more human-centered
explanations of AI-based decision systems. While static explanations remain predominant …

Being trustworthy is not enough: How untrustworthy artificial intelligence (AI) can deceive the end-users and gain their trust

N Banovic, Z Yang, A Ramesh, A Liu - … of the ACM on Human-Computer …, 2023 - dl.acm.org
Trustworthy Artificial Intelligence (AI) is characterized, among other things, by: 1)
competence, 2) transparency, and 3) fairness. However, end-users may fail to recognize …

Exploring evaluation methods for interpretable machine learning: A survey

N Alangari, M El Bachir Menai, H Mathkour… - Information, 2023 - mdpi.com
In recent times, the progress of machine learning has facilitated the development of decision
support systems that exhibit predictive accuracy, surpassing human capabilities in certain …

An HCI-Centric Survey and Taxonomy of Human-Generative-AI Interactions

J Shi, R Jain, H Doh, R Suzuki, K Ramani - arXiv preprint arXiv …, 2023 - arxiv.org
Generative AI (GenAI) has shown remarkable capabilities in generating diverse and realistic
content across different formats like images, videos, and text. In Generative AI, human …