Large language models (LLMs) and the institutionalization of misinformation

M Garry, WM Chan, J Foster, LA Henkel - Trends in cognitive sciences, 2024 - cell.com
Large language models (LLMs), such as ChatGPT, flood the Internet with true and false
information, crafted and delivered with techniques that psychological science suggests will …

Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database

CP Walker, DS Schiff, KJ Schiff - … of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
This article presents the Political Deepfakes Incidents Database (PDID), a collection of
politically-salient deepfakes, encompassing synthetically-created videos, images, and less …

Health Communication in an Era of Disinformation: Perceived Source Credibility Among Transgender and Gender Diverse Individuals

E Ciszek, G Dermid, M Shah, R Mocarski… - Journal of Health …, 2024 - Taylor & Francis
This study examines perceived source credibility of health information in a moment of TGD
health disinformation. Through thematic analysis of in-depth interviews with 30 transgender …

Societal Adaptation to Advanced AI

J Bernardi, G Mukobi, H Greaves, L Heim… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing strategies for managing risks from advanced AI systems often focus on affecting
what AI systems are developed and how they diffuse. However, this approach becomes less …

Looks real, feels fake: conflict detection in deepfake videos

EM Janssen, YF Mutis, T van Gog - Thinking & Reasoning, 2024 - Taylor & Francis
We investigated whether people show signs of conflict detection in both more implicit and
explicit judgments about the authenticity of short video clips depicting interviews with famous …

Deepfake Labels Restore Reality, Especially for Those Who Dislike the Speaker

NL Tenhundfeld, R Weber, WI MacKenzie… - arXiv preprint arXiv …, 2024 - arxiv.org
Deepfake videos create dangerous possibilities for public misinformation. In this experiment
(N = 204), we investigated whether labeling videos as containing actual or deepfake …

AI Safety and Security

M Rahaman, P Pappachan, SM Orozco… - Challenges in Large …, 2024 - igi-global.com
The chapter “AI Safety and Security” presents a comprehensive and multi-dimensional
exploration, addressing the critical aspects of safety and security in the context of large …

[PDF] Indirect Influence: How Elite Attacks on Information Providers Affect Public Opinion Formation

E Peterson, AMN Archer, K Bhakta, S Izumisawa - 2024 - files.osf.io
Attacks from politicians can dramatically reduce public trust in sources of expertise and
information. We consider the consequences of such criticism and argue that by shaping …

"Trustworthy AI" Cannot Be Trusted: A Virtue Jurisprudence-Based Approach to Analyse Who Is Responsible for AI Errors

S Zhou - Int'l JL Ethics Tech., 2024 - HeinOnline
Erroneous results generated by artificial intelligence (AI) have opened up new questions of
who is responsible for AI errors in legal scholarship. I support the prevailing academic view …

Reading Medieval Literary Time, Emotion, and Intimacy: On Affective Temporality in an Era of Asynchronicity and Artificial Intelligence

DD Atherton - 2024 - search.proquest.com
This dissertation performs sustained readings of five different medieval texts across four
chapters by grounding its readings via three primary words: time, emotion, and intimacy. The …