A review of modern recommender systems using generative models (gen-recsys)

Y Deldjoo, Z He, J McAuley, A Korikov… - Proceedings of the 30th …, 2024 - dl.acm.org
Traditional recommender systems typically use user-item rating histories as their main data
source. However, deep generative models now have the capability to model and sample …

The ai risk repository: A comprehensive meta-review, database, and taxonomy of risks from artificial intelligence

P Slattery, AK Saeri, EAC Grundy, J Graham… - arXiv preprint arXiv …, 2024 - arxiv.org
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics,
auditors, policymakers, AI companies, and the public. However, a lack of shared …

International Scientific Report on the Safety of Advanced AI (Interim Report)

Y Bengio, S Mindermann, D Privitera… - arXiv preprint arXiv …, 2024 - arxiv.org
This is the interim publication of the first International Scientific Report on the Safety of
Advanced AI. The report synthesises the scientific understanding of general-purpose AI--AI …

Towards responsible development of generative AI for education: An evaluation-driven approach

I Jurenka, M Kunesch, KR McKee, D Gillick… - arXiv preprint arXiv …, 2024 - arxiv.org
A major challenge facing the world is the provision of equitable and universal access to
quality education. Recent advances in generative AI (gen AI) have created excitement about …

Can Editing LLMs Inject Harm?

C Chen, B Huang, Z Li, Z Chen, S Lai, X Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing has been increasingly adopted to correct the false or outdated
knowledge in Large Language Models (LLMs). Meanwhile, one critical but under-explored …

Advanced AI assistants that act on our behalf may not be ethically or legally feasible

S Milano, S Nyholm - Nature Machine Intelligence, 2024 - nature.com
Google and OpenAI have recently announced major product launches involving artificial
intelligence (AI) agents based on large language models (LLMs) and other generative …

'No, Alexa, no!': designing child-safe AI and protecting children from the risks of the 'empathy gap' in large language models

N Kurian - Learning, Media and Technology, 2024 - Taylor & Francis
Rapid advancements in large language models make child-safe design for their youngest
users crucial. This article therefore offers child-centred AI design and policy …

Operationalizing contextual integrity in privacy-conscious assistants

S Ghalebikesabi, E Bagdasaryan, R Yi, I Yona… - arXiv preprint arXiv …, 2024 - arxiv.org
Advanced AI assistants combine frontier LLMs and tool access to autonomously perform
complex tasks on behalf of users. While the helpfulness of such assistants can increase …

Frontier AI developers need an internal audit function

J Schuett - Risk Analysis, 2024 - Wiley Online Library
This article argues that frontier artificial intelligence (AI) developers need an internal audit
function. First, it describes the role of internal audit in corporate governance: internal audit …

Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits

T Chakrabarty, P Laban, CS Wu - arXiv preprint arXiv:2409.14509, 2024 - arxiv.org
LLM-based applications are helping people write, and LLM-generated text is making its way
into social media, journalism, and our classrooms. However, the differences between LLM …