What does it mean for a language model to preserve privacy?
Natural language reflects our private lives and identities, making its privacy concerns as
broad as those of real life. Language models lack the ability to understand the context and …
Beyond rating scales: With targeted evaluation, language models are poised for psychological assessment
In this narrative review, we survey recent empirical evaluations of AI-based language
assessments and present a case for the technology of large language models to be poised …
The Text Anonymization Benchmark (TAB): A dedicated corpus and evaluation framework for text anonymization
We present a novel benchmark and associated evaluation metrics for assessing the
performance of text anonymization methods. Text anonymization, defined as the task of …
Identifying and mitigating privacy risks stemming from language models: A survey
V Smith, AS Shamsabadi, C Ashurst… - arXiv preprint arXiv …, 2023 - arxiv.org
Rapid advancements in language models (LMs) have led to their adoption across many
sectors. Alongside the potential benefits, such models present a range of risks, including …
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
The widespread use of Large Language Model (LLM)-based conversational agents (CAs),
especially in high-stakes domains, raises many privacy concerns. Building ethical LLM …
Preserving privacy through dememorization: An unlearning technique for mitigating memorization risks in language models
Large language models (LLMs) are trained on vast amounts of data, including
sensitive information that poses a risk to personal privacy if exposed. LLMs have shown the …
Man vs the machine in the struggle for effective text anonymisation in the age of large language models
C Patsakis, N Lykousas - Scientific Reports, 2023 - nature.com
The collection and use of personal data are becoming more common in today's data-driven
culture. While there are many advantages to this, including better decision-making and …
Grandma Karl is 27 years old – research agenda for pseudonymization of research data
E Volodina, S Dobnik… - 2023 IEEE Ninth …, 2023 - ieeexplore.ieee.org
Accessibility of research data is critical for advances in many research fields, but textual data
often cannot be shared due to the personal and sensitive information which it contains, e.g. …
Learning to unlearn: Instance-wise unlearning for pre-trained classifiers
Since the recent advent of regulations for data protection (e.g., the General Data Protection
Regulation), there has been increasing demand for deleting information learned from …
On text-based personality computing: Challenges and future directions
Text-based personality computing (TPC) has gained much research interest in NLP. In this
paper, we describe 15 challenges that we consider deserving of the attention of the NLP …