Securing large language models: Addressing bias, misinformation, and prompt attacks
Large Language Models (LLMs) demonstrate impressive capabilities across various fields,
yet their increasing use raises critical security concerns. This article reviews recent literature …
A survey on the use of large language models (LLMs) in fake news
The proliferation of fake news and fake profiles on social media platforms poses significant
threats to information integrity and societal trust. Traditional detection methods, including …
Preference tuning with human feedback on language, speech, and vision tasks: A survey
Preference tuning is a crucial process for aligning deep generative models with human
preferences. This survey offers a thorough overview of recent advancements in preference …
Catching chameleons: Detecting evolving disinformation generated using large language models
Despite recent advancements in detecting disinformation generated by large language
models (LLMs), current efforts overlook the ever-evolving nature of this disinformation. In this …
Generative monoculture in large language models
We introduce generative monoculture, a behavior observed in large language models
(LLMs) characterized by a significant narrowing of model output diversity relative to …
Model attribution in LLM-generated disinformation: A domain generalization approach with supervised contrastive learning
Model attribution for LLM-generated disinformation poses a significant challenge in
understanding its origins and mitigating its spread. This task is especially challenging …
Seeing Through AI's Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News
LLMs offer valuable capabilities, yet they can be utilized by malicious users to disseminate
deceptive information and generate fake news. The growing prevalence of LLMs poses …
Safe + Safe = Unsafe? Exploring How Safe Images Can Be Exploited to Jailbreak Large Vision-Language Models
Recent advances in Large Vision-Language Models (LVLMs) have showcased strong
reasoning abilities across multiple modalities, achieving significant breakthroughs in various …
Humanizing the Machine: Proxy Attacks to Mislead LLM Detectors
The advent of large language models (LLMs) has revolutionized the field of text generation,
producing outputs that closely mimic human-like writing. Although academic and industrial …
Cross-attention multi-perspective fusion network based fake news censorship
Most current fake news censorship models rely on a single semantic perspective,
which provides insufficient information and may introduce bias. However, news inherently …