A literature review of textual hate speech detection methods and datasets

F Alkomah, X Ma - Information, 2022 - mdpi.com
Online toxic discourses could result in conflicts between groups or harm to online
communities. Hate speech is complex and multifaceted harmful or offensive content …

Regulating ChatGPT and other large generative AI models

P Hacker, A Engel, M Mauer - Proceedings of the 2023 ACM Conference …, 2023 - dl.acm.org
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are
rapidly transforming the way we communicate, illustrate, and create. However, AI regulation …

Octopack: Instruction tuning code large language models

N Muennighoff, Q Liu, A Zebaze, Q Zheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Finetuning large language models (LLMs) on instructions leads to vast performance
improvements on natural language tasks. We apply instruction tuning using code …

Are multimodal transformers robust to missing modality?

M Ma, J Ren, L Zhao, D Testuggine… - Proceedings of the …, 2022 - openaccess.thecvf.com
Multimodal data collected from the real world are often imperfect due to missing modalities.
Therefore, multimodal models that are robust against modal-incomplete data are highly …

Unsafe diffusion: On the generation of unsafe images and hateful memes from text-to-image models

Y Qu, X Shen, X He, M Backes, S Zannettou… - Proceedings of the 2023 …, 2023 - dl.acm.org
State-of-the-art Text-to-Image models like Stable Diffusion and DALL·E 2 are
revolutionizing how people generate visual content. At the same time, society has serious …

Look before you leap: An exploratory study of uncertainty measurement for large language models

Y Huang, J Song, Z Wang, H Chen, L Ma - arXiv preprint arXiv:2307.10236, 2023 - arxiv.org
The recent performance leap of Large Language Models (LLMs) opens up new
opportunities across numerous industrial applications and domains. However, erroneous …

Generative representational instruction tuning

N Muennighoff, H Su, L Wang, N Yang, F Wei… - arXiv preprint arXiv …, 2024 - arxiv.org
All text-based language problems can be reduced to either generation or embedding.
Current models only perform well at one or the other. We introduce generative …

Detecting and understanding harmful memes: A survey

S Sharma, F Alam, MS Akhtar, D Dimitrov… - arXiv preprint arXiv …, 2022 - arxiv.org
The automatic identification of harmful content online is of major concern for social media
platforms, policymakers, and society. Researchers have studied textual, visual, and audio …

Memecap: A dataset for captioning and interpreting memes

EJ Hwang, V Shwartz - arXiv preprint arXiv:2305.13703, 2023 - arxiv.org
Memes are a widely popular tool for web users to express their thoughts using visual
metaphors. Understanding memes requires recognizing and interpreting visual metaphors …

On the evolution of (hateful) memes by means of multimodal contrastive learning

Y Qu, X He, S Pierson, M Backes… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
The dissemination of hateful memes online has adverse effects on social media platforms
and the real world. Detecting hateful memes is challenging, one of the reasons being the …