Multi-step jailbreaking privacy attacks on ChatGPT
With the rapid progress of large language models (LLMs), many downstream NLP tasks can
be solved well given appropriate prompts. Though model developers and researchers work
hard on dialog safety to prevent LLMs from generating harmful content, it remains challenging to
steer AI-generated content (AIGC) toward human good. As powerful LLMs devour
existing text data from various domains (e.g., GPT-3 is trained on 45 TB of text), it is natural to
ask whether private information is included in the training data and what privacy …