Self-conditioned embedding diffusion for text generation

R Strudel, C Tallec, F Altché, Y Du, Y Ganin… - arXiv preprint arXiv …, 2022 - arxiv.org
Can continuous diffusion models bring the same performance breakthrough to natural
language that they did for image generation? To circumvent the discrete nature of text data,
we can simply project tokens into a continuous embedding space, as is standard in language
modeling. We propose Self-conditioned Embedding Diffusion, a continuous diffusion
mechanism that operates on token embeddings and allows learning flexible and scalable
diffusion models for both conditional and unconditional text generation. Through qualitative …
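The abstract's mechanism can be sketched in a few lines: discrete tokens are projected into a continuous embedding space, Gaussian noise is added by the forward diffusion process, and the denoiser is "self-conditioned" by also receiving its own previous clean-embedding estimate as input. This is an illustrative sketch, not the authors' code; the embedding table, the linear noise schedule, and the toy `model` callable are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 100, 16
# Hypothetical embedding table; in the paper this is learned, as in
# standard language modeling.
embedding = rng.normal(size=(VOCAB, DIM))

def embed(token_ids):
    """Project discrete tokens into continuous embedding space."""
    return embedding[np.asarray(token_ids)]

def noise(x0, t, T=1000):
    """Forward diffusion: blend clean embeddings with Gaussian noise.
    A simple linear schedule is assumed here; real schedules vary."""
    alpha = 1.0 - t / T
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * eps

def denoise_step(x_t, x_prev_estimate, model):
    """Self-conditioning: the denoiser receives its own previous
    estimate of the clean embeddings as an extra input."""
    return model(np.concatenate([x_t, x_prev_estimate], axis=-1))

x0 = embed([3, 17, 42])                      # (3, DIM) clean embeddings
x_t = noise(x0, t=500)                       # noised, same shape
prev = np.zeros_like(x0)                     # first step: no estimate yet
# Toy denoiser (placeholder): just slices the concatenated input back
# to embedding width; a real model would be a trained network.
x0_hat = denoise_step(x_t, prev, lambda z: z[:, :DIM])
print(x0_hat.shape)                          # (3, 16)
```

At generation time this denoising step would be iterated from pure noise down to t = 0, after which the final embeddings are mapped back to the nearest tokens in the vocabulary.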

[Cite] R Strudel, C Tallec, F Altché, Y Du, Y Ganin, A Mensch, et al. Self-conditioned embedding diffusion for text generation. arXiv preprint arXiv:2211.04236, 2022.