Conversation disentanglement with bi-level contrastive learning

C Huang, Z Zhang, H Fei, L Liao - arXiv preprint arXiv:2210.15265, 2022 - arxiv.org
Conversation disentanglement aims to group utterances into detached sessions, which is a fundamental task in processing multi-party conversations. Existing methods have two main drawbacks. First, they overemphasize pairwise utterance relations but pay inadequate attention to modeling utterance-to-context relations. Second, a huge amount of human-annotated data is required for training, which is expensive to obtain in practice. To address these issues, we propose a general disentanglement model based on bi-level contrastive learning. It brings utterances in the same session closer together while encouraging each utterance to be near its clustered session prototypes in the representation space. Unlike existing approaches, our disentanglement model works both in the supervised setting with labeled data and in the unsupervised setting when no such data is available. The proposed method achieves new state-of-the-art performance in both settings across several public datasets.
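To make the "bi-level" idea concrete, here is a minimal sketch of what such an objective could look like: an InfoNCE-style utterance-level term that pulls utterances of the same session together, plus a session-level term that pulls each utterance toward its session prototype (here simply the mean embedding of its session). The function name, the temperature `tau`, mean-pooled prototypes, and the equal weighting of the two terms are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code) of a bi-level contrastive loss.
import torch
import torch.nn.functional as F

def bi_level_contrastive_loss(emb, session_ids, tau=0.1):
    """emb: (N, d) utterance embeddings; session_ids: (N,) session label per utterance."""
    emb = F.normalize(emb, dim=-1)
    n = emb.size(0)
    sim = emb @ emb.t() / tau                                  # pairwise similarities
    eye = torch.eye(n, dtype=torch.bool, device=emb.device)
    same = session_ids.unsqueeze(0) == session_ids.unsqueeze(1)
    pos_mask = same & ~eye                                     # positives: same session, not self

    # Utterance-level term: for each anchor, average InfoNCE log-probability over its positives.
    logits = sim.masked_fill(eye, float('-inf'))               # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    has_pos = pos_mask.any(dim=1)                              # skip anchors with no positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    utt_loss = (-pos_log_prob[has_pos] / pos_mask.sum(1)[has_pos]).mean()

    # Session-level term: contrast each utterance against all session prototypes.
    sessions, inverse = session_ids.unique(return_inverse=True)
    protos = torch.stack([emb[session_ids == s].mean(0) for s in sessions])
    protos = F.normalize(protos, dim=-1)
    proto_logits = emb @ protos.t() / tau
    proto_loss = F.cross_entropy(proto_logits, inverse)        # target = own session's prototype

    return utt_loss + proto_loss
```

In an unsupervised setting, `session_ids` would come from a clustering step over the current embeddings rather than from gold annotations; the loss itself is agnostic to where the assignments originate.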