DABERT: Dual Attention Enhanced BERT for Semantic Matching

S Wang, D Liang, J Song, Y Li, W Wu - arXiv preprint arXiv:2210.03454, 2022 - arxiv.org
Transformer-based pre-trained language models such as BERT have achieved remarkable
results in Semantic Sentence Matching. However, existing models still suffer from insufficient …

Dual path modeling for semantic matching by perceiving subtle conflicts

C Xue, D Liang, S Wang, J Zhang… - ICASSP 2023-2023 …, 2023 - ieeexplore.ieee.org
Transformer-based pre-trained models have achieved great improvements in semantic
matching. However, existing models still suffer from insufficient ability to capture subtle …

Comateformer: Combined Attention Transformer for Semantic Sentence Matching

B Li, D Liang, Z Zhang - arXiv preprint arXiv:2412.07220, 2024 - arxiv.org
Transformer-based models have made significant strides in semantic matching tasks by
capturing connections between phrase pairs. However, to assess the relevance of sentence …

Local and Global: Text Matching Via Syntax Graph Calibration

L Li, Q Liao, M Lai, D Liang… - ICASSP 2024-2024 IEEE …, 2024 - ieeexplore.ieee.org
Pre-trained models such as BERT have achieved remarkable results in text matching tasks.
However, existing models still suffer from the challenge of capturing local subtle differences …

Semantic Similarity Matching Using Contextualized Representations

F Farahnak, E Mohammadi, MR Davari… - Canadian AI, 2021 - assets.pubpub.org
Different approaches to semantic similarity matching generally fall into one of two
categories: interaction-based and representation-based models. While each approach …

CSSAM: Code Search via Attention Matching of Code Semantics and Structures

Y Hu, B Cai, Y Yu - arXiv preprint arXiv:2208.03922, 2022 - arxiv.org
Despite continuous efforts to improve both the effectiveness and efficiency of code
search, two issues remain unsolved. First, programming languages have inherent strong …

Classical Sequence Match is a Competitive Few-Shot One-Class Learner

M Hu, H Gao, Y Bai, M Liu - arXiv preprint arXiv:2209.06394, 2022 - arxiv.org
Nowadays, transformer-based models have gradually become the default choice for artificial
intelligence pioneers. These models show superiority even in few-shot scenarios. In …