Connecting embeddings based on multiplex relational graph attention networks for knowledge graph entity typing

Y Zhao, H Zhou, A Zhang, R Xie, Q Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Knowledge graph entity typing (KGET) aims to infer missing entity typing instances in KGs,
which is a significant subtask of KG completion. Despite its progress, however, we …

Fine-grained entity typing via label reasoning

Q Liu, H Lin, X Xiao, X Han, L Sun, H Wu - arXiv preprint arXiv:2109.05744, 2021 - arxiv.org
Conventional entity typing approaches are based on independent classification paradigms,
which makes it difficult for them to recognize inter-dependent, long-tailed, and fine-grained entity …

Example-based named entity recognition

M Ziyadi, Y Sun, A Goswami, J Huang… - arXiv preprint arXiv …, 2020 - arxiv.org
We present a novel approach to named entity recognition (NER) in the presence of scarce
data that we call example-based NER. Our train-free few-shot learning approach takes …

End-to-end distantly supervised information extraction with retrieval augmentation

Y Zhang, H Fei, P Li - Proceedings of the 45th International ACM SIGIR …, 2022 - dl.acm.org
Distant supervision (DS) has been a prevalent approach to generating labeled data for
information extraction (IE) tasks. However, DS often suffers from noisy label problems, where …

Generative entity typing with curriculum learning

S Yuan, D Yang, J Liang, Z Li, J Liu, J Huang… - arXiv preprint arXiv …, 2022 - arxiv.org
Entity typing aims to assign types to the entity mentions in given texts. The traditional
classification-based entity typing paradigm has two significant drawbacks: 1) it fails to …

Divide and denoise: Learning from noisy labels in fine-grained entity typing with cluster-wise loss correction

K Pang, H Zhang, J Zhou, T Wang - … of the 60th Annual Meeting of …, 2022 - aclanthology.org
Fine-grained Entity Typing (FET) has made great progress based on distant
supervision but still suffers from label noise. Existing FET noise learning methods rely on …

Multi-hop question answering under temporal knowledge editing

K Cheng, G Lin, H Fei, L Yu, MA Ali, L Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant
attention in the era of large language models. However, existing models for MQA under KE …

Learning from sibling mentions with scalable graph inference in fine-grained entity typing

Y Chen, J Cheng, H Jiang, L Liu, H Zhang… - Proceedings of the …, 2022 - aclanthology.org
In this paper, we first empirically find that existing models struggle to handle hard mentions
due to their insufficient contexts, which consequently limits their overall typing performance …

Dialectical alignment: Resolving the tension of 3H and security threats of LLMs

S Yang, J Su, H Jiang, M Li, K Cheng, MA Ali… - arXiv preprint arXiv …, 2024 - arxiv.org
With the rise of large language models (LLMs), ensuring they embody the principles of being
helpful, honest, and harmless (3H), known as Human Alignment, becomes crucial. While …

Prompt-SAW: Leveraging relation-aware graphs for textual prompt compression

MA Ali, Z Li, S Yang, K Cheng, Y Cao, T Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have shown exceptional abilities across a wide range of
natural language processing tasks. While prompting is a crucial tool for LLM inference, we …