Connecting embeddings based on multiplex relational graph attention networks for knowledge graph entity typing
Knowledge graph entity typing (KGET) aims to infer missing entity typing instances in KGs,
which is a significant subtask of KG completion. Despite this progress, however, we …
Fine-grained entity typing via label reasoning
Conventional entity typing approaches are based on independent classification paradigms,
which makes it difficult for them to recognize inter-dependent, long-tailed and fine-grained entity …
Example-based named entity recognition
M Ziyadi, Y Sun, A Goswami, J Huang… - arXiv preprint arXiv …, 2020 - arxiv.org
We present a novel approach to named entity recognition (NER) in the presence of scarce
data that we call example-based NER. Our train-free few-shot learning approach takes …
End-to-end distantly supervised information extraction with retrieval augmentation
Distant supervision (DS) has been a prevalent approach to generating labeled data for
information extraction (IE) tasks. However, DS often suffers from noisy label problems, where …
Generative entity typing with curriculum learning
Entity typing aims to assign types to the entity mentions in given texts. The traditional
classification-based entity typing paradigm has two non-negligible drawbacks: 1) it fails to …
Divide and denoise: Learning from noisy labels in fine-grained entity typing with cluster-wise loss correction
K Pang, H Zhang, J Zhou, T Wang - … of the 60th Annual Meeting of …, 2022 - aclanthology.org
Abstract Fine-grained Entity Typing (FET) has made great progress based on distant
supervision but still suffers from label noise. Existing FET noise learning methods rely on …
Multi-hop question answering under temporal knowledge editing
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant
attention in the era of large language models. However, existing models for MQA under KE …
Learning from sibling mentions with scalable graph inference in fine-grained entity typing
In this paper, we first empirically find that existing models struggle to handle hard mentions
due to their insufficient contexts, which consequently limits their overall typing performance …
Dialectical alignment: Resolving the tension of 3H and security threats of LLMs
With the rise of large language models (LLMs), ensuring they embody the principles of being
helpful, honest, and harmless (3H), known as Human Alignment, becomes crucial. While …
Prompt-saw: Leveraging relation-aware graphs for textual prompt compression
Large language models (LLMs) have shown exceptional abilities for multiple different
natural language processing tasks. While prompting is a crucial tool for LLM inference, we …