Named entity extraction for knowledge graphs: A literature overview

T Al-Moslmi, MG Ocaña, AL Opdahl, C Veres - IEEE Access, 2020 - ieeexplore.ieee.org
An enormous amount of digital information is expressed as natural-language (NL) text that is
not easily processable by computers. Knowledge Graphs (KG) offer a widely used format for …

Choosing transfer languages for cross-lingual learning

YH Lin, CY Chen, J Lee, Z Li, Y Zhang, M Xia… - arXiv preprint arXiv …, 2019 - arxiv.org
Cross-lingual transfer, where a high-resource transfer language is used to improve the
accuracy of a low-resource task language, is now an invaluable tool for improving …

Neural entity linking: A survey of models based on deep learning

Ö Sevgili, A Shelmanov, M Arkhipov… - Semantic …, 2022 - content.iospress.com
This survey presents a comprehensive description of recent neural entity linking (EL)
systems developed since 2015 as a result of the “deep learning revolution” in natural …

Entity linking in 100 languages

JA Botha, Z Shan, D Gillick - arXiv preprint arXiv:2011.02690, 2020 - arxiv.org
We propose a new formulation for multilingual entity linking, where language-specific
mentions resolve to a language-agnostic Knowledge Base. We train a dual encoder in this …

What can knowledge bring to machine learning?—a survey of low-shot learning for structured data

Y Hu, A Chapman, G Wen, DW Hall - ACM Transactions on Intelligent …, 2022 - dl.acm.org
Supervised machine learning has several drawbacks that make it difficult to use in many
situations. These include heavy reliance on massive training data, limited …

Re-examining the Role of Schema Linking in Text-to-SQL

W Lei, W Wang, Z Ma, T Gan, W Lu… - Proceedings of the …, 2020 - aclanthology.org
In existing sophisticated text-to-SQL models, schema linking is often treated as a simple,
minor component, belying its importance. By providing a schema linking corpus based on …

When being unseen from mBERT is just the beginning: Handling new languages with multilingual language models

B Muller, A Anastasopoulos, B Sagot… - arXiv preprint arXiv …, 2020 - arxiv.org
Transfer learning based on pretraining language models on large amounts of raw data has
become the new norm for reaching state-of-the-art performance in NLP. Still, it remains unclear …

How linguistically fair are multilingual pre-trained language models?

M Choudhury, A Deshpande - Proceedings of the AAAI conference on …, 2021 - ojs.aaai.org
Massively multilingual pre-trained language models, such as mBERT and XLM-RoBERTa,
have received significant attention in the recent NLP literature for their excellent capability …

Cross-lingual few-shot learning on unseen languages

G Winata, S Wu, M Kulkarni, T Solorio… - Proceedings of the …, 2022 - aclanthology.org
Large pre-trained language models (LMs) have demonstrated the ability to obtain good
performance on downstream tasks with limited examples in cross-lingual settings. However …

Logic-guided semantic representation learning for zero-shot relation classification

J Li, R Wang, N Zhang, W Zhang, F Yang… - arXiv preprint arXiv …, 2020 - arxiv.org
Relation classification aims to extract semantic relations between entity pairs from
sentences. However, most existing methods can only identify seen relation classes that …