Learning gender-neutral word embeddings

J Zhao, Y Zhou, Z Li, W Wang, KW Chang - arXiv preprint arXiv …, 2018 - arxiv.org
Word embedding models have become a fundamental component in a wide range of
Natural Language Processing (NLP) applications. However, embeddings trained on human …

From word to sense embeddings: A survey on vector representations of meaning

J Camacho-Collados, MT Pilehvar - Journal of Artificial Intelligence …, 2018 - jair.org
Over the past years, distributed semantic representations have proved to be effective and
flexible keepers of prior knowledge to be integrated into downstream applications. This …

AspEm: Embedding Learning by Aspects in Heterogeneous Information Networks

Y Shi, H Gui, Q Zhu, L Kaplan, J Han - Proceedings of the 2018 SIAM …, 2018 - SIAM
Heterogeneous information networks (HINs) are ubiquitous in real-world applications. Due
to the heterogeneity in HINs, the typed edges may not fully align with each other. In order to …

From word types to tokens and back: A survey of approaches to word meaning representation and interpretation

M Apidianaki - Computational Linguistics, 2023 - direct.mit.edu
Vector-based word representation paradigms situate lexical meaning at different levels of
abstraction. Distributional and static embedding models generate a single vector per word …

Hyperlex: A large-scale evaluation of graded lexical entailment

I Vulić, D Gerz, D Kiela, F Hill… - Computational Linguistics, 2017 - direct.mit.edu
We introduce HyperLex—a data set and evaluation resource that quantifies the extent to which
the semantic category membership, that is, the type-of relation, also known as hyponymy …

Embedding words and senses together via joint knowledge-enhanced training

M Mancini, J Camacho-Collados, I Iacobacci… - arXiv preprint arXiv …, 2016 - arxiv.org
Word embeddings are widely used in Natural Language Processing, mainly due to their
success in capturing semantic information from massive corpora. However, their creation …

Semantic parsing with semi-supervised sequential autoencoders

T Kočiský, G Melis, E Grefenstette, C Dyer… - arXiv preprint arXiv …, 2016 - arxiv.org
We present a novel semi-supervised approach for sequence transduction and apply it to
semantic parsing. The unsupervised component is based on a generative model in which …

Handling homographs in neural machine translation

F Liu, H Lu, G Neubig - arXiv preprint arXiv:1708.06510, 2017 - arxiv.org
Homographs, words with different meanings but the same surface form, have long caused
difficulty for machine translation systems, as it is difficult to select the correct translation …

Weakly supervised text classification using supervision signals from a language model

Z Zeng, W Ni, T Fang, X Li, X Zhao, Y Song - arXiv preprint arXiv …, 2022 - arxiv.org
Solving text classification in a weakly supervised manner is important for real-world
applications where human annotations are scarce. In this paper, we propose to query a …

Towards a seamless integration of word senses into downstream NLP applications

MT Pilehvar, J Camacho-Collados, R Navigli… - arXiv preprint arXiv …, 2017 - arxiv.org
Lexical ambiguity can impede NLP systems from accurately understanding semantics.
Despite its potential benefits, the integration of sense-level information into NLP systems has …