On the explainability of natural language processing deep models
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …
Measuring corporate culture using machine learning
We create a culture dictionary using one of the latest machine learning techniques—the
word embedding model—and 209,480 earnings call transcripts. We score the five corporate …
WiC: the word-in-context dataset for evaluating context-sensitive meaning representations
MT Pilehvar, J Camacho-Collados - arXiv preprint arXiv:1808.09121, 2018 - arxiv.org
By design, word embeddings are unable to model the dynamic nature of words' semantics,
i.e., the property of words to correspond to potentially different meanings. To address this …
An overview of word and sense similarity
R Navigli, F Martelli - Natural Language Engineering, 2019 - cambridge.org
Over the last two decades, determining the similarity between words as well as between
their meanings, that is, word senses, has been proven to be of vital importance in the field of …
Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings
Contextualized word embeddings (CWE) such as provided by ELMo (Peters et al., 2018),
Flair NLP (Akbik et al., 2018), or BERT (Devlin et al., 2019) are a major recent innovation in …
From word to sense embeddings: A survey on vector representations of meaning
J Camacho-Collados, MT Pilehvar - Journal of Artificial Intelligence …, 2018 - jair.org
Over the past years, distributed semantic representations have proved to be effective and
flexible keepers of prior knowledge to be integrated into downstream applications. This …
Improving word sense disambiguation in neural machine translation with sense embeddings
AR Gonzales, L Mascarell… - Proceedings of the Second …, 2017 - aclanthology.org
Word sense disambiguation is necessary in translation because different word senses often
have different translations. Neural machine translation models learn different senses of …
SemEval-2021 task 2: Multilingual and cross-lingual word-in-context disambiguation (MCL-WiC)
F Martelli, N Kalach, G Tola, R Navigli - Proceedings of the 15th …, 2021 - iris.uniroma1.it
In this paper, we introduce the first SemEval task on Multilingual and Cross-Lingual Word-in-Context disambiguation (MCL-WiC). This task allows the largely under-investigated inherent …
Rule based fuzzy computing approach on self-supervised sentiment polarity classification with word sense disambiguation in machine translation for Hindi language
With increasing globalization, communication among people of diverse cultural
backgrounds is also taking place to a very large extent in the present era. Issues like …