On the explainability of natural language processing deep models

JE Zini, M Awad - ACM Computing Surveys, 2022 - dl.acm.org
Despite their success, deep networks are used as black-box models with outputs that are not
easily explainable during the learning and the prediction phases. This lack of interpretability …

Measuring corporate culture using machine learning

K Li, F Mai, R Shen, X Yan - The Review of Financial Studies, 2021 - academic.oup.com
We create a culture dictionary using one of the latest machine learning techniques—the
word embedding model—and 209,480 earnings call transcripts. We score the five corporate …

WiC: the word-in-context dataset for evaluating context-sensitive meaning representations

MT Pilehvar, J Camacho-Collados - arXiv preprint arXiv:1808.09121, 2018 - arxiv.org
By design, word embeddings are unable to model the dynamic nature of words' semantics, i.e., the property of words to correspond to potentially different meanings. To address this …

An overview of word and sense similarity

R Navigli, F Martelli - Natural Language Engineering, 2019 - cambridge.org
Over the last two decades, determining the similarity between words as well as between
their meanings, that is, word senses, has been proven to be of vital importance in the field of …

Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings

G Wiedemann, S Remus, A Chawla… - arXiv preprint arXiv …, 2019 - arxiv.org
Contextualized word embeddings (CWE) such as those provided by ELMo (Peters et al., 2018), Flair NLP (Akbik et al., 2018), or BERT (Devlin et al., 2019) are a major recent innovation in …

From word to sense embeddings: A survey on vector representations of meaning

J Camacho-Collados, MT Pilehvar - Journal of Artificial Intelligence …, 2018 - jair.org
Over the past years, distributed semantic representations have proved to be effective and
flexible keepers of prior knowledge to be integrated into downstream applications. This …

Improving word sense disambiguation in neural machine translation with sense embeddings

AR Gonzales, L Mascarell… - Proceedings of the Second …, 2017 - aclanthology.org
Word sense disambiguation is necessary in translation because different word senses often
have different translations. Neural machine translation models learn different senses of …

SemEval-2021 task 2: Multilingual and cross-lingual word-in-context disambiguation (MCL-WiC)

F Martelli, N Kalach, G Tola, R Navigli - Proceedings of the 15th …, 2021 - iris.uniroma1.it
In this paper, we introduce the first SemEval task on Multilingual and Cross-Lingual Word-in-Context disambiguation (MCL-WiC). This task allows the largely under-investigated inherent …

A survey on neural word embeddings

E Sezerer, S Tekir - arXiv preprint arXiv:2110.01804, 2021 - arxiv.org
Understanding human language has long been a sub-challenge on the path toward intelligent machines. The study of meaning in natural language processing (NLP) relies on the …

Rule based fuzzy computing approach on self-supervised sentiment polarity classification with word sense disambiguation in machine translation for Hindi language

S Chauhan, JP Shet, SM Beram, V Jagota… - ACM Transactions on …, 2023 - dl.acm.org
With increasing globalization, communication among people of diverse cultural
backgrounds is also taking place to a very large extent in the present era. Issues like …