Towards a common understanding of contributing factors for cross-lingual transfer in multilingual language models: A review
F Philippy, S Guo, S Haddadan - arXiv preprint arXiv:2305.16768, 2023 - arxiv.org
In recent years, pre-trained Multilingual Language Models (MLLMs) have shown a strong
ability to transfer knowledge across different languages. However, given that the aspiration …
Unsupervised layer-wise score aggregation for textual OOD detection
Out-of-distribution (OOD) detection is a rapidly growing field due to new robustness
and security requirements driven by an increased number of AI-based systems. Existing …
The role of typological feature prediction in NLP and linguistics
J Bjerva - Computational Linguistics, 2023 - direct.mit.edu
Computational typology has gained traction in the field of Natural Language Processing
(NLP) in recent years, as evidenced by the increasing number of papers on the topic and the …
Multi task learning for zero shot performance prediction of multilingual models
Massively Multilingual Transformer based Language Models have been observed to be
surprisingly effective on zero-shot transfer across languages, though the performance varies …
On the calibration of massively multilingual language models
Massively Multilingual Language Models (MMLMs) have recently gained popularity due to
their surprising effectiveness in cross-lingual transfer. While there has been much work in …
Beyond static models and test sets: Benchmarking the potential of pre-trained models across tasks and languages
Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR
support around 100 languages, most existing multilingual NLP benchmarks provide …
LITMUS Predictor: An AI assistant for building reliable, high-performing and fair multilingual NLP systems
Pre-trained multilingual language models are gaining popularity due to their cross-lingual
zero-shot transfer ability, but these models do not perform equally well in all languages …
Rethinking Machine Learning Benchmarks in the Context of Professional Codes of Conduct
Benchmarking efforts for machine learning have often mimicked (or even explicitly used)
professional licensing exams to assess capabilities in a given area, focusing primarily on …
Predicting fine-tuning performance with probing
Large NLP models have recently shown impressive performance in language
understanding tasks, typically evaluated by their fine-tuned performance. Alternatively …
Breaking the Language Barrier: Can Direct Inference Outperform Pre-Translation in Multilingual LLM Applications?
Large language models hold significant promise in multilingual applications. However,
inherent biases stemming from predominantly English-centric pre-training have led to the …