Evaluating the social impact of generative AI systems in systems and society

I Solaiman, Z Talat, W Agnew, L Ahmad… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative AI systems across modalities, ranging from text to image, audio, and video, have
broad social impacts, but there exists no official standard for means of evaluating those …

Efficient methods for natural language processing: A survey

M Treviso, JU Lee, T Ji, B Aken, Q Cao… - Transactions of the …, 2023 - direct.mit.edu
Recent work in natural language processing (NLP) has yielded appealing results from
scaling model parameters and training data; however, using only scale to improve …

Aya model: An instruction finetuned open-access multilingual language model

A Üstün, V Aryabumi, ZX Yong, WY Ko… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent breakthroughs in large language models (LLMs) have centered around a handful of
data-rich languages. What does it take to broaden access to breakthroughs beyond first …

Model compression in practice: Lessons learned from practitioners creating on-device machine learning experiences

F Hohman, MB Kery, D Ren, D Moritz - … of the CHI Conference on Human …, 2024 - dl.acm.org
On-device machine learning (ML) promises to improve the privacy, responsiveness, and
proliferation of new, intelligent user experiences by moving ML computation onto everyday …

How does quantization affect multilingual LLMs?

K Marchisio, S Dash, H Chen, D Aumiller… - arXiv preprint arXiv …, 2024 - arxiv.org
Quantization techniques are widely used to improve inference speed and deployment of
large language models. While a wide body of work examines the impact of quantization on …

Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of a Multilingual ASR Model

J Xie, K Li, J Guo, A Tjandra… - ICASSP 2024-2024 …, 2024 - ieeexplore.ieee.org
Neural network pruning offers an effective method for compressing a multilingual automatic
speech recognition (ASR) model with minimal performance loss. However, it entails several …

You Never Know: Quantization Induces Inconsistent Biases in Vision-Language Foundation Models

E Slyman, A Kanneganti, S Hong, S Lee - arXiv preprint arXiv:2410.20265, 2024 - arxiv.org
We study the impact of a standard practice in compressing foundation vision-language
models (quantization) on the models' ability to produce socially fair outputs. In contrast to …

Beyond top line metrics: understanding the trade-off between model size and generalization properties

S Hooker - 2024 - papyrus.bib.umontreal.ca
In this thesis, the constituent works ask “What is gained or lost as we vary the number of
parameters?”. This question is increasingly relevant in an era of scientific inquiry where …