Evaluating the social impact of generative AI systems in systems and society
Generative AI systems across modalities, spanning text, image, audio, and video, have
broad social impacts, but there exists no official standard for means of evaluating those …
Efficient methods for natural language processing: A survey
Recent work in natural language processing (NLP) has yielded appealing results from
scaling model parameters and training data; however, using only scale to improve …
Aya model: An instruction finetuned open-access multilingual language model
Recent breakthroughs in large language models (LLMs) have centered around a handful of
data-rich languages. What does it take to broaden access to breakthroughs beyond first …
Model compression in practice: Lessons learned from practitioners creating on-device machine learning experiences
On-device machine learning (ML) promises to improve the privacy, responsiveness, and
proliferation of new, intelligent user experiences by moving ML computation onto everyday …
How does quantization affect multilingual LLMs?
Quantization techniques are widely used to improve inference speed and deployment of
large language models. While a wide body of work examines the impact of quantization on …
Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of a Multilingual ASR Model
Neural network pruning offers an effective method for compressing a multilingual automatic
speech recognition (ASR) model with minimal performance loss. However, it entails several …
You Never Know: Quantization Induces Inconsistent Biases in Vision-Language Foundation Models
We study the impact of a standard practice in compressing foundation vision-language
models, quantization, on the models' ability to produce socially-fair outputs. In contrast to …
Beyond top line metrics: understanding the trade-off between model size and generalization properties
S Hooker - 2024 - papyrus.bib.umontreal.ca
In this thesis, the constituent works ask “What is gained or lost as we vary the number of
parameters?”. This question is increasingly relevant in an era of scientific inquiry where …