Chunking Defense for Adversarial Attacks on ASR

Y Shao, J Villalba, S Joshi, S Kataria… - Proc. Interspeech 2022 - par.nsf.gov
Abstract
While deep learning has led to dramatic improvements in automatic speech recognition (ASR) systems in the past few years, it has also made them vulnerable to adversarial attacks. These attacks may be designed to either make ASR fail in producing the correct transcription or, worse, output an adversary-chosen sentence. In this work, we propose a defense based on independently processing random or fixed-size chunks of the speech input in the hope of “containing” the cumulative effect of the adversarial perturbations. This approach does not require any additional training of the ASR system, or any defensive preprocessing of the input. It can be easily applied to any ASR system with little loss in performance under benign conditions, while improving adversarial robustness. We perform experiments on the LibriSpeech dataset with different adversarial attack budgets, and show that the proposed defense achieves consistent improvement on two different ASR systems/models.
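As a rough illustration of the idea described in the abstract (not the authors' actual implementation, which is in the paper itself), the sketch below splits a waveform into fixed- or random-size chunks, decodes each chunk independently with an arbitrary ASR callable, and concatenates the partial transcripts. The names `chunked_transcribe` and `transcribe`, the chunk lengths, and the boundary handling are all illustrative assumptions.

```python
import numpy as np


def chunked_transcribe(waveform, transcribe, sr=16000, chunk_sec=2.0,
                       randomize=False, min_sec=1.0, max_sec=3.0, seed=None):
    """Transcribe `waveform` by processing chunks independently.

    waveform   : 1-D float array of audio samples.
    transcribe : any callable mapping a 1-D float array to a string
                 (a stand-in for an ASR system's decoding call).
    randomize  : if True, draw each chunk length uniformly from
                 [min_sec, max_sec]; otherwise use fixed chunk_sec.
    """
    rng = np.random.default_rng(seed)
    pieces, start, n = [], 0, len(waveform)
    while start < n:
        if randomize:
            length = int(rng.uniform(min_sec, max_sec) * sr)
        else:
            length = int(chunk_sec * sr)
        # Each chunk is decoded in isolation, so a perturbation crafted
        # against the full utterance cannot accumulate across chunk
        # boundaries -- the intuition behind the chunking defense.
        chunk = waveform[start:start + length]
        pieces.append(transcribe(chunk).strip())
        start += length
    return " ".join(p for p in pieces if p)
```

For example, `chunked_transcribe(audio, model.transcribe, randomize=True, seed=0)` would run a hypothetical `model.transcribe` on random-size chunks; as the abstract notes, this requires no retraining of the ASR system and no preprocessing of the input.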