Among us: Adversarially robust collaborative perception by consensus
Proceedings of the IEEE/CVF International Conference on …, 2023•openaccess.thecvf.com
Abstract
Multiple robots can perceive a scene (e.g., detect objects) collaboratively better than any individual robot, yet deep-learning-based collaboration is easily compromised by adversarial attacks. Adversarial defenses could address this, but their training requires the often-unknown attacking mechanism. Instead, we propose ROBOSAC, a novel sampling-based defense strategy that generalizes to unseen attackers. Our key idea is that collaborative perception should reach consensus, rather than dissensus, with individual perception. This leads to our hypothesize-and-verify framework: perception results with and without collaboration from a random subset of teammates are compared until consensus is reached. In this framework, a larger sampled subset usually yields better perception performance but requires more sampling time to reject potential attackers. We therefore derive how many sampling trials are needed to guarantee a desired size of attacker-free subset, or equivalently, the maximum size of such a subset that can be successfully sampled within a given number of trials. We validate our method on collaborative 3D object detection in autonomous driving scenarios.
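The trial-count derivation mentioned above follows the same logic as the classic RANSAC stopping criterion: if a fraction `eps` of teammates are attackers and we sample subsets of `s` teammates, the probability that one subset is attacker-free is (1 - eps)^s, and the number of trials needed to find such a subset with probability at least `eta` follows from a geometric argument. The sketch below illustrates this RANSAC-style bound; the parameter names `eta`, `eps`, and `s` are illustrative choices, not the paper's notation.

```python
import math

def required_trials(eta: float, eps: float, s: int) -> int:
    """RANSAC-style bound: number of sampling trials N such that, with
    probability at least eta, at least one sampled subset of s teammates
    contains no attacker, given an attacker ratio of eps."""
    p_clean = (1.0 - eps) ** s  # probability a single subset is attacker-free
    # P(all N subsets contaminated) = (1 - p_clean)^N <= 1 - eta  =>  solve for N
    return math.ceil(math.log(1.0 - eta) / math.log(1.0 - p_clean))

# e.g., 99% success probability, 20% attackers, subsets of 3 teammates
N = required_trials(eta=0.99, eps=0.2, s=3)
```

Inverting the same relation gives the trade-off the abstract describes: for a fixed trial budget, a larger subset size `s` lowers the chance of sampling an attacker-free subset, so the maximum safe `s` shrinks as the budget tightens.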