Analysis and mitigations of reverse engineering attacks on local feature descriptors

D Dangwal, VT Lee, HJ Kim, T Shen, M Cowan… - arXiv preprint arXiv:2105.03812, 2021 - arxiv.org
As autonomous driving and augmented reality evolve, data privacy becomes a practical concern. In particular, these applications rely on localization based on user images. The widely adopted technology uses local feature descriptors, which are derived from the images; it was long thought that these descriptors could not be inverted to recover the original image. However, recent work has demonstrated that, under certain conditions, reverse engineering attacks are possible and allow an adversary to reconstruct RGB images. This poses a potential risk to user privacy. We take this a step further and model potential adversaries using a privacy threat model. Subsequently, we show, under controlled conditions, a reverse engineering attack on sparse feature maps and analyze the vulnerability of popular descriptors including FREAK, SIFT, and SOSNet. Finally, we evaluate potential mitigation techniques that select a subset of descriptors to carefully balance privacy reconstruction risk against image matching accuracy; our results show that similar accuracy can be obtained while revealing less information.
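The mitigation sketched in the abstract, revealing only a subset of descriptors, can be illustrated with a minimal toy example. The sketch below is an assumption-laden illustration, not the paper's actual method: descriptors are modeled as random L2-normalized 128-d vectors (SIFT-like), the selection rule (top-k by a per-keypoint score) and the matching proxy (Lowe's ratio test over a nearest-neighbour search) are both hypothetical stand-ins for whatever criteria the paper evaluates.

```python
import numpy as np

def select_subset(descriptors, scores, k):
    # Hypothetical selection rule: keep the k highest-scoring descriptors.
    # The idea is that fewer revealed descriptors give an adversary less
    # signal for image reconstruction.
    idx = np.argsort(scores)[::-1][:k]
    return descriptors[idx]

def match_rate(query, reference, ratio=0.8):
    # Crude proxy for matching accuracy: the fraction of query descriptors
    # whose nearest reference neighbour passes Lowe's ratio test.
    matched = 0
    for q in query:
        d = np.sort(np.linalg.norm(reference - q, axis=1))
        if len(d) > 1 and d[0] < ratio * d[1]:
            matched += 1
    return matched / len(query)

rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 128)).astype(np.float32)
ref /= np.linalg.norm(ref, axis=1, keepdims=True)   # SIFT-like unit vectors
scores = rng.random(100)                            # stand-in keypoint scores

# Reveal only half of the descriptors, then match a slightly perturbed
# query set against both the full and the reduced reference.
subset = select_subset(ref, scores, 50)
query = ref[:20] + 0.05 * rng.normal(size=(20, 128)).astype(np.float32)
print(match_rate(query, ref), match_rate(query, subset))
```

With a well-chosen selection criterion, the second rate can remain close to the first, which is the trade-off the abstract describes: comparable matching accuracy while exposing less information to a reconstruction attack.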