PRADA: Practical black-box adversarial attacks against neural ranking models

C Wu, R Zhang, J Guo, M De Rijke, Y Fan… - ACM Transactions on …, 2023 - dl.acm.org
Neural ranking models (NRMs) have shown remarkable success in recent years, especially
with pre-trained language models. However, deep neural models are notorious for their …

Universal adversarial perturbations for vision-language pre-trained models

PF Zhang, Z Huang, G Bai - Proceedings of the 47th International ACM …, 2024 - dl.acm.org
Vision-language pre-trained (VLP) models have been the foundation of numerous vision-
language tasks. Given their prevalence, it becomes imperative to assess their adversarial …

Proactive privacy-preserving learning for cross-modal retrieval

PF Zhang, G Bai, H Yin, Z Huang - ACM Transactions on Information …, 2023 - dl.acm.org
Deep cross-modal retrieval techniques have recently achieved remarkable performance,
which also potentially poses severe threats to data privacy. Nowadays, enormous user …

Machine unlearning for image retrieval: A generative scrubbing approach

PF Zhang, G Bai, Z Huang, XS Xu - Proceedings of the 30th ACM …, 2022 - dl.acm.org
Data owners have the right to request the deletion of their data from a machine learning (ML)
model. In response, a naïve approach is to retrain the model on the original dataset excluding …

Turning backdoors for efficient privacy protection against image retrieval violations

Q Liu, T Zhou, Z Cai, Y Yuan, M Xu, J Qin… - Information Processing & …, 2023 - Elsevier
Image retrieval, empowered by deep metric learning, is undoubtedly a building block in
today's media-sharing practices, but it also poses a severe risk of exposing user privacy via …

Attack is the best defense: Towards preemptive-protection person re-identification

L Wang, W Zhang, D Wu, F Zhu, B Li - Proceedings of the 30th ACM …, 2022 - dl.acm.org
Person Re-IDentification (ReID) aims at retrieving images of the same person across
multiple camera views. Despite its popularity in surveillance and public safety, the leakage …

Proactive schemes: A survey of adversarial attacks for social good

V Asnani, X Yin, X Liu - arXiv preprint arXiv:2409.16491, 2024 - arxiv.org
Adversarial attacks in computer vision exploit the vulnerabilities of machine learning models
by introducing subtle perturbations to input data, often leading to incorrect predictions or …

Mitigating Cross-modal Retrieval Violations with Privacy-preserving Backdoor Learning

Q Liu, Y Qiu, T Zhou, M Xu, J Qin, W Ma… - … on Circuits and …, 2024 - ieeexplore.ieee.org
Deep cross-modal retrieval, with its effective and efficient search capabilities, has gained
widespread adoption in today's media-sharing practices, yet it raises concerns regarding …

Denoising Neural Relation Extraction for Spatio-temporal Recommendation System

Y Wang, L Guo, Y Yu, Y Gao - IEEE Transactions on Big Data, 2024 - ieeexplore.ieee.org
The Point-of-Interest (POI) recommendation system in location-based social networks is
pivotal, offering versatile applications. Personalized recommendations hinge on pre …

Robust learning with adversarial perturbations and label noise: A two-pronged defense approach

PF Zhang, Z Huang, X Luo, P Zhao - Proceedings of the 4th ACM …, 2022 - dl.acm.org
Despite their great success, deep learning methods are vulnerable to noise in the
training dataset, including adversarial perturbations and annotation noise. These harmful …