RoCBert: Robust Chinese BERT with Multimodal Contrastive Pretraining

H Su, W Shi, X Shen, Z Xiao, T Ji, J Fang… - Proceedings of the 60th …, 2022 - aclanthology.org
Large-scale pretrained language models have achieved state-of-the-art (SOTA) results on NLP tasks.
However, they have been shown vulnerable to adversarial attacks especially for logographic …

TileMask: A Passive-Reflection-based Attack against mmWave Radar Object Detection in Autonomous Driving

Y Zhu, C Miao, H Xue, Z Li, Y Yu, W Xu, L Su… - Proceedings of the 2023 …, 2023 - dl.acm.org
In autonomous driving, millimeter wave (mmWave) radar has been widely adopted for object
detection because of its robustness and reliability under various weather and lighting …

Detecting and characterizing SMS spearphishing attacks

M Liu, Y Zhang, B Liu, Z Li, H Duan, D Sun - Proceedings of the 37th …, 2021 - dl.acm.org
Although spearphishing is a well-known security issue and has been widely researched, it is
still an evolving threat with emerging forms. In recent years, Short Message Service (SMS) …

Optimising smart city evaluation: A people-oriented analysis method

Y Fang, Z Shan - IET Smart Cities, 2024 - Wiley Online Library
Smart cities integrate information technology with urban transformation, making it crucial to
systematically evaluate their development level and effectiveness. Recent years have seen …

CBAs: Character-level Backdoor Attacks against Chinese Pre-trained Language Models

X He, F Hao, T Gu, L Chang - ACM Transactions on Privacy and Security, 2024 - dl.acm.org
Pre-trained language models (PLMs) aim to assist computers in various domains to provide
natural and efficient language interaction and text processing capabilities. However, recent …

Enhance Robustness of Language Models Against Variation Attack through Graph Integration

Z Xiong, L Qing, Y Kang, J Liu, H Li, C Sun… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread use of pre-trained language models (PLMs) in natural language processing
(NLP) has greatly improved performance outcomes. However, these models' vulnerability to …

Adversarial attacks on brain-inspired hyperdimensional computing-based classifiers

F Yang, S Ren - arXiv preprint arXiv:2006.05594, 2020 - arxiv.org
Being an emerging class of in-memory computing architecture, brain-inspired
hyperdimensional computing (HDC) mimics brain cognition and leverages random …

Black-box opinion manipulation attacks to retrieval-augmented generation of large language models

Z Chen, J Liu, H Liu, Q Cheng, F Zhang, W Lu… - arXiv preprint arXiv …, 2024 - arxiv.org
Retrieval-Augmented Generation (RAG) is applied to mitigate the hallucination problems and real-
time constraints of large language models, but it also induces vulnerabilities against retrieval …

RoChBERT: Towards Robust BERT Fine-tuning for Chinese

Z Zhang, J Li, N Shi, B Yuan, X Liu, R Zhang… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite their superb performance on a wide range of tasks, pre-trained language models
(e.g., BERT) have been proven vulnerable to adversarial texts. In this paper, we present …

PROTECT: Parameter-Efficient Tuning for Few-Shot Robust Chinese Text Correction

X Feng, T Gu, L Chang, X Liu - IEEE/ACM Transactions on …, 2024 - ieeexplore.ieee.org
Non-normative texts and euphemisms are widely spread on the Internet, making it more
difficult to moderate the content. These phenomena result from misspelling errors or …