RoCBert: Robust Chinese BERT with Multimodal Contrastive Pretraining
Large-scale pretrained language models have achieved SOTA results on NLP tasks.
However, they have been shown to be vulnerable to adversarial attacks, especially for logographic …
TileMask: A Passive-Reflection-based Attack against mmWave Radar Object Detection in Autonomous Driving
In autonomous driving, millimeter wave (mmWave) radar has been widely adopted for object
detection because of its robustness and reliability under various weather and lighting …
Detecting and characterizing SMS spearphishing attacks
Although spearphishing is a well-known security issue and has been widely researched, it is
still an evolving threat with emerging forms. In recent years, Short Message Service (SMS) …
Optimising smart city evaluation: A people-oriented analysis method
Y Fang, Z Shan - IET Smart Cities, 2024 - Wiley Online Library
Smart cities integrate information technology with urban transformation, making it crucial to
systematically evaluate their development level and effectiveness. Recent years have seen …
CBAS: Character-level Backdoor Attacks against Chinese Pre-trained Language Models
X He, F Hao, T Gu, L Chang - ACM Transactions on Privacy and Security, 2024 - dl.acm.org
Pre-trained language models (PLMs) aim to assist computers in various domains to provide
natural and efficient language interaction and text processing capabilities. However, recent …
Enhance Robustness of Language Models Against Variation Attack through Graph Integration
The widespread use of pre-trained language models (PLMs) in natural language processing
(NLP) has greatly improved performance outcomes. However, these models' vulnerability to …
Adversarial attacks on brain-inspired hyperdimensional computing-based classifiers
F Yang, S Ren - arXiv preprint arXiv:2006.05594, 2020 - arxiv.org
Being an emerging class of in-memory computing architecture, brain-inspired
hyperdimensional computing (HDC) mimics brain cognition and leverages random …
Black-box opinion manipulation attacks to retrieval-augmented generation of large language models
Retrieval-Augmented Generation (RAG) is applied to address the hallucination problems and real-time constraints of large language models, but it also introduces vulnerabilities against retrieval …
RoChBERT: Towards Robust BERT Fine-tuning for Chinese
Despite the superb performance on a wide range of tasks, pre-trained language models (e.g., BERT) have been proven vulnerable to adversarial texts. In this paper, we present …
PROTECT: Parameter-Efficient Tuning for Few-Shot Robust Chinese Text Correction
Non-normative texts and euphemisms are widely spread on the Internet, making it more
difficult to moderate the content. These phenomena result from misspelling errors or …