AdaShield: Safeguarding multimodal large language models from structure-based attack via adaptive shield prompting

Y Wang, X Liu, Y Li, M Chen, C Xiao - arXiv preprint arXiv:2403.09513, 2024 - arxiv.org
With the advent and widespread deployment of Multimodal Large Language Models
(MLLMs), the imperative to ensure their safety has become increasingly pronounced …

BIAS: A Body-based Interpretable Active Speaker Approach

T Roxo, JC Costa, PRM Inácio… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
State-of-the-art Active Speaker Detection (ASD) approaches heavily rely on audio and facial
features to perform, which is not a sustainable approach in wild scenarios. Although these …

Adversarial attacks and defenses for large language models (LLMs): methods, frameworks & challenges

P Kumar - International Journal of Multimedia Information …, 2024 - Springer
Large language models (LLMs) have exhibited remarkable efficacy and proficiency in a
wide array of NLP endeavors. Nevertheless, concerns are growing rapidly regarding the …

An Overview of Trustworthy AI: Advances in IP Protection, Privacy-preserving Federated Learning, Security Verification, and GAI Safety Alignment

Y Zheng, CH Chang, SH Huang… - IEEE Journal on …, 2024 - ieeexplore.ieee.org
AI has undergone a remarkable evolutionary journey marked by groundbreaking milestones.
Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands …

Adversarial Attacks of Vision Tasks in the Past 10 Years: A Survey

C Zhang, X Xu, J Wu, Z Liu, L Zhou - arXiv preprint arXiv:2410.23687, 2024 - arxiv.org
Adversarial attacks, which manipulate input data to undermine model availability and
integrity, pose significant security threats during machine learning inference. With the advent …

Exploring Cross-model Neuronal Correlations in the Context of Predicting Model Performance and Generalizability

HE Oskouie, L Levine, M Sarrafzadeh - arXiv preprint arXiv:2408.08448, 2024 - arxiv.org
As Artificial Intelligence (AI) models are increasingly integrated into critical systems, the
need for a robust framework to establish the trustworthiness of AI is increasingly paramount …

Single-Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning

D Chen, J Zhang, Y Lv, J Wang, H Ni… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Graph neural networks (GNNs) have achieved remarkable success in various real-world
applications. However, recent studies highlight the vulnerability of GNNs to malicious …

Trustworthy and Robust Machine Learning for Multimedia: Challenges and Perspectives

K Nakano, M Zuzak, C Merkel… - 2024 IEEE 7th …, 2024 - ieeexplore.ieee.org
Multimedia applications for machine learning models are characterized by the fusion of
multiple modalities of data. In this work, we highlight the trust and robustness challenges of …

DSE-Based Hardware Trojan Attack for Neural Network Accelerators on FPGAs

C Guo, M Yanagisawa, Y Shi - IEEE Transactions on Neural …, 2024 - ieeexplore.ieee.org
Over the past few years, the emergence and development of design space exploration
(DSE) have shortened the deployment cycle of deep neural networks (DNNs). As a result …

Navigating Governance Paradigms: A Cross-Regional Comparative Study of Generative AI Governance Processes & Principles

J Luna, I Tan, X Xie, L Jiang - Proceedings of the AAAI/ACM Conference …, 2024 - ojs.aaai.org
As Generative Artificial Intelligence (GenAI) technologies evolve at an
unprecedented rate, global governance approaches struggle to keep pace with the …