AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting
With the advent and widespread deployment of Multimodal Large Language Models
(MLLMs), the imperative to ensure their safety has become increasingly pronounced …
BIAS: A Body-based Interpretable Active Speaker Approach
State-of-the-art Active Speaker Detection (ASD) approaches heavily rely on audio and facial
features to perform, which is not a sustainable approach in wild scenarios. Although these …
Adversarial attacks and defenses for large language models (LLMs): methods, frameworks & challenges
P Kumar - International Journal of Multimedia Information …, 2024 - Springer
Large language models (LLMs) have exhibited remarkable efficacy and proficiency in a
wide array of NLP endeavors. Nevertheless, concerns are growing rapidly regarding the …
An Overview of Trustworthy AI: Advances in IP Protection, Privacy-preserving Federated Learning, Security Verification, and GAI Safety Alignment
AI has undergone a remarkable evolution journey marked by groundbreaking milestones.
Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands …
Adversarial Attacks of Vision Tasks in the Past 10 Years: A Survey
Adversarial attacks, which manipulate input data to undermine model availability and
integrity, pose significant security threats during machine learning inference. With the advent …
Exploring Cross-model Neuronal Correlations in the Context of Predicting Model Performance and Generalizability
HE Oskouie, L Levine, M Sarrafzadeh - arXiv preprint arXiv:2408.08448, 2024 - arxiv.org
As Artificial Intelligence (AI) models are increasingly integrated into critical systems, the
need for a robust framework to establish the trustworthiness of AI is increasingly paramount …
Single-Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning
D Chen, J Zhang, Y Lv, J Wang, H Ni… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Graph neural networks (GNNs) have achieved remarkable success in various real-world
applications. However, recent studies highlight the vulnerability of GNNs to malicious …
Trustworthy and Robust Machine Learning for Multimedia: Challenges and Perspectives
Multimedia applications for machine learning models are characterized by the fusion of
multiple modalities of data. In this work, we highlight the trust and robustness challenges of …
DSE-Based Hardware Trojan Attack for Neural Network Accelerators on FPGAs
C Guo, M Yanagisawa, Y Shi - IEEE Transactions on Neural …, 2024 - ieeexplore.ieee.org
Over the past few years, the emergence and development of design space exploration
(DSE) have shortened the deployment cycle of deep neural networks (DNNs). As a result …
Navigating Governance Paradigms: A Cross-Regional Comparative Study of Generative AI Governance Processes & Principles
As Generative Artificial Intelligence (GenAI) technologies evolve at an
unprecedented rate, global governance approaches struggle to keep pace with the …