Robust Information-Theoretic Algorithms for Outlier Detection in Big Data
NK Alapati - 2024 - researchgate.net
Outlier detection is essential to data analysis as it eliminates irregular or unforeseen values
that may affect the rest of a dataset. With the advent of Big Data, which produces copious …
TIM: Enabling Large-scale White-box Testing on In-App Deep Learning Models
Intelligent Applications (iApps), equipped with in-App deep learning (DL) models, are
emerging to provide reliable DL inference services. However, in-App DL models are …
PLeak: Prompt Leaking Attacks against Large Language Model Applications
Large Language Models (LLMs) enable a new ecosystem with many downstream
applications, called LLM applications, with different natural language processing tasks. The …
Watermarking Counterfactual Explanations
Counterfactual (CF) explanations for ML model predictions provide actionable recourse
recommendations to individuals adversely impacted by predicted outcomes. However …
MisGUIDE: Defense Against Data-Free Deep Learning Model Extraction
M Gurve, S Behera, S Ahlawat, Y Prasad - arXiv preprint arXiv:2403.18580, 2024 - arxiv.org
The rise of Machine Learning as a Service (MLaaS) has led to the widespread deployment
of machine learning models trained on diverse datasets. These models are employed for …
Build a Computationally Efficient Strong Defense Against Adversarial Example Attacks.
Input transformation techniques have been proposed to defend against adversarial example
attacks in image classification systems. However, recent works have shown that, although …
Defense against Model Extraction Attack by Bayesian Active Watermarking
Model extraction aims to obtain a cloned model that replicates the functionality of a black-box
victim model solely through query-based access. Present defense strategies exhibit …