A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many
signal processing and machine learning (ML) applications. It is used for solving optimization …
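The core idea behind ZO optimization is to approximate gradients from function evaluations alone. As a minimal illustration (not drawn from the primer itself; the estimator form, step size, and toy objective below are assumptions for illustration), a two-point random-direction gradient estimator in Python might look like:

    import numpy as np

    def zo_gradient_estimate(f, x, mu=1e-3, num_dirs=10):
        # Two-point (forward-difference) estimator averaged over random
        # unit directions: g ~ (d / mu) * (f(x + mu*u) - f(x)) * u
        d = x.size
        fx = f(x)
        g = np.zeros_like(x)
        for _ in range(num_dirs):
            u = np.random.randn(d)
            u /= np.linalg.norm(u)
            g += (d / mu) * (f(x + mu * u) - fx) * u
        return g / num_dirs

    # Toy usage: zeroth-order gradient descent on a quadratic, querying
    # only function values of f (no analytic gradient available).
    f = lambda x: float(np.sum(x ** 2))
    x = np.ones(5)
    for _ in range(200):
        x = x - 0.05 * zo_gradient_estimate(f, x)
    # x should now be close to the minimizer (the zero vector).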
AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks
Recent studies have shown that adversarial examples in state-of-the-art image classifiers
trained by deep neural networks (DNN) can be easily generated when the target model is …
PatDNN: Achieving real-time DNN execution on mobile devices with pattern-based weight pruning
With the emergence of a spectrum of high-end mobile devices, many applications that
formerly required desktop-level computation capability are being transferred to these …
MAZE: Data-free model stealing attack using zeroth-order gradient estimation
S Kariyappa, A Prakash… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
High quality Machine Learning (ML) models are often considered valuable
intellectual property by companies. Model Stealing (MS) attacks allow an adversary with …
Structured adversarial attack: Towards general implementation and better interpretability
When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of
the added perturbation is usually used to measure the similarity between original image and …
Zeroth-order stochastic variance reduction for nonconvex optimization
As application demands for zeroth-order (gradient-free) optimization accelerate, the need for
variance-reduced and faster-converging approaches is also intensifying. This paper …
FORMS: Fine-grained polarized ReRAM-based in-situ computation for mixed-signal DNN accelerator
Recent work demonstrated the promise of using resistive random access memory (ReRAM)
as an emerging technology to perform inherently parallel analog domain in-situ matrix …
Distributed zero-order algorithms for nonconvex multiagent optimization
Distributed multiagent optimization finds many applications in distributed learning, control,
estimation, etc. Most existing algorithms assume knowledge of first-order information of the …
ZO-AdaMM: Zeroth-order adaptive momentum method for black-box optimization
The adaptive momentum method (AdaMM), which uses past gradients to update descent
directions and learning rates simultaneously, has become one of the most popular first-order …
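As a rough sketch of the idea of pairing adaptive momentum with zeroth-order gradient estimates (a simplified illustration only, not the paper's exact ZO-AdaMM algorithm; the single-direction estimator, function name, and hyperparameter defaults below are assumptions):

    import numpy as np

    def zo_adamm_step(f, x, m, v, lr=0.01, mu=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # One Adam-style step driven by a single-direction ZO gradient estimate.
        d = x.size
        u = np.random.randn(d)
        u /= np.linalg.norm(u)
        g = (d / mu) * (f(x + mu * u) - f(x)) * u   # ZO gradient estimate
        m = beta1 * m + (1 - beta1) * g             # momentum: smoothed descent direction
        v = beta2 * v + (1 - beta2) * g ** 2        # second moment: per-coordinate step scaling
        x = x - lr * m / (np.sqrt(v) + eps)
        return x, m, v

    # Toy usage on a quadratic objective queried only through function values.
    f = lambda x: float(np.sum(x ** 2))
    x, m, v = np.ones(5), np.zeros(5), np.zeros(5)
    for _ in range(500):
        x, m, v = zo_adamm_step(f, x, m, v)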
Non-structured DNN weight pruning—Is it beneficial in any platform?
Large deep neural network (DNN) models pose the key challenge to energy efficiency due
to the significantly higher energy consumption of off-chip DRAM accesses than arithmetic or …