Span Recovery for Deep Neural Networks with Applications to Input Obfuscation

The tremendous success of deep neural networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed so far hold only for shallow networks. In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery. For $k < n$, let $\mathbf{A} \in \mathbb{R}^{k \times n}$ be the innermost weight matrix of an arbitrary feedforward neural network $M : \mathbb{R}^n \to \mathbb{R}$, so that $M(x)$ can be written as $M(x) = \sigma(\mathbf{A}x)$ for some network $\sigma : \mathbb{R}^k \to \mathbb{R}$. The goal is then …
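To make the setup concrete, the following is a minimal numerical sketch (an illustration of why span recovery is plausible, not the recovery algorithm from the paper): by the chain rule, $\nabla_x M(x) = \mathbf{A}^\top \nabla\sigma(\mathbf{A}x)$, so every gradient of $M$ lies in the row span of $\mathbf{A}$, and stacking gradient estimates taken at a few random inputs generically recovers that span. The toy ReLU network, matrix dimensions, and finite-difference gradient below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 3

A = rng.standard_normal((k, n))   # innermost weight matrix, k < n
W = rng.standard_normal((1, k))   # toy outer network sigma: R^k -> R (one ReLU layer)

def M(x):
    """Toy network M(x) = sigma(Ax) with sigma(z) = W @ relu(z)."""
    return (W @ np.maximum(A @ x, 0.0)).item()

def grad_M(x, eps=1e-6):
    """Central finite-difference estimate of the gradient of M at x."""
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        g[i] = (M(x + e) - M(x - e)) / (2 * eps)
    return g

# Gradients at random inputs all lie in the row span of A, so their stack
# generically has rank k and spans the same subspace.
G = np.stack([grad_M(rng.standard_normal(n)) for _ in range(2 * k)])
print("rank of stacked gradients:", np.linalg.matrix_rank(G, tol=1e-4))  # expect k

# Projector onto the row span of A; the residual after projection should be ~0.
P = A.T @ np.linalg.pinv(A @ A.T) @ A
print("max residual off row span:", np.max(np.abs(G - G @ P)))
```

Because ReLU networks are differentiable almost everywhere, finite differences at random points suffice for this toy demonstration; the paper's actual setting only assumes oracle access to the value of $M(x)$, and its guarantees are not reproduced by this sketch.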