Expressive architectures enhance interpretability of dynamics-based neural population models
AR Sedler, C Versteeg… - Neurons, behavior, data …, 2023 - ncbi.nlm.nih.gov
Artificial neural networks that can recover latent dynamics from recorded neural activity may
provide a powerful avenue for identifying and interpreting the dynamical motifs underlying
biological computation. Given that neural variance alone does not uniquely determine a
latent dynamical system, interpretable architectures should prioritize accurate and low-
dimensional latent dynamics. In this work, we evaluated the performance of sequential
autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets …
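
The abstract refers to simulated neural datasets generated from latent chaotic attractors. As a rough illustration of what such a synthetic dataset could look like (a minimal sketch, not the authors' pipeline; the Lorenz system, neuron count, and rate scaling here are illustrative assumptions), one can integrate a chaotic latent system, map its state to firing rates, and sample Poisson spike counts:

# Minimal sketch: synthetic spiking data from a latent chaotic attractor.
# Assumptions (not from the paper): Lorenz dynamics, 30 neurons, random
# linear readout with an exponential nonlinearity for the firing rates.
import numpy as np

def lorenz_trajectory(n_steps=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler; returns (n_steps, 3) latent states."""
    z = np.empty((n_steps, 3))
    z[0] = np.array([1.0, 1.0, 1.0])
    for t in range(1, n_steps):
        x, y, w = z[t - 1]
        dz = np.array([sigma * (y - x), x * (rho - w) - y, x * y - beta * w])
        z[t] = z[t - 1] + dt * dz
    return z

def simulate_spikes(latents, n_neurons=30, seed=0):
    """Map latents to per-neuron firing rates via a random readout, then sample Poisson counts."""
    rng = np.random.default_rng(seed)
    readout = rng.normal(size=(latents.shape[1], n_neurons))
    # Standardize latents so the exponential nonlinearity stays in a reasonable range.
    z = (latents - latents.mean(0)) / latents.std(0)
    log_rates = z @ readout - 2.0   # negative bias keeps mean firing rates low
    rates = np.exp(log_rates)       # nonnegative firing rates
    return rng.poisson(rates), rates

latents = lorenz_trajectory()
spikes, rates = simulate_spikes(latents)
print(spikes.shape)  # (2000, 30): time bins x neurons

A sequential autoencoder would then be trained to reconstruct the observed spike counts while recovering low-dimensional latent trajectories comparable to the ground-truth attractor.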