Discovering causal relations and equations from data

G Camps-Valls, A Gerhardus, U Ninad, G Varando… - Physics Reports, 2023 - Elsevier
Physics is a field of science that has traditionally used the scientific method to answer
questions about why natural phenomena occur and to make testable models that explain the …

Gaussian processes and kernel methods: A review on connections and equivalences

M Kanagawa, P Hennig, D Sejdinovic… - arXiv preprint arXiv …, 2018 - arxiv.org
This paper is an attempt to bridge the conceptual gaps between researchers working on the
two widely used approaches based on positive definite kernels: Bayesian learning or …
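
The best-known of the equivalences this survey covers is that the posterior mean of Gaussian process regression coincides with the kernel ridge regression (KRR) estimator once the noise variance and the regularization constant are matched. Below is a minimal numerical check of that identity; the RBF kernel, the synthetic data, and the convention lambda = sigma^2/n are illustrative choices, not taken from the paper:

    import numpy as np

    # Minimal check: GP posterior mean with noise variance sigma2 equals the
    # KRR estimate with regularization lambda = sigma2 / n (for the objective
    # (1/n) * sum of squared errors + lambda * RKHS norm squared).

    def rbf_kernel(A, B, lengthscale=1.0):
        """Gram matrix of k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * lengthscale**2))

    rng = np.random.default_rng(0)
    n = 30
    X = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
    Xs = np.linspace(-3, 3, 100)[:, None]      # test inputs

    K = rbf_kernel(X, X)
    Ks = rbf_kernel(Xs, X)
    sigma2 = 0.01                              # observation-noise variance

    # GP posterior mean: k(x*, X) (K + sigma2 I)^{-1} y
    gp_mean = Ks @ np.linalg.solve(K + sigma2 * np.eye(n), y)

    # KRR estimate with lambda = sigma2 / n: k(x*, X) (K + n lambda I)^{-1} y
    lam = sigma2 / n
    krr = Ks @ np.linalg.solve(K + n * lam * np.eye(n), y)

    assert np.allclose(gp_mean, krr)           # identical predictors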

Do vision transformers see like convolutional neural networks?

M Raghu, T Unterthiner, S Kornblith… - Advances in neural …, 2021 - proceedings.neurips.cc
Convolutional neural networks (CNNs) have so far been the de facto model for visual data.
Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or …

Deep stable learning for out-of-distribution generalization

X Zhang, P Cui, R Xu, L Zhou… - Proceedings of the …, 2021 - openaccess.thecvf.com
Approaches based on deep neural networks have achieved striking performance when testing
data and training data share a similar distribution, but can fail significantly otherwise …

Consensus graph learning for multi-view clustering

Z Li, C Tang, X Liu, X Zheng… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Multi-view clustering, which exploits multi-view information to partition data into
clusters, has attracted intense attention. However, most existing methods directly learn a …

One-for-all: Bridge the gap between heterogeneous architectures in knowledge distillation

Z Hao, J Guo, K Han, Y Tang, H Hu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Knowledge distillation (KD) has proven to be a highly effective approach for
enhancing model performance through a teacher-student training scheme. However, most …
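
For context, the teacher-student scheme the abstract refers to is, in its classic form, a weighted sum of a hard-label cross-entropy term and a temperature-softened KL term (Hinton et al., 2015). The sketch below shows only that generic loss; it is not this paper's one-for-all method, which additionally bridges heterogeneous teacher and student architectures. The temperature T and weight alpha are the usual hyperparameters, with illustrative values:

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Hard-label cross-entropy on the student's own predictions.
        ce = F.cross_entropy(student_logits, labels)
        # Soft-label term: KL between temperature-softened distributions,
        # scaled by T^2 to keep gradient magnitudes comparable across T.
        kl = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        return alpha * ce + (1 - alpha) * kl

    # Illustrative usage with random logits for a 10-class problem.
    s = torch.randn(8, 10, requires_grad=True)
    t = torch.randn(8, 10)
    y = torch.randint(0, 10, (8,))
    kd_loss(s, t, y).backward()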

On efficient transformer-based image pre-training for low-level vision

W Li, X Lu, S Qian, J Lu, X Zhang, J Jia - arXiv preprint arXiv:2112.10175, 2021 - arxiv.org
Pre-training has set numerous states of the art in high-level computer vision, while few
attempts have been made to investigate how pre-training acts in image processing …

[BOOK][B] Elements of causal inference: foundations and learning algorithms

J Peters, D Janzing, B Schölkopf - 2017 - library.oapen.org
A concise and self-contained introduction to causal inference, increasingly important in data
science and machine learning. The mathematization of causality is a relatively recent …

f-GAN: Training generative neural samplers using variational divergence minimization

S Nowozin, B Cseke… - Advances in neural …, 2016 - proceedings.neurips.cc
Generative neural networks are probabilistic models that implement sampling using
feedforward neural networks: they take a random input vector and produce a sample from a …
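
Concretely, such a sampler is a feedforward map from a random vector z to a sample g(z), and f-GAN trains it by maximizing a variational lower bound on an f-divergence, E_P[T(x)] - E_Q[f*(T(x))], where f* is the convex conjugate of f. The sketch below instantiates this for the KL divergence, for which f*(t) = exp(t - 1); the network sizes and data are illustrative, not from the paper:

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2

    generator = nn.Sequential(                 # z -> sample x = g(z)
        nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
    critic = nn.Sequential(                    # variational function T(x)
        nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def f_gan_objective(real_x, fake_x):
        # F = E_P[T(x)] - E_Q[f*(T(x))], with f*(t) = exp(t - 1) for KL.
        return critic(real_x).mean() - torch.exp(critic(fake_x) - 1).mean()

    # One illustrative evaluation; in training, the critic ascends and the
    # generator descends this objective in alternating gradient steps.
    real_x = torch.randn(128, data_dim)        # stand-in for training data
    fake_x = generator(torch.randn(128, latent_dim))
    (-f_gan_objective(real_x, fake_x)).backward()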

Learning de-biased representations with biased representations

H Bahng, S Chun, S Yun, J Choo… - … on Machine Learning, 2020 - proceedings.mlr.press
Many machine learning algorithms are trained and evaluated by splitting data from a single
source into training and test sets. While such focus on in-distribution learning scenarios has …