Systematic literature review of security event correlation methods

I Kotenko, D Gaifulina, I Zelichenok - IEEE Access, 2022 - ieeexplore.ieee.org
Security event correlation approaches are necessary to detect and predict incremental
threats such as multi-step or targeted attacks (advanced persistent threats) and other causal …

Causal effect inference with deep latent-variable models

C Louizos, U Shalit, JM Mooij… - Advances in neural …, 2017 - proceedings.neurips.cc
Learning individual-level causal effects from observational data, such as inferring the most
effective medication for a specific patient, is a problem of growing importance for policy …
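
As a point of reference for the deep latent-variable approach above, here is a minimal sketch of individual-level treatment effect estimation using a simple two-model (T-learner) baseline rather than the paper's CEVAE model: fit separate outcome regressors on treated and control units and take the difference of their predictions. The synthetic data, ridge penalty, and effect function are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data (assumed for illustration): covariates X,
# binary treatment T influenced by X (confounding), and an outcome Y whose
# treatment effect varies across individuals.
n, d = 2000, 5
X = rng.normal(size=(n, d))
T = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
true_effect = 1.0 + 0.5 * X[:, 1]
Y = X @ rng.normal(size=d) + T * true_effect + 0.1 * rng.normal(size=n)

Xb = np.hstack([X, np.ones((n, 1))])                 # add an intercept column

def ridge_fit(A, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# T-learner: fit one outcome model per treatment arm, then take the
# difference of their predictions as the individual-level effect estimate.
w1 = ridge_fit(Xb[T == 1], Y[T == 1])
w0 = ridge_fit(Xb[T == 0], Y[T == 0])
ite_hat = Xb @ w1 - Xb @ w0

print("mean absolute error of ITE estimates:", np.mean(np.abs(ite_hat - true_effect)))
```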

Concrete problems in AI safety

D Amodei, C Olah, J Steinhardt, P Christiano… - arXiv preprint arXiv …, 2016 - arxiv.org
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing
attention to the potential impacts of AI technologies on society. In this paper we discuss one …

Contrastive learning, multi-view redundancy, and linear models

C Tosh, A Krishnamurthy, D Hsu - Algorithmic Learning …, 2021 - proceedings.mlr.press
Self-supervised learning is an empirically successful approach to unsupervised learning
based on creating artificial supervised learning problems. A popular self-supervised …
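
A minimal sketch of the kind of self-supervised objective the snippet refers to: an InfoNCE-style contrastive loss that pulls two views of the same example together and pushes views of different examples apart. The synthetic "views" and the temperature value are assumptions for illustration; the paper's analysis concerns what linear models can achieve on top of representations trained this way.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss between two batches of view embeddings.

    z1[i] and z2[i] embed two views of the same example (a positive pair);
    all other rows in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (n, n) cosine-similarity matrix
    # Row-wise softmax cross-entropy with the diagonal as the correct class.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
z1 = base + 0.05 * rng.normal(size=base.shape)    # two noisy "views" of each example
z2 = base + 0.05 * rng.normal(size=base.shape)
print("loss on matched views:", info_nce_loss(z1, z2))
```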

Robust estimators in high-dimensions without the computational intractability

I Diakonikolas, G Kamath, D Kane, J Li, A Moitra… - SIAM Journal on …, 2019 - SIAM
We study high-dimensional distribution learning in an agnostic setting where an adversary is
allowed to arbitrarily corrupt an ε-fraction of the samples. Such questions have a rich history …
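
The sketch below only illustrates the problem setting: when an ε-fraction of samples is placed adversarially, the empirical mean is pulled far from the truth while a simple coordinate-wise median stays close. The paper's contribution is computationally efficient filtering-style estimators with much stronger, dimension-independent error guarantees, which this toy comparison does not implement; the dimensions and corruption pattern are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 50, 0.1

# Clean samples from N(0, I), then replace an eps-fraction with a far-away cluster.
X = rng.normal(size=(n, d))
k = int(eps * n)
X[:k] = 20.0                                        # adversarial corruption

true_mean = np.zeros(d)
err_mean = np.linalg.norm(X.mean(axis=0) - true_mean)
err_median = np.linalg.norm(np.median(X, axis=0) - true_mean)

print(f"empirical mean error:         {err_mean:.3f}")   # roughly eps * 20 * sqrt(d)
print(f"coordinate-wise median error: {err_median:.3f}")
```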

Provable meta-learning of linear representations

N Tripuraneni, C Jin, M Jordan - International Conference on …, 2021 - proceedings.mlr.press
Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous
experience to rapidly learn new skills or adapt to new environments. Representation …
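
A hedged sketch of the linear-representation setting the snippet describes: T regression tasks share a low-dimensional feature matrix B, and stacking per-task least-squares estimates and taking a truncated SVD recovers the shared subspace. This is a simplified recovery procedure in the spirit of the setting, not the paper's exact estimator or analysis; the dimensions, task count, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, T, n = 50, 3, 20, 200                      # ambient dim, shared rank, tasks, samples/task

B, _ = np.linalg.qr(rng.normal(size=(d, r)))     # shared low-dimensional representation
W_hat = np.zeros((d, T))

for t in range(T):
    alpha = rng.normal(size=r)                   # task-specific head
    X = rng.normal(size=(n, d))
    y = X @ (B @ alpha) + 0.1 * rng.normal(size=n)
    W_hat[:, t], *_ = np.linalg.lstsq(X, y, rcond=None)   # per-task least squares

# Top-r left singular vectors of the stacked estimates approximate span(B).
U, _, _ = np.linalg.svd(W_hat, full_matrices=False)
B_hat = U[:, :r]

# Subspace distance via the difference of projectors (0 means perfect recovery).
dist = np.linalg.norm(B @ B.T - B_hat @ B_hat.T, ord=2)
print("subspace recovery error:", round(dist, 3))
```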

Tensor decompositions for learning latent variable models

A Anandkumar, R Ge, DJ Hsu, SM Kakade… - J. Mach. Learn. Res …, 2014 - jmlr.org
This work considers a computationally and statistically efficient parameter estimation method
for a wide class of latent variable models—including Gaussian mixture models, hidden …
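
A minimal sketch of a CP (canonical polyadic) decomposition fitted by alternating least squares, the kind of tensor factorization that method-of-moments estimators for latent variable models build on. This is a generic ALS routine on a synthetic third-order tensor, not the paper's specific moment-tensor construction or its robust power method; the rank and iteration count are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization (row-major fiber ordering)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition T ≈ sum_r a_r ⊗ b_r ⊗ c_r via alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover the factors of a synthetic noiseless rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(10, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```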

Regularized learning for domain adaptation under label shifts

K Azizzadenesheli, A Liu, F Yang… - arXiv preprint arXiv …, 2019 - arxiv.org
We propose Regularized Learning under Label shifts (RLLS), a principled and practical
domain-adaptation algorithm to correct for shifts in the label distribution between a source …
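
A minimal sketch of the underlying label-shift correction: estimate importance weights w(y) = q(y)/p(y) by solving C w = μ̂, where C is a black-box classifier's confusion matrix on the source and μ̂ is its predicted label distribution on the target. RLLS adds a regularized weight estimator with finite-sample guarantees on top of this idea; the toy classifier, class priors, and sample sizes below are placeholders.

```python
import numpy as np

def estimate_label_shift_weights(y_src, yhat_src, yhat_tgt, n_classes):
    """Estimate importance weights q(y)/p(y) from black-box predictions.

    C[i, j] = P_source(predict=i, true=j); mu[i] = P_target(predict=i).
    Under label shift (p(x|y) unchanged), C @ w = mu, so w solves a linear system.
    """
    C = np.zeros((n_classes, n_classes))
    for yt, yp in zip(y_src, yhat_src):
        C[yp, yt] += 1.0
    C /= len(y_src)
    mu = np.bincount(yhat_tgt, minlength=n_classes) / len(yhat_tgt)
    w, *_ = np.linalg.lstsq(C, mu, rcond=None)
    return np.clip(w, 0.0, None)                  # weights must be non-negative

# Toy example (assumed): 3 classes, a noisy classifier, and a shifted target prior.
rng = np.random.default_rng(0)
n_classes = 3
p_src, p_tgt = np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5])

def noisy_predict(y):                             # classifier correct ~80% of the time
    flip = rng.random(len(y)) < 0.2
    return np.where(flip, rng.integers(0, n_classes, len(y)), y)

y_src = rng.choice(n_classes, size=20000, p=p_src)
y_tgt = rng.choice(n_classes, size=20000, p=p_tgt)
w = estimate_label_shift_weights(y_src, noisy_predict(y_src), noisy_predict(y_tgt), n_classes)
print("estimated weights:", np.round(w, 2), " true:", np.round(p_tgt / p_src, 2))
```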

Towards understanding the mixture-of-experts layer in deep learning

Z Chen, Y Deng, Y Wu, Q Gu… - Advances in neural …, 2022 - proceedings.neurips.cc
The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by a
router, has achieved great success in deep learning. However, the understanding of such …
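
A minimal sketch of a sparsely-activated MoE layer of the kind analyzed here: a softmax router scores the experts, each input is processed only by its top-k experts, and the outputs are combined with the renormalized router weights. The dimensions, expert count, top-k value, and ReLU experts are assumptions for illustration (forward pass only, no training).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Sparsely-activated mixture-of-experts layer (forward pass only)."""

    def __init__(self, d_in, d_out, n_experts=4, top_k=2):
        self.top_k = top_k
        self.router = rng.normal(size=(d_in, n_experts)) * 0.1          # gating weights
        self.experts = [rng.normal(size=(d_in, d_out)) * 0.1 for _ in range(n_experts)]

    def __call__(self, X):
        gate = softmax(X @ self.router)                                  # (n, n_experts)
        topk = np.argsort(-gate, axis=1)[:, : self.top_k]                # chosen experts per input
        out = np.zeros((X.shape[0], self.experts[0].shape[1]))
        for i, x in enumerate(X):
            weights = gate[i, topk[i]]
            weights = weights / weights.sum()                            # renormalize over top-k
            for w, e in zip(weights, topk[i]):
                out[i] += w * np.maximum(x @ self.experts[e], 0.0)       # ReLU expert
        return out

X = rng.normal(size=(5, 8))
print(MoELayer(d_in=8, d_out=4)(X).shape)     # -> (5, 4)
```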

Introduction to tensor decompositions and their applications in machine learning

S Rabanser, O Shchur, S Günnemann - arXiv preprint arXiv:1711.10781, 2017 - arxiv.org
Tensors are multidimensional arrays of numerical values and therefore generalize matrices
to multiple dimensions. While tensors first emerged in the psychometrics community in the …
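
A small sketch of the basic objects such an introduction starts from: a third-order tensor as a multi-indexed array, its fibers (the higher-order analogue of matrix columns), and a mode unfolding that arranges those fibers into an ordinary matrix. The shapes below are arbitrary.

```python
import numpy as np

# A third-order tensor: a 2 x 3 x 4 array, generalizing a matrix to three indices.
T = np.arange(24).reshape(2, 3, 4)

# A mode-0 fiber fixes all indices except the first.
print("mode-0 fiber T[:, 1, 2]:", T[:, 1, 2])

# Mode-0 unfolding: arrange all mode-0 fibers as the columns of a 2 x 12 matrix.
T0 = T.reshape(2, -1)
print("mode-0 unfolding shape:", T0.shape)
```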