Yoonho Lee
PhD Student, Stanford University
Verified email at stanford.edu - Homepage
Title
Cited by
Year
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks
J Lee, Y Lee, J Kim, A Kosiorek, S Choi, YW Teh
ICML 2019, 2019
1204 · 2019
Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace
Y Lee, S Choi
ICML 2018, 2018
411 · 2018
DetectGPT: Zero-Shot Machine-Generated Text Detection Using Probability Curvature
E Mitchell, Y Lee, A Khazatsky, CD Manning, C Finn
ICML 2023 Long Oral, 2023
325 · 2023
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts
Y Lee, AS Chen, F Tajwar, A Kumar, H Yao, P Liang, C Finn
ICLR 2023, 2023
137 · 2023
Diversify and Disambiguate: Learning From Underspecified Data
Y Lee, H Yao, C Finn
ICLR 2023, 2023
78* · 2023
Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time
H Yao, C Choi, Y Lee, PW Koh, C Finn
NeurIPS 2022 Datasets Track, 2022
59 · 2022
Bootstrapping Neural Processes
J Lee, Y Lee, J Kim, E Yang, SJ Hwang, YW Teh
NeurIPS 2020, 2020
41 · 2020
Diversity Matters When Learning From Ensembles
G Nam, J Yoon, Y Lee, J Lee
NeurIPS 2021, 2021
36 · 2021
Set transformer
J Lee, Y Lee, J Kim, AR Kosiorek, S Choi, YW Teh
International Conference on Machine Learning 4 (8), 2019
27 · 2019
Deep Amortized Clustering
J Lee, Y Lee, YW Teh
NeurIPS 2019 Sets and Parts Workshop Oral, 2019
17 · 2019
Learning Dynamics of Attention: Human Prior for Interpretable Machine Reasoning
W Kim, Y Lee
NeurIPS 2019, 2019
11 · 2019
Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features
AS Chen, Y Lee, A Setlur, S Levine, C Finn
ICLR 2024 Spotlight, 2024
10 · 2024
On Divergence Measures for Bayesian Pseudocoresets
B Kim, J Choi, S Lee, Y Lee, JW Ha, J Lee
NeurIPS 2022, 2022
10 · 2022
Neural Complexity Measures
Y Lee, J Lee, SJ Hwang, E Yang, S Choi
NeurIPS 2020, 2020
8 · 2020
Conservative Prediction via Data-Driven Confidence Minimization
C Choi, F Tajwar, Y Lee, H Yao, A Kumar, C Finn
ICLR 2023 Workshops: TrustML, ME-FoMo, 2023
6 · 2023
Discrete Infomax Codes for Supervised Representation Learning
Y Lee, W Kim, W Park, S Choi
Entropy, 2022
5 · 2022
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (2018)
J Lee, Y Lee, J Kim, AR Kosiorek, S Choi, YW Teh
URL https://arxiv.org/abs, 1810
5 · 1810
Amortized Probabilistic Detection of Communities in Graphs
A Pakman, Y Wang, Y Lee, P Basu, J Lee, YW Teh, L Paninski
arXiv preprint arXiv:2010.15727, 2020
4* · 2020
On The Distribution of Penultimate Activations of Classification Networks
M Seo, Y Lee, S Kwak
UAI 2021, 2021
3 · 2021
Confidence-Based Model Selection: When to Take Shortcuts for Subpopulation Shifts
AS Chen, Y Lee, A Setlur, S Levine, C Finn
NeurIPS 2023 Workshop on Distribution Shifts, 2023
2 · 2023
Articles 1–20