Sungjun Cho
Research Scientist, LG AI Research
Verified email at lgresearch.ai
Title
Cited by
Year
Pure transformers are powerful graph learners
J Kim, TD Nguyen, S Min, S Cho, M Lee, H Lee, S Hong
arXiv preprint arXiv:2207.02505, 2022
149 · 2022
Equivariant Hypergraph Neural Networks
J Kim, S Oh, S Cho, S Hong
arXiv preprint arXiv:2208.10428, 2022
14 · 2022
Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers
S Cha, S Cho, D Hwang, H Lee, T Moon, M Lee
arXiv preprint arXiv:2301.11578, 2023
13 · 2023
Rebalancing Batch Normalization for Exemplar-Based Class-Incremental Learning
S Cha, S Cho, D Hwang, S Hong, M Lee, T Moon
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
11 · 2023
Practical correlated topic modeling and analysis via the rectified anchor word algorithm
M Lee, S Cho, D Bindel, D Mimno
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
8 · 2019
Grouping-matrix based Graph Pooling with Adaptive Number of Clusters
SM Ko, S Cho, DW Jeong, S Han, M Lee, H Lee
arXiv preprint arXiv:2209.02939, 2022
5 · 2022
Using spectral characterization to identify healthcare-associated infection (HAI) patients for clinical contact precaution
J Cui, S Cho, M Kamruzzaman, M Bielskas, A Vullikanti, BA Prakash
Scientific Reports 13 (1), 16197, 2023
4 · 2023
Learning Equi-angular Representations for Online Continual Learning
M Seo, H Koh, W Jeung, M Lee, S Kim, H Lee, S Cho, S Choi, H Kim, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
2 · 2024
Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost
S Cho, S Min, J Kim, M Lee, H Lee, S Hong
arXiv preprint arXiv:2210.15541, 2022
2 · 2022
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning
S Cho, S Cho, S Park, H Lee, H Lee, M Lee
arXiv preprint arXiv:2309.04082, 2023
1 · 2023
3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation
S Cho, DW Jeong, SM Ko, J Kim, S Han, S Hong, H Lee, M Lee
arXiv preprint arXiv:2309.04062, 2023
1 · 2023
Show Think and Tell: Thought-Augmented Fine-Tuning of Large Language Models for Video Captioning
B Kim, D Hwang, S Cho, Y Jang, H Lee, M Lee
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
2024
Mixed-Curvature Transformers for Graph Representation Learning
S Cho, S Cho, S Park, H Lee, H Lee, M Lee
2023
On-the-fly Rectification for Robust Large-Vocabulary Topic Inference
M Lee, S Cho, K Dong, D Bindel, D Mimno
International Conference on Machine Learning (ICML), 2021
2021
Articles 1–14