| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| MixCo: Mix-up Contrastive Learning for Visual Representation | S Kim*, G Lee*, S Bae*, SY Yun | NeurIPS Workshop on Self-Supervised Learning: Theory and Practice | 74 | 2020 |
| Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty | J Oh*, S Kim*, N Ho*, JH Kim, H Song, SY Yun | Advances in Neural Information Processing Systems 35, 2622-2636 | 32 | 2022 |
| Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification | S Bae, JW Kim, WY Cho, H Baek, S Son, B Lee, C Ha, K Tae, S Kim*, ... | Proceedings of Interspeech | 13* | 2023 |
| DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models | S Kim, J Lee, K Hong, D Kim, N Ahn | arXiv preprint arXiv:2305.15194 | 12 | 2023 |
| ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning | J Oh*, S Kim*, N Ho*, JH Kim, H Song, SY Yun | Proceedings of the 31st ACM International Conference on Information … | 9 | 2022 |
| DistiLLM: Towards Streamlined Distillation for Large Language Models | J Ko, S Kim, T Chen, SY Yun | International Conference on Machine Learning | 7 | 2024 |
| Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network | S Bae*, S Kim*, J Ko, G Lee, S Noh, SY Yun | Proceedings of the AAAI Conference on Artificial Intelligence 37 (1), 197-205 | 7 | 2023 |
| Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning | S Kim*, S Bae*, SY Yun | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern … | 6 | 2023 |
| Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation | K Jang*, S Kim*, SY Yun, H Kim | Proceedings of Interspeech | 4 | 2023 |
| Calibration of Few-Shot Classification Tasks: Mitigating Misconfidence From Distribution Mismatch | S Kim, SY Yun | IEEE Access 10, 53894-53908 | 4* | 2022 |
| How to Fine-tune Models with Few Samples: Update, Data Augmentation, and Test-time Augmentation | Y Kim*, J Oh*, S Kim, SY Yun | ICML Workshop on Updatable Machine Learning | 4 | 2022 |
| Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition | S Kim*, K Jang*, S Bae, H Kim, SY Yun | arXiv preprint arXiv:2407.03563 | | 2024 |
| FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning | S Kim, M Jeong, S Kim, S Cho, S Ahn, SY Yun | KDD Workshop on Federated Learning for Data Mining and Graph Analytics (FedKDD) | | 2024 |
| STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models | K Jang, S Kim, H Kim | IEEE International Conference on Acoustics, Speech and Signal Processing … | | 2024 |
| Real-time and Explainable Detection of Epidemics with Global News Data | S Kim*, J Shin*, S Eom, J Oh, SY Yun | Workshop on Healthcare AI and COVID-19, 73-90 | | 2022 |