SuperLoss: A generic loss for robust curriculum learning T Castells, P Weinzaepfel, J Revaud Advances in Neural Information Processing Systems 33, 4308-4319, 2020 | 91 | 2020 |
On architectural compression of text-to-image diffusion models BK Kim, HK Song, T Castells, S Choi | 34 | 2023 |
Shortened LLaMA: A simple depth pruning for large language models BK Kim, G Kim, TH Kim, T Castells, S Choi, J Shin, HK Song arXiv preprint arXiv:2402.02834, 2024 | 12 | 2024 |
BK-SDM: Architecturally compressed stable diffusion for efficient text-to-image generation BK Kim, HK Song, T Castells, S Choi Workshop on Efficient Systems for Foundation Models @ ICML 2023, 2023 | 10 | 2023 |
Automatic neural network pruning that efficiently preserves the model accuracy T Castells, SK Yeom arXiv preprint arXiv:2111.09635, 2021 | 6 | 2021 |
LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights T Castells, HK Song, BK Kim, S Choi Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024 | 4 | 2024 |
BK-SDM: A lightweight, fast, and cheap version of stable diffusion BK Kim, HK Song, T Castells, S Choi arXiv preprint arXiv:2305.15798, 2023 | 4 | 2023 |
SuperLoss: A generic loss for robust curriculum learning P Weinzaepfel, J Revaud, T Castells US Patent App. 17/383,860, 2022 | 4 | 2022 |
EdgeFusion: On-Device Text-to-Image Generation T Castells, HK Song, T Piao, S Choi, BK Kim, H Yim, C Lee, JG Kim, ... arXiv preprint arXiv:2404.11925, 2024 | 1 | 2024 |
Method and apparatus for information flow based automatic neural network compression that preserves the model accuracy SK Yeom, T Castells US Patent App. 18/056,644, 2023 | | 2023 |
Supplementary Material for SuperLoss: A Generic Loss for Robust Curriculum Learning T Castells, P Weinzaepfel, J Revaud | | |