Paraphrasing complex network: Network compression via factor transfer J Kim, SU Park, N Kwak Advances in Neural Information Processing Systems (NeurIPS) 31, 2018 | 581 | 2018 |
Feature-map-level online adversarial knowledge distillation I Chung, SU Park, J Kim, N Kwak International Conference on Machine Learning (ICML), 2006-2015, 2020 | 155 | 2020 |
Feature fusion for online mutual knowledge distillation J Kim, M Hyun, I Chung, N Kwak 2020 25th International Conference on Pattern Recognition (ICPR), 4619-4625, 2021 | 107 | 2021 |
QTI submission to DCASE 2021: Residual normalization for device imbalanced acoustic scene classification with efficient design B Kim, S Yang, J Kim, S Chang DCASE 2021 Challenge, Tech. Rep., 2021 | 74 | 2021 |
QKD: Quantization-aware knowledge distillation J Kim, Y Bhalgat, J Lee, C Patel, N Kwak arXiv preprint arXiv:1911.12491, 2019 | 74 | 2019 |
Detection of adversarial examples in text classification: Benchmark and baseline via robust density estimation KY Yoo, J Kim, J Jang, N Kwak Findings of the Association for Computational Linguistics: ACL 2022, 3656-3672, 2022 | 42* | 2022 |
Position-based scaled gradient for model quantization and pruning J Kim, KY Yoo, N Kwak Advances in Neural Information Processing Systems (NeurIPS) 33, 20415-20426, 2020 | 39 | 2020 |
PQK: model compression via pruning, quantization, and knowledge distillation J Kim, S Chang, N Kwak INTERSPEECH, 2021 | 38 | 2021 |
Domain generalization with relaxed instance frequency-wise normalization for multi-device acoustic scene classification B Kim, S Yang, J Kim, H Park, J Lee, S Chang INTERSPEECH, 2022 | 28* | 2022 |
Domain generalization on efficient acoustic scene classification using residual normalization B Kim, S Yang, J Kim, S Chang DCASE 2021 Workshop, 2021 | 21 | 2021 |
StackNet: Stacking feature maps for continual learning J Kim, J Kim, N Kwak Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020 | 15* | 2020 |
Self-Distilled Self-Supervised Representation Learning J Jang, S Kim, K Yoo, C Kong, J Kim, N Kwak Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023 | 13 | 2023 |
Finding Efficient Pruned Network via Refined Gradients for Pruned Weights J Kim, J Yoo, Y Song, KY Yoo, N Kwak Proceedings of the 31st ACM International Conference on Multimedia (ACM MM …, 2023 | 6* | 2023 |
Prototype-based Personalized Pruning J Kim, S Chang, S Yun, N Kwak 2021 IEEE International Conference on Acoustics, Speech and Signal …, 2021 | 5 | 2021 |
Quantization Robust Pruning With Knowledge Distillation J Kim IEEE Access 11, 26419-26426, 2023 | 4 | 2023 |
Magnitude Attention-based Dynamic Pruning J Back, N Ahn, J Kim arXiv preprint arXiv:2306.05056, 2023 | 2 | 2023 |
Personalized neural network pruning S Chang, J Kim, H Park, J Lee, J Choi, KW Hwang US Patent App. 17/506,646, 2022 | 2 | 2022 |
Detecting Korean characters in natural scenes by alphabet detection and agglomerative character construction J Kim, YJ Kim, Y Kim, D Kim 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC …, 2016 | 2 | 2016 |
Model compression via position-based scaled gradient J Kim, K Yoo, N Kwak IEEE Access 10, 133828-133841, 2022 | 1 | 2022 |