Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning. M Jang, S Seo, P Kang. Information Sciences 490, 59-73, 2019. Cited by 65.
Consistency analysis of ChatGPT. ME Jang, T Lukasiewicz. arXiv preprint arXiv:2303.06273, 2023. Cited by 49.
BECEL: Benchmark for consistency evaluation of language models. M Jang, DS Kwon, T Lukasiewicz. Proceedings of the 29th International Conference on Computational Linguistics, 2022. Cited by 27.
Unusual customer response identification and visualization based on text mining and anomaly detection. S Seo, D Seo, M Jang, J Jeong, P Kang. Expert Systems with Applications 144, 113111, 2020. Cited by 25.
Learning-free unsupervised extractive summarization model. M Jang, P Kang. IEEE Access 9, 14358-14368, 2021. Cited by 22.
Intrusion detection based on sequential information preserving log embedding methods and anomaly detection algorithms. C Kim, M Jang, S Seo, K Park, P Kang. IEEE Access 9, 58088-58101, 2021. Cited by 18.
Text classification based on convolutional neural network with word and character level. K Mo, J Park, M Jang, P Kang. Journal of the Korean Institute of Industrial Engineers 44 (3), 180-188, 2018. Cited by 13.
KoBEST: Korean balanced evaluation of significant tasks. M Jang, D Kim, DS Kwon, E Davis. Proceedings of the 29th International Conference on Computational Linguistics, 2022. Cited by 12*.
Accurate, yet inconsistent? Consistency analysis on language understanding models. M Jang, DS Kwon, T Lukasiewicz. arXiv preprint arXiv:2108.06665, 2021. Cited by 9.
Beyond distributional hypothesis: Let language models learn meaning-text correspondence. M Jang, F Mtumbuka, T Lukasiewicz. arXiv preprint arXiv:2205.03815, 2022. Cited by 7.
Are training resources insufficient? Predict first then explain! M Jang, T Lukasiewicz. arXiv preprint arXiv:2110.02056, 2021. Cited by 5.
Paraphrase thought: Sentence embedding module imitating human language recognition. M Jang, P Kang. Information Sciences 541, 123-135, 2020. Cited by 5.
KNOW how to make up your mind! Adversarially detecting and alleviating inconsistencies in natural language explanations. M Jang, BP Majumder, J McAuley, T Lukasiewicz, OM Camburu. arXiv preprint arXiv:2306.02980, 2023. Cited by 3.
NoiER: An approach for training more reliable fine-tuned downstream task models. M Jang, T Lukasiewicz. IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 2514-2525, 2022. Cited by 3.
Sentence transition matrix: An efficient approach that preserves sentence semantics. M Jang, P Kang. Computer Speech & Language 71, 101266, 2022. Cited by 2.
Improving Language Models' Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary. ME Jang, T Lukasiewicz. arXiv preprint arXiv:2310.15541, 2023. Cited by 1.
A robust deep learning platform to predict CD8+ T-cell epitopes. CH Lee, J Huh, PR Buckley, M Jang, M Pereira Pinho, RA Fernandes, ... bioRxiv 2022.12.29.522182, 2022. Cited by 1.
Leveraging Natural Language Processing and Large Language Models for Assisting Due Diligence in the Legal Domain. M Jang, G Stikkel. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics, 2024.
DriftWatch: A Tool that Automatically Detects Data Drift and Extracts Representative Examples Affected by Drift. M Jang, A Georgiadis, Y Zhao, F Silavong. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics, 2024.
Pre-training and diagnosing knowledge base completion models. V Kocijan, M Jang, T Lukasiewicz. Artificial Intelligence 329, 104081, 2024.