Myeongjun Erik Jang
Other names: Myeongjun Jang
Verified email at cs.ox.ac.uk
Title
Cited by
Year
Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning
M Jang, S Seo, P Kang
Information Sciences 490, 59-73, 2019
65 · 2019
Consistency analysis of ChatGPT
ME Jang, T Lukasiewicz
arXiv preprint arXiv:2303.06273, 2023
49 · 2023
BECEL: Benchmark for consistency evaluation of language models
M Jang, DS Kwon, T Lukasiewicz
Proceedings of the 29th International Conference on Computational …, 2022
27 · 2022
Unusual customer response identification and visualization based on text mining and anomaly detection
S Seo, D Seo, M Jang, J Jeong, P Kang
Expert Systems with Applications 144, 113111, 2020
25 · 2020
Learning-free unsupervised extractive summarization model
M Jang, P Kang
IEEE Access 9, 14358-14368, 2021
22 · 2021
Intrusion detection based on sequential information preserving log embedding methods and anomaly detection algorithms
C Kim, M Jang, S Seo, K Park, P Kang
IEEE Access 9, 58088-58101, 2021
18 · 2021
Text classification based on convolutional neural network with word and character level
K Mo, J Park, M Jang, P Kang
Journal of the Korean Institute of Industrial Engineers 44 (3), 180-188, 2018
13 · 2018
KoBEST: Korean balanced evaluation of significant tasks
M Jang, D Kim, DS Kwon, E Davis
Proceedings of the 29th International Conference on Computational …, 2022
12* · 2022
Accurate, yet inconsistent? consistency analysis on language understanding models
M Jang, DS Kwon, T Lukasiewicz
arXiv preprint arXiv:2108.06665, 2021
9 · 2021
Beyond distributional hypothesis: Let language models learn meaning-text correspondence
M Jang, F Mtumbuka, T Lukasiewicz
arXiv preprint arXiv:2205.03815, 2022
7 · 2022
Are training resources insufficient? Predict first then explain!
M Jang, T Lukasiewicz
arXiv preprint arXiv:2110.02056, 2021
5 · 2021
Paraphrase thought: Sentence embedding module imitating human language recognition
M Jang, P Kang
Information Sciences 541, 123-135, 2020
5 · 2020
KNOW how to make up your mind! adversarially detecting and alleviating inconsistencies in natural language explanations
M Jang, BP Majumder, J McAuley, T Lukasiewicz, OM Camburu
arXiv preprint arXiv:2306.02980, 2023
3 · 2023
NoiER: an approach for training more reliable fine-tuned downstream task models
M Jang, T Lukasiewicz
IEEE/ACM Transactions on Audio, Speech, and Language Processing 30, 2514-2525, 2022
3 · 2022
Sentence transition matrix: an efficient approach that preserves sentence semantics
M Jang, P Kang
Computer Speech & Language 71, 101266, 2022
2 · 2022
Improving Language Models Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary
ME Jang, T Lukasiewicz
arXiv preprint arXiv:2310.15541, 2023
1 · 2023
A robust deep learning platform to predict CD8+ T-cell epitopes
CH Lee, J Huh, PR Buckley, M Jang, M Pereira Pinho, RA Fernandes, ...
bioRxiv, 2022.12.29.522182, 2022
1 · 2022
Leveraging Natural Language Processing and Large Language Models for Assisting Due Diligence in the Legal Domain
M Jang, G Stikkel
Proceedings of the 2024 Conference of the North American Chapter of the …, 2024
2024
DriftWatch: A Tool that Automatically Detects Data Drift and Extracts Representative Examples Affected by Drift
M Jang, A Georgiadis, Y Zhao, F Silavong
Proceedings of the 2024 Conference of the North American Chapter of the …, 2024
2024
Pre-training and diagnosing knowledge base completion models
V Kocijan, M Jang, T Lukasiewicz
Artificial Intelligence 329, 104081, 2024
2024
Articles 1–20