Taolin Zhang
Alibaba Group
Verified email at alibaba-inc.com
Title · Cited by · Year
SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining
T Zhang, Z Cai, C Wang, M Qiu, B Yang, X He
ACL 2021, 2021
Cited by 56 · 2021
DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding
T Zhang, C Wang, N Hu, M Qiu, C Tang, X He, J Huang
AAAI 2022, 2021
Cited by 40 · 2021
EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing
C Wang, M Qiu, T Zhang, T Liu, L Li, J Wang, M Wang, J Huang, W Lin
EMNLP 2022, 2022
Cited by 25 · 2022
HORNET: enriching pre-trained language representations with heterogeneous knowledge sources
T Zhang, Z Cai, C Wang, P Li, Y Li, M Qiu, C Tang, X He, J Huang
Proceedings of the 30th ACM International Conference on Information …, 2021
Cited by 9 · 2021
EMBERT: A pre-trained language model for Chinese medical text mining
Z Cai, T Zhang, C Wang, X He
Web and Big Data: 5th International Joint Conference, APWeb-WAIM 2021 …, 2021
Cited by 9 · 2021
Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training
T Zhang, J Dong, J Wang, C Wang, A Wang, Y Liu, J Huang, Y Li, X He
EMNLP 2022, 2022
Cited by 7 · 2022
Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources
T Zhang, C Wang, M Qiu, B Yang, X He, J Huang
Findings of ACL 2021, 2020
Cited by 4 · 2020
From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models
J Yan, C Wang, T Zhang, X He, J Huang, W Zhang
arXiv preprint arXiv:2311.06754, 2023
Cited by 2 · 2023
HAIN: Hierarchical Aggregation and Inference Network for Document-Level Relation Extraction
N Hu, T Zhang, S Yang, W Nong, X He
NLPCC 2021, 325-337, 2021
Cited by 2 · 2021
Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning
Q Chen, T Zhang, D Li, L Huang, H Xue, C Wang, X He
arXiv preprint arXiv:2405.03279, 2024
Cited by 1 · 2024
Learning knowledge-enhanced contextual language representations for domain natural language understanding
T Zhang, R Xu, C Wang, Z Duan, C Chen, M Qiu, D Cheng, X He, W Qian
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Cited by 1 · 2023
On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models
D Li, J Yan, T Zhang, C Wang, X He, L Huang, H Xue, J Huang
arXiv preprint arXiv:2406.16367, 2024
2024
DAFNet: Dynamic Auxiliary Fusion for Sequential Model Editing in Large Language Models
T Zhang, Q Chen, D Li, C Wang, X He, L Huang, H Xue, J Huang
arXiv preprint arXiv:2405.20588, 2024
2024
R4: Reinforced Retriever-Reorder-Responder for Retrieval-Augmented Large Language Models
T Zhang, D Li, Q Chen, C Wang, L Huang, H Xue, X He, J Huang
arXiv preprint arXiv:2405.02659, 2024
2024
UniPSDA: Unsupervised Pseudo Semantic Data Augmentation for Zero-Shot Cross-Lingual Natural Language Understanding
D Li, T Zhang, J Deng, L Huang, C Wang, X He, H Xue
Proceedings of the 2024 Joint International Conference on Computational …, 2024
2024
KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning
D Li, T Zhang, L Huang, C Wang, X He, H Xue
Proceedings of the 2024 Joint International Conference on Computational …, 2024
2024
CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem
Q Chen, T Zhang, D Li, X He
Proceedings of the AAAI Conference on Artificial Intelligence 38 (16), 17763 …, 2024
2024
TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models
J Yan, C Wang, T Zhang, X He, J Huang, L Huang, H Xue, W Zhang
arXiv preprint arXiv:2403.11203, 2024
2024
Learning Knowledge-Enhanced Contextual Language Representations for Domain Natural Language Understanding
R Xu, T Zhang, C Wang, Z Duan, C Chen, M Qiu, D Cheng, X He, W Qian
arXiv preprint arXiv:2311.06761, 2023
2023
OnMKD: An Online Mutual Knowledge Distillation Framework for Passage Retrieval
J Deng, D Li, T Zhang, X He
CCF International Conference on Natural Language Processing and Chinese …, 2023
2023