Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …

LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities

Y Zhu, X Wang, J Chen, S Qiao, Y Ou, Y Yao, S Deng… - World Wide Web, 2024 - Springer
This paper presents an exhaustive quantitative and qualitative evaluation of Large
Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We …

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …

Hybrid transformer with multi-level fusion for multimodal knowledge graph completion

X Chen, N Zhang, L Li, S Deng, C Tan, C Xu… - Proceedings of the 45th …, 2022 - dl.acm.org
Multimodal Knowledge Graphs (MKGs), which organize visual-text factual knowledge, have
recently been successfully applied to tasks such as information retrieval, question …

A systematic survey of prompt engineering on vision-language foundation models

J Gu, Z Han, S Chen, A Beirami, B He, G Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a technique that involves augmenting a large pre-trained model with
task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be …

Domain specialization as the key to make large language models disruptive: A comprehensive survey

C Ling, X Zhao, J Lu, C Deng, C Zheng, J Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have significantly advanced the field of natural language
processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of …

Good visual guidance makes a better extractor: Hierarchical visual prefix for multimodal entity and relation extraction

X Chen, N Zhang, L Li, Y Yao, S Deng, C Tan… - arXiv preprint arXiv …, 2022 - arxiv.org
Multimodal named entity recognition and relation extraction (MNER and MRE) are
fundamental and crucial branches of information extraction. However, existing approaches for …

Prompting is all you need: Automated android bug replay with large language models

S Feng, C Chen - Proceedings of the 46th IEEE/ACM International …, 2024 - dl.acm.org
Bug reports are vital for software maintenance, allowing users to inform developers of the
problems encountered while using the software. As such, researchers have committed …