A comprehensive study of ChatGPT: advancements, limitations, and ethical considerations in natural language processing and cybersecurity
This paper presents an in-depth study of ChatGPT, a state-of-the-art language model that is
revolutionizing generative text. We provide a comprehensive analysis of its architecture …
ChatGPT is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling
Recently, ChatGPT, a representative large language model (LLM), has gained considerable
attention due to its powerful emergent abilities. Some researchers suggest that LLMs could …
TwHIN-BERT: A socially-enriched pre-trained language model for multilingual tweet representations at Twitter
Pre-trained language models (PLMs) are fundamental for natural language processing
applications. Most existing PLMs are not tailored to the noisy user-generated text on social …
OAG-Bench: a human-curated benchmark for academic graph mining
With the rapid proliferation of scientific literature, versatile academic knowledge services
increasingly rely on comprehensive academic graph mining. Despite the availability of …
Give us the facts: Enhancing large language models with knowledge graphs for fact-aware language modeling
Recently, ChatGPT, a representative large language model (LLM), has gained considerable
attention. Due to their powerful emergent abilities, recent LLMs are considered a possible …
A systematic review of transformer-based pre-trained language models through self-supervised learning
E Kotei, R Thirunavukarasu - Information, 2023 - mdpi.com
Transfer learning is a technique utilized in deep learning applications to transmit learned
inference to a different target domain. The approach is mainly to solve the problem of a few …
The effect of metadata on scientific literature tagging: A cross-field cross-model study
Due to the exponential growth of scientific publications on the Web, there is a pressing need
to tag each paper with fine-grained topics so that researchers can track their interested fields …
OAG: Linking entities across large-scale heterogeneous knowledge graphs
Different knowledge graphs for the same domain are often uniquely housed on the Web.
Effectively linking entities from different graphs is critical for building an open and …
Pretraining language models with text-attributed heterogeneous graphs
In many real-world scenarios (eg, academic networks, social platforms), different types of
entities are not only associated with texts but also connected by various relationships, which …
Weakly supervised multi-label classification of full-text scientific papers
Instead of relying on human-annotated training samples to build a classifier, weakly
supervised scientific paper classification aims to classify papers only using category …