When large language models meet personalization: Perspectives of challenges and opportunities

J Chen, Z Liu, X Huang, C Wu, Q Liu, G Jiang, Y Pu… - World Wide Web, 2024 - Springer
The advent of large language models marks a revolutionary breakthrough in artificial
intelligence. With the unprecedented scale of training and model parameters, the capability …

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning methods, have yielded promising performance on various tasks in …

Evaluating the ripple effects of knowledge editing in language models

R Cohen, E Biran, O Yoran, A Globerson… - Transactions of the …, 2024 - direct.mit.edu
Modern language models capture a large body of factual knowledge. However, some facts
can be incorrectly induced or become obsolete over time, resulting in factually incorrect …

Can foundation models wrangle your data?

A Narayan, I Chami, L Orr, S Arora, C Ré - arXiv preprint arXiv:2205.09911, 2022 - arxiv.org
Foundation Models (FMs) are models trained on large corpora of data that, at very large
scale, can generalize to new tasks without any task-specific finetuning. As these models …

Large language models and knowledge graphs: Opportunities and challenges

JZ Pan, S Razniewski, JC Kalo, S Singhania… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have taken Knowledge Representation--and the world--by
storm. This inflection point marks a shift from explicit knowledge representation to a renewed …

Crawling the internal knowledge-base of language models

R Cohen, M Geva, J Berant, A Globerson - arXiv preprint arXiv …, 2023 - arxiv.org
Language models are trained on large volumes of text, and as a result their parameters
might contain a significant body of factual knowledge. Any downstream task performed by …

PopBlends: Strategies for conceptual blending with large language models

S Wang, S Petridis, T Kwon, X Ma… - Proceedings of the 2023 …, 2023 - dl.acm.org
Pop culture is an important aspect of communication. On social media people often post pop
culture reference images that connect an event, product or other entity to a pop culture …

Measuring causal effects of data statistics on language model's 'factual' predictions

Y Elazar, N Kassner, S Ravfogel, A Feder… - arXiv preprint arXiv …, 2022 - arxiv.org
Large amounts of training data are one of the major reasons for the high performance of
state-of-the-art NLP models. But what exactly in the training data causes a model to make a …

[PDF] BertNet: Harvesting knowledge graphs from pretrained language models

S Hao, B Tan, K Tang, H Zhang… - arXiv preprint arXiv …, 2022 - researchgate.net
Symbolic knowledge graphs (KGs) have been constructed either by expensive human
crowdsourcing or with complex text mining pipelines. The emerging large pretrained …

Explaining toxic text via knowledge enhanced text generation

R Sridhar, D Yang - Proceedings of the 2022 Conference of the …, 2022 - aclanthology.org
Warning: This paper contains content that is offensive and may be upsetting. Biased or toxic
speech can be harmful to various demographic groups. Therefore, it is not only important for …