Explainability for large language models: A survey

H Zhao, H Chen, F Yang, N Liu, H Deng, H Cai… - ACM Transactions on …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated impressive capabilities in natural
language processing. However, their internal mechanisms are still unclear, and this lack of …

Transformer-based attention network for stock movement prediction

Q Zhang, C Qin, Y Zhang, F Bao, C Zhang… - Expert Systems with …, 2022 - Elsevier
Stock movement prediction is an important field of study that can help market traders make
better trading decisions and earn more profit. The fusion of text from social media platforms …
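
The full model fuses price series with social-media text; as a rough illustration of just the price branch, here is a minimal sketch of a Transformer encoder classifying next-day movement. All dimensions and hyperparameters are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the paper's exact model): a Transformer encoder over
# a window of daily price features, classifying next-day movement up/down.
import torch
import torch.nn as nn

class StockMovementTransformer(nn.Module):
    def __init__(self, n_features=5, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)         # embed daily features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)                  # up / down

    def forward(self, x):                                  # x: (batch, days, n_features)
        h = self.encoder(self.proj(x))
        return self.head(h[:, -1])                         # predict from last day's state

model = StockMovementTransformer()
window = torch.randn(8, 30, 5)                             # 8 stocks, 30 trading days
logits = model(window)                                     # (8, 2)
```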

Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning

K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2023 - Elsevier
Sentiment analysis is one of the traditional, well-known tasks in Natural Language
Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …
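
To make the idea concrete, here is a minimal sketch of prompt learning for sentiment: a frozen masked LM fills a cloze template, and a verbalizer maps the predicted word to a label. The template and verbalizer words ("great"/"terrible") are illustrative assumptions, not from the surveyed papers.

```python
# Prompt-based sentiment classification with a masked LM, no fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def prompt_sentiment(text):
    prompt = f"{text} It was {tok.mask_token}."            # assumed cloze template
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    good, bad = tok.convert_tokens_to_ids(["great", "terrible"])  # assumed verbalizer
    return "positive" if logits[0, mask_pos, good] > logits[0, mask_pos, bad] else "negative"

print(prompt_sentiment("The film kept me hooked until the end."))
```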

Context-guided BERT for targeted aspect-based sentiment analysis

Z Wu, DC Ong - Proceedings of the AAAI conference on artificial …, 2021 - ojs.aaai.org
Aspect-based sentiment analysis (ABSA) and Targeted ABSA (TABSA) allow finer-grained
inferences about sentiment to be drawn from the same text, depending on context. For …
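
The paper's contribution is a context-guided attention variant of BERT; the sketch below shows only the standard sentence-pair baseline such TABSA work builds on, encoding the target and aspect as an auxiliary sentence. The Sentihood-style example and the three-way label set are assumptions here.

```python
# TABSA as BERT sentence-pair classification: (sentence, "target - aspect").
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)                 # negative / neutral / positive

sentence = "The food at LOC1 was great but the service was slow."
aux = "LOC1 - food quality"                            # target + aspect as second segment
inputs = tok(sentence, aux, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)                                           # untrained head: illustrative only
```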

On explaining your explanations of BERT: An empirical study with sequence classification

Z Wu, DC Ong - arXiv preprint arXiv:2101.00196, 2021 - arxiv.org
BERT, as one of the pretrained language models, has attracted much attention in recent years
for setting new benchmarks across GLUE tasks via fine-tuning. One pressing issue is to …
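
One attribution method typically compared in such studies is gradient x input on the token embeddings; the sketch below computes it for a BERT sequence classifier. The fine-tuned checkpoint name and the choice to explain the predicted class are assumptions.

```python
# Gradient x input saliency per token for BERT sequence classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/bert-base-uncased-SST-2"            # assumed SST-2 checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

inputs = tok("A gripping, beautifully shot film.", return_tensors="pt")
emb = model.bert.embeddings.word_embeddings(inputs.input_ids)
emb.retain_grad()                                      # keep grad on non-leaf tensor
logits = model(inputs_embeds=emb, attention_mask=inputs.attention_mask).logits
logits[0, logits.argmax()].backward()                  # gradient w.r.t. predicted class
saliency = (emb.grad * emb).sum(-1).squeeze(0)         # gradient x input per token
for t, s in zip(tok.convert_ids_to_tokens(inputs.input_ids[0]), saliency):
    print(f"{t:>12s} {s.item():+.3f}")
```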

Attention uncovers task-relevant semantics in emotional narrative understanding

TS Nguyen, Z Wu, DC Ong - Knowledge-Based Systems, 2021 - Elsevier
Attention mechanisms in deep neural network models have helped them to achieve
exceptional performance at complex natural language processing tasks. Previous attempts …
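
As a sketch of the kind of analysis involved, the snippet below extracts attention weights from a Transformer and scores each token by the attention it receives, averaged over layers and heads; the model choice and example sentence are assumptions, not the paper's setup.

```python
# Per-token attention received, averaged over all layers and heads.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

inputs = tok("She smiled, though her hands would not stop shaking.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions            # tuple of (1, heads, seq, seq)
stacked = torch.stack(attentions).mean(dim=(0, 2))     # average layers, heads -> (1, seq, seq)
received = stacked.squeeze(0).mean(dim=0)              # mean attention each token receives
for t, a in zip(tok.convert_ids_to_tokens(inputs.input_ids[0]), received):
    print(f"{t:>10s} {a.item():.3f}")
```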

Influence patterns for explaining information flow in BERT

K Lu, Z Wang, P Mardziel… - Advances in Neural …, 2021 - proceedings.neurips.cc
While "attention is all you need" may be proving true, we do not know why: attention-based
transformer models such as BERT are superior, but how information flows from input tokens …
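
The paper traces gradient-based influence paths; a much simpler and widely used proxy for cross-layer information flow is attention rollout (Abnar & Zuidema, 2020), sketched below under assumed model and input choices.

```python
# Attention rollout: residual-adjusted attention matrices composed across layers.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

inputs = tok("The keys to the cabinet are on the table.", return_tensors="pt")
with torch.no_grad():
    atts = model(**inputs).attentions                  # tuple of (1, heads, seq, seq)

rollout = torch.eye(atts[0].shape[-1])
for layer_att in atts:
    a = layer_att.mean(dim=1).squeeze(0)               # average heads -> (seq, seq)
    a = 0.5 * a + 0.5 * torch.eye(a.shape[-1])         # account for residual connection
    a = a / a.sum(dim=-1, keepdim=True)                # renormalize rows
    rollout = a @ rollout                              # compose flow across layers
print(rollout[0])                                      # flow from input tokens into [CLS]
```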

The quarrel of local post-hoc explainers for moral values classification in natural language processing

A Agiollo, L Cavalcante Siebert… - … Autonomous Agents and …, 2023 - Springer
Although popular and effective, large language models (LLMs) are characterised by a
performance vs. transparency trade-off that hinders their applicability to sensitive scenarios …
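
As a concrete example of one local post-hoc explainer such comparisons include, the sketch below runs LIME over a toy text classifier; the miniature "moral values" data and the sklearn pipeline are stand-ins for illustration, not the paper's setup.

```python
# LIME explaining a single prediction of a toy text classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["helping others is our duty", "cheating is acceptable if you win",
         "be loyal to your family", "betray anyone for profit"]
labels = [1, 0, 1, 0]                                  # made-up "moral" vs "non-moral" labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["non-moral", "moral"])
exp = explainer.explain_instance("loyalty is a duty we owe our friends",
                                 clf.predict_proba, num_features=5)
print(exp.as_list())                                   # per-word local importance
```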

From large language models to small logic programs: building global explanations from disagreeing local post-hoc explainers

A Agiollo, LC Siebert, PK Murukannaiah… - Autonomous Agents and …, 2024 - Springer
The expressive power and effectiveness of large language models (LLMs) are going to
increasingly push intelligent agents towards sub-symbolic models for natural language …
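
This is not the paper's logic-program construction, but a sketch of its starting point: aggregating per-instance local feature weights, which may disagree in sign across explainers and instances, into one global ranking. All weights below are made up.

```python
# Aggregate (possibly disagreeing) local explanations into a global ranking.
from collections import defaultdict

# per-instance LIME-style outputs: (feature, signed local weight)
local_explanations = [
    [("duty", 0.42), ("loyal", 0.31), ("profit", -0.18)],
    [("duty", 0.05), ("profit", -0.40), ("cheat", -0.33)],
    [("loyal", 0.28), ("duty", -0.12), ("family", 0.25)],  # disagrees on "duty"
]

scores, counts = defaultdict(float), defaultdict(int)
for explanation in local_explanations:
    for feature, weight in explanation:
        scores[feature] += weight
        counts[feature] += 1

# rank by mean signed weight; sign flips across instances signal disagreement
global_ranking = sorted(((f, scores[f] / counts[f]) for f in scores),
                        key=lambda kv: abs(kv[1]), reverse=True)
print(global_ranking)
```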

Does BERT look at sentiment lexicon?

E Razova, S Vychegzhanin, E Kotelnikov - International conference on …, 2021 - Springer
The main approaches to sentiment analysis are rule-based methods and machine learning,
in particular, deep neural network models with the Transformer architecture, including BERT …
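
As a sketch of the measurement the title suggests, the snippet below computes what fraction of BERT's layer- and head-averaged attention mass lands on sentiment-lexicon tokens; the tiny lexicon here is an illustrative stand-in for a real resource such as SentiWordNet.

```python
# Share of attention mass received by sentiment-lexicon tokens.
import torch
from transformers import AutoTokenizer, AutoModel

LEXICON = {"great", "awful", "love", "terrible", "wonderful", "boring"}  # toy lexicon

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

inputs = tok("The plot was boring but the acting was wonderful.", return_tensors="pt")
with torch.no_grad():
    atts = model(**inputs).attentions                  # tuple of (1, heads, seq, seq)

tokens = tok.convert_ids_to_tokens(inputs.input_ids[0])
is_lex = torch.tensor([t in LEXICON for t in tokens], dtype=torch.float)

received = torch.stack(atts).mean(dim=(0, 2)).squeeze(0).mean(dim=0)  # per-token mass
share = (received * is_lex).sum() / received.sum()
print(f"attention mass on lexicon tokens: {share.item():.1%}")
```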