On Transforming Reinforcement Learning With Transformers: The Development Trajectory
Transformers, originally devised for natural language processing (NLP), have also produced
significant successes in computer vision (CV). Due to their strong expressive power …
Interactive natural language processing
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within
the field of NLP, aimed at addressing limitations in existing frameworks while aligning with …
Introspective tips: Large language model for in-context decision making
The emergence of large language models (LLMs) has substantially influenced natural
language processing, demonstrating exceptional results across various tasks. In this study …
NL2Color: Refining color palettes for charts with natural language
Choice of color is critical to creating effective charts with an engaging, enjoyable, and
informative reading experience. However, designing a good color palette for a chart is a …
Arigraph: Learning knowledge graph world models with episodic memory for llm agents
Advancements in generative AI have broadened the potential applications of Large
Language Models (LLMs) in the development of autonomous agents. Achieving true …
Learning belief representations for partially observable deep RL
Many important real-world Reinforcement Learning (RL) problems involve partial
observability and require policies with memory. Unfortunately, standard deep RL algorithms …
EXPLORER: Exploration-guided Reasoning for Textual Reinforcement Learning
Text-based games (TBGs) have emerged as an important collection of NLP tasks, requiring
reinforcement learning (RL) agents to combine natural language understanding with …
Noisy symbolic abstractions for deep RL: A case study with reward machines
Natural and formal languages provide an effective mechanism for humans to specify
instructions and reward functions. We investigate how to generate policies via RL when …
Reward Machines for Deep RL in Noisy and Uncertain Environments
Reward Machines provide an automata-inspired structure for specifying instructions, safety
constraints, and other temporally extended reward-worthy behaviour. By exposing complex …
Using Incomplete and Incorrect Plans to Shape Reinforcement Learning in Long-Sequence Sparse-Reward Tasks
H Müller, D Kudenko - Proc. of the Adaptive and …, 2023 - alaworkshop2023.github.io
Reinforcement learning (RL) agents naturally struggle with long-sequence sparse-reward
tasks due to the lack of reward feedback during exploration and the problem of identifying …