Principled reinforcement learning with human feedback from pairwise or k-wise comparisons
We provide a theoretical framework for Reinforcement Learning with Human Feedback
(RLHF). We show that when the underlying true reward is linear, under both Bradley-Terry …
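The Bradley-Terry model referenced in the entry above can be sketched in a few lines; this is a minimal illustration under a linear true reward, not code from the paper (the feature vectors and parameter `theta` below are hypothetical):

```python
import math

def bradley_terry_prob(r_a, r_b):
    """Bradley-Terry probability that item A is preferred over item B,
    given scalar rewards r_a and r_b: sigmoid(r_a - r_b)."""
    return 1.0 / (1.0 + math.exp(-(r_a - r_b)))

# Hypothetical linear true reward r(x) = <theta, x>:
theta = [0.5, -0.2]
x_a, x_b = [1.0, 0.0], [0.0, 1.0]
r = lambda x: sum(t * xi for t, xi in zip(theta, x))
p = bradley_terry_prob(r(x_a), r(x_b))  # preference probability for A over B
```

Under this model, pairwise comparison data constrains the reward only through reward *differences*, which is why the snippet's linear-reward assumption matters for identifiability.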
Towards conversational recommender systems
K Christakopoulou, F Radlinski… - Proceedings of the 22nd …, 2016 - dl.acm.org
People often ask others for restaurant recommendations as a way to discover new dining
experiences. This makes restaurant recommendation an exciting scenario for recommender …
Dueling rl: Reinforcement learning with trajectory preferences
We consider the problem of preference-based reinforcement learning (PbRL), where, unlike
traditional reinforcement learning (RL), an agent receives feedback only in terms of 1 bit …
Preference-based online learning with dueling bandits: A survey
In machine learning, the notion of multi-armed bandits refers to a class of online learning
problems, in which an agent is supposed to simultaneously explore and exploit a given set …
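The dueling-bandit setting surveyed above can be illustrated with a deliberately naive uniform-exploration sketch; the preference matrix, function names, and strategy here are illustrative assumptions, not an algorithm from the surveyed papers:

```python
import random

def duel(i, j, pref):
    """Simulate 1-bit preference feedback: True if arm i beats arm j,
    where pref[i][j] is the probability that i wins the duel."""
    return random.random() < pref[i][j]

def simple_dueling_bandit(pref, rounds=1000, seed=0):
    """Naive dueling bandit: duel uniformly random pairs, track empirical
    pairwise win rates, and return the arm with the best average win rate."""
    random.seed(seed)
    k = len(pref)
    wins = [[0] * k for _ in range(k)]
    plays = [[0] * k for _ in range(k)]
    for _ in range(rounds):
        i, j = random.sample(range(k), 2)
        if duel(i, j, pref):
            wins[i][j] += 1
        else:
            wins[j][i] += 1
        plays[i][j] += 1
        plays[j][i] += 1
    avg = [sum(wins[i][j] / plays[i][j] if plays[i][j] else 0.0
               for j in range(k) if j != i) / (k - 1)
           for i in range(k)]
    return max(range(k), key=lambda i: avg[i])
```

Real algorithms from this literature replace the uniform pair selection with an explore/exploit rule, which is exactly the trade-off the survey entry describes.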
Efficient and optimal algorithms for contextual dueling bandits under realizability
A Saha, A Krishnamurthy - International Conference on …, 2022 - proceedings.mlr.press
We study the $ K $-armed contextual dueling bandit problem, a sequential decision making
setting in which the learner uses contextual information to make two decisions, but only …
Versatile dueling bandits: Best-of-both world analyses for learning from relative preferences
A Saha, P Gaillard - International Conference on Machine …, 2022 - proceedings.mlr.press
We study the problem of $ K $-armed dueling bandit for both stochastic and adversarial
environments, where the goal of the learner is to aggregate information through relative …
Iterative data smoothing: Mitigating reward overfitting and overoptimization in rlhf
Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique that aligns
language models closely with human-centric values. The initial phase of RLHF involves …
Multi-dueling bandits with dependent arms
The dueling bandits problem is an online learning framework for learning from pairwise
preference feedback, and is particularly well-suited for modeling settings that elicit …
Advancements in Dueling Bandits
The dueling bandits problem is an online learning framework where learning happens “on-the-fly” through preference feedback, i.e., from comparisons between a pair of actions. Unlike …