Algorithmic chaining and the role of partial feedback in online nonparametric learning
Conference on Learning Theory, 2017 • proceedings.mlr.press
Abstract
We investigate contextual online learning with nonparametric (Lipschitz) comparison classes under different assumptions on losses and feedback information. For full information feedback and Lipschitz losses, we design the first explicit algorithm achieving the minimax regret rate (up to log factors). In a partial feedback model motivated by second-price auctions, we obtain algorithms for Lipschitz and semi-Lipschitz losses with regret bounds improving on the known bounds for standard bandit feedback. Our analysis combines novel results for contextual second-price auctions with a novel algorithmic approach based on chaining. When the context space is Euclidean, our chaining approach is efficient and delivers an even better regret bound.
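To make the regret notion in the abstract concrete, the following is the standard formulation of contextual online learning against a Lipschitz comparison class; the notation (contexts x_t, losses ℓ_t, Lipschitz constant L, metric ρ) is ours and is only an illustrative sketch, not taken from the paper itself:

\[
R_T \;=\; \sum_{t=1}^{T} \ell_t\bigl(\hat y_t\bigr) \;-\; \inf_{f \in \mathcal{F}_L} \sum_{t=1}^{T} \ell_t\bigl(f(x_t)\bigr),
\qquad
\mathcal{F}_L \;=\; \bigl\{\, f : \mathcal{X} \to [0,1] \;\bigm|\; |f(x) - f(x')| \le L\,\rho(x,x') \;\; \forall x, x' \in \mathcal{X} \,\bigr\}.
\]

Here, at each round t the learner observes a context x_t, outputs a prediction \(\hat y_t\), and suffers loss \(\ell_t(\hat y_t)\). Under full-information feedback the entire loss function \(\ell_t\) is revealed after the round, whereas under partial (bandit-like) feedback, such as the second-price auction model studied in the paper, only limited information about \(\ell_t\) is observed.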