Linear Bandits on Ellipsoids: Minimax Optimal Algorithms

Abstract
We consider linear stochastic bandits where the set of actions is an ellipsoid. We provide the first known minimax optimal algorithm for this problem. We first derive a novel information-theoretic lower bound on the regret of any algorithm, which must be at least $\Omega\big(\min(d\sigma\sqrt{T} + d\|\theta\|_A,\ \|\theta\|_A T)\big)$, where $d$ is the dimension, $T$ the time horizon, $\sigma^2$ the noise variance, $A$ a matrix defining the set of actions, and $\theta$ the vector of unknown parameters. We then provide an algorithm whose regret matches this bound up to a multiplicative universal constant. The algorithm is non-classical in the sense that it is neither optimistic nor a sampling algorithm. The main idea is to combine a novel sequential procedure to estimate $\|\theta\|_A$ with an explore-and-commit strategy informed by this estimate. The algorithm is highly computationally efficient: a run requires only $O(dT + d^2\log(T/d) + d^3)$ time and $O(d^2)$ memory, in contrast with known optimistic algorithms, which are not implementable in polynomial time. We go beyond minimax optimality and show that our algorithm is locally asymptotically minimax optimal, a much stronger notion of optimality. We further provide numerical experiments to illustrate our theoretical findings. The code to reproduce the experiments is available at \url{https://github.com/RaymZhang/LinearBanditsEllipsoidsMinimaxCOLT}.
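As a rough illustration of the explore-and-commit template mentioned in the abstract, here is a minimal sketch of a generic explore-then-commit baseline on an ellipsoidal action set. This is not the paper's algorithm: it assumes the action set is $\{x : x^\top A^{-1} x \le 1\}$ with $A$ symmetric positive definite, uses a fixed exploration budget `n_explore` rather than the paper's sequential estimate of $\|\theta\|_A$, and the names `best_action` and `explore_then_commit` are hypothetical.

```python
import numpy as np

def best_action(A, theta):
    # Maximize <x, theta> over the ellipsoid {x : x^T A^{-1} x <= 1}.
    # By Lagrange multipliers: x* = A theta / sqrt(theta^T A theta),
    # with optimal value ||theta||_A = sqrt(theta^T A theta).
    # A is assumed symmetric positive definite.
    w = A @ theta
    return w / np.sqrt(max(theta @ w, 1e-12))

def explore_then_commit(A, theta_star, T, sigma, n_explore, seed=0):
    # Explore: play the scaled principal axes of the ellipsoid in round-robin;
    # each column x of `axes` satisfies x^T A^{-1} x = 1 (boundary points).
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    eigvals, eigvecs = np.linalg.eigh(A)
    axes = eigvecs * np.sqrt(eigvals)
    X = np.array([axes[:, t % d] for t in range(n_explore)])
    y = X @ theta_star + sigma * rng.standard_normal(n_explore)
    # Estimate theta by ordinary least squares, then commit to the greedy action.
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    x_commit = best_action(A, theta_hat)
    # Cumulative regret against the optimal per-round value ||theta*||_A.
    opt = np.sqrt(theta_star @ A @ theta_star)
    return np.sum(opt - X @ theta_star) + (T - n_explore) * (opt - x_commit @ theta_star)

# Hypothetical toy run: a 3-dimensional ellipsoid with semi-axes 1, 2, 3.
A = np.diag([1.0, 4.0, 9.0])
theta_star = np.array([0.5, -0.2, 0.1])
print(explore_then_commit(A, theta_star, T=10_000, sigma=1.0, n_explore=300))
```

Exploring along the scaled principal axes gives a well-conditioned design matrix for the least-squares estimate; the paper's algorithm instead chooses when to commit in a data-driven way, using its sequential norm estimate.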
Type
Publication
In the Thirty-Eighth Annual Conference on Learning Theory (COLT 2025)