Best Arm Identification in Multi-Armed Bandits - École des Ponts ParisTech
Conference paper. Year: 2010

Best Arm Identification in Multi-Armed Bandits

Abstract

We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined as the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑_i 1/Δ_i^2, where the sum is over the suboptimal arms and Δ_i represents the difference between the mean reward of the best arm and that of arm i. This generalizes the well-known fact that one needs of order 1/Δ^2 samples to differentiate the means of two distributions with gap Δ.
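For concreteness, below is a minimal Python sketch of a successive-rejects procedure along the lines described in the abstract: the budget n is split over K - 1 phases using the normalizer log-bar(K) = 1/2 + ∑_{i=2}^{K} 1/i, each surviving arm is sampled equally within a phase, and the arm with the lowest empirical mean is dismissed at the end of each phase. The `arms` callable interface, the Bernoulli test arms, and the tie-breaking behavior of `min` are illustrative assumptions, not details taken from the paper itself; consult the PDF for the exact algorithm and its analysis.

import math
import random

def successive_rejects(n, arms):
    """Sketch of a successive-rejects best-arm identification procedure.

    n    -- total sampling budget (assumed large enough that every phase
            pulls each surviving arm at least once)
    arms -- list of callables; arms[i]() returns a stochastic reward
    Returns the index of the arm recommended as best.
    """
    K = len(arms)
    # log-bar(K) = 1/2 + sum_{i=2}^{K} 1/i, the normalizer used to split the budget
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))

    active = list(range(K))   # arms still in the running
    pulls = [0] * K           # samples drawn from each arm so far
    sums = [0.0] * K          # cumulative reward of each arm
    n_prev = 0                # n_{k-1}, cumulative per-arm pulls after the previous phase

    for k in range(1, K):     # K - 1 elimination phases
        # n_k = ceil((n - K) / (log_bar * (K + 1 - k)))
        n_k = math.ceil((n - K) / (log_bar * (K + 1 - k)))
        for i in active:
            for _ in range(n_k - n_prev):   # each surviving arm gets n_k - n_{k-1} new pulls
                sums[i] += arms[i]()
                pulls[i] += 1
        n_prev = n_k
        # dismiss the arm with the lowest empirical mean
        worst = min(active, key=lambda i: sums[i] / pulls[i])
        active.remove(worst)

    return active[0]          # the single surviving arm

if __name__ == "__main__":
    random.seed(0)
    means = [0.5, 0.45, 0.4, 0.3]   # hypothetical Bernoulli arms; arm 0 is best
    bandit = [lambda p=p: float(random.random() < p) for p in means]
    print(successive_rejects(n=4000, arms=bandit))

Note how the schedule matches the hardness measure in the abstract: arms eliminated early receive few samples, so the total budget needed to succeed scales (up to the log(K) factor) with ∑_i 1/Δ_i^2 rather than with K times the worst gap.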
Main file: COLT10.pdf (166.83 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00654404, version 1 (21-12-2011)

Identifiers

  • HAL Id: hal-00654404, version 1

Cite

Jean-Yves Audibert, Sébastien Bubeck. Best Arm Identification in Multi-Armed Bandits. COLT 2010 - 23rd Conference on Learning Theory, Jun 2010, Haifa, Israel. 13 p. ⟨hal-00654404⟩
3106 views
3753 downloads
