Journal article, Journal of Artificial Intelligence Research, 2010

Kalman Temporal Differences

Abstract

Because reinforcement learning suffers from a lack of scalability, online value (and Q-) function approximation has received increasing interest over the last decade. This contribution introduces a novel approximation scheme, the Kalman Temporal Differences (KTD) framework, which exhibits the following features: sample efficiency, non-linear approximation, non-stationarity handling, and uncertainty management. A first KTD-based algorithm is provided for deterministic Markov Decision Processes (MDPs); it produces biased estimates in the case of stochastic transitions. Then the eXtended KTD framework (XKTD), which handles stochastic MDPs, is described. Convergence is analyzed in special cases for both deterministic and stochastic transitions. The related algorithms are evaluated on classical benchmarks and compare favorably to the state of the art while exhibiting the announced features.
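To make the idea concrete, below is a minimal, illustrative sketch of the linear special case of this approach: the parameter vector of the value function is treated as the hidden state of a Kalman filter, and the observed reward is linked to it through the Bellman equation, so the filter's innovation is exactly the TD error. This is not the paper's general algorithm (which also covers non-linear parameterizations); the class name `LinearKTD`, the random-walk evolution model, and all noise settings here are our assumptions for illustration.

```python
import numpy as np

class LinearKTD:
    """Illustrative KTD-style value estimator for a deterministic MDP
    with a linear parameterization V(s) = theta^T phi(s). In this linear
    case the scheme reduces to a standard Kalman filter (hypothetical
    hyperparameter values; not the paper's reference implementation)."""

    def __init__(self, n_features, gamma=0.99,
                 process_noise=1e-3, obs_noise=1.0, prior_var=10.0):
        self.gamma = gamma
        self.theta = np.zeros(n_features)             # parameter estimate
        self.P = prior_var * np.eye(n_features)       # parameter covariance
        self.Q = process_noise * np.eye(n_features)   # evolution (process) noise
        self.R = obs_noise                            # observation noise variance

    def update(self, phi_s, phi_next, reward):
        # Prediction step: a random-walk model theta_t = theta_{t-1} + v_t
        # is one way to accommodate non-stationarity (e.g. policy improvement).
        self.P = self.P + self.Q
        # Observation model: r_t = theta^T (phi(s_t) - gamma * phi(s_{t+1})) + n_t
        h = phi_s - self.gamma * phi_next
        innovation = reward - self.theta @ h          # this is the TD error
        s = h @ self.P @ h + self.R                   # innovation variance
        k = self.P @ h / s                            # Kalman gain
        self.theta = self.theta + k * innovation
        self.P = self.P - np.outer(k, h @ self.P)

    def value(self, phi_s):
        # Returns a point estimate and a variance: the covariance P is what
        # provides the uncertainty management mentioned in the abstract.
        return self.theta @ phi_s, phi_s @ self.P @ phi_s

# Example: one observed transition (s, r, s') with random features.
ktd = LinearKTD(n_features=4)
phi_s, phi_next = np.random.rand(4), np.random.rand(4)
ktd.update(phi_s, phi_next, reward=1.0)
print(ktd.value(phi_s))
```

Note that this sketch assumes white observation noise, which is why it is only unbiased for deterministic transitions; handling stochastic MDPs, as XKTD does, requires a modified noise model.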
No file deposited

Dates and versions

hal-00858687, version 1 (05-09-2013)

Identifiers

  • HAL Id: hal-00858687, version 1

Cite

Matthieu Geist, Olivier Pietquin. Kalman Temporal Differences. Journal of Artificial Intelligence Research, 2010, 39, pp. 483-532. ⟨hal-00858687⟩