Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks

Abstract: In this paper, we study the problem of dynamic channel allocation for URLLC traffic in a multi-user, multi-channel wireless network where urgent packets must be successfully received in a timely manner. We formulate the problem as a finite-horizon Markov Decision Process with a stochastic constraint related to the QoS requirement, defined as the packet loss rate of each user. We propose a novel weighted formulation that accounts for both the total expected reward (the number of successfully decoded packets) and the risk, which we define as the violation of the QoS requirement. First, we use the value iteration algorithm to find the optimal policy, which assumes the controller has perfect knowledge of all parameters, namely the channel statistics. We then propose a Q-learning algorithm in which the controller learns the optimal policy without knowledge of either the CSI or the channel statistics. We illustrate the performance of our algorithms with numerical studies.
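The learning approach the abstract describes can be sketched in tabular form: a Q-learning agent optimizes a weighted combination of reward (decoded packets) and risk (QoS violation), without access to the channel statistics. The sketch below is a minimal illustration under assumed toy dynamics, not the paper's algorithm: the environment `toy_step`, the single-state setting, the channel success probabilities, and the specific weighting `r = (1 - w) * reward - w * risk` are all hypothetical choices made for the example.

```python
import random
from collections import defaultdict

def risk_sensitive_q_learning(n_actions, step, weight=0.5, episodes=2000,
                              horizon=10, alpha=0.1, gamma=0.9, epsilon=0.1,
                              seed=0):
    """Tabular Q-learning on a weighted reward-risk objective.

    `step(state, action, rng)` -> (next_state, reward, risk) is assumed to
    simulate one slot of a channel-allocation MDP, where `risk` counts
    toward the QoS (packet-loss) violation.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # Epsilon-greedy exploration over the channels.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[(s, x)])
            s2, reward, risk = step(s, a, rng)
            # Weighted objective: trade off throughput against QoS risk.
            r = (1.0 - weight) * reward - weight * risk
            best_next = max(Q[(s2, x)] for x in range(n_actions))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

def toy_step(state, action, rng):
    # Hypothetical statistics: channel 1 succeeds more often than channel 0.
    p_success = [0.3, 0.8][action]
    success = rng.random() < p_success
    reward = 1.0 if success else 0.0
    risk = 0.0 if success else 1.0  # a lost packet counts as a QoS violation
    return 0, reward, risk          # single-state toy environment

Q = risk_sensitive_q_learning(n_actions=2, step=toy_step)
# The learned Q-values should favor the statistically better channel (action 1),
# even though the learner never observes p_success directly.
```

The point of the sketch is the model-free property highlighted in the abstract: the update uses only the sampled `(reward, risk)` outcome of each slot, so the controller needs neither the CSI nor the channel statistics.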

https://hal-centralesupelec.archives-ouvertes.fr/hal-01930918
Contributor: Nesrine Ben Khalifa
Submitted on: Thursday, November 22, 2018 - 1:32:11 PM
Last modification on: Tuesday, December 11, 2018 - 10:20:16 AM
Long-term archiving on: Saturday, February 23, 2019 - 2:23:00 PM

File

wcnc_last_version.pdf

Identifiers

  • HAL Id: hal-01930918, version 1

Citation

Nesrine Ben Khalifa, Mohamad Assaad, Merouane Debbah. Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks. IEEE WCNC, Apr 2019, Marrakech, Morocco. ⟨hal-01930918⟩
