
Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks

Abstract : In this paper, we study the problem of dynamic channel allocation for URLLC traffic in a multiuser multi-channel wireless network where urgent packets have to be successfully received in a timely manner. We formulate the problem as a finite-horizon Markov Decision Process with a stochastic constraint related to the QoS requirement, defined as the packet loss rate for each user. We propose a novel weighted formulation that takes into account both the total expected reward (the number of successfully decoded packets) and the risk, which we define as the violation of the QoS requirement. First, we use the value iteration algorithm to find the optimal policy, which assumes that the controller has perfect knowledge of all the parameters, namely the channel statistics. We then propose a Q-learning algorithm in which the controller learns the optimal policy without knowledge of either the CSI or the channel statistics. We illustrate the performance of our algorithms with numerical studies.
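The weighted formulation described in the abstract can be illustrated with a minimal sketch: tabular Q-learning on a hypothetical toy instance (one user, two channels with unknown success probabilities, a short horizon), where each reward combines +1 per decoded packet with a penalty weight on lost packets standing in for the QoS-violation risk term. All names and numbers below are illustrative assumptions, not the paper's actual model or parameters.

```python
import random

# Hypothetical toy instance (illustrative values, not from the paper):
# one user, two channels whose success probabilities are unknown to the
# learner, and a finite horizon of H slots per episode.
P_SUCCESS = [0.9, 0.6]          # true per-channel decoding probabilities
H = 5                           # horizon (slots per episode)
LAM = 2.0                       # weight on the risk term (loss penalty)
ALPHA, EPS, EPISODES = 0.05, 0.1, 50000

# Time-indexed Q-table Q[t][a]: the optimal policy of a finite-horizon
# MDP generally depends on the number of remaining slots.
Q = [[0.0, 0.0] for _ in range(H)]

random.seed(0)
for _ in range(EPISODES):
    for t in range(H):
        # epsilon-greedy channel selection
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda c: Q[t][c])
        success = random.random() < P_SUCCESS[a]
        # weighted reward: +1 for a decoded packet, -LAM for a lost one
        r = 1.0 if success else -LAM
        # bootstrap from the next slot; terminal value is 0 at t = H
        target = r + (max(Q[t + 1]) if t + 1 < H else 0.0)
        Q[t][a] += ALPHA * (target - Q[t][a])

# The learned per-slot policy should favor the more reliable channel.
policy = [max((0, 1), key=lambda c: Q[t][c]) for t in range(H)]
print(policy)
```

In this toy, channel 0 has expected weighted reward 0.9 - 0.1 * LAM = 0.7 per slot versus 0.6 - 0.4 * LAM = -0.2 for channel 1, so the learned policy should select channel 0 in every slot; the paper's actual algorithm operates on the full constrained MDP rather than this stateless caricature.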
Contributor : Nesrine Ben Khalifa
Submitted on : Thursday, November 22, 2018 - 1:32:11 PM
Last modification on : Sunday, June 26, 2022 - 2:27:40 AM
Long-term archiving on : Saturday, February 23, 2019 - 2:23:00 PM
  • HAL Id : hal-01930918, version 1


Nesrine Ben Khalifa, Mohamad Assaad, Mérouane Debbah. Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks. IEEE WCNC, Apr 2019, Marrakech, Morocco. ⟨hal-01930918⟩