On using deep reinforcement learning to dynamically derive 5G new radio TDD pattern

Bagaa, Miloud; Boutiba, Karim; Ksentini, Adlen
GLOBECOM 2021, IEEE Global Communications Conference, 7-11 December 2021, Madrid, Spain

The deployment of 5G and 6G is largely motivated by emerging network services that demand more bandwidth and very low latency. In addition, traffic for these services is shifting from dominant Downlink (DL) traffic toward balanced DL/UpLink (UL) traffic, and even dominant UL traffic for specific emerging services. One option to accommodate this new behavior is Time Division Duplex (TDD), where the radio frame is shared between UL and DL time slots according to a so-called UL/DL pattern. While 4G TDD offers only a fixed set of configurations that cannot be updated at runtime, 5G NR allows complete flexibility in defining the UL/DL pattern. Therefore, 5G base stations can dynamically change the pattern to adapt to the type of traffic (i.e., UL or DL). However, the 5G standard does not specify algorithms or solutions to derive the UL/DL pattern. To fill this gap, we propose a Deep Reinforcement Learning (DRL) algorithm that adds intelligence to the base station, allowing it to self-adapt to the traffic pattern of the cell. The proposed algorithm monitors the UL and DL buffers at the 5G base station to derive the optimal UL/DL pattern with respect to the current traffic, delivering the optimal configuration in a timely and efficient manner. Simulation results demonstrate the efficiency of the proposed algorithm in avoiding buffer overflow, and show its generality by reacting to traffic pattern changes.
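To make the idea concrete, the following is a minimal, hypothetical sketch of the buffer-driven reinforcement-learning loop described in the abstract. It uses simple tabular Q-learning in place of the paper's deep network, and the pattern set, buffer model, and reward (penalizing total queued traffic) are illustrative assumptions, not the paper's actual design.

```python
import random

# Hypothetical set of TDD patterns over a 10-slot frame:
# each entry is (DL slots, UL slots). Illustrative only.
TDD_PATTERNS = [(8, 2), (6, 4), (5, 5), (4, 6), (2, 8)]

def discretize(buf, levels=4, cap=1000):
    """Map a buffer occupancy (bytes) to a coarse level for the Q-table."""
    return min(levels - 1, buf * levels // cap)

class TabularAgent:
    """Toy tabular stand-in for the paper's DRL agent: the state is the
    pair of discretized (UL, DL) buffer levels, the action is the index
    of a TDD pattern."""
    def __init__(self, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy choice over pattern indices.
        if random.random() < self.eps:
            return random.randrange(len(TDD_PATTERNS))
        return max(range(len(TDD_PATTERNS)),
                   key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, s, a, r, s2):
        # Standard one-step Q-learning update.
        best = max(self.q.get((s2, a2), 0.0)
                   for a2 in range(len(TDD_PATTERNS)))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best - old)

def step(ul_buf, dl_buf, pattern, arrivals=(300, 300), per_slot=100):
    """Toy cell model: drain each buffer in proportion to its allotted
    slots, then add newly arrived traffic; reward penalizes queued bytes."""
    dl_slots, ul_slots = TDD_PATTERNS[pattern]
    ul_buf = max(0, ul_buf - ul_slots * per_slot) + arrivals[0]
    dl_buf = max(0, dl_buf - dl_slots * per_slot) + arrivals[1]
    return ul_buf, dl_buf, -(ul_buf + dl_buf)
```

In a training loop, the agent would repeatedly observe the discretized buffer state, pick a pattern, apply `step`, and call `learn` on the resulting reward; a UL-heavy arrival process then drives the learned policy toward UL-heavy patterns, mirroring the self-adaptation the paper targets.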

Communication systems
© 2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PERMALINK : https://www.eurecom.fr/publication/6673