Schedule-Based Cooperative Multi-agent Reinforcement Learning for Multi-channel Communication in Wireless Sensor Networks

Citation:

Sahraoui M, Bilami A, Taleb-Ahmed A. Schedule-Based Cooperative Multi-agent Reinforcement Learning for Multi-channel Communication in Wireless Sensor Networks. Wireless Personal Communications [Internet]. 2022;122:3445-3465.

Date Published:

2022

Abstract:

Wireless sensor networks (WSNs) have become an important component of the Internet of Things (IoT). In WSNs, multi-channel protocols have been developed to overcome limitations in throughput and delivery rate, which matter for the many IoT applications that need sufficient bandwidth to transmit large amounts of data. However, distributed multi-channel protocols require frequent negotiation for channel assignment, which incurs a large communication overhead and thereby shortens the network lifetime. Meeting this requirement in an energy-efficient way is challenging, so a Reinforcement Learning (RL) approach to channel assignment is used to address it. Nevertheless, RL requires a number of iterations to reach a good solution, which in turn creates communication overhead and wastes time. In this paper, a Self-schedule based Cooperative multi-agent Reinforcement Learning for Channel Assignment (SCRL CA) approach is proposed to improve the network lifetime and performance. The proposal addresses both regular traffic scheduling and the assignment of the available orthogonal channels in an energy-efficient way. We solve the problem of cooperation between the RL agents by using the self-schedule method to accelerate the RL iterations, reduce the communication overhead, and balance energy consumption in the route-selection process. Accordingly, two algorithms are proposed: the first for static channel assignment (SSCRL CA) and the second for dynamic channel assignment (DSCRL CA). The results of extensive simulation experiments show the effectiveness of both algorithms in improving network lifetime and performance.
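The abstract does not reproduce the algorithms themselves. As a rough illustration of the kind of multi-agent RL channel assignment it describes, the minimal Python sketch below has each node act as an independent epsilon-greedy Q-learning agent that learns to pick a channel its interfering neighbours are not using. The topology, reward, and hyperparameters are illustrative assumptions only; the paper's SSCRL CA and DSCRL CA algorithms differ, in particular through the self-schedule cooperation and route selection, which are omitted here.

import random

N_NODES = 6
N_CHANNELS = 4   # orthogonal channels
EPISODES = 500
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # illustrative hyperparameters

# One Q-table per node (agent): Q[node][channel] -> expected reward.
Q = [[0.0] * N_CHANNELS for _ in range(N_NODES)]

# Hypothetical static topology: each node interferes with its ring neighbours.
neighbours = {i: [(i - 1) % N_NODES, (i + 1) % N_NODES] for i in range(N_NODES)}

def choose_channel(node):
    """Epsilon-greedy channel selection."""
    if random.random() < EPS:
        return random.randrange(N_CHANNELS)
    q = Q[node]
    return q.index(max(q))

for episode in range(EPISODES):
    # In each scheduled round, every node picks a channel for its slot.
    choice = [choose_channel(n) for n in range(N_NODES)]
    for n in range(N_NODES):
        # Reward +1 if no interfering neighbour chose the same channel,
        # -1 otherwise (a stand-in for a collision/throughput measurement).
        collision = any(choice[m] == choice[n] for m in neighbours[n])
        reward = 1.0 if not collision else -1.0
        a = choice[n]
        # Stateless (bandit-style) Q-learning update per agent; the paper's
        # agents additionally exchange schedule information to cooperate.
        Q[n][a] += ALPHA * (reward + GAMMA * max(Q[n]) - Q[n][a])

print("Learned channels:", [q.index(max(q)) for q in Q])

After training, each node's greedy channel choice tends toward a collision-free assignment on this toy topology; the paper's contribution lies in achieving this with fewer iterations and less signalling overhead than such independent learners.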
