An Adaptive Asynchronous Wake-Up Scheme for Underwater Acoustic Sensor Networks Using Deep Reinforcement Learning

2021
Underwater acoustic sensor networks (UWSNs), acting as a reliable and efficient infrastructure for the Internet of underwater things (IoUT), have attracted much research interest in recent years due to the wide range of their potential marine applications. The limited energy supply of underwater sensor nodes is a significant challenge that can be mitigated by the cyclic difference set (CDS)-based coordinated asynchronous wake-up scheme. However, the CDS-based asynchronous wake-up scheme also introduces long delays in neighbor discovery, which degrade packet delay as well as network lifetime. In this paper, we formulate the problem of policy selection for idle listening as a Markov decision process and exploit the framework of deep reinforcement learning to obtain the optimal policies of underwater sensor nodes. Furthermore, long short-term memory (LSTM) networks are used to estimate network traffic features, which improves the performance of the proposed adaptive asynchronous wake-up scheme. To verify the performance of the proposed scheme, simulations in different network scenarios are conducted, comparing it with random policies, fixed-metric policies, and the original CDS-based asynchronous wake-up scheme.
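The abstract does not give implementation details, but the kind of formulation it describes can be illustrated with a minimal sketch: a DQN-style agent that selects an idle-listening policy each wake-up cycle, with an LSTM encoding the recent traffic history into a feature appended to the node state. All names and dimensions below (TrafficEncoder, WakeUpAgent, n_actions, the state layout) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch (not the authors' implementation): DQN-style policy selection
# for idle listening, with an LSTM-based traffic-feature estimator.
import random
import torch
import torch.nn as nn


class TrafficEncoder(nn.Module):
    """LSTM that summarizes a window of recent traffic observations."""

    def __init__(self, obs_dim: int = 1, hidden_dim: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)

    def forward(self, traffic_window: torch.Tensor) -> torch.Tensor:
        # traffic_window: (batch, seq_len, obs_dim), e.g. packet arrivals per slot
        _, (h_n, _) = self.lstm(traffic_window)
        return h_n[-1]  # (batch, hidden_dim) traffic feature


class QNetwork(nn.Module):
    """Maps (node state, traffic feature) to Q-values over idle-listening policies."""

    def __init__(self, state_dim: int, traffic_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + traffic_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor, traffic_feat: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, traffic_feat], dim=-1))


class WakeUpAgent:
    """Epsilon-greedy selection of an idle-listening policy per wake-up cycle."""

    def __init__(self, state_dim: int = 4, n_actions: int = 3, epsilon: float = 0.1):
        self.encoder = TrafficEncoder()
        self.q_net = QNetwork(state_dim, traffic_dim=16, n_actions=n_actions)
        self.n_actions = n_actions
        self.epsilon = epsilon

    def select_action(self, state: torch.Tensor, traffic_window: torch.Tensor) -> int:
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)  # explore
        with torch.no_grad():
            q = self.q_net(state, self.encoder(traffic_window))
        return int(q.argmax(dim=-1).item())  # exploit the highest-valued policy


if __name__ == "__main__":
    agent = WakeUpAgent()
    state = torch.randn(1, 4)        # hypothetical state: residual energy, queue length, ...
    traffic = torch.rand(1, 10, 1)   # last 10 traffic observations
    print("chosen idle-listening policy:", agent.select_action(state, traffic))
```

In this sketch the reward would be shaped by the energy spent on idle listening versus the delay incurred by missed neighbor-discovery opportunities, which is the trade-off the abstract identifies; the exact reward design and state definition used in the paper are not specified here.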