SARL: A reinforcement learning based QoS-aware IoT service discovery model
Selahattin Kosunalp – Kubilay Demir
The IoT environment contains an enormous number of atomic services with dynamic QoS compared with traditional web services. In such an environment, discovering a service that meets the required QoS during service composition is a difficult task. To address this issue, we propose a peer-to-peer-based service discovery model that searches for information about services satisfying the requested QoS and functionality on an overlay constructed from service users rather than service nodes, whose resources are likely constrained. However, running a plain discovery algorithm such as flooding or k-random walk on this overlay can cause high message overhead or delay. This calls for an intelligent and adaptive discovery algorithm that adapts itself to users' previous queries and their results. To fill this gap, the proposed service discovery approach is equipped with a reinforcement learning-based algorithm, named SARL. Reinforcement learning enables SARL to significantly reduce delay and message overhead in the service discovery process by ranking neighboring nodes according to users' service request preferences and the results of past service queries. The proposed model is implemented on the OMNeT++ simulation platform. The simulation results demonstrate that SARL markedly outperforms existing approaches in terms of message overhead, reliability, timeliness, and energy efficiency.
Keywords: internet of things, service discovery, reinforcement learning
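The abstract describes neighbor ranking driven by past query outcomes but does not give the exact learning rule. A minimal sketch, assuming a Q-learning-style per-neighbor value indexed by service category and an epsilon-greedy forwarding choice (all names, parameters, and the reward shaping below are hypothetical, not taken from the paper):

```python
"""Illustrative sketch only: SARL's actual update rule is not specified in the abstract.
Assumes a Q-learning-style score per (service category, neighbor) pair, updated from
query outcomes such as QoS-satisfying hits, misses, and hop/delay penalties."""

import random
from collections import defaultdict

ALPHA = 0.3    # learning rate (assumed)
GAMMA = 0.8    # discount for value reported back along the path (assumed)
EPSILON = 0.1  # exploration probability (assumed)


class NeighborRanker:
    """Ranks a node's overlay neighbors for forwarding a service discovery query."""

    def __init__(self, neighbors):
        self.neighbors = list(neighbors)
        # q[(service_category, neighbor)] -> learned forwarding value
        self.q = defaultdict(float)

    def select_neighbor(self, category):
        """Epsilon-greedy choice: usually the best-ranked neighbor, sometimes explore."""
        if random.random() < EPSILON:
            return random.choice(self.neighbors)
        return max(self.neighbors, key=lambda n: self.q[(category, n)])

    def update(self, category, neighbor, reward, best_downstream=0.0):
        """Reinforce the chosen neighbor with the query outcome.

        reward: e.g. +1 for a hit meeting the requested QoS minus a delay/hop
        penalty, or a negative value for a miss (assumed shaping).
        best_downstream: best value reported back from the next hop, if any.
        """
        key = (category, neighbor)
        self.q[key] += ALPHA * (reward + GAMMA * best_downstream - self.q[key])
```

Under this kind of scheme, a node can forward a query only to its top-ranked neighbors for the requested service category instead of flooding all of them, which is the mechanism by which message overhead and delay would be reduced.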