Task Offloading Using Deep Reinforcement Learning for Edge IoT Networks

Pradeep Bedi, S. B. Goyal, Jugnesh Kumar
DOI: 10.4018/978-1-6684-3733-9.ch003

Abstract

Edge computing is a type of distributed computing designed especially for internet of things (IoT) users, providing computational resources and data management close to users' devices. By introducing edge computing, IoT networks reduce bandwidth and latency issues when handling real-time applications. The major benefit of edge computing is that it reduces the communication overhead between IoT users and the server. As IoT becomes integrated into our daily lives, researchers have been drawn to its performance management, including complexity minimization, latency minimization, memory management, and energy consumption minimization. In this chapter, deep reinforcement learning is applied to minimize the computational complexity at the IoT user end. The task offloading decision process is designed using Q-learning, which minimizes the system cost and mitigates the curse of dimensionality in high-dimensional data. In addition, the proposed methodology performs better than existing algorithms with respect to system cost.

Introduction

The term Internet of Things (IoT), which refers to objects and things in an Internet structure that can be easily identified, was first proposed in 1998 (Liu et al., 2020). In recent years, the IoT concept has become popular owing to representative applications such as intelligent greenhouse monitoring, automatic electricity meter reading, telemedicine monitoring, and intelligent transport. Typically, the IoT has four main components: sensors, data processing, applications, and services. New-era applications such as smart cities, smart education systems, and smart industries are integrated with the Internet of Things (IoT) (Anand et al., 2021). The IoT architecture is composed of devices such as sensors, actuators, and gateways that collect data and perform computation over it (Liu et al., 2020).

With the advancement of technology, new innovations in hardware and software are made every day, contributing to the growth of IoT networks and their applications. With large-scale usage of IoT devices, data processing and storage requirements increase (Alfakih et al., 2020). For instance, adopting smart farming requires many sensors and computational devices to monitor the temperature, humidity, pH, light, and nutrient levels required for proper plant growth and development. In an IoT network scenario, these data are generated by deployed sensors, collected by gateway devices, and then sent to the cloud server, where processing over the collected data is performed (Wang et al., 2019).

Processing such a large amount of data in a dynamic, real channel environment raises several challenges that need to be addressed. In the case of smart farming, the first issue is where to locate sensors so that they cover the maximum area. Another issue is that most farms are located outside the city, where Internet services are limited and cannot connect to remote servers all the time. This issue can be addressed by integrating the IoT-cloud architecture with edge computing, which provides a promising solution (Wang et al., 2019; Alelaiwi, 2019). A further issue in IoT networks is that the deployed IoT sensors demand large processing services from the edge server, which burdens the radio and storage services. IoT devices therefore need to be developed so that they can process some simple data processing tasks locally. Hence, for fast and optimal processing of large amounts of data in an Edge-IoT network, an optimal computation task offloading scheme must be designed (Chen et al., 2020).

Task offloading refers to user equipment running computationally expensive programs and uploading the data and processing of these applications to the edge server through wireless transmission, based on weighing latency or other indicators. The edge server allocates certain computational resources to these uploaded tasks and returns results continuously or progressively, leading to a positive user experience. A fundamental aspect is selecting whether to offload, that is, the offloading decision, which is usually the first step of computation offloading and resource allocation. In this chapter, end devices are viewed as agents that judge whether or not the network should offload computation tasks to edge devices. The computing resource allocation problem is framed as minimizing a sum cost of delay within this framework, jointly addressing resource allocation and task offloading. An optimal computational offloading decision is presented, and reinforcement learning is used to solve this problem, as sketched below.
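To make the decision process concrete, the following is a minimal sketch of a tabular Q-learning agent for the binary offloading choice. It is illustrative only: the state discretization, cost weights, CPU frequencies, transmission rate, and energy constants are assumptions made for demonstration, not the system model defined in this chapter.

import random
from collections import defaultdict

ACTIONS = (0, 1)  # 0 = execute locally, 1 = offload to the edge server


def system_cost(task_bits, action, local_hz=1e9, edge_hz=1e10,
                rate_bps=2e6, w_delay=0.7, w_energy=0.3):
    """Hypothetical sum cost: weighted delay plus energy.

    All constants (CPU frequencies, rate, weights, 500 cycles/bit,
    kappa = 1e-27, 0.5 W transmit power) are illustrative assumptions.
    """
    cycles = task_bits * 500
    if action == 0:                                # local execution
        delay = cycles / local_hz
        energy = 1e-27 * local_hz ** 2 * cycles    # dynamic-power energy model
    else:                                          # transmit, then edge compute
        delay = task_bits / rate_bps + cycles / edge_hz
        energy = 0.5 * (task_bits / rate_bps)      # transmit energy only
    return w_delay * delay + w_energy * energy


class OffloadingAgent:
    """Tabular Q-learning agent for the binary offloading decision."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)                # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy exploration over the two offloading actions
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, cost, next_state):
        # reward is the negative system cost, so minimizing cost
        # is equivalent to maximizing the Q-learning return
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = -cost + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


if __name__ == "__main__":
    agent = OffloadingAgent()
    for _ in range(5000):
        task_bits = random.choice([1e5, 1e6, 1e7])  # random task size (bits)
        state = f"task_{task_bits:.0e}"             # coarse state discretization
        action = agent.choose(state)
        cost = system_cost(task_bits, action)
        agent.update(state, action, cost, next_state="idle")

In practice, a deep Q-network replaces the lookup table so that high-dimensional state spaces can be handled, which is the motivation for the deep reinforcement learning approach described in the abstract.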
