Task Offloading in Cloud-Edge Environments: A Deep-Reinforcement-Learning-Based Solution

Suzhen Wang, Yongchen Deng, Zhongbo Hu
Copyright: © 2023 | Pages: 23
DOI: 10.4018/IJDCF.332066

Abstract

Cloud computing involves transferring data to remote data centers for processing, which consumes significant network bandwidth and transmission time. Edge computing can effectively address this issue by processing tasks at edge nodes, thereby reducing the amount of data transmitted and improving the utilization of network bandwidth. This paper investigates intelligent task offloading under the three-layer cloud-edge-device architecture to fully exploit the potential of cloud-edge collaboration. Specifically, an optimization objective function is constructed by modelling the processing cost of all computing tasks. Additionally, an asynchronous advantage actor-critic (A3C) based algorithm is proposed under cloud-edge collaboration to solve the optimization problem of minimizing the weighted sum of task offloading delay and energy consumption. Experimental results indicate that the algorithm effectively utilizes the computing resources of the cloud center, reduces task execution delay and energy consumption, and compares favourably with three existing task offloading methods.

Introduction

The fast-paced development of big data and related technologies has led to a tremendous increase in the scale of data (Warren, 2015). The conventional cloud computing mode (Zhao et al., 2018) transfers all computational tasks to the cloud server for processing, which introduces response delay, energy consumption, and data security problems during transmission. Edge computing technology has grown rapidly as a means to address these limitations of cloud computing. Edge computing (Shi et al., 2016; Tran et al., 2017), and specifically mobile edge computing (MEC) (Khan et al., 2019), is a data processing paradigm in which data is processed close to the network edge, near the devices that generate it. Because servers are deployed in close proximity to mobile users, the wireless LAN can be used to deliver the necessary services and computing capabilities. MEC therefore supports applications that are sensitive to delay, require low latency, are mobile, and need to be location-aware.

However, edge nodes have limited computing resources and processing capacity. The amount of computing resources that an edge node can assign to mobile users is determined by its “workload,” that is, the total number of tasks concurrently offloaded to the edge server. When a large number of mobile devices offload tasks to the same edge server, the workload rises and the processing delay of each task grows. Therefore, optimizing the offloading strategy (Mao, You, et al., 2017; Shakarami et al., 2020; Sun et al., 2010; Zheng et al., 2019) can reduce the processing delay of offloaded tasks and improve the quality of service and user experience.
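
To make the workload effect concrete, the following sketch estimates the delay of one offloaded task when an edge server's CPU capacity is shared equally among the tasks currently offloaded to it. The function and parameter names (edge_task_delay, task_cycles, data_bits, uplink_rate_bps, edge_capacity_hz, workload) are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch (assumed parameters, not the paper's model):
# an edge server's CPU capacity is shared equally among concurrent tasks,
# so a higher workload directly inflates each task's processing delay.

def edge_task_delay(task_cycles: float,
                    data_bits: float,
                    uplink_rate_bps: float,
                    edge_capacity_hz: float,
                    workload: int) -> float:
    """Transmission delay plus processing delay for one offloaded task."""
    transmission_delay = data_bits / uplink_rate_bps
    cycles_per_second_per_task = edge_capacity_hz / max(workload, 1)  # equal sharing
    processing_delay = task_cycles / cycles_per_second_per_task
    return transmission_delay + processing_delay

if __name__ == "__main__":
    # Quadrupling the number of concurrent tasks roughly quadruples processing delay.
    for workload in (1, 4, 16):
        d = edge_task_delay(task_cycles=5e8, data_bits=2e6,
                            uplink_rate_bps=1e7, edge_capacity_hz=1e10,
                            workload=workload)
        print(f"workload={workload:2d}  delay={d:.3f} s")
```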

In MEC networks, studies on cloud-edge collaboration are still scarce. The majority of research on MEC computation offloading has focused on distributed task allocation and resource management among mobile devices (Alam et al., 2018) and between mobile devices and MEC servers (Li et al., 2019), with little emphasis on fully leveraging the computing resources and processing power of cloud servers. If the full computing power of the cloud center and the distributed character of edge computing can be exploited to optimize a fine-grained task offloading strategy, this will not only relieve the dimensionality explosion caused by offloading a large number of subtasks to multiple edge servers but also decrease the total cost of the task offloading system. This paper introduces a fine-grained task offloading method based on deep reinforcement learning. Its three main contributions are:

  1. On the basis of the three-tier cloud-edge-device architecture, a fine-grained task offloading model is proposed within the cloud-edge framework. The optimization objective of the model is to minimize the weighted sum of task offloading delay and energy consumption (a sketch of such an objective is given after this list). In the model, each edge server is responsible for the local optimization of task offloading policies, while the cloud center is in charge of global optimization; the cloud server and the edge nodes cooperate to achieve distributed optimization of task offloading.

  2. For terminal tasks that are partially offloaded at the edge, each task is split into several subtasks, and the interdependencies between the subtasks are expressed as a directed acyclic graph (DAG) to form a task-division model (an illustrative sketch follows this list).

  3. A cloud-edge collaborative offloading approach is proposed that combines the asynchronous advantage actor-critic (A3C) algorithm with the task-division model. By leveraging the strengths of both deep learning and reinforcement learning, the algorithm offers a promising way to reduce end-task offloading delay and system energy consumption.
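
The weighted delay-energy objective in contribution 1 can be sketched as follows. This is a minimal illustration with assumed notation (N terminal tasks, offloading decisions x_i, per-task delay T_i and energy E_i, and trade-off weights α and β); it is not the paper's exact formulation.

```latex
% Hedged sketch of a weighted delay-energy offloading objective (assumed notation).
% x_i encodes where task i runs (locally, on an edge server, or in the cloud);
% T_i and E_i are the resulting delay and energy; alpha and beta are the weights.
\begin{equation*}
  \min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} \bigl( \alpha\, T_i(x_i) + \beta\, E_i(x_i) \bigr),
  \qquad \alpha + \beta = 1, \quad \alpha, \beta \ge 0 .
\end{equation*}
```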

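To illustrate the kind of task-division model described in contribution 2, the sketch below represents a terminal task as a directed acyclic graph of subtasks and computes one valid execution order. The subtask names and the dependency structure are hypothetical examples, not taken from the paper.

```python
# Illustrative sketch of a DAG task-division model (hypothetical subtasks).
# Each subtask may only start after all of its predecessors have finished,
# so any offloading schedule must respect a topological order of the graph.
from collections import deque

def topological_order(dag: dict[str, list[str]]) -> list[str]:
    """Return one valid execution order of the subtasks (Kahn's algorithm)."""
    indegree = {v: 0 for v in dag}
    for successors in dag.values():
        for v in successors:
            indegree[v] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in dag[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(dag):
        raise ValueError("dependency cycle: not a DAG")
    return order

if __name__ == "__main__":
    # Hypothetical terminal task split into five subtasks; edges point to successors.
    task_dag = {
        "capture":    ["preprocess"],
        "preprocess": ["feature_a", "feature_b"],
        "feature_a":  ["aggregate"],
        "feature_b":  ["aggregate"],
        "aggregate":  [],
    }
    print(topological_order(task_dag))
```

In an offloading schedule, each subtask in such an order could then be assigned to the terminal device, an edge server, or the cloud by the learned policy.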