Introduction
In the past decades, the cloud computing paradigm has evolved into a major force for providing computing, storage, and network services, and it has been applied in various fields, e.g., scientific workflow execution (Li 2018; Peng 2018; Wang 2019; Guo 2019). However, cloud computing can be ineffective in supporting IoT-based and time-critical applications, because traditional cloud infrastructures (Xia 2015) are far away from the network edge, whereas smart IoT devices are usually located at the edge of the network. To address this challenge, the edge computing paradigm (Li 2019; Chen 2020; Xiang 2020) has emerged to satisfy the demanding requirements of low latency, location awareness, and mobility. This novel paradigm can be seen as a cloud at the network edge, which effectively compensates for disadvantages of cloud computing such as communication latency. Edge resources are usually located close to end-user applications, so they can better serve delay-sensitive and time-critical tasks. Thanks to this improved resource-user proximity, edge-oriented applications also benefit in terms of power consumption, network traffic, operating expenses, and fault tolerance.
Figure 1. Edge computing deployment example
Although extensive research effort has been devoted to the problem of scheduling workflows over cloud infrastructures with multiple objectives and constraints, a problem known to be NP-hard, scheduling workflows upon edge infrastructures is intrinsically different, and it remains a challenge to optimize the cost of workflow execution under the proximity constraint, i.e., every edge server can only support users within its communication range. Figure 1 illustrates an example of deploying and offloading tasks among edge nodes. It is assumed that there are four edge servers in a particular area, each covering a specific region, and a user can offload computing tasks to any server whose coverage includes the user. For instance, user u6 can offload computing tasks to servers s2 and s3, u7 can offload tasks to s2, s3, and s4, while u1 can offload tasks only to s1. Since user u10 is out of the coverage of every server, its tasks cannot be offloaded at all. Each user initiates a workflow with multiple tasks to be offloaded, and tasks belonging to the same user can be offloaded to different edge servers; for instance, tasks belonging to u5 can be offloaded to both s1 and s2.
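The proximity constraint of Figure 1 can be sketched as a simple coverage test: a user may offload a task only to servers whose communication radius contains the user's location. The sketch below is illustrative only; the coordinates, radii, and the helper `reachable_servers` are assumptions chosen so that the reachable sets mirror the example (u6 reaches s2 and s3, u1 reaches only s1, u10 reaches no server).

```python
# Minimal sketch of the proximity constraint; all coordinates and radii
# are hypothetical values chosen to reproduce the Figure 1 example.
import math

def reachable_servers(user_pos, servers):
    """Return ids of servers whose coverage radius includes the user."""
    return [
        sid for sid, (sx, sy, radius) in servers.items()
        if math.dist(user_pos, (sx, sy)) <= radius
    ]

# Hypothetical server layout: (x, y, coverage radius).
servers = {
    "s1": (0.0, 0.0, 2.0),
    "s2": (3.0, 0.0, 2.0),
    "s3": (3.0, 3.0, 2.0),
    "s4": (6.0, 3.0, 2.0),
}

print(reachable_servers((3.0, 1.5), servers))    # u6 -> ['s2', 's3']
print(reachable_servers((0.5, 0.5), servers))    # u1 -> ['s1']
print(reachable_servers((10.0, 10.0), servers))  # u10 -> []
```

A scheduler would restrict each task's candidate servers to this reachable set before optimizing cost, which is precisely what makes the edge setting harder than the unconstrained cloud case.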
In this paper, we study the problem of location-aware, proximity-constrained, and cost-efficient multi-workflow scheduling in the edge computing environment. We consider a multi-edge-user, multi-workflow, cost-minimizing, and completion-time-constrained formulation and employ a discrete firefly algorithm (DFA) to solve it. To validate the proposed approach, we conduct simulative case studies and show that our method outperforms its peers in terms of cost and workflow completion time.
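To convey the flavor of a discrete firefly algorithm, the following is a minimal sketch, not the paper's actual DFA: a candidate solution is a task-to-server assignment vector, brightness corresponds to lower cost, and a dimmer firefly moves toward a brighter one by probabilistically copying its positions, followed by a small random mutation. The cost table, the parameters `beta` and `mutation`, and the move-toward-best simplification are all assumptions for illustration.

```python
# Illustrative discrete firefly step; cost data and parameters are assumed,
# and each firefly moves only toward the current best (a simplification).
import random

def move_toward(dim, bright, beta=0.5, mutation=0.1, n_servers=4):
    """Copy each position from the brighter solution with probability beta,
    then mutate each position with probability `mutation`."""
    new = []
    for d, b in zip(dim, bright):
        gene = b if random.random() < beta else d  # attraction step
        if random.random() < mutation:             # random exploration
            gene = random.randrange(n_servers)
        new.append(gene)
    return new

def cost(assignment, task_cost):
    """Toy objective: sum of per-task execution costs on chosen servers."""
    return sum(task_cost[t][s] for t, s in enumerate(assignment))

# Toy instance: 3 tasks, 4 servers, assumed per-task-per-server costs.
random.seed(42)
task_cost = [[4, 1, 3, 2], [2, 5, 1, 4], [3, 2, 4, 1]]
pop = [[random.randrange(4) for _ in range(3)] for _ in range(6)]
for _ in range(50):
    pop.sort(key=lambda a: cost(a, task_cost))
    # Keep the best (elitism); every other firefly moves toward it.
    pop = [pop[0]] + [move_toward(a, pop[0]) for a in pop[1:]]
best = min(pop, key=lambda a: cost(a, task_cost))
print(best, cost(best, task_cost))
```

In the paper's setting, the objective would additionally encode the completion-time constraint and restrict each task's server choices to its user's reachable set, but the attraction-plus-mutation move shown here is the core of the discrete firefly mechanism.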
Table 1. Variables and symbols used in this paper