Introduction
The cloud computing paradigm has motivated the migration of various information systems from local servers to global Internet-based servers (Buyya et al, 2009; Buyya et al, 2017). Hence, hosting management systems remotely has become a common practice for information systems, and common Internet of Things (IoT) management systems have followed the same architectural design (Chang et al, 2016). As the number of IoT front-ends grows and the volume of involved data increases, a drawback of the distant centralised system has emerged, known as the latency issue. It derives mainly from the fact that an IoT system relies on the Internet as the main communication channel between the front-end IoT devices and the back-end management server. Researchers have identified a need to keep certain processing and decision making within or near the network where the front-ends are located, in order to enhance the agility of decision making in IoT applications. This paradigm is known as the fog computing architecture.
A fog computing server is capable of providing storage, compute, acceleration, networking and control services by utilising virtualisation or containerisation technologies. For example, industrial integrated routers, which can provide Virtual Machine (VM) or container engine mechanisms, can support a software deployment platform similar to that of common cloud services. Moreover, by extending Over-The-Air (OTA) programming mechanisms (Rossi et al, 2010), the IoT system can also distribute certain tasks to resource-constrained devices, towards enabling self-managed IoT devices.
Today, the fog computing architecture has become one of the main elements of IoT. Specifically, market research indicates that the main applications of fog computing will be electricity/utilities, transportation, healthcare, industrial activities, agriculture, distributed datacentres, wearables, smart buildings, smart cities, retail and smart homes (451Research, 2017). Further, the emerging industry standard IEEE 1934 and the market research report (451Research, 2017) indicate that the number of fog services will increase. It is foreseeable that fog services will not be limited to private networks but will also become available in the same manner as public clouds today, in which different service providers offer their fog computing servers to the general public (Chang et al, 2017); we refer to this as the public fog.
Suppose near-future smart cities encompass many local service providers offering various public fog features. Performing fog computing then no longer requires an IoT system management team to deploy its own physical fog computing servers (e.g. high-end industrial routers), nor to upgrade its IoT equipment to be compliant with fog computing; the management team can instead invoke the public fog to distribute tasks from its central server to public fog nodes in the proximity of its front-end IoT devices. Certainly, such an environment simplifies the establishment of fog computing and ideally reduces the cost of installing and maintaining physical equipment. However, it also raises a new challenge: work assignment optimisation. Figure 1 illustrates a Mobile Big Data acquisition (Chang et al, 2018) scenario that expresses this challenge.
Figure 1.
Distributed processing in heterogeneous fog computing environment
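The work assignment challenge described above can be illustrated with a minimal sketch. The following Python fragment is purely illustrative and not part of the described system: it assumes hypothetical fog nodes characterised only by estimated latency and remaining capacity, and applies a naive greedy rule that picks the lowest-latency node with free capacity, falling back to the central cloud server otherwise. In a heterogeneous public fog, such a latency-only rule ignores node pricing, trustworthiness and workload differences, which is precisely why the assignment problem is non-trivial.

```python
# Illustrative sketch only: node names, latencies and capacities are
# hypothetical assumptions, not measurements from the described system.
from dataclasses import dataclass


@dataclass
class FogNode:
    name: str
    latency_ms: float  # estimated round-trip time from the IoT device
    capacity: int      # remaining task slots on this public fog node


def assign_task(nodes, cloud_latency_ms=120.0):
    """Greedy placement: choose the lowest-latency fog node with free
    capacity; fall back to the distant central cloud when none exists."""
    candidates = [n for n in nodes if n.capacity > 0]
    if not candidates:
        return "central-cloud", cloud_latency_ms
    best = min(candidates, key=lambda n: n.latency_ms)
    best.capacity -= 1
    return best.name, best.latency_ms


nodes = [FogNode("fog-a", 8.0, 2), FogNode("fog-b", 15.0, 0)]
print(assign_task(nodes))  # -> ('fog-a', 8.0)
```

Note that this greedy rule makes each decision in isolation; an actual optimisation would weigh latency against the cost and capability differences among heterogeneous providers.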