Because fog computing is closer to end devices, it provides high-quality services by meeting the demands of latency-sensitive tasks and reducing the workload of the cloud server. Most research on IoT applications explores fog computing or cloud computing environments in isolation. Another module, called Task Offloading and Optimum Resource Allocation (TOORA), is modeled to decide whether each task is offloaded to the cloud or executed in the fog, and it uses the Whale Optimization Algorithm (WOA) to allocate the resources of the fog nodes.
In [24], queuing stability was achieved by applying a reinforcement learning approach to Lyapunov optimization for edge computing resource allocation. Task offloading is decided on the basis of the urgency of the task, expressed through its laxity time. The laxity time is estimated from the deadline, the execution time, and the current time, and it in turn drives the offloading decision.
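As an illustration of such a laxity-based offloading test, here is a minimal sketch; the function names, the threshold parameter, and the rule that tasks with ample slack are sent to the cloud are assumptions for illustration rather than the paper's exact formulation.

```python
import time

def laxity(deadline, exec_time_estimate, now):
    """Laxity (slack) of a task: deadline minus current time minus estimated execution time."""
    return deadline - now - exec_time_estimate

def offload_to_cloud(deadline, exec_time_estimate, laxity_threshold, now=None):
    """One plausible policy: urgent tasks (small laxity) stay on a nearby fog node,
    while tasks with enough slack can absorb the extra transfer delay of the cloud."""
    if now is None:
        now = time.time()
    return laxity(deadline, exec_time_estimate, now) > laxity_threshold
```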
Bharti and Mavi [37] adopted ETMCTSA and showed that exploiting underutilized cloud resources can improve overall resource utilization. Most studies have considered different resource allocation methods in fog, cloud, and wireless network environments. Step 6: The set of tasks from the queues is transferred to TOORA for further processing.
Step 9: The optimal resource allocation scheduler is run in the TOORA module to allocate fog-node resources to the tasks. Step 10: As a result of the algorithm, the tasks are assigned to fog nodes; the resources allocated on a fog node cannot exceed that node's total resources, as sketched below.
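A small sketch of the capacity constraint in Step 10, assuming each fog node advertises a resource vector (e.g., CPU cores and memory) and each assigned task carries a demand vector; the data layout is an illustrative assumption.

```python
def allocation_is_feasible(node_capacity, assigned_demands):
    """Check that the summed demand of tasks assigned to one fog node does not
    exceed the node's total resources, dimension by dimension (CPU, memory, ...)."""
    used = [0.0] * len(node_capacity)
    for demand in assigned_demands:
        for i, d in enumerate(demand):
            used[i] += d
    return all(u <= c for u, c in zip(used, node_capacity))

# Example: a node with 8 cores and 16 GB of memory
print(allocation_is_feasible((8, 16), [(2, 4), (3, 6), (2, 4)]))  # True  (7 cores, 14 GB)
print(allocation_is_feasible((8, 16), [(4, 8), (5, 10)]))         # False (9 cores, 18 GB)
```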
The execution time of a task is also not known before the task completes. Based on the above task parameters, tasks can be classified into different types. If the maximum membership value of a task exceeds or equals the membership threshold value (β), the task becomes a new cluster center and a new cluster is generated.
One virtual queue is modeled per cluster for buffering the jobs. The task laxity time (lf) is used to determine how many tasks from each queue participate in further operations. Metaheuristic algorithms usually give a near-optimal solution to the resource allocation problem [15,17].
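As a rough illustration of the laxity-driven selection from the virtual queues described above; the per-queue limit and the "most urgent first" ordering are assumptions made for this sketch.

```python
import heapq

def select_for_toora(virtual_queues, laxity_fn, per_queue_limit):
    """From each cluster's virtual queue, take up to `per_queue_limit` tasks,
    smallest laxity (most urgent) first, and pass them on to TOORA."""
    selected = []
    for queue in virtual_queues:
        selected.extend(heapq.nsmallest(per_queue_limit, queue, key=laxity_fn))
    return selected
```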
In the WORA algorithm, the space complexity is related to the population size and the dimension of the problem.
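This footprint comes from keeping one D-dimensional position per member of the population, i.e., roughly O(N × D) values for N whales. The following compact sketch of a standard WOA position update, which WORA builds on, makes the (N, D) population array explicit; the exact WORA encoding and operators are not reproduced here.

```python
import numpy as np

def woa_step(positions, best, a, b=1.0, rng=None):
    """One iteration of a compact Whale Optimization Algorithm update.
    `positions` holds the whole population as an (N, D) array -- the O(N*D)
    memory the text refers to; `best` is the best position found so far and
    `a` is decreased linearly from 2 to 0 over the iterations."""
    rng = np.random.default_rng() if rng is None else rng
    n, _ = positions.shape
    new_positions = np.empty_like(positions)
    for i, x in enumerate(positions):
        A = 2 * a * rng.random() - a
        C = 2 * rng.random()
        if rng.random() < 0.5:
            # shrinking encirclement of the best whale, or exploration
            # around a randomly chosen whale when |A| >= 1
            target = best if abs(A) < 1 else positions[rng.integers(n)]
            new_positions[i] = target - A * np.abs(C * target - x)
        else:
            # logarithmic spiral ("bubble-net") move toward the best whale
            l = rng.uniform(-1, 1)
            new_positions[i] = np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
    return new_positions
```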
Performance Evaluation
We ran extensive simulations with varying numbers of tasks and fog nodes in the system. Cost: the monetary cost of processing the jobs in the cloud and fog nodes. The total energy consumption of the fog nodes is the sum of the energy consumed for executing tasks and the energy consumed by fog nodes that are idle.
When tasks are executed in the cloud, the total energy is the sum of the energy consumed for executing the task and the energy for transferring the task and its data. Considering three fog nodes, where each fog node has three containers and each container has three resource blocks, Figure 6 shows the cost, energy consumption, makespan, and task completion ratio for different numbers of tasks. Figure 7 compares the cost of WORA with the other three algorithms for different numbers of fog nodes with 500 tasks.
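A hedged sketch of the energy accounting described above; the power and link-rate parameters are illustrative placeholders, not the paper's symbols.

```python
def fog_energy(exec_times, busy_power, idle_times, idle_power):
    """Total fog-side energy: energy spent executing tasks on fog nodes
    plus the energy drawn by fog nodes while they are idle."""
    return busy_power * sum(exec_times) + idle_power * sum(idle_times)

def cloud_energy(exec_time, cloud_power, data_size, link_rate, tx_power):
    """Cloud-side energy for one offloaded task: execution energy plus the
    energy spent transferring the task and its data over the uplink."""
    return cloud_power * exec_time + tx_power * (data_size / link_rate)
```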
Therefore, a larger number of tasks are assigned to the fog nodes and only a small number are transferred to the cloud for execution, which reduces cost. In the FLRTS algorithm, by contrast, fewer tasks are executed on the fog nodes, which can increase cost. In the MOMIS algorithm, most of the tasks are assigned resources from fog nodes.
The proposed WORA algorithm saves 23.89% of the average cost of FLRTS and 17.24% of the average cost of MOMIS. It can be observed that increasing the number of fog nodes reduces energy consumption, as most tasks are then performed in fog nodes and fewer tasks are moved to the cloud. Considering the makespan versus the number of fog nodes for 500 tasks in Figure 9, the makespan decreases as the number of fog nodes increases.
With more fog nodes, tasks are executed immediately instead of waiting for resources, which decreases the makespan. When 500 tasks are executed on 5 to 20 fog nodes, Figure 10 shows that the WORA algorithm performs 3.51% better than MOMIS and 5.4% better than FLRTS in terms of successful task completion. The WORA algorithm also saves 10.3% of the average cost of MOMIS and 21.9% of the average cost of FLRTS.
Similarly, the WORA algorithm saves 18.57% of the average energy of MOMIS and 30.8% of the average energy of FLRTS, as shown in Figure 12. Figure 15 shows the minimum fitness value over different time intervals for varying numbers of tasks on three fog nodes.
Conclusions
This work presented a cost- and performance-effective approach to task scheduling based on cooperation between cloud and fog computing. Tasks are classified and buffered in virtual queues, the TOORA module decides whether each task is served in the fog or offloaded to the cloud, and the WOA-based WORA algorithm allocates fog-node resources. Simulation results showed that WORA outperforms MOMIS and FLRTS in terms of cost, energy consumption, makespan, and the ratio of successfully completed tasks.

References

Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration in a fog queuing system. IEEE Access.
Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are.
In Proceedings of the International Conference on Signal, Networks, Computing, and Systems; Springer: Berlin/Heidelberg, Germany, 14 October.
In Proceedings of the 2020 5th International Conference on Fog and Mobile Edge Computing, Paris, France, 20–23 April 2020.
The high performance of a task scheduling algorithm using reference queues for cloud computing data centers.
A reinforcement learning formulation of Lyapunov optimization: Application to edge computing systems with queuing stability. arXiv 2020, arXiv:2012.07279.
Real-Time Task Scheduling in Fog-Cloud Computing Framework for IoT Applications: A Fuzzy Logic-Based Approach.
In Proceedings of the 2017 IFIP/IEEE International Symposium on Integrated Network and Service Management, Lisbon, Portugal, 8–12 May 2017.
In Proceedings of the 2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems (IEEE CCIS 2012), Hangzhou, China, 30 October–1 November 2012; Volume 3.
In Proceedings of the 2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI 2016), Beijing, China, 20–21 October 2016; Volume 2018.
An architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing.
Efficient prediction and resource allocation method (EPRAM) in fog computing environment for smart health system. Multimed.
Smart Shadow: an autonomous availability calculation platform for resource allocation for the Internet of Things in the cloud computing environment.
In Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS 2015), Fortaleza, Brazil, 10–12 June 2015.