A Comparative Analysis of Genetic Algorithm and Ant Colony Optimization for Mobile Augmented Reality Offloading

Mobile augmented reality (MAR) systems typically offload computation-intensive tasks to edge servers. This paper presents a comparative analysis of the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) for task offloading optimization in MAR at the edge, measured in terms of latency and power consumption across eight distinct setups with varying server configurations and tasks. Our experimental findings indicate that ACO consistently achieves lower latency than the GA in setups where the servers are close to the edge devices, whereas the GA outperforms ACO in setups where the servers are situated farther away. ACO primarily offloads tasks to a single server, resulting in consistent latency, while the GA demonstrates a more diverse offloading strategy, distributing tasks across multiple servers and the local device. The latency fluctuations in ACO are primarily attributed to changes in the offloading patterns, especially when tasks are offloaded to servers other than those predominantly used. The GA, with its more varied offloading approach, exhibits less significant latency variations. Regarding power consumption, the GA generally consumes more power than ACO in most setups due to its more diverse task distribution strategy.


INTRODUCTION
Mobile Augmented Reality (MAR) enables immersive, interactive user experiences by overlaying digital content on top of physical environments. However, MAR applications often require significant computational resources, which challenge the limited capabilities and battery life of mobile devices. Task offloading relocates computational tasks from mobile devices to external edge or cloud servers and is crucial for improving the performance and energy efficiency of MAR applications [5][12]. The main challenges in offloading MAR tasks lie in minimizing latency, particularly for latency-sensitive applications, while reducing energy consumption [2]. Balancing the trade-off between latency and energy consumption is crucial for achieving the desired Quality of Experience (QoE) and Quality of Service (QoS) in MAR applications [9].
Offloading computation-intensive MAR tasks to edge servers has gained significant attention in recent years. However, many tasks are latency-sensitive and demand rapid processing to guarantee high QoE and QoS. Moreover, the offloading process should be energy-efficient. Optimization algorithms play a crucial role in determining the most effective offloading strategy and balancing latency and power consumption requirements. Different offloading strategies based on optimization algorithms have been researched. For example, [11] employed the Genetic Algorithm (GA) to jointly optimize delay and power consumption for task offloading in smart energy stations, considering parameters such as the task data size, CPU power, and transmission bandwidth. Their results demonstrated that the GA was effective in reaching optimal task-offloading decisions. [6] studied scheduling decisions on edge devices to enhance efficiency and reduce processing latency; the authors combined a networking model and an edge scheduling model to design their algorithm, in which the GA efficiently reduced execution time compared to alternative strategies. Ant Colony Optimization (ACO) is a probabilistic optimization technique used for solving computational problems and finding optimal paths. [1] applied it to time-sensitive healthcare applications to process data with minimum delay, leveraging ACO to determine the optimal edge server for task allocation. Additionally, [8] reviewed the application of ACO to reducing latency in latency-sensitive IoT-sensor applications.
Both the GA and ACO have been widely utilized and have demonstrated significant advancements in making informed latency-sensitive task offloading decisions through heuristic search [1][8]. Nevertheless, the majority of existing research applies these two algorithms to optimization problems without comparing them to select the most effective approach. In this paper, we investigate the performance of both algorithms for offloading latency-sensitive MAR tasks to edge servers. We analyze their performance under different setups and offer recommendations for the most appropriate offloading strategy in each context.
The remainder of the paper is organized as follows. Section 2 describes the GA and ACO algorithms. Section 3 presents the simulation, analyzes our results, and states the takeaways from the analysis. Section 4 discusses and concludes the work.

OPTIMIZATION ALGORITHMS IN MAR OFFLOADING

Genetic Algorithm
The Genetic Algorithm imitates the genetic mutation of chromosomes and uses a fitness function to select the best solution among multiple candidates [11]. In the simulation, the GA considers the aforementioned metrics, namely network connection/speed, distance from the edge server, and the tasks to execute. Performing the GA requires computing a fitness function over latency (L) and power consumption (P). The total power consumption (P) is determined by the device's standby energy consumption (P_standby), the power consumption per unit of data transmitted (P_tx), and the amount of data transmitted (D). If the throughput is low, it takes a longer transmission time (T_AB) to transmit data between nodes A and B over a network link. In our experiment, we assign a weight of w_1 = 1.5 to the latency component and a weight of w_2 = 1 to the power consumption component, as we consider latency crucial for MAR applications.
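The weighted fitness described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function and variable names (ga_fitness, p_standby, etc.) and the candidate values are assumptions for demonstration.

```python
# Illustrative sketch of the weighted GA cost function described above.
# All names and example numbers are assumptions, not the paper's code.

def transmission_time(data_bits, throughput_bps):
    """Time T_AB to move a task's data over a link; low throughput -> long time."""
    return data_bits / throughput_bps

def power_consumption(p_standby, p_per_bit, data_bits):
    """Total power P: standby draw plus the per-unit transmission cost."""
    return p_standby + p_per_bit * data_bits

def ga_fitness(latency, power, w1=1.5, w2=1.0):
    """Weighted cost of a candidate offloading decision; lower is better.
    Latency is weighted more heavily (w1 = 1.5), as in the text."""
    return w1 * latency + w2 * power

# Example: select the candidate target with the lowest weighted cost.
candidates = [
    {"name": "local",    "latency": 20.0, "power": 30.0},
    {"name": "server_1", "latency": 12.0, "power": 35.0},
]
best = min(candidates, key=lambda c: ga_fitness(c["latency"], c["power"]))
```

In a full GA, this cost would rank chromosomes (offloading assignments) during selection; here it is reduced to scoring individual candidates to keep the sketch short.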

Ant Colony Optimization
Inspired by [4], we refine the ACO algorithm to better suit the task offloading use case. While adopting the main ideas of the standard ACO algorithm, the refined version uses a breadth-first search instead of a depth-first search. This alteration aligns with our focus on identifying the optimal servers for task offloading rather than determining the shortest path through the environment. The pheromone each ant deposits on each edge is calculated with the formulas described as follows.
First, an initial pheromone value (τ_0) is set. Second, the total pheromone on each edge (τ) is updated with the newly deposited pheromone. Finally, the pheromone on each edge evaporates during each iteration. This results in a total pheromone level on each edge, which is used in the fitness function to evaluate candidate solutions. The pheromone values are updated for each ant in each iteration, making the colony converge toward an optimal solution. In the MAR offloading system, the fitness of each task is determined by two key factors, namely latency and power consumption. The fitness values are computed using Equations (6) and (7), where the α and β parameters control the relative importance of latency and power consumption, respectively. To emphasize the significance of latency in the MAR offloading decision, we set α to 1.5 and β to 1, aligning with the weights used in the GA. The total fitness value of a task is obtained by multiplying the fitness values for latency and power consumption, as depicted in Equation (8). This approach prioritizes tasks with lower latency and power consumption by assigning them higher fitness values, indicating their suitability for offloading.
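The pheromone bookkeeping above can be sketched as follows. This is a hedged sketch under stated assumptions: the evaporation rate RHO, the initial value TAU_0, the exact deposit rule, and the inverse-power form of the fitness terms are illustrative choices, not taken from the paper's equations.

```python
# Sketch of the ACO pheromone update and multiplicative fitness described
# above. RHO, TAU_0, and the deposit/fitness forms are assumed for
# illustration; only alpha = 1.5 and beta = 1 come from the text.

ALPHA, BETA = 1.5, 1.0   # relative weights of latency and power (from the text)
RHO = 0.1                # evaporation rate per iteration (assumed value)
TAU_0 = 1.0              # initial pheromone on every edge (assumed value)

def task_fitness(latency, power):
    """Product of latency and power terms (Eq. 8 in spirit): lower latency
    and power yield a higher fitness."""
    return (1.0 / latency) ** ALPHA * (1.0 / power) ** BETA

def update_pheromone(tau, deposits):
    """One iteration: evaporate every edge, then add each ant's deposit
    on the edge it traversed."""
    tau = {edge: (1.0 - RHO) * t for edge, t in tau.items()}
    for edge, amount in deposits.items():
        tau[edge] = tau.get(edge, TAU_0) + amount
    return tau

# One iteration with two device-to-server edges; an ant that chose
# server_3 deposits pheromone proportional to the task's fitness.
tau = {("device", "server_2"): TAU_0, ("device", "server_3"): TAU_0}
deposits = {("device", "server_3"): task_fitness(12.71, 34.45)}
tau = update_pheromone(tau, deposits)
```

Repeating this update over many ants and iterations concentrates pheromone on the edge toward the best server, which matches the convergence behavior the text describes.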

EVALUATION AND ANALYSIS
We evaluate the performance of both the GA and ACO based on several metrics, including processing time, throughput, jitter, scheduling latency, and power consumption. In MAR systems, several critical tasks work together to provide immersive, responsive, and realistic user experiences, including feature extraction, object recognition and tracking, scene reconstruction, virtual object rendering, and sensor data processing [3]. Simulations are conducted in a controlled environment in which the tasks of the MAR process are separated. The simulation parameters encompass different task types, sizes, and resource availability at the local edge device, edge servers, and global servers. For simplicity, we assume that all tasks are eligible for offloading.
To ensure statistically significant results, we run a sufficient number of simulations. After collecting the results, we conduct a statistical analysis that involves calculating average performance metrics. This allows for a thorough comparison of the two algorithms, determining which exhibits superior average performance across the selected metrics. To configure the simulations, we use the task and server numbers from iFogSim2 [7][10] as a reference.

Experimental Setups
The simulations are conducted across various environments to evaluate the performance of the algorithms in optimizing solutions for different scenarios. Table 1 illustrates four different scenarios. Scenario 1 emulates a real-world situation where users are situated at a moderate distance from the edge servers (located at different distances), with the global server located 1000 km away from the local device. This configuration enables us to evaluate the algorithms under typical usage conditions. Servers 0-3 have queues, whereas the global server currently has no tasks. Scenario 2 simulates a local device in a city where the servers are placed on the outskirts but the network traffic on the servers is high. Scenario 3 places the edge device in a large city where the servers sit inside the city center and are expected to suffer heavy usage; the servers have high processing speeds, but throughput is limited due to high network traffic. Scenario 4 simulates a rural environment where the offloading servers, as well as the global server, are located at a significant distance from the edge device. Overall, these scenarios and their corresponding server environments facilitate the evaluation of task performance across diverse environments. The simulation includes tasks with varying computational rates and environmental complexities, as illustrated in Table 2 (see Appendix). The first task configuration executes an application in a scene with a small number of objects, rendering a high volume of objects onto the scene without requiring extensive user input and interaction. The second task configuration, executed in a scene containing numerous objects, renders many objects onto the scene but does not rely heavily on user input and interaction. The third task, with a high number of objects in the scene, renders few objects while involving significant user interaction and input. The fourth task features a medium level of user input and interaction, rendering a moderate number of objects in a scene containing many objects. The last operates in a scene with a minimal number of objects, rendering few objects and requiring minimal user input and interaction.
Server set 1: setups 1 and 2. Setups 1 and 2 are built using Server Configuration 1 in Table 1 but process different task sets: setup 1 uses Task Configuration 1 and setup 2 uses Task Configuration 5 in Table 2. The results of the typical usage scenario shown in Figure 1 demonstrate that the GA generally outperforms ACO in picking the servers that give the lowest latency. In setup 1, the average latency for the GA is 12.93 ms, while the average latency for ACO is 14.16 ms. Similarly, in setup 2, the average latency for the GA is 11.97 ms compared to 14.20 ms for ACO (third column in Figure 1). Over the course of 10 iterations, the GA shows consistent and stable latency without significant fluctuations, whereas ACO encounters two noticeable latency spikes in setups 1 and 2. Task distribution among servers varies between the two algorithms. ACO predominantly offloads tasks to a single server (first column in Figure 1), while the GA distributes tasks more evenly across servers 1 through 3 (second column in Figure 1), which potentially contributes to its lower latency. In each iteration, the total power consumption remains constant for both setups (fourth column in Figure 1). For setup 1, ACO consumes 34.45 mW, while the GA uses 37.05 mW. In setup 2, the GA demonstrates lower power consumption at 33.05 mW, while ACO still consumes 34.45 mW.
Server set 2: setups 3 and 4. Setups 3 and 4 are built using Server Configuration 2 with Task Configuration 2 and Task Configuration 1, respectively. The outcomes (see Figure 2) reveal that ACO achieves lower latency on the chosen servers than the GA: setup 3 yields 12.71 ms for ACO and 16.08 ms for the GA, while setup 4 exhibits 13.48 ms for ACO and 15.81 ms for the GA. Notably, in setup 3, ACO consistently offloads all tasks to server 3 during each iteration, resulting in uniform latency. This pattern is similarly observed in setup 4, where all tasks are offloaded to server 3 in each iteration except iteration 7, which results in a latency of 20.47 ms. The GA, on the other hand, shows a varied task offloading distribution, assigning tasks to servers 1-4 in all iterations of setups 3 and 4. This leads to diverse latency results, ranging from 14.98 ms to 17.30 ms in setup 3 and from 14.94 ms to 16.37 ms in setup 4. On average, ACO thus outperforms the GA in finding the best offloading servers, with 12.71 ms and 13.48 ms against the GA's 16.08 ms and 15.81 ms. The GA exhibits higher power consumption in both setups 3 and 4, consuming 37.05 mW and 37.45 mW, respectively (see the fourth column in Figure 2).
Server set 3: setups 5 and 6. Setups 5 and 6 are built using Server Configuration 3 with Task Configurations 3 and 2, respectively. The results are presented in Figure 3. ACO again predominantly offloads tasks to server 3, albeit with a few exceptions. In setup 5, the average latency for ACO is 14.16 ms featuring two distinct spikes, while the average latency for the GA is 13.58 ms. In setup 6, ACO displays a more consistent outcome, culminating in an average latency of 13.50 ms, while the GA demonstrates a more varied offloading approach leading to a broader latency spread, with a peak at 17.78 ms and a low of 14.19 ms, yielding an average latency of 15.32 ms. In setup 5, both algorithms consume 34.45 mW, but in setup 6, the GA exhibits a higher power consumption of 37.45 mW (see the fourth column in Figure 3).
Server set 4: setups 7 and 8. Setups 7 and 8 use Server Configuration 4 with Task Configurations 3 and 5, respectively. In these setups, the GA is favored, achieving latency approximately 5 ms lower than ACO. The results are presented in Figure 4. In setup 7, the GA maintains a stable latency with an average of 10.60 ms, primarily offloading tasks to the local device rather than the edge servers. In contrast, ACO predominantly favors server 3 but exhibits varying results in nearly half of the iterations: during iteration 2, the majority of tasks are offloaded to server 4, while tasks in other iterations not offloaded to server 3 are assigned to server 2. This variation leads to latency spikes, resulting in an average latency of 15.18 ms for ACO. Similarly, in setup 8, ACO primarily favors server 3 for all tasks in iterations 4-10, while in iterations 1-3 it prefers server 2. Offloading to server 2 generates spikes with a latency of 19.95 ms, whereas offloading to server 3 yields a latency of 12.71 ms. In setup 8, the GA also primarily offloads tasks to the local device rather than the edge servers. Despite variations in task offloading locations, latency in this setup remains stable, with an average of 9.79 ms. In setup 7, the GA has a higher power consumption of 36.15 mW while ACO consumes 34.45 mW; in setup 8, GA power usage decreases to 33.05 mW (see the fourth column in Figure 4).
Total Execution Time and Power Consumption. The overall power consumption for each setup employing ACO remains constant at 34.45 mW. In contrast, the GA exhibits a more varied power consumption, ranging from 33.05 mW to 37.45 mW. So far, only the latency of the task offloading has been considered, not the execution time of the algorithms themselves. In terms of total latency, ACO outperforms the GA: the execution time for the GA reaches up to 503.19 ms in certain setups, while ACO demonstrates a maximum execution time of 25.24 ms.

Takeaways
The latency analysis revealed that ACO consistently exhibits lower latency than the GA in setups 3, 4, 5, and 6 (Server Configurations 2 and 3), whereas the GA outperforms ACO in setups 1, 2, 7, and 8 (Server Configurations 1 and 4). The efficiency of both algorithms is contingent upon the specific attributes of the setups and the nature of the assigned tasks. A discernible trend in the offloading strategies is observed: ACO predominantly offloads tasks to a single server (commonly server 3), resulting in consistent latency, whereas the GA distributes tasks across multiple servers and the local device. Latency fluctuations in ACO are primarily ascribed to alterations in the offloading patterns, particularly when tasks are offloaded to servers other than those predominantly employed. The GA, with its more varied offloading methodology, exhibits less significant latency variations. One can also observe up to 20 times longer execution time for the GA than for ACO, which is worth noting when addressing the total execution time in a real-world scenario.

CONCLUSION AND FUTURE WORK
We evaluated the performance of the GA and ACO algorithms in task offloading optimization for edge computing in MAR applications, using different server configurations and a variety of tasks. The results demonstrate that the performance of both algorithms is highly dependent on the specific characteristics of the environments and tasks. ACO consistently demonstrates lower latency than the GA in environments where the servers are closer to the edge device, whereas the GA outperforms ACO in environments where the servers are situated farther away. ACO primarily offloads tasks to a single server, resulting in consistent latency. In contrast, the GA demonstrates a more diverse task offloading strategy, distributing tasks across multiple servers and the local device, resulting in small latency changes. Latency fluctuations in ACO are primarily attributed to changes in the offloading patterns, particularly when tasks are offloaded to servers other than those predominantly used. The GA, with its more varied offloading approach, exhibits less significant latency variations. Regarding power consumption, the GA is characterized by higher usage in most environments compared to ACO.
This study has some limitations: the dataset is not based on realistic numbers, the execution time of the algorithms is measured on the same hardware, and the server queues used are relatively modest compared to what would typically be encountered in real-world scenarios. Despite these limitations, this study provides valuable insights into the performance of the GA and ACO in task offloading optimization for edge computing environments, highlighting that the choice between them depends on the specific use case and the desired balance between latency and power consumption. The study could be extended by investigating alternative optimization algorithms and their performance in more complex edge computing scenarios.

Figure 1: Offloading strategies, latency, and power consumption in setups 1 (above) and 2 (below). The first column describes the ACO offloading strategies, the second column represents the GA offloading strategies, and the third and fourth columns show the latency and power consumption of the two algorithms, respectively.

Figure 2: Offloading strategies, latency, and power consumption in setups 3 (above) and 4 (below). The first column describes the ACO offloading strategies, the second column represents the GA offloading strategies, and the third and fourth columns show the latency and power consumption of the two algorithms, respectively.

Figure 3: Offloading strategies, latency, and power consumption in setups 5 (above) and 6 (below). The columns have the same representations as Figure 1.

Figure 4: Offloading strategies, latency, and power consumption in setups 7 (above) and 8 (below). The columns have the same representations as Figure 1.