Task Offloading Strategy in Satellite Edge Computing Based on Matching Game

The Low Earth Orbit (LEO) satellite network, serving as a vital complement to the terrestrial backbone network, has emerged as a prospective path for future mobile communication systems owing to its extensive coverage. In this paper, a three-tier computing architecture is presented for application scenarios in which ground terminals have limited resources and elevated quality of service (QoS) requirements. The proposed design features vertical collaboration among ground users, LEO satellites, and remote cloud servers, providing collaborative computing task offloading capabilities. We jointly consider the delay and energy consumption of the system, formulate the offloading decision problem as a nonlinear integer programming problem, and then introduce an offloading mechanism (MGSCO) that combines an improved version of the Gale-Shapley (GS) algorithm with non-cooperative game theory to approximate the optimal solution under the specified constraints. Simulation results indicate that this strategy significantly mitigates system delay and energy consumption in contrast to the reference strategies.


INTRODUCTION
With the 5th Generation (5G) mobile communication network fully commercialized in 2020, research on the key technologies of the 6th Generation (6G) mobile communication network has also been launched. In order to deliver superior communication quality and versatility and cater to an even broader range of application scenarios, 6G draws a blueprint for a communication network integrating land, sea, air and space to meet users' requirements for wide coverage, high bandwidth, high reliability and low latency.
The satellite system can effectively establish a global, integrated communication network, offering robust assistance in realizing the 6G vision of the Internet of Everything. In many cases, particularly in remote regions devoid of support from terrestrial communication infrastructure, users are compelled to offload computing tasks to remote cloud servers for processing via bent-pipe transmission [1]. This approach requires relaying via satellites and may fall short of the requirements of delay-sensitive services. Taking cues from terrestrial Multi-Access Edge Computing (MEC) solutions [2]-[4], the computing paradigm of satellite edge computing has been introduced. By deploying MEC servers on LEO satellites, the computing and storage resources of the cloud server are sunk to the edge of the LEO satellite network, so as to provide users within each satellite's coverage with fast computing services [5].
In recent years, satellite edge computing has become one of the key technologies of 6G. However, due to the lack of specific application scenarios, there is no relatively complete applied research on the integrated management of air, space and ground computing resources. In order to integrate and utilize dispersed air, space and ground computing resources, expand computing scenarios, and accelerate mission response, multi-satellite collaborative computing architectures and computation offloading strategies have become current research hotspots. In multi-satellite collaborative computing scenarios that encompass inter-satellite links, Wang et al. [6] have proposed a genetic algorithm-based task offloading strategy that leverages collaborative computing among the satellite edge nodes to minimize the response delay of the workload. Zhai et al. [7] have put forth a greedy algorithm-based computation offloading strategy to optimize latency under the constraints of transmission and computing capabilities. In multi-satellite scenarios that lack inter-satellite links, Wang et al. [8] presented a game theory-based computation offloading strategy; the accompanying resource allocation problem was solved using the Rosen gradient method and the Lagrange multiplier method, resulting in optimized system delay and reduced energy consumption. Cheng et al. [9], employing Lyapunov optimization theory, jointly optimized the task offloading strategy and the allocation of computing resources; this approach ensured asymptotic optimality while maintaining the stability of the average rate of the LEO satellite energy queue. In addition, the burgeoning advancements in artificial intelligence have led to the widespread adoption of intelligent paradigms in computation offloading for LEO satellites. Tang et al. [10] have modeled the challenge as a large-scale nonlinear integer programming problem and introduced a dynamic computation offloading strategy based on deep learning techniques that exhibits near-optimal performance with low computational complexity. Wu et al. [11] have formulated the optimization problem as a Markov decision process and employed deep reinforcement learning with the proximal policy optimization approach; the method approximates the optimal solution and showcases strong robustness. Many studies have thus proposed valuable solutions to the above problems. However, these studies need to be deepened in the following aspects. First, the collaborative computing architecture and application service model of satellite edge computing need further development: most current research ignores resource-rich cloud servers and considers only LEO satellite networks with local and edge computing. Second, most optimization algorithms do not consider realistic satellite edge computing architectures, making actual deployment difficult: a fully distributed offloading algorithm generates substantial signaling and communication costs between terminals, while a fully centralized offloading algorithm reduces the flexibility of network management. Third, in scenarios where satellite edge computing transmission delays are high and resources are limited, most optimization algorithms require a large number of iterations to adjust computation offloading decisions, making it difficult to offload computing tasks in time-varying environments; moreover, intelligent algorithms are highly dependent on data sets and incur high training costs.
In this study, we focus on the computation offloading delay and energy consumption issues prevalent in LEO satellite systems. Motivated by the three-layer computing architecture, which emphasizes vertical collaboration between ground users, LEO satellites, and remote cloud servers, we propose a novel matching-game-based approach (MGSCO) to derive the satellite computation offloading strategy. The main contributions are as follows: 1) Mathematical modeling and analysis: This study endeavors to optimize the system computing delay while simultaneously minimizing energy consumption. A mathematical model is developed to analyze the vertical collaboration of the three-layer "cloud-edge-device" computing architecture in satellite edge computing. The model considers the heterogeneous computing capabilities of ground terminals, LEO satellites, and remote cloud servers, as well as the mobility of the satellites, and is aimed at facilitating computation offloading decisions across multi-satellite coverage areas. 2) Matching game algorithm: We design a two-stage computation offloading decision-making process that combines centralized and distributed methods, using an improved GS algorithm and a distributed non-cooperative game method to achieve a stable computation offloading strategy. The computational complexity is reduced to a certain extent, which helps alleviate the influence of the time-varying LEO satellite network on the offloading strategy. 3) Simulation verification: The performance of the proposed algorithm is evaluated through extensive simulation experiments conducted in Matlab. Our results demonstrate that the proposed strategy is effective in reducing both system delay and energy consumption compared to the reference strategies.

SYSTEM MODEL

Network Model
We consider the computing architecture depicted in Figure 1 for collaborative computation offloading. This infrastructure comprises M LEO satellites, N ground users, and a remote cloud server, where M = {1, 2, 3, . . ., M} and N = {1, 2, 3, . . ., N} denote the sets of LEO satellites and ground users, respectively. A ground terminal can establish direct communication with an LEO satellite via a satellite-ground link. Each LEO satellite is equipped with the Docker lightweight MEC platform and acts as an edge node of the LEO satellite network. Furthermore, every LEO satellite is connected to the remote cloud server via a feeder link.
The geometric relationship between the LEO satellite and the ground terminal is depicted in Figure 2. Due to the satellite's mobility, uninterrupted communication between the ground terminal and the LEO satellite cannot be guaranteed at all times. Following [12], let θ be the elevation angle between the ground terminal and the LEO satellite, β the corresponding geocentric angle, β_max the geocentric half-angle of the coverage area of the LEO satellite, R the radius of the Earth, h the altitude of the LEO satellite orbit, and d the distance from the ground terminal to the LEO satellite. A communication link can be established between the ground terminal and the LEO satellite only when 0 < β < β_max.
Given θ, d can be obtained from the geometric relation d = sqrt(R²·sin²θ + 2Rh + h²) − R·sinθ, and the geocentric angle follows as β = arccos((R·cosθ)/(R + h)) − θ. The remaining coverage arc length of the LEO satellite is then l = (R + h)(β_max − β). Let v denote the moving rate of LEO satellite m; the residual coverage time of the satellite for ground user n can be expressed as t_cov = l / v.
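As an illustration, the geometry above can be evaluated numerically. The following is a minimal sketch, not code from the paper: the symbol names, the minimum-elevation parameter theta_min used to obtain β_max, and the along-track simplification for the remaining arc are our assumptions.

```python
import math

R = 6371e3   # Earth radius (m)
h = 550e3    # LEO orbit altitude (m), matching the simulation setup

def slant_distance(theta):
    """Terminal-to-satellite distance d at elevation angle theta (rad)."""
    return math.sqrt(R**2 * math.sin(theta)**2 + 2 * R * h + h**2) - R * math.sin(theta)

def geocentric_angle(theta):
    """Geocentric angle beta between the terminal and the sub-satellite point."""
    return math.acos(R * math.cos(theta) / (R + h)) - theta

def residual_coverage_time(theta, theta_min, v):
    """Residual coverage time t = l / v, with remaining arc l = (R+h)(beta_max - beta).

    theta_min is the minimum elevation angle at which a link can be kept
    (an assumed parameter), and v is the satellite's moving rate.
    """
    beta = geocentric_angle(theta)
    beta_max = geocentric_angle(theta_min)
    return (R + h) * (beta_max - beta) / v
```

At theta = 90° the satellite is at zenith, so d reduces to the orbit altitude h and beta to 0, which gives a quick sanity check on the formulas.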

Communication Model
Suppose there exists a communication link between ground terminal n and LEO satellite m. In this setup, each ground user can establish communication with only one LEO satellite. Furthermore, multiple ground users share the same spectrum resource via OFDMA [13], which results in channel interference between terrestrial users. The uplink transmission rate of ground terminal n can be expressed as r_n = B·log2(1 + p_n·g_n / (σ² + I_n)), where B is the channel bandwidth, p_n is the transmit power of the ground terminal, g_n is the channel gain, σ² is the power of the Gaussian white noise of the system, and I_n is the interference power generated by other ground users.
Given that the output of a computation task is considerably smaller than its input data, and that the downlink transmission rate of the LEO satellite surpasses the uplink transmission rate of the ground terminal, the transmission delay incurred by feeding the computation result back to the corresponding ground user is not taken into account.
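The uplink rate model can be captured in a few lines. This is an illustrative sketch; the function and parameter names are ours, not the paper's:

```python
import math

def uplink_rate(B, p, g, noise_power, interference):
    """Shannon-style uplink rate (bit/s) of a ground terminal sharing the
    spectrum with other users: B * log2(1 + SINR), where the SINR accounts
    for Gaussian noise power and co-channel interference power."""
    sinr = p * g / (noise_power + interference)
    return B * math.log2(1 + sinr)
```

More interference from other terminals lowers the SINR and hence the achievable rate, which is what couples the terminals' offloading decisions in the later game formulation.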

Computation Model
It is assumed that each ground terminal n produces an individual computation task T_n = (D_n, C_n, T_n^max), which can be processed locally, on an LEO satellite, or on the remote cloud server. Herein, D_n refers to the input data size of computation task T_n, C_n refers to the task's computing density, indicating the number of computations that have to be executed per bit of input data, and T_n^max denotes the maximum tolerable delay of the task.
In this study, three distinct approaches for computation task handling are examined, where x_n, y_n, z_n ∈ {0, 1} indicate the offloading scheme of ground terminal n. Specifically, x_n = 1 denotes local processing of the task by the ground terminal, y_n = 1 signifies processing of the task by its corresponding LEO satellite, and z_n = 1 implies that the task is relayed by the LEO satellite to the cloud server for processing. Given that each ground terminal can opt for only one mode of computation task handling, the constraint x_n + y_n + z_n = 1 has to be satisfied.

1) Local Computing
Define f_n^loc as the computing resource of ground terminal n (unit: cycles/s); then the local computing delay of computation task T_n can be expressed as t_n^loc = D_n·C_n / f_n^loc. The energy consumption resulting from local processing of task T_n can be expressed as E_n^loc = κ·(f_n^loc)²·D_n·C_n, where κ represents the effective switched capacitance, which depends on the chip structure.
2) LEO Satellite Computing
Define f_{m,n} as the computing resource allocated by LEO satellite m to ground terminal n (unit: cycles/s); then the computation delay of task T_n on LEO satellite m can be expressed as t_{m,n}^comp = D_n·C_n / f_{m,n}. The uplink transmission delay of task T_n offloaded by ground terminal n to LEO satellite m can be expressed as t_{m,n}^up = D_n / r_n. Since there is a long distance between the LEO satellite and the ground terminal, and data propagates at the speed of light c, the propagation delay from ground terminal n to LEO satellite m can be expressed as t_{m,n}^prop = d_{m,n} / c, where d_{m,n} represents the distance between ground terminal n and LEO satellite m, obtained from the slant-distance relation given in the network model. When using LEO satellite edge computing, processing of computation task T_n entails ground user n transmitting the task to the associated LEO satellite through the uplink, followed by completion of the computation on the LEO satellite. Consequently, the overall delay incurred by task processing through LEO satellite edge computing can be expressed as t_{m,n}^leo = t_{m,n}^up + t_{m,n}^prop + t_{m,n}^comp. The energy consumption of ground user n choosing the LEO satellite for edge computing is the transmission energy of task T_n, given by E_n^leo = p_n·D_n / r_n, where p_n is the transmit power of ground terminal n.

3) Cloud Computing
The computing resources of the cloud service center are extremely rich. Define f_n^c as the computing resource allocated by the cloud server to ground terminal n (unit: cycles/s); then the computing delay of the cloud server processing task T_n can be expressed as t_n^c,comp = D_n·C_n / f_n^c. Ground user n first needs to transmit the computing task to the LEO satellite through the satellite-ground link, and the LEO satellite then forwards it to the cloud server through the feeder link. Therefore, the total delay for the cloud server to process the computing task can be expressed as t_n^cloud = t_{m,n}^up + t_{m,n}^prop + t_m^f + t_n^c,comp, where t_m^f represents the delay of LEO satellite m forwarding the task to the cloud server. Assuming that r_f is the downlink transmission rate of the LEO satellite on the feeder link, t_m^f can be expressed as t_m^f = D_n / r_f. In addition, the energy consumption incurred at the ground terminal when the cloud server processes the computing task is likewise the transmission energy of the ground terminal, E_n^cloud = p_n·D_n / r_n.
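The delay and energy expressions of the three processing modes can be summarized in code. This is a sketch under the model above; the parameter names (kappa for the switched capacitance, r_up for the uplink rate, r_feeder for the feeder-link rate) are our own labels, not the paper's notation:

```python
C_LIGHT = 3e8  # propagation speed of light (m/s)

def local_cost(D, C, f_local, kappa):
    """Local processing: delay D*C/f and energy kappa * f^2 * D * C."""
    delay = D * C / f_local
    energy = kappa * f_local**2 * D * C
    return delay, energy

def leo_cost(D, C, f_leo, r_up, dist, p_tx):
    """LEO edge: uplink transmission + propagation + on-board computation.
    Terminal-side energy is only the transmission energy p * D / r."""
    delay = D / r_up + dist / C_LIGHT + D * C / f_leo
    energy = p_tx * D / r_up
    return delay, energy

def cloud_cost(D, C, f_cloud, r_up, r_feeder, dist, p_tx):
    """Cloud: uplink + propagation + feeder-link forwarding + cloud computation.
    Terminal-side energy is the same transmission energy as the LEO case."""
    delay = D / r_up + dist / C_LIGHT + D / r_feeder + D * C / f_cloud
    energy = p_tx * D / r_up
    return delay, energy
```

Note the asymmetry the model relies on: offloading shifts the computation delay to a faster processor but adds transmission terms, while the terminal's energy drops to pure transmission energy.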

PROBLEM FORMULATION
We comprehensively consider the delay of processing computing tasks and the energy consumption of ground terminals, and express them together as the cost of computation offloading.
The cost of local computing for ground terminal n can be expressed as Φ_n^loc = ω·t_n^loc + (1 − ω)·E_n^loc, where ω ∈ [0, 1] is the weight balancing delay against energy consumption. The computation cost of ground terminal n offloading the task to the LEO satellite can be expressed as Φ_n^leo = ω·t_{m,n}^leo + (1 − ω)·E_n^leo, and the computation cost of ground terminal n offloading the task to the cloud server as Φ_n^cloud = ω·t_n^cloud + (1 − ω)·E_n^cloud. Let A = {a_n = (x_n, y_n, z_n), n ∈ N} represent the computation offloading decisions of the ground user set; then the optimization problem of the computation offloading decision can be expressed as

min_A Σ_{n∈N} (x_n·Φ_n^loc + y_n·Φ_n^leo + z_n·Φ_n^cloud)
s.t. C1: x_n·t_n^loc + y_n·t_{m,n}^leo ≤ T_n^max,
C2: y_n·t_{m,n}^leo ≤ t_cov,
C3: x_n + y_n + z_n = 1,
C4: x_n, y_n, z_n ∈ {0, 1},
C5: Σ_{n∈N} (y_n + z_n)·B_n ≤ B_m.

C1 indicates that when using local computing or LEO satellite edge computing, the total delay shall not exceed the maximum tolerable delay of the task; C2 indicates that when the task is computed by an LEO satellite, it must be completed within the satellite's residual coverage time; C3 indicates that each ground user can choose only one offloading strategy; C4 means that each computing task is indivisible; C5 represents the bandwidth resource constraint of the LEO satellite, where B_n is the bandwidth occupied by terminal n and B_m is the total bandwidth of satellite m. In addition, no delay or energy consumption constraints are imposed on tasks offloaded to the cloud server.
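For a single terminal, the offloading decision under these constraints reduces to picking the cheapest feasible mode. The sketch below illustrates this selection; it deliberately ignores the satellite bandwidth constraint C5, which couples terminals, and all names are ours:

```python
def offload_cost(delay, energy, omega=0.5):
    """Weighted offloading cost: omega * delay + (1 - omega) * energy."""
    return omega * delay + (1 - omega) * energy

def choose_offloading(costs, delays, t_max, t_cov):
    """Pick the cheapest feasible mode for one terminal.

    costs/delays: dicts keyed by 'local', 'leo', 'cloud'.
    C1: local/LEO delay must not exceed the task's maximum tolerable delay.
    C2: LEO processing must finish within the residual coverage time.
    Per the model, cloud offloading carries no delay/energy constraint.
    """
    feasible = {'cloud': costs['cloud']}
    if delays['local'] <= t_max:
        feasible['local'] = costs['local']
    if delays['leo'] <= t_max and delays['leo'] <= t_cov:
        feasible['leo'] = costs['leo']
    return min(feasible, key=feasible.get)
```

Because the interference and bandwidth terms make each terminal's cost depend on the others' choices, this per-terminal rule alone is not optimal, which is what motivates the matching and game stages that follow.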

COLLABORATIVE COMPUTATION OFFLOADING STRATEGY BASED ON MATCHING GAME
Since binary variables appear in both the objective function and the constraints, the optimization problem posed above is NP-hard and cannot be solved in polynomial time. Typically, addressing this sort of problem involves heuristic algorithms or a relaxation factor that converts it into a convex optimization problem. However, such traditional solutions have high computational complexity, and it is difficult to quickly obtain the optimal offloading decision as the number of satellites and ground terminals increases. To address this, we develop a two-stage computation offloading decision-making process that integrates centralized and distributed methods. Initially, a centralized processing approach is employed at the cloud service center, utilizing the improved Gale-Shapley algorithm to achieve a many-to-one matching between a subset of ground terminals and the LEO satellites. Subsequently, under additional constraints, a non-cooperative game among the ground terminals independently optimizes the strategies to maximize the overall utility of the system.

Matching Algorithm
In this paper, we construct a bilateral many-to-one matching model: each LEO satellite can match up to a certain number of ground terminals, and each ground terminal can match only one LEO satellite. We construct the preference matrix G = [g_{m,n}] [14] for the M LEO satellites and N ground terminals according to the delay and energy consumption cost, where each entry g_{m,n} is computed from the cost of ground terminal n offloading to LEO satellite m. We then perform many-to-one matching between ground terminals and LEO satellites based on their respective preference orders. Our approach prioritizes acceptance of as many ground terminals as possible by the LEO satellites, with unassigned ground terminals designated as unmatched.

Game Theory
Tasks are first evaluated against their maximum tolerable delay to determine their delay sensitivity, and non-delay-sensitive tasks are offloaded to the cloud server for processing. A distributed non-cooperative game-theoretic approach is then employed to iteratively optimize the remaining set of ground terminals. Each terminal independently assesses whether its offloading strategy is optimal in the current context and updates the strategy based on computation cost to maximize the system utility.
1) Game Rules: Define the current computation offloading decision matrix X = {x_n, y_{n,1}, y_{n,2}, . . ., y_{n,M}, z_n}, n ∈ N, and define the cost matrix of the three computation offloading strategies as Φ = {Φ^loc, Φ^leo, Φ^cloud}. If y_n = 1 and the maximum tolerable delay of task T_n satisfies T_n^max − t_n^cloud > 0, task T_n is regarded as a non-delay-sensitive task: it is offloaded to the cloud server for processing, the decision is modified to y_n = 0, z_n = 1, and the number of terminals currently matched to LEO satellite m is decreased by 1. For all ground terminals using local computing, the differences in their computation costs are compared; the computation task T_{n0} corresponding to the maximum value of |Φ_{n0}^loc − Φ_{n0}^leo| is selected to match LEO satellite m, the decision is modified to x_{n0} = 0, y_{n0} = 1, and the number of terminals currently matched to LEO satellite m is increased by 1. Furthermore, for all ground terminals with x_n = 1, Φ_n^loc and Φ_n^cloud are compared separately, and the computation offloading decision is modified until a stable match is achieved.

Algorithm 1 Lateral many-to-one matching based on GS
Input: M, N, and the task and channel parameters
Output: Initial computation offloading strategy
1: for every m ∈ M, n ∈ N do
2:   calculate g_{m,n} by (25)
3: end for
4: local_pref = sort each row of G in descending order
5: leo_pref = sort each column of G in ascending order
6: Initialization: every terminal and LEO satellite is unmatched; maximum accepted quantity q_max
7: while there exists an LEO satellite that has not matched q_max users do
8:   pick an unmatched terminal n and let it propose to the first satellite m in its local_pref list
9:   if leo_accept(m) < q_max then
10:     form the pair {n, m}
11:     leo_accept(m) = leo_accept(m) + 1
12:   else m is already in a pair {n0, m}
13:     if n is higher on m's preference list than n0 then
14:       n0 is set free again and the pair {n, m} is formed
15:     else
16:       the pair {n0, m} is kept and n remains free
17:     end if
18:   end if
19: end while
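The proposal-and-rejection loop of Algorithm 1 can be sketched as a standard capacity-constrained (many-to-one) deferred-acceptance procedure. This is an illustrative implementation, not the paper's code; the data-structure names are ours:

```python
def gs_many_to_one(term_pref, sat_rank, q_max):
    """Many-to-one Gale-Shapley matching: terminals propose, satellites accept
    up to q_max terminals each, evicting a worse-ranked terminal when full.

    term_pref: {terminal: [satellites, most preferred first]}
    sat_rank:  {satellite: {terminal: rank}}  (lower rank = more preferred)
    Returns {terminal: matched satellite or None}.
    """
    match = {n: None for n in term_pref}
    accepted = {m: [] for m in sat_rank}
    next_prop = {n: 0 for n in term_pref}   # index of the next satellite to propose to
    free = list(term_pref)
    while free:
        n = free.pop()
        if next_prop[n] >= len(term_pref[n]):
            continue  # exhausted its preference list: stays unmatched
        m = term_pref[n][next_prop[n]]
        next_prop[n] += 1
        if len(accepted[m]) < q_max:
            accepted[m].append(n)
            match[n] = m
        else:
            worst = max(accepted[m], key=lambda t: sat_rank[m][t])
            if sat_rank[m][n] < sat_rank[m][worst]:
                accepted[m].remove(worst)   # evict the worst-ranked terminal
                match[worst] = None
                free.append(worst)
                accepted[m].append(n)
                match[n] = m
            else:
                free.append(n)              # rejected: will try its next choice
    return match
```

Terminals left with None are the unmatched ones handed over to the game stage described next.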
2) Nash Equilibrium: A strategy profile is a Nash equilibrium if no ground terminal can reduce its computation cost by unilaterally changing its strategy under that profile. Any finite game has at least one Nash equilibrium.

Algorithm Analysis
1) Convergence: The MGSCO algorithm guarantees the convergence of the results.
Proof: As the computation cost of a ground terminal is inversely proportional to its utility, the utility function of ground terminal n can be defined as u_n(a_n, a_{−n}) = k / Φ_n(a_n, a_{−n}), where k is any constant greater than 0 and a_{−n} represents the offloading strategies of all other terminals. The total system utility is U(t) = Σ_{n∈N} u_n(a_n, a_{−n}). According to the MGSCO algorithm (Algorithm 2, the external game update loop with maximum iteration count K), each ground terminal maximizes its own utility function a_n* = arg max u_n(a_n, a_{−n}) during each iteration without affecting the utility of the other ground terminals. Therefore, the total utility of the system increases monotonically, i.e., U(t + 1) > U(t). Since the utility function of any ground terminal is bounded above, i.e., u_n ≤ u_n*, the total system utility converges to a constant U* after a finite number of iterations.
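The external best-response update whose convergence the proof argues can be illustrated generically. The sketch below is not Algorithm 2 itself (whose listing is abbreviated above); it shows a best-response iteration of the kind described, stopping once no terminal can unilaterally lower its cost, i.e., at a Nash equilibrium:

```python
def best_response_game(cost_fn, n_users, strategies, max_iters=100):
    """Iterate best responses: each user switches to the strategy that
    strictly lowers its own cost given the others' current choices.
    Stops when a full pass produces no change (a Nash equilibrium) or
    when max_iters passes have run.

    cost_fn(n, s, profile): cost of user n playing strategy s given profile.
    """
    profile = {n: strategies[0] for n in range(n_users)}
    for _ in range(max_iters):
        changed = False
        for n in range(n_users):
            best = min(strategies, key=lambda s: cost_fn(n, s, profile))
            if cost_fn(n, best, profile) < cost_fn(n, profile[n], profile):
                profile[n] = best   # strict improvement: take it
                changed = True
        if not changed:
            break
    return profile
```

Each accepted switch strictly lowers the switching user's cost, i.e., strictly raises its utility, mirroring the monotonicity step U(t + 1) > U(t) in the convergence proof.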
2) Stability: The MGSCO algorithm ensures stable matching of results.
Proof: Since the computation offloading decision vector space is finite-dimensional, by the Brouwer fixed-point theorem, if the utility function of each ground user is continuous and convex, then there exists at least one point at which no ground user can change its own strategy vector to obtain more benefit.

SIMULATION EXPERIMENT AND RESULT ANALYSIS
In this section, we conduct simulation experiments in Matlab R2021a. Assume that 8 LEO satellites, positioned at a 550 km altitude, are evenly distributed throughout a specific area and occupy the Ku band for communication. Each satellite is capable of establishing connections with ground terminals located within its coverage area. We further assume the presence of 20∼80 fixed ground terminals distributed within the same area. Each ground terminal generates one independent computing task per time slot, and in different time slots it can connect to different satellites to ensure the continuity of computing tasks. The computation capability of ground users is 0.1 Gcycles/s, and the computation capabilities allocated by the LEO satellites and the cloud server to each ground user are 3 Gcycles/s and 10 Gcycles/s [15], respectively. The simulation parameters are shown in Table 1.

Result Analysis
In the performance evaluation, we assigned an equal weight of 0.5 to both the delay and the energy consumption of ground user processing tasks. We contrast the MGSCO algorithm with the following three computation offloading strategies: 1) Greedy offloading algorithm: given that LEO satellite computing typically incurs low computation cost, ground terminals utilize this option wherever possible; computing tasks that exceed the maximum number accepted by the LEO satellites are allocated to the local terminals and cloud servers in a 7:3 ratio. 2) Random offloading algorithm: regardless of the constraints of the computing tasks, the tasks are randomly assigned to local, LEO satellite, and cloud computing in a 40%, 40%, and 20% ratio. 3) Local computing: the computation task generated by each ground terminal can only be computed by the terminal itself. Figures 3 and 4 depict the system computing cost and the number of successfully offloaded tasks, respectively, under varying task numbers when the number of LEO satellites is 8. Comparing the MGSCO algorithm to the other three computation offloading strategies, Figure 3 indicates that it reduces system computing costs by an average of 5.49%, 13.60%, and 60.71%, respectively. As the number of computing tasks increases from 30 to 40, a significant increase in computing cost is observed. This can be attributed to the fact that the number of tasks handled by the LEO satellites has reached its threshold, necessitating increased reliance on local and cloud computing resources. Figure 4 shows that the offloading success rate of the MGSCO algorithm exceeds that of the other three strategies by an average of 4.8%, 9.6%, and 20.4%, respectively, which can meet the QoS requirements of the tasks.
Figure 5 shows the system computing cost under varying ground terminal computing capabilities. As local computing resources increase, the computing cost of the system drops sharply, reaches a critical point near 0.3 Gcycles/s, and then remains stable or even rises slightly. This is because the enhancement of local computing capability not only reduces the delay of computing tasks but also increases the energy consumption of the system. When the computing resources of the ground terminal are sufficient, the efficiency of the MGSCO algorithm is comparable to that of purely local computing. As a result, the MGSCO algorithm is best suited to scenarios where ground terminal resources are limited.
Figure 6 shows the impact of different numbers of access satellites on the computation cost of the system; we set the number of ground terminals to 80 and then increased the number of LEO satellites from 5 to 25. As the number of access satellites increases, the computation cost of the system using the MGSCO algorithm drops rapidly. This reflects the effectiveness of the LEO satellite edge computing paradigm in reducing the cost of computing tasks, thereby highlighting the suitability of the algorithm for satellite edge computing.

CONCLUSION
This paper proposes a novel three-layer vertical collaborative computing architecture that facilitates computing task offloading in scenarios characterized by limited ground terminal resources and high QoS requirements. The simulation experiments demonstrate that the MGSCO algorithm enables efficient task offloading, reduces system delay and energy consumption, and improves the success rate of task offloading. In future research, we recommend incorporating an LEO satellite horizontal cooperation scheme and considering the impact of resource heterogeneity on the system.

Figure 2: Mathematical model between LEO satellite and terminal.

Figure 3: System computational costs at different task scales.

Figure 4: The number of successfully offloaded tasks at different task scales.

Figure 5: Computing costs under different terminal computing capabilities.

Figure 6: System computation cost under different number of satellites.

Table 1: Simulation Parameter Settings