Avoiding Empty Instances and Offset Drifts of Basic Sequencer Tasks in Automotive Operating System

Modern embedded automotive software depends on the AUTOSAR architecture for design and development. An AUTOSAR application consists of functions, called runnables, that are mapped to sequencer tasks. In the state-of-the-art approach, each sequencer task uses a counter to decide which runnables to execute in each activation. However, this approach might (i) contain empty task instances (ETIs) of the sequencer task, i.e., instances that execute no runnables; and (ii) introduce offset drifts, i.e., delay the execution of subsequent runnables if the corresponding sequencer task misses activations and, consequently, does not increment its counter. The empty instances lead to unnecessary CPU usage, while unexpected offset drifts lead to different runtime behaviors, potentially invalidating the design-time analysis. In this paper, we propose a timestamp-based approach that eliminates empty instances and restricts the impact of missing activations to a single instance instead of delaying the execution of all subsequent instances. We evaluate our approach using a real-world automotive use case and show (i) a reduction in CPU usage compared to the counter-based approach and (ii) consistent data latency and response times even when some task activations are missed.


INTRODUCTION
Modern embedded automotive applications are designed and implemented according to the AUTomotive Open System ARchitecture (AUTOSAR) standards [2]. Each AUTOSAR application is decomposed into Software Components (SWCs) [3]. SWCs encapsulate the implementation of their functionality and internal behavior. The internal behavior of each SWC is realized using a set of runnables. Runnables can have temporal correlations among each other, called Event Chains (ECs) [5], e.g., input-processing-output schemes. There are numerous requirements on different parts of event chains, e.g., sampling time, reaction time to data change [6,18], and response time [5,11,13,16,19,20]. To meet event chain requirements, the periods and offsets of the runnables have to be carefully selected. Therefore, the period and offset of one runnable often differ from those of another.
An example of a timing requirement for reaction time to data change is the airbag opening time. In the airbag example, a data latency constraint is used [6], which is the limit on the time difference between a stimulus, i.e., the car crash, and the expected response of the system [6,10], i.e., the airbag deployment. Fig. 1 illustrates the response time requirement of an application containing two tasks and four runnables. The arrows represent the data dependency relation, and the chain of runnables must complete execution within 3 ms.
In practice, the number of runnables is much higher than the number of tasks. Therefore, to achieve better utilization of hardware resources, runnables are mapped onto Sequencer Tasks [5,9,12,26] using diverse techniques [12,18,21]. Each task instance executes zero or more mapped runnables according to their periods and offsets. For the sake of brevity, from now on, we use tasks to refer to sequencer tasks and runnables to refer to their mapped runnables.
In the state-of-the-art counter-based approach (CBA), each task maintains a counter for time awareness [5]. Each task instance increments the counter and activates a subset of runnables according to the counter value. However, this approach suffers from two major problems: Empty Task Instances (ETIs) and Offset Drift (OD), as illustrated in Fig. 1. ETIs are task instances that execute no runnables. Nevertheless, they still incur the associated processing overhead, such as context switches and task stack management, and might increase the response time of lower-priority tasks. ETIs occur due to different periods and offsets of runnables. Even though they execute no runnables, they are necessary to keep the task counter consistent in the CBA.

Figure 1: Runnables r0, r1, r2, and r3 represent an event chain; their periods are 2 ms, 3 ms, 2 ms, and 3 ms, respectively. The offset of r2 and r3 is 1 ms. Furthermore, r0 and r1 are mapped to τ1, and r2 and r3 are mapped to τ2. Both tasks τ1 and τ2 run on the same processor core. Finally, R2 is the response time and T2 is the period of task τ2.

Offset Drift (OD) occurs when the response time of a task instance is greater than its period, as shown in Fig. 1 at system time 5 ms. This situation might occur due to unforeseen runtime conditions, such as peak CPU load and resource congestion. When this problem occurs, the next activation of that particular task is missed and ignored. Consequently, the counter is not incremented (τ1 and τ2 deviate from 5 ms on), and there is a drift in the execution of all runnables in subsequent activations of the task. If this violation occurs multiple times in a task, the OD increases even further. Unexpected ODs lead to different runtime behaviors, potentially invalidating the design-time analysis. To eliminate the ETI and OD problems, we propose a timestamp-based approach: we eradicate ETIs at design time by scheduling a task activation only if at least one runnable needs to be executed. To execute runnables inside a task, our approach uses a task index calculated from the time elapsed between the first and the current task instance.
Contribution and Outline. The rest of this paper is organized as follows: In Section 2, we explain the background and the system model. In Section 3 (Contribution I), we provide a detailed analysis of two problems of the existing approach: empty task instances and offset drift. In Section 4 (Contribution II), to solve these problems, we present a timestamp-based approach along with its correctness proof. In Section 5 (Contribution III), we experimentally evaluate both approaches with a real-world case study. Related work is presented in Section 6, and we conclude with Section 7.

BACKGROUND AND SYSTEM MODEL
In Table 1, we describe the terms that will be used in the rest of the paper. Using these terms, let us now define the different parts of our system model:
Runnable. A runnable r_i is represented as r_i = (C_i, T_i, O_i), where C_i is its execution time, T_i its period, and O_i the release-time delay (offset) of its first instance. For all runnables, 0 ≤ O_i < T_i.
Sequencer Task. A set of runnables Γ_j is mapped to one sequencer task. A sequencer task, or just task, is scheduled by the AUTOSAR classic OS, which is a static, customizable OS that follows fixed-priority scheduling [4]. A task τ_j is characterized by its period T_j, offset O_j, counter c_j, major cycle MC_j, priority, and mapped runnable set Γ_j = {r_1, r_2, ..., r_k}.

Task Instance. A single activation of a task is known as a task instance. Every task τ_j releases an instance at time (n · T_j + O_j) for n ∈ N. For different instances of the same task, the number of executing runnables can vary from 0 to k, where k is the number of mapped runnables. If inside an instance of τ_j no runnable executes, we name it an empty task instance (ETI). Alg. 1 shows the implementation of a task [5]. Each task uses a counter c_j, initialized to zero and updated as c_j = (c_j + 1) mod N_j in each instance, where N_j is the number of instances per major cycle, to identify which runnables to execute inside each task instance. The task in Alg. 1 is of type basic: a basic task cannot process events, while an extended task can [4].
In this paper, we only consider the basic task.

Schedule Table. The schedule table provides a predefined sequence of action points, also called expiry point offsets (EPs), associated with a counter [4]. When the counter reaches the value of an expiry point offset, the corresponding action is triggered, e.g., the activation of a task. Several actions are possible at each expiry point. Each schedule table has a hyper-period (H), after which it repeats the activation of tasks/events at its expiry points as needed. This activation pattern of tasks/events can be periodic or aperiodic within H. If a number h of tasks use one schedule table for their activation, then the hyper-period H covers the major cycles of all of them.

System Time (G). Time awareness is essential to the development of real-time systems. Many OSs implement it as a virtual local time so that the kernel can track time at a certain resolution, e.g., 1 µs [7].

Assumptions.
(1) Each task can be activated at most once at any time. If a task is not in the suspended state, its next activation will be ignored.
(2) For the task offset range, we have 0 ≤ O_j < T_j.

ANALYSIS OF COUNTER BASED APPROACH
At design time, runnables are mapped to sequencer tasks, and the activations of a task τ_j are placed on schedule table expiry points using Equ. 6, where E_j is the ordered set of expiry points of the schedule table on which the τ_j activations are planned. At runtime, τ_j uses the counter-based approach (CBA) to handle the execution of runnables (see Alg. 1), which leads to the following two problems: ETI and OD.

Empty Task Instances (ETI)
ETIs are the instances of τ_j in which no runnable executes. These instances are required by the CBA to keep the value of the counter c_j consistent, thus executing the expected subset of runnables in each activation. The number of ETIs depends on the periods and offsets of the runnables mapped to the task.
For a comparative analysis of ETIs in different tasks, we use ETI% as a metric, i.e., the ratio between the number of ETIs and the total number of τ_j instances within its major cycle MC_j. The ETI existence condition is presented in P5 of Section 4.1.
Next, we show how runnables' periods and offsets affect the number of ETIs. We consider three different cases, in which we map the runnables to tasks using well-established mapping techniques [12]: (i) the periodic solution (PS) for runnables with the same period, (ii) the multiple periodic solution (MPS) for runnables with harmonic periods, and (iii) the arbitrary periodic solution (APS) for runnables with sub- or non-harmonic periods.

• Case 1: Same Periods, Different Offsets. ETIs can exist in the PS mapping technique when runnables with the same periods but different offsets are mapped to one task, as shown in Fig. 2 (case 1). Here, r1 with T1 = 100 ms, O1 = 20 ms and r2 with T2 = 100 ms, O2 = 60 ms are mapped to task τ1. The task period is 20 ms (Equ. 1) and the major cycle is 100 ms (Equ. 2), within which there are five task activations (Equ. 3). Three of these five activations are ETIs, resulting in ETI% = 60%.

• Case 2: Harmonic Periods, Different Offsets. ETIs can exist in the MPS mapping technique when runnables with harmonic periods but different offsets are mapped to a task, as shown in Fig. 2 (case 2). In this figure, ETI% is 70%.

Role of Period in ETI.
In the APS mapping technique, runnables with sub-harmonic periods are mapped to a task. ETIs can exist in APS, as shown in Fig. 2 (case 3), where ETI% is 33%.

• Case 3: Sub-Harmonic Periods. Sub-harmonic periods have a common factor greater than one but are not multiples of each other. For example, r1 and r2, which are mapped to τ1, have zero offsets, but their periods are 10 ms and 15 ms, respectively. The task period is 5 ms (Equ. 1) and the major cycle is 30 ms (Equ. 2), within which there are six task activations (Equ. 3). Two of these six activations are ETIs, resulting in ETI% = 33%, as shown in Fig. 2 (case 3).
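The ETI% figures of cases 1 and 3 can be reproduced by direct enumeration. The sketch below assumes, as a reconstruction of Equ. 1-3, that the task period is the gcd of the runnables' periods and non-zero offsets and that the major cycle is the lcm of the periods.

```python
from math import gcd, lcm
from functools import reduce

def eti_percent(runnables):
    """ETI% within one major cycle; runnables is a list of (period, offset)."""
    T = reduce(gcd, [v for po in runnables for v in po if v > 0])  # Equ. 1
    MC = reduce(lcm, [p for p, _ in runnables])                    # Equ. 2
    empty = sum(
        1 for t in range(0, MC, T)  # MC // T activations (Equ. 3)
        if not any(t >= o and (t - o) % p == 0 for p, o in runnables)
    )
    return 100 * empty // (MC // T)

ps_case = eti_percent([(100, 20), (100, 60)])  # Case 1: same periods, offsets differ
aps_case = eti_percent([(10, 0), (15, 0)])     # Case 3: sub-harmonic periods
```

For case 1 this yields 60% (three ETIs out of five activations) and for case 3 it yields 33% (two out of six), matching the figures above.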

Offset Drift (OD)
In the counter-based approach, the task τ_j uses a counter c_j to handle the execution of runnables [5]. However, τ_j is activated according to its period (T_j) using the schedule table expiry points. This approach risks an offset drift of τ_j due to a fault condition, i.e., a deadline miss, which occurs when the response time of a particular task instance is greater than its period.

Possible Offset Drift Scenarios:
(1) From Equ. 1, a task period is less than or equal to the minimum period of its runnables. If the periods and offsets of the runnables have a high standard deviation, then Equ. 1 yields a low T_j value, which means a tighter deadline for its critical instance (the task instance with the worst-case execution time).
(2) A maximum number of tasks have activations in a particular time interval, and within this interval the CPU usage caused by the different tasks peaks.
(3) Interrupts may appear randomly, and their execution may cause deadline misses in task instances.
(4) High resource contention between tasks, especially in a multicore environment, may increase the execution time of tasks. This can lead to deadline misses for lower-priority tasks.
(5) The embedded software may contain a special task whose period depends on the value of a runtime variable, e.g., engine speed [17]. A high value of this variable at runtime results in a short period for this task, which may cause deadline misses in lower-priority tasks.
Based on these scenarios, a single instance of any task τ_j could have a deadline miss, which leads to an offset drift (OD) problem in the counter-based approach. As a result, OD causes the following two drawbacks:

Drawback 1: Offset Drift Impact on the Response Time of Sequencer Tasks. To minimize preemption and to obtain a more deterministic response time of tasks, we use the offsets of either runnables or tasks to create a refined schedule table for the tasks' activations. Offset values help to reduce fluctuations of the CPU usage over all time windows of a certain size [5,13,14,19]. Moreover, these offsets are selected at design time, and it is unacceptable for them to change at runtime because this potentially invalidates the response time analysis performed at design time. In Fig. 1, if we assume that the WCET of all runnables is the same, say 10 µs, then the CPU usage in Window-I is 20 µs per 1 ms. However, due to the offset drift problem of task τ2 that occurs in Window-II, the CPU usage changes in Window-III, i.e., 30 µs per 1 ms (at system time G = 7 ms). This change in CPU usage causes fluctuations in the response time of tasks.
Drawback 2: Offset Drift Impact on the Data Latency of Event Chains. Data latency timing constraints on event chains are used to model the sensitivity of application software, e.g., the reaction time of the embedded system at the actuator to input changes at the sensor, such as the safety airbag opening time [6]. The offsets and the order of execution of tasks and their runnables directly impact the reaction time [15,22,25]. In Fig. 1, r0 may be seen as a runnable that reads a sensor value, and r3 may be the runnable that sets an actuator value. We followed the guidelines given in [15,25] and carefully selected the runnables' offsets and order of execution at design time. Therefore, in Window-I of Fig. 1, the minimum reaction time is less than 2 ms (see system time 1 ms-3 ms). However, due to the deadline miss of τ2 in Window-II, we have the OD problem in τ2 under the existing counter-based approach. This problem changes the reaction time in Window-III: the reaction time is now greater than 2 ms. Hence, this offset drift can invalidate the design-time analysis of the system's sensitivity.
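A small simulation makes the drift mechanism behind both drawbacks tangible: if one activation is missed, the counter stops tracking system time, and every later execution of a runnable shifts by one task period. The task and runnable parameters below are illustrative, not taken from the case study.

```python
# One runnable r (period 2 ms, offset 0) mapped to a task with period 1 ms.
T_TASK, R_PERIOD, N = 1, 2, 12   # N: counter wrap-around (illustrative)

def cba_executions(missed, horizon=12):
    """System times at which r executes under the counter-based approach.
    A missed activation is ignored, so the counter is not incremented."""
    c, fired = 0, []
    for t in range(horizon):     # an activation is scheduled every T_TASK ms
        if t in missed:
            continue             # fault condition: activation missed and ignored
        if (c * T_TASK) % R_PERIOD == 0:
            fired.append(t)
        c = (c + 1) % N
    return fired

nominal = cba_executions(missed=set())  # executions at 0, 2, 4, 6, 8, 10
drifted = cba_executions(missed={5})    # after t=5, every execution drifts by 1 ms
```

After the single miss at t = 5 ms, all subsequent executions of r occur 1 ms late, and each further miss would add another period of drift.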

PROPOSED APPROACH
The problem with the existing approach lies in the absence of a relation between the system time and the task counter.To address this issue, our approach uses a task index instead of a counter.This index is derived from the time difference between system time values, creating the relation with the system time.
Our approach consists of four stages divided into static and runtime computations.We assume runnables to task mapping has already been completed using state-of-the-art techniques [12,18].

Static Computations
Static computations are performed at design time, i.e., before compilation. Let us discuss this stage in detail:

Stage 1: Placement of Non-ETI τ_j Activations on the Schedule Table at Design Time. In our approach, we eradicate the ETIs of all tasks at design time. We achieve this by scheduling τ_j activations on expiry points (EPs) of the schedule table only when at least one mapped runnable needs to be executed. Therefore, we use Equ. 8 to schedule τ_j activations on the expiry point offset set (E_j) based on the mapped runnables' periods and offsets. If there are m runnables in the application, mapped to different tasks, then the hyper-period (H) of the schedule table is given by Equ. 7. Alg. 2 is the algorithmic representation of Equ. 8. Within the length of the schedule table (H), a task always has one or more major cycles (MC_j), i.e., H = a · MC_j, a ∈ N. We obtain the set of expiry points within a major cycle, R_j, using Equ. 9; R_j also serves as the set of reference values of τ_j. In the subsequent part of this section, we use R_j to prove the correctness of our proposed algorithms.

Example. In Fig. 3, two runnables r2 and r3 are mapped to task τ1. Using Equ. 8, we generate the non-ETI activations of τ1 via the expiry point offset set of the schedule table, i.e., E1 = {1, 3, 4, 5}. Moreover, using Equ. 9, we have R1 = {1, 3, 4, 5}. In this case, E1 = R1 because only τ1 uses the schedule table for its activation. If we had two tasks with different major cycles, then R1 ⊂ E1.

Properties of E_j. Activations of the task using Equ. 8 have the following properties:

P1: Non-ETI Activations. All τ_j activations are non-ETIs, as we place τ_j activations using the periods and offsets of its mapped runnables.

P2: Instance-Specific Deadline of τ_j. In our approach, the deadline of a task instance can be greater than its period (T_j), because we removed the ETI activations of τ_j. If we consider the time difference (Δ_j) between successive activations of τ_j as the deadline, then its range is given by Equ. 10, where k is the number of runnables mapped to the task τ_j. In Fig. 3, for the τ1 activation at G = 11 ms the deadline is 2 ms, while for the τ1 activation at G = 13 ms the deadline is 1 ms.

P3: Task Sporadic Behavior. A task τ_j always repeats its runnable execution pattern after its major cycle (MC_j), and MC_j ≥ T_j. If MC_j > T_j, then we have multiple activations of τ_j within MC_j and can observe a sporadic behavior of τ_j within MC_j with a minimum inter-arrival time equal to T_j. This behavior of τ_j is due to the removal of ETI-related activations while scheduling τ_j using Equ. 8. In Fig. 3, we have T1 = 1 ms and aperiodic activations of τ1 within MC1 = 6 ms; e.g., at G = 12 ms there is no activation of τ1.

P4: Lemma 1. Each activation point of a task τ_j is divisible by the period of at least one mapped runnable after subtracting that runnable's offset.
Proof. Let e_{j,i} be the i-th expiry point in the set E_j at which a τ_j activation is scheduled. Since the pattern of τ_j activations repeats in E_j after MC_j, the i-th element of the set R_j is defined as r_{j,i} = e_{j,i} mod MC_j. Also, as per Equ. 9, at least one runnable with offset O_m and period T_m shall be executed at this point. Therefore, Equ. 9 can be rewritten as Equ. 11 and Equ. 12. As per Equ. 12, r_{j,i} − O_m is an integer multiple of T_m, thus proving Lemma 1.

P5: Empty Task Instance (ETI) Condition. If ETIs exist in a task τ_j, then the condition of Equ. 13 holds.

In stage 1, we eliminate ETIs and place only the non-ETI activations of the task τ_j at the schedule table's expiry points. By removing the ETI activations, we obtain a sporadic activation behavior of τ_j. Additionally, τ_j no longer maintains the counter c_j. Therefore, we lose time awareness at the task level, which makes it challenging to decide correctly which runnables to execute. The next section tackles this challenge.
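The Stage 1 placement can be sketched as follows. The periods and offsets of r2 and r3 are assumptions chosen so that the resulting set matches E1 = {1, 3, 4, 5} of Fig. 3, and the gap computation illustrates the instance-specific deadline of P2.

```python
from math import lcm
from functools import reduce

def non_eti_expiry_points(runnables):
    """Offsets within the major cycle at which the task must be activated
    because at least one mapped runnable is due (sketch of Alg. 2 / Equ. 8)."""
    MC = reduce(lcm, [p for p, _ in runnables])
    eps = sorted({o + k * p for p, o in runnables for k in range(MC // p)})
    return eps, MC

# r2 (T = 3 ms, O = 1 ms) and r3 (T = 2 ms, O = 1 ms) mapped to tau1:
# assumed values that reproduce E1 = {1, 3, 4, 5} from Fig. 3.
eps, mc = non_eti_expiry_points([(3, 1), (2, 1)])

# P2: the instance-specific deadline is the gap to the next activation,
# wrapping around at the major cycle boundary.
deadlines = [(eps[(i + 1) % len(eps)] - eps[i]) % mc for i in range(len(eps))]
```

The resulting gaps (2, 1, 1, 2 ms for a 1 ms task period) show how removing ETIs relaxes some instance deadlines beyond T_j.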

Runtime Calculations
According to the AUTOSAR Classic OS, task activation takes priority over all running tasks [4]. Once a task τ_j is activated, it is moved from the suspended to the ready state and placed in a ready queue managed by the scheduler. When τ_j starts running, it uses Alg. 3 to handle the runnable execution.

Index-Based Sequencer Task. Alg. 3 is a modified version of Alg. 1 that suits our approach. It uses the task index idx_j instead of the counter c_j to decide which runnables to execute; we obtain idx_j using Alg. 4.

To explain Equ. 14, consider the example in Fig. 3 for the τ1 activation at t_a = 15 ms; this instance starts running only at t_e ≈ 15.2 ms, e.g., because of other running tasks. Here, τ1 was first activated at t_0 = 11 ms. As T1 = 1 ms (Equ. 1), N1 = 6 (Equ. 3), and the timer resolution is 1 ms, we obtain Δ_1 = 4.2 ms and idx_1 = 5. Now, we want to analyze whether this raw value can be treated as the final idx_j value. For this, let us discuss different cases that help system designers identify (i) what additional information they have to provide and (ii) what design choices they have regarding the algorithms for the idx_j value.
Suppose Δ_{a,e} indicates the time elapsed between the activation of a particular task instance at t_a and the start of its execution at t_e (line 2 of Alg. 3). Then there are the following three cases:

Case 1: The τ_j Deadline ≤ T_j. In the counter-based approach (CBA), the maximum value of the task deadline is its period, i.e., the τ_j deadline ≤ T_j. In our approach, if it is ensured at design time that Deadline(τ_j) ≤ T_j, then the value calculated using Alg. 4 is the final idx_j. This is because Δ_{a,e}, which is less than the deadline of τ_j, is already discarded in Equ. 14 by the floor operation. Using this deadline constraint, we can calculate idx_j in constant time. We elaborate case 1 using the example in Fig. 3 (part b). Here, the instance of τ1 was activated at t_a = 13 ms but started running at t_e ≈ 13.2 ms; hence Δ_{a,e} ≈ 0.2 ms < T1 = 1 ms and idx_1 = 3.

Case 2: The τ_j Instance's Deadline > T_j. As per P2, in our approach some task instances can have a deadline greater than their period, i.e., Δ_{a,e} can exceed T_j. This deadline extension can cause an inaccurate calculation of idx_j using Equ. 14. For example, in Fig. 3 (part b), an instance of τ1 is activated at t_a = 11 ms and starts running at t_e ≈ 12.3 ms, so Δ_{a,e} ≈ 1.3 ms > T1 = 1 ms and Equ. 14 yields idx_1 = 2, which is incorrect: for this value, no runnable would be executed by Alg. 3, which is impossible because we schedule task activations according to the runnables' periods and offsets. Therefore, we have to look back in the timeline and find the nearest time value in the past such that Equ. 14 yields a valid index, i.e., idx_1 ∈ R1.
Stage 3: Handling Delayed Execution of a Task. To correct the value of idx_j calculated with Equ. 14, we propose Alg. 5 and prove its correctness with the following theorem.

Proof. Consider the general activation scenario shown in Fig. 4. The activation of τ_j is triggered at t_a at the scheduling point (n · T_j + O_j), n ∈ N, while (O_m + q · T_m), q ∈ N, is the normalized offset and period of the mapped runnable r_m to be executed within this instance of τ_j. t_e is the system time at which this particular instance of τ_j begins its execution, and Δ_{a,e} is the elapsed time between t_a and t_e. Using Fig. 4, we compute Δ_{a,e} as in Equ. 15, where the correction term captures the change in idx_j during the time elapsed between t_a and t_e. As each runnable and task repeats its activation pattern either by period or by major cycle, this term can be modeled, using Euclidean division, as the residual of Δ_{a,e} with respect to T_j (Equ. 16-20). For the look-back of Alg. 5, the array of runnable periods should be statically sorted in ascending order. Since a smaller period implies a larger number of activations, the ascending order of periods directly corresponds to the probability order for finding the minimum look-back distance. This ordering of runnable periods improves the best-case and average-case performance of Alg. 5.

Case 3: Missed Activations. If, due to some fault condition, some activations are missed, we have two options: (i) consider only the latest activation for the idx_j calculation using case 1 or case 2, or (ii) retrieve all missed activations and their respective idx_j values between the latest and the prior execution of the same task τ_j. For this option, we provide Alg. 6.
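The runtime index computation and its look-back correction can be sketched as below. The raw formula is a reconstruction of the core of Equ. 14 (the paper's exact formula, including its Δ_{a,e} handling, may anchor the first instance differently), and the backward search is a simplified stand-in for Alg. 5, not its literal implementation.

```python
def raw_index(t_exec, t_first, T, N):
    """Reconstructed core of Equ. 14: whole task periods elapsed since the
    first activation, folded into one major cycle of N instances."""
    return int((t_exec - t_first) // T) % N

def corrected_index(idx, valid, N):
    """Sketch of Alg. 5: when the raw index is not a scheduled instance
    (the deadline-greater-than-period case), step back to the nearest
    valid index in the past, wrapping around the major cycle."""
    while idx not in valid:
        idx = (idx - 1) % N
    return idx

R1 = {1, 3, 4, 5}                       # valid (non-ETI) instance indices
idx = corrected_index(2, R1, N=6)       # 2 is not scheduled, fall back to 1
```

A raw index of 2 (not in R1) is corrected to 1, the nearest scheduled instance in the past; an index of 0 would wrap around to 5.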

EVALUATION
To experimentally evaluate our approach, we used an industrial case study, the Autonomous Emergency Braking System (AEBS), described in [16].

Experimental Setup
For modeling the application case study, we followed the AUTOSAR Software Component template [3]. We generated a set of tasks and performed the runnable-to-task mapping manually using several mapping techniques, namely the Periodic Solution (PS), Multiple Periodic Solution (MPS), and Arbitrary Periodic Solution (APS) [12]. Each mapping results in a separate experimental configuration; after mapping, we placed task activations using Equ. 6 (for the counter-based approach) or Alg. 2 (for our approach). We then integrated our code with an AUTOSAR-compliant BSW [8]. We evaluated six configurations of the case study (both approaches, each with three runnable load scenarios, summarized in Fig. 6) on a TriCore Evaluation Board TC387 [1]. We collected data at each state change of a task and stored it using a custom-built trace protocol. We transmitted the trace data serially from the Aurix board to a PC. On the PC side, we built a module for data analysis. This module first decodes the trace messages and then calculates the CPU usage of each task instance, which is the sum of two context switch times and the execution time.

Figure 5: AEBS case study application model.

Case Study
We slightly adapted the case study of an Autonomous Emergency Braking System (AEBS) of a car described in [16]. In the AEBS, we obtain data from distance and speed sensors periodically. The SWC Obstacle Location determines the distance of each obstacle in front of the car. Then, the SWC Collision Detection processes this information and calculates the time to collision (TTC) with one of the obstacles. Based on the TTC, the SWC Driver Warning issues various types of warnings or prepares the brakes. Different runnables model the internal behavior of these SWCs, as shown in Fig. 5. For a better response time of the tasks, we computed the offsets of the runnables using the heuristic algorithm Least Loaded, which reduces the peak CPU load over time [19].
• Analyzing Empty Task Instances (ETIs) in Various Mapping Techniques. To assess how runnable-to-task mapping techniques affect ETIs within tasks, we examined three mapping methods from [12] in our case study: the Periodic Solution (PS), Multiple Periodic Solution (MPS), and Arbitrary Periodic Solution (APS). To compare the results of the mapping techniques, we use ETI% as a metric, i.e., the ratio between the number of ETI instances and the total number of instances in a specified time interval. The results are summarized in Table 2. In PS, we have a larger number of tasks compared to the other mapping techniques, but the ETI% per second, over all tasks, is relatively low. In MPS and APS, on the other hand, the number of tasks and their activations per second are smaller, but the ETI% per second, over all tasks, is higher. Notably, Table 2 emphasizes the presence of ETIs across all runnable-to-task mapping techniques. In our approach, we schedule only the non-ETI activations of all tasks using a schedule table, and this decision is independent of the mapping technique. Therefore, our approach removes ETIs irrespective of the mapping technique. Moreover, for this case study, it reduces the number of task activations per second by 44% for the PS mapping technique, 46% for MPS, and 51% for APS compared to the counter-based approach.
For a further analysis of the impact of the runnables' execution times on CPU usage, we consider the APS technique for mapping runnables to tasks. Moreover, we configured the priorities as P(τ1) > P(τ2) > P(τ3) > P(τ4) in our implementation. To assess the reduction in CPU usage, we gathered 2000 trace samples of the AEBS case study. The CPU usage of a task instance includes two context switch times and the execution time. We analyzed the trace data for each task's CPU usage in both approaches: counter-based and ours. We calculated the baseline CPU usage (B), which is the sum of the CPU usage of ETIs (E) and non-ETIs (NE) within one-second intervals in the counter-based approach. In Fig. 6, the blue bar shows the percentage of CPU usage for ETIs, i.e., (E · 100)/B, while the orange bar shows the percentage of CPU usage for non-ETIs, i.e., (NE · 100)/B, in the counter-based approach. Moreover, the yellow bar displays the percentage of CPU usage in our approach, i.e., the sum of the CPU usage of all instances in one second in our approach, times 100, divided by B.

• Reduction in CPU Usage Over the Counter-Based Approach.
Consider the load scenario 0X, in which the execution time of the runnables is zero (to benchmark the reduction in CPU usage achievable by our approach compared to the counter-based approach). The blue bar shows that up to 51% reduction in CPU usage is possible. We achieved a 26 ± 2% reduction in CPU usage (green bar) with our approach. The difference between these two percentages, i.e., 25 ± 2%, is the additional CPU usage required by our approach.

• Effect of Runnable Execution Time on the Overall Reduction in CPU Usage. To further analyze the impact of variations in the runnables' execution times on the overall CPU usage, we added a dummy function call (1X) inside each runnable's body; we use Alg. 4 and Alg. 5 as the dummy load. We further doubled (2X) the execution time of each runnable by calling the dummy load twice and reconfigured the AEBS case study again with the existing approach and with ours. The results of these experiments are shown in Fig. 6.
In the case of the 1X dummy load, the value of the blue bar is 40%, while in the case of the 2X dummy load it is 27%. The green bar is 18 ± 2% for the 1X dummy load and 14 ± 2% for the 2X dummy load. This decreasing trend in the reduction of CPU usage across the load scenarios is due to the increase in CPU usage of non-ETIs (orange bar) relative to ETIs (blue bar). As per the AUTOSAR design guidelines, runnables represent the smallest code fragments, and their execution time is expected to be as low as possible [3]. Thus, the smaller the execution time of the mapped runnables, the higher the reduction in CPU usage through our proposed approach.

• Effect on Memory Requirement. The counter-based approach requires 4 bytes per task to maintain the task's counter. The timestamp-based approach requires 8 bytes per task to record the timestamp of the first task instance. Both approaches have the same growth rate of the memory requirement, i.e., linear in the number of tasks.

Offset Drift
We now discuss how our approach addresses offset drift in the context of the same fault condition, where an activation of the task τ_j is missed because its previous instance's response time exceeds its period. In Fig. 7, Window-II, we miss the activation of τ2 scheduled at system time G = 60 ms because the previous instance of τ2 (activated at G = 55 ms) has not yet completed its execution. Suppose this instance of τ2 finishes its execution at G = 64 ms, transitioning its state from running to suspended. Subsequently, τ2 is activated at t_a = 70 ms, but this instance only gets a chance to run at t_e = 74 ms, and we want to compute the value of the task index idx_2 for τ2.
To compute idx_2 using Equ. 14, we require Δ, the time elapsed between the execution of the current τ2 instance, which was activated at system time G = 70 ms, and the activation of its first instance at t_0 = 10 ms. Since the current instance of τ2 starts running at t_e = 74 ms, Δ = 64 ms. Moreover, using T2 = 5 ms together with Equ. 14 and Equ. 20, the value of idx_2 is 0 for the instance of τ2 activated at t_a = 70 ms. The value idx_2 = 0 is subsequently used by τ2 to execute its mapped runnable r4 using Alg. 3.
The key insight from the above example is that the previously missed activation of τ2 at t=60ms does not influence the current computation of Δt_f, which is t − t_0 = 64ms. This Δt_f value indicates the time interval between the execution of the current τ2 instance (activated at system time t=70ms) and the activation of its first instance at t_0=10ms. Therefore, Δt_f remains unaffected by any intermediate activations of τ between the present and the initial activation. Moreover, aside from Δt_f, all variables of the equations Equ. 14 and Equ. 20 are computed statically. This implies that, within our approach, calculating I for the current activation of the task is independent of its prior activations, except for the initial one. Consequently, we can conclude that our approach effectively mitigates offset drift. Therefore, if our approach is adopted, the system does not suffer the adverse effects of offset drift, i.e., inconsistent data latency and CPU load.
Lastly, our approach reduces the likelihood of encountering the fault condition, i.e., a deadline miss, compared to the counter-based approach. This is due to the instance-specific deadline of the task, as given by Equ. 10. The instance-specific deadline is a result of the removal of ETI activations by our approach. For example, consider in Fig. 7 the critical τ1 instance at system time 40ms. Its deadline is 15ms = 3·P_1 in our case, while in the case of the counter-based approach, the deadline is 5ms = 1·P_1.
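Under the assumption that Equ. 10 defines the instance-specific deadline as the distance to the next non-ETI activation point (the set EP built at design time), a sketch could look like this (hypothetical helper, not the paper's exact formulation):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Deadline of the non-ETI instance at index i: the number of task
 * periods P until the next non-ETI index in the ascending-sorted
 * set ep (of size k), wrapping around the major cycle of N instances. */
uint32_t instance_deadline(uint32_t i, const uint32_t ep[], size_t k,
                           uint32_t N, uint32_t P) {
    for (size_t j = 0; j < k; ++j)
        if (ep[j] > i)
            return (ep[j] - i) * P;   /* next non-ETI slot in this cycle */
    return (N - i + ep[0]) * P;       /* wrap into the next major cycle */
}
```

With N=4 instances, P=5ms, and hypothetical non-ETI indices {0, 3}, the instance at index 0 gets a 15ms deadline (3·P), mirroring the 15ms = 3·P_1 example above.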

RELATED WORK
Various approaches address the challenge of mapping runnables to tasks based on diverse criteria. For instance, [12] presents techniques like the Periodic Solution (PS), Multiple Periodic Solution (MPS), and Aperiodic Solution (APS) to map runnables to tasks based on their periods. To manage the execution of mapped runnables according to their periods and offsets using the basic task, [5] introduces a counter-based approach. This approach works well if we constrain the system, e.g., map only runnables with the same period and offset to one task. However, this is a restrictive constraint that may impact other non-functional requirements of the system, such as peak load [19] and data latency [18]. Additionally, this approach significantly increases the number of tasks, an undesirable outcome. Thus, [12] recommends MPS and APS as more favorable mapping strategies. However, in the case of PS or MPS, if runnables have different offsets and the ETI condition is satisfied, then handling the execution of mapped runnables through a counter-based approach leads to empty task instances and offset drift problems. APS, on the other hand, always has ETIs and may also suffer from offset drift.
Alternatively, [5, 23] introduce an event-based approach: here, runnables with the same period, the same offset, consecutive execution order, and mapped to the same task are assigned a single event. However, we have to provide a 32-bit mask for each event mapped to a task. This event-based approach has the following five limitations: (i) At most 32 runnables with different periods or offsets can be mapped to a task. (ii) Another task type, called the extended task, is used, which remains permanently alive and therefore consumes resources [4]; extended tasks remain permanently in memory to receive the periodic activations of runnables via events. (iii) Since extended tasks are permanently alive, they do not have a period. Therefore, there is no concept of an implicit deadline for the response time, i.e., the period, as there is for the basic task. Hence, it is more difficult to provide temporal guarantees for the response time of extended tasks (event-based approach) than for basic tasks. (iv) Task chaining is not possible, since extended tasks do not have periods in the event-based approach. (v) Lastly, if the response time of an extended task for an event is delayed so much that the next activation of the same event is triggered, then this activation is lost, as the event is already set.
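Limitation (i) follows directly from the width of the event mask; a minimal sketch (the type and function are ours, merely mirroring the idea of a 32-bit mask, not a real AUTOSAR API):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t EventMask;   /* hypothetical 32-bit event mask */

/* Assign one event bit per runnable group with a distinct
 * (period, offset) pair; allocation fails beyond 32 groups. */
static EventMask next_event_bit(unsigned group_index) {
    return (group_index < 32u) ? ((EventMask)1u << group_index)
                               : (EventMask)0;   /* 0 = no bit left */
}
```

The 33rd distinct period/offset group receives no bit, which is exactly why at most 32 such runnables can be mapped to one task in this scheme.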

CONCLUSION AND FUTURE WORK
This paper addresses two problems with the state-of-the-art counter-based approach, which is used to handle the execution of mapped runnables by a sequencer task. These problems are Empty Task Instances (ETI) and Offset Drift (OD). We have shown that the ETIs of a sequencer task add processing overhead and can delay the response time of lower-priority tasks. We have also analyzed the offset drift problem in sequencer tasks, which can invalidate the design-time analysis performed regarding response time or data latency.
In addition to analyzing the problems of the counter-based approach, we also developed an alternative timestamp-based approach, built on the global system time, that eliminates the empty task instance and offset drift problems. We have proven the correctness of our key algorithms. Moreover, we have shown through an industrial case study that our approach reduces processing overhead and achieves greater deadline flexibility and consistent data latency compared to the counter-based approach.
In this work, we have focused on AUTOSAR, the de facto standard for automotive software. However, this research is also valid for other operating systems where the basic task is used for the execution of mapped runnables or code segments with timing requirements, i.e., a period and an offset. Moreover, the ETI problem is independent of the scheduler type, e.g., earliest deadline first (EDF), because the ETI problem is associated with the task body, while the scheduler type concerns the task period or priority. Therefore, this research is also useful for other operating systems with different scheduler types.
Finally, choosing the right approach for the sequencer task to handle mapped runnable activations is a critical decision that directly affects the processing overhead incurred by the OS, i.e., non-application overhead, and can affect various non-functional requirements such as performance, data latency, number of preemptions, etc.
In the future, we plan to reduce the memory footprint of our approach, as we currently require an initial system time in each sequencer task.However, this may be reduced to a single system time value for all sequencer tasks.We also plan to investigate how our approach can be helpful in minimizing data reads and writes for a sequencer task in the case of implicit communication.

Figure 2: t is the system time in milliseconds and I_1 is the task counter. ETIs in the case of runnables with the same (case 1) or harmonic (case 2) periods but different offsets mapped to a task τ1. Furthermore, ETIs in the case of sub-harmonic runnable periods with zero offsets are shown in (case 3).

3.1.1 The Role of Offset in ETI. When all runnables have the same offset, PS and MPS do not create ETIs. Different offsets, on the other hand, might lead to ETIs, as demonstrated next.

Figure 3: The runnables r2 and r3, with periods 2ms and 3ms and offsets 1ms and 1ms respectively, are mapped to the task τ1. For the sake of simplicity, all tasks other than τ1 have been removed from this figure.

Figure 4: The effect of the delayed execution of task τ on Δt_c. Here, the arrow shows the activation point of τ.

Figure 6: Effect of the runnable execution time on the reduction in CPU usage achieved by our approach.

Figure 7: With the help of the task execution timeline of the AEBS case study, we demonstrate that in our approach the effect of offset drift on the index of task τ2 does not propagate to subsequent instances of the task.

Table 1: Terms referenced in multiple locations in this paper.
Δt_c — Time elapsed since the activation of the current task instance
Δt_f — Time elapsed since the 1st activation of a task
Δt_a — Time elapsed between two activations of a task
O_r — Offset of the r-th runnable
O'_r — = O_r / P_τ, the normalized value of O_r
P_r — Period of the r-th runnable
P'_r — = P_r / P_τ, the normalized value of P_r
O_τ — Offset of the sequencer task τ
P_τ — Period of the sequencer task τ, obtained using Equ. 1
MC — Major cycle of the task
N — Number of instances of a task in its major cycle
C_r — Worst-case execution time (WCET) of the r-th runnable
EP — Set of expiry points for the activation of the task
t_f — System time at the 1st instance activation of a task
I_c — Candidate index value calculated using Δt_f and P_τ
TR — Set of reference values for I
R_τ — Subset of the runnables set mapped to τ
n — Number of runnables in the subset R_τ
RT — Response time of the task
GCD — Greatest common divisor
LCM — Least common multiple
Algorithm 2 schedules the non-ETI activations of τ using the periods and offsets of its mapped runnables. Input: the mapped runnables set R_τ. Output: the set EP containing the non-ETI activation points of τ, used by Alg. 4 and Alg. 5.

Algorithm 2
1: procedure NonETIActivationBuilder(R_τ)
2:   for all r ∈ R_τ do
3:     EPOffset = O_r
4:     while EPOffset < MC do
5:       if τ not activated at EPOffset then
6:         EP = EP ∪ {EPOffset}
7:       EPOffset = EPOffset + P_r

Each element of the set TR is a reference value for I. Assuming the normalized values O'_r = O_r / P_τ and P'_r = P_r / P_τ, TR is computed statically from the periods and offsets of the runnables in R_τ.

To calculate the task index at runtime, we use Alg. 4. This algorithm consists of three steps. The first step (lines 2-3) initializes t_f with the system time of the first activation of τ, in case it is not yet initialized. The second step (line 4) finds Δt_f, the time elapsed since the first activation of τ. The third step (line 5) calculates the value of I_c, which can be considered an intermediate or candidate value for I.

Algorithm 4: calculate I_c using the system time. Input: t_f, P_τ, O_τ, N. Output: I_c.
1: procedure DTIFormulas()
2:   if t_f not initialized then
3:     t_f = current system time
4:   Δt_f = current system time − t_f
5:   I_c = (⌊Δt_f / P_τ⌋ + O_τ / P_τ) mod N    (14)

For the range of I_c we have 0 ≤ I_c < N, because of the Euclidean division by N. We add the term O_τ / P_τ in line 5 because the first activation of τ can be scheduled late due to its non-zero offset.

Δt'_r = (I_c − O'_r) mod P'_r    (18)

Line 4 of Alg. 5 is an equivalent representation of Equ. 18. Δt'_r is the shortest distance between I_c and the most recent activation point of runnable r. One or more runnables from R_τ are executed within this τ instance. Initially, we do not know which runnables to execute, but we can find them using the following equation:

Δ_min = min_{r ∈ R_τ} Δt'_r    (19)

If there are multiple runnables whose values of Δt'_r are minimal and equal, then these runnables are executed in this τ instance. Lines 2-8 of Alg. 5 are the pseudocode of Equ. 19. As mentioned earlier, the Δ_min found by Equ. 19 must be subtracted from the result of Equ. 14 to obtain the correct value for I. Therefore,

I = I_c − Δ_min    (20)

Line 9 of Alg. 5 is exactly the representation of Equ. 20.

Algorithm 5: handling the I_c correction caused by Δt_c > P_τ. Input: I_c and R_τ, sorted in ascending order w.r.t. the runnable periods. Output: I = I_c − Δ_min.
1: procedure REMJitter(I_c, R_τ)
2:   Δ_min = ∞
3:   for all r ∈ R_τ do
4:     Δt'_r = (I_c − O'_r) mod P'_r
5:     if Δt'_r < Δ_min then
6:       Δ_min = Δt'_r
7:     if Δ_min = 0 then
8:       break
9:   I = I_c − Δ_min

I_c and Δt'_r are both in the Euclidean domain, since they are results of modulus operations; in the case of negative results we round to the previous iteration [24], i.e., we add N to I_c and P'_r to Δt'_r accordingly. The time complexity of Alg. 5 is O(n), where n is the number of runnables mapped on τ. To achieve better performance, the input set R_τ is sorted in ascending order w.r.t. the runnable periods, which allows the loop to exit early once Δ_min = 0.
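Equ. 18-20 can be sketched in C as follows (a sketch in our notation; the normalized periods P'_r and offsets O'_r are assumed to be precomputed, and a Euclidean modulo handles the negative-result rounding mentioned above):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Euclidean (always non-negative) modulo. */
static uint32_t euclid_mod(int64_t a, int64_t m) {
    int64_t r = a % m;
    return (uint32_t)(r < 0 ? r + m : r);
}

/* Correct the candidate index I_c (Equ. 18-20): find the smallest
 * distance from I_c back to a runnable activation point (Equ. 18-19)
 * and subtract it (Equ. 20). Pn/On hold the normalized periods and
 * offsets of the n mapped runnables, sorted ascending by period. */
uint32_t correct_index(uint32_t Ic, const uint32_t Pn[],
                       const uint32_t On[], size_t n) {
    uint32_t dmin = UINT32_MAX;
    for (size_t r = 0; r < n; ++r) {
        uint32_t d = euclid_mod((int64_t)Ic - On[r], Pn[r]); /* Equ. 18 */
        if (d < dmin) {
            dmin = d;                                        /* Equ. 19 */
            if (dmin == 0) break;  /* sorted input allows early exit */
        }
    }
    return Ic - dmin;                                        /* Equ. 20 */
}
```

For runnables with normalized periods {2, 3} and offsets {1, 1}, a candidate index of 4 is already an activation slot (distance 0), while a candidate index of 2 is corrected back to 1.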

Handling Multiple Missed Activations of the Task.
in stage 4. To calculate the I values of all intermediate missed activations (m), we use Alg. 6. In this algorithm, we have an additional input, I_p, which represents the I that was computed in the previous execution of τ. As output, Alg. 6 (procedure RECALL) produces the set IV of I values for all intermediate missed activations of τ: starting from the current index I, it repeatedly steps back one candidate slot (I_c = I − 1), applies REMJitter to obtain the corresponding corrected index, and records it in IV, until the previously computed index I_p is reached; finally, I_p is updated to the current I (stage 4). The time complexity of this algorithm is O((m + 2) · n), where m is the number of missed activations and n is the number of runnables mapped on τ. Finally, the bound on the number of missed activations m is the number of task activations in one major cycle, i.e., m < N.