Algorithm xxxx: HiPPIS: A High-Order Positivity-Preserving Mapping Software for Structured Meshes

Polynomial interpolation is an important component of many computational problems. In several of these computational problems, failure to preserve positivity when using polynomials to approximate or map data values between meshes can lead to negative unphysical quantities. Currently, most polynomial-based methods for enforcing positivity are based on splines and polynomial rescaling. The spline-based approaches build interpolants that are positive over the intervals in which they are defined and may require solving a minimization problem and/or system of equations. The linear polynomial rescaling methods allow for high-degree polynomials but enforce positivity only at limited locations (e.g., quadrature nodes). This work introduces open-source software (HiPPIS) for high-order data-bounded interpolation (DBI) and positivity-preserving interpolation (PPI) that addresses the limitations of both the spline and polynomial rescaling methods. HiPPIS is suitable for approximating and mapping physical quantities such as mass, density, and concentration between meshes while preserving positivity. This work provides Fortran and Matlab implementations of the DBI and PPI methods, presents an analysis of the mapping error in the context of PDEs, and uses several 1D and 2D numerical examples to demonstrate the benefits and limitations of HiPPIS.


INTRODUCTION
Mapping data values from one grid to another is a fundamental part of many computational problems. Preserving certain properties, such as positivity, when interpolating solution values between meshes is important. In many applications [1, 24, 33, 40, 41, 45], failure to preserve the positivity of quantities such as mass, density, and concentration results in negative values that are unphysical. These negative values may propagate to other calculations and corrupt other quantities. Many polynomial-based methods have been developed to address these limitations.
Positivity-preserving methods based on linear polynomial rescaling are introduced in [15, 23, 24, 45, 46]. These polynomial rescaling methods are often used in the context of hyperbolic PDEs, in numerical weather prediction (NWP) [23], combustion simulation [15, 24], and other applications. These methods introduce rescaling parameters obtained from quadrature weights that are used to linearly rescale the polynomial to ensure positivity at the quadrature nodes and conserve mass. These approaches ensure positivity only at the set of mesh points used for the simulation but do not address the case of mapping data values between different meshes, which is the focus of HiPPIS.
Other approaches for preserving positivity that are based on splines can be found in computer-aided design (CAD), graphics, and visualization [10, 18, 20, 34-36]. Several positivity- and monotonicity-preserving cubic splines have been developed. A widely used example of such an approach is the piecewise cubic Hermite interpolation (PCHIP) [10], which is available as open-source code in [27]. In addition, quartic and quintic spline-based approaches have been introduced in [14, 16, 17, 25, 26]. These methods impose some restrictions on the first and second derivatives to ensure monotonicity, positivity, and continuity. For instance, the monotonic quintic spline interpolation (MQSI) methods in [26] and [25] use the sufficient conditions stated in [36] and [44] to check for monotonicity and iteratively adjust the first and second derivative values to enforce monotonicity.
Positivity can also be enforced using ENO-type methods [2, 3, 29, 33], which achieve data-boundedness and positivity by adaptively selecting mesh points to build the stencil used to construct a positive interpolant for each interval.
ENO-type methods use divided differences to develop a sufficient condition for data-boundedness or positivity that is used to guide the stencil selection process. The software introduced in this work is based on the high-order ENO-type data-bounded interpolation (DBI) and positivity-preserving interpolation (PPI) methods in [29]. The work in [29] provides a positivity-preserving method that uses higher-degree polynomials than the other ENO-type methods in [2, 3, 33] and the spline-based methods.
Given that polynomial interpolation is well established and widely used, several implementations of the different polynomial approximation algorithms are available. For example, FunC by Green et al. [12] uses polynomial interpolation with a lookup table for faster approximation of a given function compared to direct evaluation. However, most of these implementations, including FunC, do not preserve data-boundedness and positivity. The implementations available for positivity preservation are based on splines [10, 14] and polynomial rescaling [23, 45]. The spline-based approaches often require solving a linear system of equations to ensure continuity and an optimization problem in the case of quartic and quintic splines. These spline approaches are often limited to fifth-order polynomials and can be computationally expensive in cases where solving a global optimization problem is required. A full suite of test problems comparing the DBI and PPI methods against different spline-based methods, including PCHIP [10], MQSI [26], and shape-preserving splines (SPS) [6], has been undertaken by the authors in [28]. The different polynomial rescaling methods allow for polynomial degrees higher than five and are built as part of larger partial differential equation (PDE) solvers [23, 45].
As previously mentioned, the polynomial rescaling approaches guarantee positivity only at a given set of points, not over the entire domain. The present work provides a high-order software package (HiPPIS), based on [29], that guarantees positivity over the entire domain where the interpolant is defined. In addition, this work evaluates the use of HiPPIS in the context of function approximation and mapping between different meshes. This evaluation provides an analysis of the mapping error in the case of PDEs and numerical examples demonstrating the benefits and limitations of HiPPIS.
The remaining parts of the paper are organized as follows: Section 2 presents the background for the mathematical framework required for the DBI and PPI methods. Section 3 provides the algorithms used to build the software and the descriptions of the different components of HiPPIS. Section 4 provides several 1D and 2D numerical examples, while Section 5 presents an analysis and evaluation of the mapping error in the context of time-dependent PDEs. A discussion and concluding remarks are presented in Section 6.

MATHEMATICAL FRAMEWORK
This section provides a summary and the theoretical background of both the DBI and PPI methods.

Adaptive Polynomial Construction
Both the DBI and PPI methods rely on the Newton polynomial representation [21, 43] to build interpolants that are positive or bounded by the data values. The ability to adaptively choose stencil points to construct the interpolant, as in ENO methods [13], is the key feature employed to develop the data-bounded and positivity-preserving interpolants.
Consider a 1D mesh defined as
$$a = x_0 < x_1 < \cdots < x_N = b,$$
with data values $u_i = u(x_i)$ and a target interval $I_i = [x_i, x_{i+1}]$. The procedure starts by setting the initial stencil $V_0 = \{x_i, x_{i+1}\}$. The stencil $V_0$ in Equation (2) is expanded by successively appending a point to the right or left of $V_\ell$ to form $V_{\ell+1}$.
Once the final stencil $V_{n-1}$ is obtained, the interpolant of degree $n$ defined on $I_i = \{x_i, x_{i+1}\}$ can be written in Newton form with basis functions $q_{0,i}(x) = (x - x_i)$, $q_{1,i}(x) = (x - x_i)(x - x_{\ell_1}), \ldots$, where $x_{\ell_j}$ denotes the point added to expand the stencil $V_{j-2}$ to $V_{j-1}$. The divided differences are recursively defined as
$$U[x_j, \ldots, x_{j+k}] = \frac{U[x_{j+1}, \ldots, x_{j+k}] - U[x_j, \ldots, x_{j+k-1}]}{x_{j+k} - x_j}.$$
The polynomial $U_n(x)$ can be compactly expressed in terms of the normalized coordinate $s(x)$ and coefficients $d_j$ in Equations (5), (6), and (8), which are defined such that $s \in [0, 1]$ and $d_j \ge 0$. The positivity-preserving and data-bounded interpolants are obtained by imposing bounds on the ratios $\lambda_j$ of the higher-order divided differences to the first divided difference $U[x_i, x_{i+1}]$.
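The Newton-form construction above can be sketched in a few lines. The following Python fragment is an illustrative stand-in, not part of the Fortran/Matlab HiPPIS code: `divided_differences` builds the full table with the standard recurrence, and `newton_eval` evaluates the interpolant through all points via nested (Horner-like) multiplication.

```python
import numpy as np

def divided_differences(x, u):
    """Build the divided-difference table with the standard recurrence:
    table[j, k] = U[x_j, ..., x_{j+k}] (illustrative helper, not HiPPIS's API)."""
    n = len(x)
    table = np.zeros((n, n))
    table[:, 0] = u
    for k in range(1, n):
        for j in range(n - k):
            table[j, k] = (table[j + 1, k - 1] - table[j, k - 1]) / (x[j + k] - x[j])
    return table

def newton_eval(x, u, xq):
    """Evaluate the Newton-form interpolant through (x, u) at the points xq."""
    c = divided_differences(x, u)[0, :]        # coefficients U[x_0, ..., x_k]
    xq = np.asarray(xq, dtype=float)
    p = np.full_like(xq, c[-1])
    for k in range(len(x) - 2, -1, -1):        # nested Newton/Horner evaluation
        p = p * (xq - x[k]) + c[k]
    return p
```

With four nodes, the sketch reproduces a cubic exactly; in HiPPIS the coefficients come from the adaptively selected stencil rather than from all points.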

Positivity-Preserving and Data-Bounded Interpolation
For a given interval inside a mesh, the DBI and PPI polynomial interpolant is constructed by adaptively selecting points near the target interval to build an interpolation stencil and polynomial that together meet the requirements for positivity or data-boundedness. Requiring positivity alone can lead to large oscillations and extrema that degrade the approximation, because positivity alone does not restrict how much the interpolant is allowed to grow beyond the data values.
The large oscillations can be removed with the PCHIP, MQSI, and DBI methods. However, in the case where a given interval $I_i$ has a hidden extremum, PCHIP, MQSI, and DBI will truncate the extremum. As in [3, 37], the interval $I_i$ has an extremum when two of the three first divided differences of the intervals $I_{i-1}$, $I_i$, and $I_{i+1}$ are of opposite signs. The constrained PPI algorithm addresses these limitations by allowing the constructed interpolant to grow beyond the data values without producing extrema that are too large.
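The sign condition above can be illustrated with a minimal sketch. The function name and exact form below are hypothetical simplifications for illustration, not the HiPPIS routine: an interval is flagged when neighboring first divided differences (slopes) disagree in sign.

```python
def has_hidden_extremum(x, u, i):
    """Flag interval [x[i], x[i+1]] as containing a hidden extremum when two of
    the three first divided differences on I_{i-1}, I_i, I_{i+1} have opposite
    signs (illustrative check based on the rule stated in the text)."""
    def slope(j):
        return (u[j + 1] - u[j]) / (x[j + 1] - x[j])
    s_left, s_mid, s_right = slope(i - 1), slope(i), slope(i + 1)
    return s_left * s_mid < 0 or s_mid * s_right < 0
```

For strictly monotone data the check is false on interior intervals; a sign change in the neighboring slopes flags the interval for the constrained PPI treatment.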
The positive polynomial interpolant is constrained by the lower and upper bounds $m_i$ and $M_i$ in Equation (10), which are defined in Equation (11). The parameters $\Delta m_i$ and $\Delta M_i$ in Equation (11) are positive, and the data-bounded interpolant is recovered for $\Delta m_i = \Delta M_i = 0$. These parameters are chosen according to Equations (12) and (13). The positive parameters $\epsilon_0$ and $\epsilon_1$, used for intervals without and with extrema, respectively, are introduced to adjust $\Delta m_i$ and $\Delta M_i$. This work extends the bounds in [29] by introducing the parameter $\epsilon_1$ to allow more flexibility in how the interpolants are bounded in cases where an extremum is detected. The choice of the positive parameters $\epsilon_0$ and $\epsilon_1$ depends on the underlying function and the input data used for the approximation. As both $\epsilon_0$ and $\epsilon_1$ get smaller, the upper and lower bounds get tighter and the PPI method converges to the DBI method. The choices for $\epsilon_0$ and $\epsilon_1$ are further discussed in Section 3.2. In Equation (12), the interval $I_i$ has a local maximum if the first divided differences of the neighboring intervals change sign from positive to negative.
The positivity-preserving result in Equation (10) is obtained by successively imposing bounds on the quadratic, cubic, and higher-order terms in the expression of the interpolant defined in Equation (5). The procedure begins by considering the linear and quadratic terms and imposing the bounds in Equation (19), which can be reorganized to obtain Equation (20). Noting that $\frac{1}{s(s-1)} \le -4$, $\frac{1}{s} \ge 1$, and $\frac{1}{1-s} \ge 1$ for $s \in (0, 1)$, the bounds from Equation (20) are extended to the cubic term by requiring that the factor multiplying $\lambda_1$ fit into the inequality in Equation (20); for the cubic case, Equation (20) becomes Equation (21). When $s_2$, defined in Equation (7), is negative, $s - s_2$ attains its maximum at $s = 1$ and its minimum at $s = 0$, and $\lambda_2$ is bounded accordingly. When $s_2$ is positive, $\frac{1}{1 - s_2}$ is replaced by $\frac{1}{-s_2}$ and the directions of the inequalities are reversed. This procedure is continued to quartic and higher-order interpolants to produce the recursive expression for the bounds $\lambda^-_{j+1}$ and $\lambda^+_{j+1}$ on $\lambda_{j+1}$, which take different forms for the DBI and PPI methods. We refer the reader to Theorems 1 and 2 in [29] for more details on the mathematical foundation used to build the positivity-preserving software.

ALGORITHMS AND SOFTWARE
This section describes the algorithms and the different components used in the data-bounded and positivity-preserving software. The software developed in this work provides 1D, 2D, and 3D implementations of the DBI and PPI methods for uniform and nonuniform structured meshes. The 1D implementation is constructed based on the mathematical framework provided in Section 2. The 2D and 3D implementations are obtained via a tensor product of the 1D version.

Algorithms
The algorithms provide the necessary elements to construct the data-bounded or positive interpolants. Rogerson and Meiburg [31] showed that the ENO reconstruction can lead to a left- or right-biased stencil that causes stability issues when used to solve hyperbolic equations. Shu [38] addressed this limitation by introducing a bias coefficient used to target a preferred stencil. As indicated in [29], left- and right-biased stencils can fail to recover hidden extrema, because for a given interval $I_i$ a biased stencil omits the points immediately to one side of $I_i$. Algorithm I addresses these limitations by extending the algorithm in [29] to introduce more options for the adaptive stencil-selection process described below. In addition to the symmetry-based point selection in [29], Algorithm I includes ENO-type and locality-based point-selection processes.
At any given step $j$, the next point inserted into $V_j$ can be to the left or right. Let $x_L$ and $x_R$ be the mesh points immediately to the left and right of $V_j$, respectively. We define $\lambda^-_{j+1}$ and $\lambda^+_{j+1}$ as in Equation (22); they correspond to inserting the point to the left and to the right, respectively. Given $V_j$, let $n_L$ be the number of stencil points to the left of $I_i$ and $n_R$ the number of points to the right. Algorithm I extends the algorithm in [29] by introducing a user-supplied parameter st used to guide the procedure for stencil construction. In the cases where adding either $x_L$ (to the left) or $x_R$ (to the right) is valid, the algorithm makes the selection based on the three cases below:
• If st = 1, the algorithm chooses the point with the smallest divided difference, as in the ENO stencil.
• If st = 2, the point to the left of the current stencil is selected if the number of points to the left of $I_i$ is smaller than the number of points to the right. Similarly, the point to the right is selected if the number of points to the right of $I_i$ is smaller than the number of points to the left. When the numbers of points to the right and left are the same, the algorithm chooses the point with the smaller $\lambda_{j+1}$.
• If st = 3, the algorithm chooses the point that is closest to the starting interval $I_i$. It is important to prioritize the closest points in cases where the intervals surrounding $I_i$ vary significantly in size. These variations are found in computational problems for which different resolutions are used for different parts of the domain.
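When both candidate points satisfy the required bounds, the three selection rules can be sketched as follows. This is illustrative Python; the function name and arguments are hypothetical simplifications of Algorithm I, which additionally verifies the positivity/data-boundedness bounds before either side is considered valid.

```python
def choose_side(st, dd_left, dd_right, n_left, n_right, dist_left, dist_right):
    """Pick 'L' or 'R' when both candidate points satisfy the bounds.
    dd_*:   magnitude of the new divided difference for each candidate;
    n_*:    current number of stencil points on each side of I_i;
    dist_*: distance of each candidate from the starting interval I_i."""
    if st == 1:   # ENO-like: smallest divided difference (smoothness)
        return 'L' if dd_left < dd_right else 'R'
    if st == 2:   # symmetry: balance the stencil around I_i
        if n_left != n_right:
            return 'L' if n_left < n_right else 'R'
        return 'L' if dd_left < dd_right else 'R'
    if st == 3:   # locality: closest point to I_i
        return 'L' if dist_left < dist_right else 'R'
    raise ValueError("st must be 1, 2, or 3")
```

The three branches correspond directly to the st = 1, 2, and 3 cases above; ties in the symmetric case fall back to the ENO-like criterion.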
Algorithm II describes the 1D DBI and PPI methods built using the mathematical framework in Section 2 and Algorithm I. Algorithm II further extends the constraints in [29] by introducing the user-supplied positive parameter $\epsilon_1$, which is used to impose upper and lower bounds on the interpolants according to Equations (12) and (13). The positive parameters $\epsilon_0$ and $\epsilon_1$ are used for intervals without and with an extremum, respectively. The user-supplied parameter im is used to choose between the DBI and PPI methods. Algorithms III and IV describe the extension from 1D to 2D and 3D, respectively. Both are constructed by successively applying Algorithm II to each dimension. Given that the DBI and PPI methods are nonlinear, the order in which Algorithm II is applied can lead to different approximation results. In this paper and in [29], the 1D DBI and PPI methods are first applied along the x dimension, then the y dimension, and finally the z dimension, as indicated in Algorithms III and IV. In Algorithms III and IV, the input and intermediate data values are modified only by Algorithm II, which preserves data-boundedness or positivity.
Therefore, the resulting solutions from the 2D and 3D extensions preserve data-boundedness and positivity. As in Algorithm II, the choices of the parameters st, $\epsilon_0$, and $\epsilon_1$ influence the quality of the approximation in Algorithms III and IV. In the 1D, 2D, and 3D cases, the choice of the parameters st, $\epsilon_0$, and $\epsilon_1$ depends on the data. In the 2D and 3D cases, applying the 1D PPI method (Algorithm II) along the x and/or y dimensions may introduce oscillations that could be amplified when applying the 1D PPI method in the subsequent dimensions. These oscillations can be significantly reduced with small values of $\epsilon_0$ and $\epsilon_1$. The parameters $\epsilon_0$ and $\epsilon_1$ should be chosen small enough that hidden extrema are recovered without introducing new large oscillations. In Algorithms III and IV, $M_{\square\times\square}$ and $M_{\square\times\square\times\square}$ represent 2D and 3D meshes obtained by taking the tensor product of 1D meshes along the x and y dimensions for the 2D mesh, and along the x, y, and z dimensions for the 3D mesh, where $\square$ stands for an input or output mesh coordinate.
Algorithm I
If two of the three first divided differences of the intervals $I_{i-1}$, $I_i$, and $I_{i+1}$ are of opposite signs, then the interval $I_i$ has a hidden local extremum. For the boundary intervals, we assume that the divided differences to the left and right have the same sign.
(5) Given a stencil $V_j$, if the candidate point on one side violates the required bounds, then insert the new stencil point on the other side;
(6) This process (Steps 3-5) iterates until the halting criterion is met: the ratio of divided differences lies outside the required bounds stated above, or the stencil has $d + 1$ points, with $d$ being the target degree for the interpolant.

Algorithm III (2D)
Input: …, st, $\epsilon_0$, and $\epsilon_1$. Output: $\{\tilde{u}_{i,j}\}$.
(1) Interpolate from the mesh $M_{x\times y}$ to $M_{\tilde{x}\times y}$ by applying Algorithm II (1D) along the x dimension for each index $j$ along the y direction.
(2) Use the solution $\{u_{i,j}\}$ from step (1) to interpolate from $M_{\tilde{x}\times y}$ to the final output mesh $M_{\tilde{x}\times\tilde{y}}$ by applying Algorithm II (1D) along the y dimension for each index $i$ along the x direction.
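The dimension-by-dimension structure of Algorithm III can be sketched as follows. This is a schematic of the tensor-product extension, not the HiPPIS implementation: `interp1d` is a stand-in for Algorithm II (here plain linear interpolation, which is trivially data-bounded), and `map2d` applies it first along x and then along y.

```python
import numpy as np

def interp1d(x, u, xout):
    """Stand-in for Algorithm II (1D DBI/PPI); here plain linear
    interpolation, which is trivially data-bounded."""
    return np.interp(xout, x, u)

def map2d(x, y, u, xout, yout):
    """Map u (shape len(x) x len(y)) to the xout x yout tensor-product mesh,
    applying the 1D routine one dimension at a time."""
    # Step 1: interpolate along x for each y index -> intermediate mesh (xout, y)
    tmp = np.empty((len(xout), len(y)))
    for j in range(len(y)):
        tmp[:, j] = interp1d(x, u[:, j], xout)
    # Step 2: interpolate along y for each xout index -> final mesh (xout, yout)
    out = np.empty((len(xout), len(yout)))
    for i in range(len(xout)):
        out[i, :] = interp1d(y, tmp[i, :], yout)
    return out
```

Swapping the loop order changes the result for a nonlinear 1D routine such as PPI, which is why the paper fixes the x-then-y (then z) order.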

Software Description
The DBI and PPI software implementation is guided by the algorithms described above. HiPPIS is available at https://github.com/ouermijudicael/HiPPIS. The software can be organized into four major parts: (1) computation of divided differences, (2) calculation of upper and lower bounds for each interval, (3) a stencil construction procedure, and (4) 1D, 2D, and 3D DBI and PPI implementations.
The divided differences are essential to the DBI and PPI methods because they are used in the calculations of $\lambda_j$ and in the stencil selection process. The divided differences are computed using the standard recurrence form and stored in a table of dimension $N \times (d + 1)$, where $d$ is the maximum polynomial degree for each interpolant. Given that the maximum degree is $d$, it is sufficient to consider the $d + 1$ divided differences for the stencil selection process and the construction of the final polynomial interpolant for each interval.
The bounds on each interpolant are obtained from Equations (11), (12), and (13), where the positive parameters $\epsilon_0$ and $\epsilon_1$ are user-supplied values used to adjust the bounds for the intervals without and with an extremum, respectively. The adjustment focuses on removing large oscillations as much as possible while still allowing high-degree polynomial interpolants that meet the positivity requirements.
The stencil selection process requires the computation of $\lambda^+_{j+1}$ and $\lambda^-_{j+1}$, which are both dependent on $x_L$, $x_R$, and $\lambda_j$. The stencil $V_j$ is constructed from $V_{j-1}$ by appending a point to the left or right of $V_{j-1}$. When appending to either the right or left meets the requirements for positivity, the software offers three options, set by the user, for choosing between the two points. The first option (st = 1) chooses the point with the smallest divided difference, similar to the ENO-like approach. The second option (st = 2) prioritizes the choice that makes the stencil more symmetric around $I_i$. The third and default option (st = 3) chooses the point closest to the starting interval $I_i$, thus prioritizing locality.
The 1D DBI and PPI methods use components (1), (2), and (3) above as building blocks to construct the final approximation. Once the final stencil has been selected, the interpolant is built using a Newton polynomial representation and then evaluated at the corresponding output points. The Newton representation is used here because its coefficients, the divided differences, are already available. The 2D and 3D implementations successively use the 1D version along each dimension to construct the final approximation on uniform and nonuniform structured meshes.
The interfaces for the 1D, 2D, and 3D DBI and PPI subroutines are designed to be similar to widely used interfaces for polynomial interpolation, such as PCHIP, and can be incorporated into larger application codes. The interfaces require
• the input mesh points and the data values associated with those points,
• the maximum polynomial degree to be used for each interpolant,
• the interpolation method to be used (DBI or PPI), and
• the output mesh points.
Listing 1 shows examples of how to use the 1D, 2D, and 3D interfaces for DBI and PPI in HiPPIS. The variables x, y, and z are 1D arrays used to define the input meshes, and xout, yout, and zout are used to define the output meshes. The variables v, v2D, and v3D correspond to the input data values associated with the input meshes. The parameters d and im are used to indicate the target polynomial degree and the interpolation method to be used. For DBI and PPI, the parameter im is set to 1 and 2, respectively. The parameters st, $\epsilon_0$, and $\epsilon_1$ are optional and are set to 3, 0.01, and 1 by default, as explained below. The choice of the optional parameters depends on the underlying function and the input data.
In problems for which different resolutions are used for different parts of the computational domain, st=3 is a preferable choice: the algorithm then prioritizes the points closest to the starting interval $I_i$, which is particularly important in regions where the sizes of the intervals vary significantly. For cases where smoothness is the primary goal, st=1 is a suitable choice: Algorithm I then prioritizes smoothness by choosing the points with the smallest divided differences during the stencil construction process. Both st=1 and st=3 can lead to a left- or right-biased stencil. In these instances, st=2 can be used to remove the bias, since the algorithm then prioritizes a symmetric stencil.
The default value of st is set to 3 because the examples in this study indicate that st=3 leads to better approximations than st=1 or st=2, and locality is often a highly desired property in many computational problems.
The positive parameters $\epsilon_0$ and $\epsilon_1$ are used to bound the interpolants for the intervals without and with extrema, respectively. The configurations in [29] and [28] correspond to setting the parameters $\epsilon_0$ and $\epsilon_1$ to the default values of 0.01 and 1, respectively. The values of $\epsilon_0$ and $\epsilon_1$ are chosen such that the lower and upper bounds on each interpolant are relaxed enough to allow a high-order polynomial that does not introduce undesirable oscillations. For profiles that are prone to oscillation, such as the logistic functions, it is important to choose small values for $\epsilon_0$ and $\epsilon_1$: for $N_x \times N_y = 17 \times 17$, the approximation leads to large oscillations if $\epsilon_0$ and $\epsilon_1$ are greater than $10^{-4}$. For intervals without extrema, it is important to keep $\epsilon_0$ small so as not to introduce new extrema. For the intervals with extrema, $\epsilon_1$ needs to be large enough to allow recovery of hidden extrema but small enough not to cause undesired large oscillations. This is very challenging given that the sizes of the peaks are not known a priori. The default value $\epsilon_1 = 1$ is such that the interpolant maximum value is twice $\max(u_i, u_{i+1})$. This default value of one is sufficient for the modified Runge and TWP-ICE examples. However, in the case of BOMEX, smaller values of $\epsilon_1 \le 10^{-5}$ are required to remove undesired oscillations. In practice, it is prudent to start with small values for $\epsilon_0$ and $\epsilon_1$ and increase them as needed if the approximation fails to recover hidden extrema or uses low-degree polynomial interpolants.
Figure 1 is a diagram of the different components of the main module of HiPPIS. The function divdiff(...) is used to calculate the divided differences needed for Algorithms I and II. Once the final stencil is constructed, the function newtonPolyVal(...) is used to build and evaluate the positive interpolant at the corresponding output points. The major part of the data-boundedness and positivity preservation, including Algorithms I and II, is in the function adaptiveInterpolation1D(...). This function is used for the 1D approximation or mapping problems and depends on the functions divdiff(...) and newtonPolyVal(...). The functions adaptiveInterpolation2D(...) and adaptiveInterpolation3D(...) use adaptiveInterpolation1D(...) to construct the data-bounded or positive polynomial approximations on 2D and 3D structured tensor-product meshes, respectively. The interfaces for the 1D, 2D, and 3D interpolations, in bold, require the parameter im, which is used to indicate the interpolation method chosen. For the DBI and PPI methods, the parameter im is set to 1 and 2, respectively. HiPPIS does not allow any other choices for the parameter im.

NUMERICAL EXAMPLES
For most of the numerical examples that follow, the three stencil-selection options lead to similar errors, with st = 3 leading to slightly smaller errors than st = 1 and st = 2. Given that the results are similar, Tables 1-8 show errors with the parameter st set to 3. For the BOMEX example, the errors from the three choices are significantly different; therefore, the results from all three choices are included. More test examples can be found in [28].

Example I: Modified Runge Function
This example uses a modified version of the canonical Runge function, defined in Equation (23). Approximating the modified Runge function in Equation (23) with a global standard polynomial leads to large oscillations.
Table 1 shows the $L^2$-error norms when using the PCHIP, MQSI, DBI, and PPI methods to approximate $f_1(x)$. The DBI and PPI methods lead to better approximation results than the PCHIP and MQSI methods. The errors from PCHIP and MQSI are comparable; in this example, the algorithms used in MQSI to approximate and adjust the derivatives to enforce monotonicity do not produce more accurate results than PCHIP. As the target polynomial degree increases from $d = 4$ to $d = 8$, the DBI approximation does not improve as significantly as the PPI approximation. The relaxed nature of the PPI method allows for higher-degree polynomial interpolants than DBI, PCHIP, and MQSI, which leads to better approximations.

Example II: 1D Logistic Function
This test case uses a logistic function, defined in Equation (24). This function is a smoothed analytical approximation of the Heaviside step function. The logistic function in Equation (24) is challenging because of the steep gradient around $x = 0$. Approximating $f_2(x)$ with a standard polynomial interpolation leads to large oscillations to the left and right of the gradient. In addition, the oscillations to the left produce negative values.

Example III: 1D Discontinuous Function
This example uses a modified version of a function introduced by Tadmor and Tanner [42] and used by Berzins [2] in the context of bounded interpolation methods. The value one is added to the original function in [42] to ensure that the function is positive over the interval [-1, 1]. The modified function is defined in Equation (25). Table 3 shows $L^2$-error norms when using the PCHIP, MQSI, DBI, and PPI methods to approximate the function in Equation (25). Approximating $f_3(x)$ is challenging because $f_3(x)$ is a piecewise function with a discontinuity at $x = 0.5$. The global error is dominated by the local errors around the discontinuity. The PCHIP, MQSI, DBI, and PPI approximation results are comparable. Increasing the target polynomial degree does not decrease the $L^2$-error norms.
The approximations in the smooth regions improve as we increase the target polynomial degree, but the global error is dominated by the error around the discontinuity, which does not decrease with higher polynomial degrees. Oscillations appear near the discontinuity for $\epsilon_0 = 1$ and are removed for $\epsilon_0 \le 0.1$. As expected, all the interpolation methods have difficulty approximating the function around the discontinuity, as shown in Figure 3.

Example IV: 2D Modified Runge Function
This example extends the previously modified 1D Runge function to 2D, as defined in Equation (26). Table 4 shows $L^2$-error norms when using the PCHIP, MQSI, DBI, and PPI methods to approximate the 2D modified Runge function in Equation (26). In this example, as in Example I, the MQSI and PCHIP methods have comparable errors. The approaches used to approximate the derivatives and construct the quintic splines are not more accurate than the approximation obtained from PCHIP. The DBI and PPI methods with st = 3 lead to better approximation results than PCHIP and MQSI. As the target polynomial degree $d$ increases, the approximation errors from PPI decrease much faster than those from DBI. The relaxed nature of the PPI method allows for higher-degree polynomials than DBI: the bounds for data-boundedness, which are based on Equation (11) with $\Delta m_i = \Delta M_i = 0$, are more restrictive than those for positivity, for which $\Delta m_i > 0$ and $\Delta M_i > 0$. In addition, the approximation does not lead to oscillations for the chosen values of $\epsilon_0$ and $\epsilon_1$.

Example V: 2D Logistic Function
This test case extends the 1D logistic (smoothed Heaviside) function from Example II to 2D, as defined in Equation (27). The function $f_5(x, y)$ is challenging because of the large gradient at $y = -x$. Approximating $f_5(x, y)$ with a standard polynomial interpolation leads to oscillations and negative values that violate the desired property of positivity.
Table 5 shows $L^2$-error norms when using the PCHIP, MQSI, DBI, and PPI methods to approximate the 2D logistic function $f_5(x, y)$ defined in Equation (27). The errors from MQSI are larger than those of the other methods; in this example, the approximated derivatives and quintic splines from MQSI are less accurate than the approximations from PCHIP. The DBI and PPI methods lead to better approximation results than the PCHIP and MQSI approaches.
Increasing the target polynomial degree improves the approximations for DBI and PPI, as shown in Table 5. The global error is dominated by the local errors around the steep gradient at $y = -x$. For $N^2 = 33^2$ points and above with $\epsilon_0 = 0.01$, $\epsilon_1 = 1.0$, and st = 3, the DBI and PPI methods choose identical stencils for the intervals around the steep gradient. Therefore, the global error, which is dominated by the local error around the steep gradient, is the same for both DBI and PPI.
However, for $N^2 = 17^2$ points with $\epsilon_0 = 0.01$, $\epsilon_1 = 1.0$, and st = 3, the DBI and PPI methods choose different stencils that lead to different errors, as indicated in the row with $N^2 = 17^2$ of Table 5. Figure 4 shows approximation plots of $f_5(x, y)$ using $N_x \times N_y = 17 \times 17$ uniformly spaced points with PCHIP, MQSI, and PPI. The approximations in Figures 4c and 4e are obtained using PPI with $\epsilon_0 = \epsilon_1 = 1$. The PPI method with $\epsilon_0 = \epsilon_1 = 10^{-4}$ is used for the approximations in Figures 4d and 4f. The target polynomial degrees for the second and third rows are set to $d = 4$ and $d = 8$, respectively. The oscillations increase when going from $P_4$ to $P_8$ with $\epsilon_0 = \epsilon_1 = 1.0$. For $\epsilon_0 = \epsilon_1 = 10^{-4}$, the oscillations are significantly reduced, and the approximation is closer to the target solution.

Example VI: $C^0$-Continuous Surface Function
This example is based on a 2D function used to study positive and monotonic splines [5, 30], defined in Equation (28). The function $f_6(x, y)$ is challenging because it is only $C^0$-continuous at various locations.
Table 6 shows $L^2$-error norms when using the PCHIP, MQSI, DBI, and PPI methods to approximate the 2D function $f_6(x, y)$ defined in Equation (28). The PCHIP, MQSI, DBI, and PPI methods lead to comparable $L^2$-error norms. Increasing the target polynomial degree does not significantly improve the approximation for DBI and PPI, as shown in Table 6.
The global error is dominated by the local errors around the locations of $C^0$-continuity. The approximation for both DBI and PPI can be improved by using an underlying mesh that better captures these locations.
Figure 5 shows approximation plots of $f_6(x, y)$ using $N_x \times N_y = 17 \times 17$ uniformly spaced points with PCHIP, MQSI, and PPI. The left and right approximation plots show approximated solutions using the PPI method with $\epsilon_0 = \epsilon_1 = 1.0$ and $\epsilon_0 = \epsilon_1 = 10^{-4}$, respectively. The target polynomial degree is set to $d = 4$ and $d = 8$ for the second and third rows, respectively. For $\epsilon_0 = \epsilon_1 = 1.0$, the oscillations increase as the target polynomial degree increases from $d = 4$ to $d = 8$. The oscillations observed for $\epsilon_0 = \epsilon_1 = 1$ are removed for small values of $\epsilon_0$ and $\epsilon_1$, as shown in Figures 5d and 5f.

MAPPING ERROR ANALYSIS AND EXAMPLES
This section provides an analysis of the mapping error between two different meshes in the context of time-dependent PDEs when DBI and PPI are employed to interpolate data values between the meshes. In addition to the error analysis, the Runge, tropical warm pool international cloud experiment (TWP-ICE), and Barbados Oceanographic and Meteorological Experiment (BOMEX) examples are used to evaluate the use of the DBI and PPI methods to map data values between two meshes. The Runge and TWP-ICE examples use meshes that emulate the advection and reaction meshes used in NEPTUNE; these meshes are constructed by linearly scaling the NEPTUNE vertical mesh points to the desired interval. In the BOMEX example, the advection mesh is composed of uniformly spaced points, and the reaction mesh is constructed using the midpoint of each interval of the advection mesh.

Mapping Error Analysis
In addition to the development and study of the DBI and PPI methods, it is important to provide some insight into the behavior of the mapping error in the context of time-dependent PDEs. An example of a time-dependent problem where a positivity-preserving mapping is required is the US Navy Environmental Prediction System Using the NUMA Core (NEPTUNE) [19]. NEPTUNE is a next-generation global NWP system being developed at the Naval Research Laboratory (NRL) and the Naval Postgraduate School (NPS). In NEPTUNE, the physics and dynamics are calculated on different meshes, which requires mapping the solution values between both meshes. NEPTUNE uses nonuniform structured meshes that have vertical columns with nonuniformly spaced points inside each column. The mapping must preserve positivity for quantities such as density and cloud water mixing ratio. The cloud water mixing ratio is the amount of cloud water in air. At each time step, the dynamics (advection) solutions, which are calculated on the advection mesh, are mapped to the reaction mesh to be used as input for the physics calculations. The physics results are then mapped back to the dynamics mesh to be used as input for the next time step. Enforcing positivity alone may still lead to large oscillations and approximation errors. Using the DBI method will remove the large oscillations but will truncate any hidden extremum and may be too restrictive for high-order accuracy in some cases. For simulations in which different structured meshes are used and mapping is required, the errors from both the DBI and unconstrained PPI will propagate into other calculations and may even cause the simulation to fail. This section provides an analysis of the mapping error when interpolating from one mesh to another and back to the starting mesh. The mapping error is considered within time-dependent PDEs. For example, when interpolating the data values between the advection and reaction meshes in NEPTUNE, a mapping error is introduced in addition to the
physics and time integration errors. The error in approximating a function f(x) with the Newton polynomial p_n(x) over the interval I_i is

E_i(x) = f(x) − p_n(x) = \frac{f^{(n+1)}(η)}{(n+1)!} \prod_{j=0}^{n} (x − x_j),    (29)

where η ∈ [x_{I_i}, x_{I_i+n}]. Given that η and f^{(n+1)} are not known, the local interpolation error in Equation (29) can be approximated as follows:

E_i(x) ≈ f[x_0, x_1, …, x_{n+1}] \prod_{j=0}^{n} (x − x_j),    (30)

where f[x_0, x_1, …, x_{n+1}] is the divided difference over the stencil points augmented with one additional point x_{n+1}. The error approximation in Equation (30) is based on the mean value theorem for divided differences, which states that

f[x_0, x_1, …, x_{n+1}] = \frac{f^{(n+1)}(ξ)}{(n+1)!},  ξ ∈ [x_0, x_{n+1}].

Equation (30) approximates the local interpolation error for each interval when mapping from one set of points to another. To consider a mapping error for interpolating from one mesh to another and back to the starting mesh, let M_a and M_r be the advection and reaction meshes, respectively. In addition, let I_{ar} and I_{ra} be the interpolation operators that map a given set of data values from M_a to M_r and from M_r to M_a, respectively. We consider an advection-reaction problem where the advection part is calculated on M_a and the reaction on M_r. A simple forward Euler time integration is used. Let ū^t and û^t be the approximate and the exact solution at time t. The dynamics/advection part is written as

ū_1^{t+Δt} = ū^t + Δt A(ū^t),    (31)

and the physics/reaction part is expressed as

ū^{t+Δt} = I_{ra}( ū_r^{t+Δt} + Δt R(ū_r^{t+Δt}) ),    (32)

where ū_r^{t+Δt} = I_{ar}(ū_1^{t+Δt}). Let Ē^{t+Δt} be the global space and time error accumulated up to t + Δt after the advection in Equation (31) and before mapping the solution values to M_r; Ē^{t+Δt} does not include the mapping errors at t + Δt. The final solution after applying the operator I_{ra} in Equation (32) is

ū^{t+Δt} = H(ū_1^{t+Δt}),  where  H(·) = I_{ra}( I_{ar}(·) + Δt R(I_{ar}(·)) ).    (33)

The true solution û^{t+Δt} at the end of time step t + Δt, after the mapping from M_a to M_r and back from M_r to M_a, can be expressed as

û^{t+Δt} = Ĥ(û_1^{t+Δt}),    (34)

where Ĥ is assumed to be the corresponding "exact" operator for H. Subtracting Equation (34) from (33) gives an expression for the true error that can be written as

E^{t+Δt} = Ĥ(û_1^{t+Δt}) − H(ū_1^{t+Δt}),

where E^{t+Δt} is the global space and time error including the mapping errors at t + Δt. Adding and subtracting H(ū_1^{t+Δt} + Ē^{t+Δt}) yields

E^{t+Δt} = [ Ĥ(û_1^{t+Δt}) − H(û_1^{t+Δt}) ] + [ H(ū_1^{t+Δt} + Ē^{t+Δt}) − H(ū_1^{t+Δt}) ].

Using a Taylor expansion of H(ū_1^{t+Δt} + Ē^{t+Δt}) about ū_1^{t+Δt} and dropping the high-order terms, we can approximate the total error as

E^{t+Δt} ≈ E_I^{t+Δt} + H′(ū_1^{t+Δt}) Ē^{t+Δt},    (35)

where E_I^{t+Δt} = Ĥ(û_1^{t+Δt}) − H(û_1^{t+Δt}) is the mapping error. The
results in Equation (35) indicate that the total error is dependent on
• the existing global space and time error Ē^{t+Δt}, which does not include the mapping error at t + Δt,
• the mapping error E_I^{t+Δt} = Ĥ(û_1^{t+Δt}) − H(û_1^{t+Δt}) at t + Δt, and
• a multiplier H′(ū_1^{t+Δt}) of the existing global space and time error Ē^{t+Δt}.
Mapping data values from M_a to M_r and back to M_a introduces interpolation errors that degrade the solution if E_I^{t+Δt} is greater than the existing global space and time error Ē^{t+Δt}. This problem is resolved when the mapping error is kept smaller than the existing global space and time error. Similar ideas in the context of time-dependent differential equations are explored in [4,11,22]. The studies in [11] and [22] develop strategies for balancing the space and time errors for better error control and improved performance, while [4] shows that in mesh adaptivity the spatial interpolation error must be controlled and kept smaller than the temporal error.
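The local error estimate in Equation (30) can be checked numerically. The sketch below (Python, with illustrative names; not part of HiPPIS) compares the true error of a cubic interpolant of f(x) = eˣ against the divided-difference estimate obtained by appending one extra point to the stencil.

```python
import numpy as np

def divided_difference(xs, ys):
    """Highest-order Newton divided difference f[xs[0], ..., xs[-1]]."""
    xs = np.asarray(xs, dtype=float)
    coef = np.array(ys, dtype=float)
    for j in range(1, len(xs)):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (xs[j:] - xs[:-j])
    return coef[-1]

f = np.exp
nodes = np.array([0.0, 0.1, 0.2, 0.3])        # stencil of the cubic interpolant p_3
x = 0.15                                      # evaluation point inside the stencil

# True local error f(x) - p_3(x):
p3_at_x = np.polyval(np.polyfit(nodes, f(nodes), 3), x)
true_err = f(x) - p3_at_x

# Estimate (30): divided difference over the stencil plus one extra point,
# times the Newton basis product prod_j (x - x_j).
extra = np.append(nodes, 0.4)
est_err = divided_difference(extra, f(extra)) * np.prod(x - nodes)
```

For a smooth function such as eˣ the estimate tracks the true error closely, which is the behavior the per-interval error control in Equation (30) relies on.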

1D Modified Runge Function
This example is based on the modified version of the Runge function defined in Equation (23) and two meshes that emulate the dynamics/advection (M_a) and physics/reaction (M_r) meshes used in NEPTUNE. No actual PDE is solved in this example. Here, we consider the trivial case where the identity operator is used to represent both the advection and the reaction. The function f_1(x) is evaluated on the advection mesh M_a to create the initial data values. Given that the identity operator is used for both the advection and reaction, these initial data values are mapped to the reaction mesh M_r and back to the starting mesh M_a. Using the identity operator allows for a study of the mapping error without the influence of the advection and reaction.
Table 7 shows L²-norms of the mapping errors over the grid points for f_1(x) when using the PCHIP, MQSI, DBI, and PPI methods to map the data values from the advection mesh to the reaction mesh and back to the advection mesh. For N = 64 points, increasing the interpolant degree does not significantly improve the approximation. The global error is dominated by the local error in the regions with steep gradients to the left and right of the peak at x = 0.
The mapping errors can be improved by increasing the resolution and adding more points in the regions with steep gradients. The resolution is increased by adding one or three uniformly spaced points in each interval of the initial profile with 64 points. Increasing the resolution leads to better approximations when mapping data values between both meshes, and the error decreases as the polynomial degree increases from 3 to 7. This example demonstrates that in cases with steep gradients, using the PPI method with high-order interpolants may not significantly improve the approximation unless there is sufficient resolution. To benefit from the positivity constraints and the high-order interpolants, it is important to be in the regime where the problem has sufficiently many points to observe convergence as the polynomial degree increases. Overall, the PPI method leads to smaller errors compared to the other methods as the resolution and polynomial degree increase.
Table 7. L²-norm of mapping errors for the modified Runge function f_1(x) when using the PCHIP, MQSI, DBI, and PPI methods to map the data values from the advection mesh to the reaction mesh and back to the advection mesh. The target polynomial degrees are set to d = 3, d = 5, and d = 7. N represents the number of input points used for both meshes. The parameter st is set to 3.
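The round-trip measurement above can be sketched in a few lines. Since HiPPIS itself ships as Fortran and Matlab, the sketch below uses SciPy's PCHIP (one of the compared methods) as the mapping operator, and an assumed form of the modified Runge profile; it illustrates only the advection-mesh → reaction-mesh → advection-mesh error measurement, not the paper's exact setup.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def runge(x):
    """Assumed stand-in for the modified Runge profile f_1 (positive, peak at x = 0)."""
    return 0.1 / (0.1 + 25.0 * x**2)

def round_trip_error(n):
    """Map the profile from the advection mesh to the midpoint reaction mesh and back."""
    xa = np.linspace(-1.0, 1.0, n)            # advection mesh M_a
    xr = 0.5 * (xa[:-1] + xa[1:])             # reaction mesh M_r: interval midpoints
    ua = runge(xa)
    ur = PchipInterpolator(xa, ua)(xr)        # M_a -> M_r
    ub = PchipInterpolator(xr, ur)(xa)        # M_r -> M_a (endpoints are extrapolated)
    return np.sqrt(np.mean((ub - ua) ** 2))

errors = {n: round_trip_error(n) for n in (64, 128, 256)}
```

The round-trip error shrinks as the mesh is refined, mirroring the resolution study reported in Table 7.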

TWP-ICE Example
This study uses the tropical warm pool international cloud experiment (TWP-ICE) test case from the Common Community Physics Package (CCPP) [8]. The input mesh for the simulation is configured to emulate a vertical column in NEPTUNE.
The simulation result at time t = 1440 s is extracted, scaled, and used to evaluate different interpolation approaches when mapping solution values between the advection and reaction meshes. The domain and range are scaled to [−1, 1] and [0, 1], respectively. This study considers the cloud water mixing ratio profile, which represents the amount of cloud water in air. The extracted profile is then fitted using a radial basis function interpolation to construct an analytical function that can be used as the starting point of the mapping evaluation. The radial basis function is based on multiquadrics. The shape parameter c is approximated using cross validation [7]. The initial values are obtained by evaluating the analytical function on the advection mesh. These values are then mapped to the reaction mesh and back to the advection mesh.
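A minimal version of this fitting step is sketched below. The classical multiquadric kernel √(r² + c²) and a leave-one-out cross-validation loop for the shape parameter c are assumptions here; the paper's exact kernel scaling and validation procedure may differ, and the profile is a synthetic stand-in for the scaled TWP-ICE data.

```python
import numpy as np

def mq_matrix(xq, xc, c):
    """Multiquadric kernel matrix sqrt(r^2 + c^2) between points xq and centers xc."""
    return np.sqrt((xq[:, None] - xc[None, :]) ** 2 + c**2)

def mq_fit(x, y, c):
    """Solve for interpolation weights so the surrogate matches the data exactly."""
    return np.linalg.solve(mq_matrix(x, x, c), y)

def loo_error(x, y, c):
    """Leave-one-out cross validation: refit without point i, measure misfit at x_i."""
    err = 0.0
    for i in range(len(x)):
        m = np.arange(len(x)) != i
        w = mq_fit(x[m], y[m], c)
        pred = (mq_matrix(x[i:i + 1], x[m], c) @ w)[0]
        err += (pred - y[i]) ** 2
    return err

x = np.linspace(-1.0, 1.0, 15)
y = np.exp(-4.0 * x**2)                 # synthetic stand-in for the scaled profile
c_best = min([0.05, 0.2, 0.8], key=lambda c: loo_error(x, y, c))
w = mq_fit(x, y, c_best)                # surrogate: sum_j w_j * sqrt((x - x_j)^2 + c^2)
```

The resulting analytical surrogate can then be sampled on any advection mesh, which is the role the fitted function plays in this study.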
Table 8 shows L²-norms of the mapping errors for the extracted profile when using the PCHIP, DBI, and PPI methods to map the data values from the advection mesh to the reaction mesh and back to the advection mesh. For N = 64, the global error is dominated by the local error at a few points located in the regions with steep gradients. Increasing the polynomial degree does not significantly improve the approximation compared to using PCHIP for N = 64. More points are required to better approximate the underlying profile in the regions with steep gradients. The resolutions are increased by adding one and three uniformly spaced points in each interval of the initial N = 64 mesh points. Table 8 shows that with the increased resolution, the approximation improves as the polynomial degree increases. The number of points used in each region with steep gradients increases as more points are added. This example provides an application example using simulation data from TWP-ICE. In cases of coarse resolution (64 points), the PCHIP and MQSI results are comparable, and going to higher-degree interpolants does not significantly improve the approximation for DBI and PPI. The approximation improves with higher-degree interpolants when the resolution is increased, as shown in Table 8. The results from this experiment suggest that increasing the resolution is needed for the mapping between meshes to benefit from the high-order interpolants of the PPI methods.
Table 8. L²-norm of mapping errors for the TWP-ICE profile when using the PCHIP, MQSI, DBI, and PPI methods to map the data values from the advection mesh to the reaction mesh and back to the advection mesh. The target polynomial degrees are set to d = 3, d = 5, and d = 7. N represents the number of input points used for both meshes. The parameter st is set to 3.

BOMEX Example
Here, a maritime shallow convection example based on the 1D Barbados Oceanographic and Meteorological Experiment (BOMEX) [9] is used to study the effects of the different interpolation methods in an application. BOMEX is a single-column shallow convection test case used to measure and study changes in temperature, wind, water vapor mixing ratio, rain water mixing ratio, and cloud water mixing ratio. The mixing ratios represent the amount of water vapor, rain water, and cloud water in air. In this simulation, the dynamics/advection is modeled by a one-dimensional advection equation in which the state variable q contains the wind, temperature, water vapor mixing ratio, cloud water mixing ratio, and rain water mixing ratio. The dynamics are approximated using fifth-order weighted essentially nonoscillatory (WENO) and third-order Runge-Kutta methods [39]. The physics component of the simulation uses the hybrid eddy-diffusivity mass-flux and free atmospheric turbulence (hybrid EDMF) and Kessler microphysics schemes from [8] to alter the results of the dynamics and incorporate the physics properties. The dynamics and physics results are calculated on different meshes. At each time step, the dynamics are calculated on the advection mesh M_a, and the results are interpolated to the reaction mesh M_r for use by the physics routines. The physics terms are calculated on the reaction mesh M_r, and the results are interpolated back to the advection mesh M_a.
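The per-time-step coupling just described has the following shape. In this Python sketch, PchipInterpolator stands in for the HiPPIS mapping operators I_ar and I_ra, and the advection and reaction operators are toy placeholders rather than the WENO and hybrid EDMF/Kessler schemes:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

xa = np.linspace(0.0, 1.0, 65)                # advection mesh M_a
xr = 0.5 * (xa[:-1] + xa[1:])                 # reaction mesh M_r (interval midpoints)
u = 1.5 + np.sin(2.0 * np.pi * xa)            # positive initial state (toy profile)
dt = 0.01

def advect(v):                                # placeholder dynamics step (identity,
    return v                                  # as in the Runge study of this section)

def react(v):                                 # placeholder physics step: mild decay
    return v * (1.0 - 0.5 * dt)

for _ in range(10):                           # one mapping round trip per time step
    u1 = advect(u)
    ur = PchipInterpolator(xa, u1)(xr)        # map M_a -> M_r for the physics
    ur = react(ur)
    u = PchipInterpolator(xr, ur)(xa)         # map M_r -> M_a for the next step
```

Because PCHIP is bounded by its data, the mapped state stays positive here; the point of HiPPIS is to provide the same guarantee at higher order.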
As in [32], let q_c be the cloud water mixing ratio profile in the different experiments. Figures 6a-7f show the cloud mixing ratio profile q_c at t = 5 hours that is used as input for the physics routines. The physics calculations require positive input values for q_c. Figure 6a shows the target profile for q_c. This target profile is obtained by using the same mesh for both the dynamics and physics calculations, where mapping is not required and q_c remains positive during the simulation. In addition, as the temporal and spatial resolution increases, q_c converges to the profile shown in Figure 6a. Figure 6b shows the cloud mixing ratio profiles q_c for the target and approximated solutions at t = 5 hours. In the case of the approximated solution, a fifth-order standard polynomial interpolation is used when mapping between the advection and reaction meshes. For a given interval I_i, the polynomial interpolant is constructed using the stencil V_4 = {x_{i−2}, x_{i−1}, x_i, x_{i+1}, x_{i+2}, x_{i+3}}. At the boundary and near-boundary intervals, the stencil V_4 is biased toward the interior of the domain. The results in Figure 6b demonstrate that using the standard polynomial interpolation leads to oscillations, negative values, and an overestimation of the peak and total cloud mixing ratio of the profile q_c. Using standard polynomial interpolation leads to an overproduction of the total cloud mixing ratio by 93.45%. The peak is max(q_c) = 0.46 g/kg, which is larger than the target peak max(q_c) = 0.28 g/kg.
The negative values in Figure 6b can be removed via "clipping", a procedure that consists of removing the negative values by setting them to zero. Figure 6c shows the cloud mixing ratio profiles for the target solution and an approximated solution that uses "clipping" to remove the negative values at each time step. The approximated solution uses a standard interpolation to map the data values from one mesh to another. The interpolant for each interval is constructed using the stencil V_4 = {x_{i−2}, x_{i−1}, x_i, x_{i+1}, x_{i+2}, x_{i+3}} with a fifth-order polynomial. Once the interpolation is completed, "clipping" is used to remove the negative values. Figure 6c shows that using "clipping" still allows for oscillations and a positive bias in the prediction of the cloud mixing ratio q_c. The total cloud mixing ratio is 2.09 times greater than the target solution, and the peak max(q_c) = 0.46 g/kg is larger than the target peak max(q_c) = 0.28 g/kg.
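The bias mechanism described here is easy to reproduce in miniature. In the toy sketch below (synthetic data, not BOMEX), a single high-degree polynomial interpolant of a sharp nonnegative bump oscillates below zero; clipping the negative lobes restores positivity but can only add mass relative to the unclipped interpolant.

```python
import numpy as np

def hat(x):
    """Sharp, nonnegative bump (toy stand-in for a cloud-water spike)."""
    return np.maximum(0.0, 1.0 - 4.0 * np.abs(x))

xs = np.linspace(-1.0, 1.0, 13)
p = np.polyfit(xs, hat(xs), 12)          # degree-12 interpolating polynomial
xq = np.linspace(-1.0, 1.0, 2001)
dx = xq[1] - xq[0]

raw = np.polyval(p, xq)                  # oscillates and dips below zero
clipped = np.maximum(raw, 0.0)           # "clipping": zero out the negative lobes

raw_mass = raw.sum() * dx                # crude quadrature of each profile
clipped_mass = clipped.sum() * dx
true_mass = hat(xq).sum() * dx           # exact triangle area is 0.25
```

Clipping guarantees `clipped >= 0` everywhere, but `clipped_mass > raw_mass` by construction, so the clipped profile systematically overestimates the total; the DBI and PPI methods instead keep the interpolant itself nonnegative.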
Using PCHIP to map between the advection and reaction meshes eliminates the negative values, removes the oscillations, and reduces the positive bias in the cloud mixing ratio prediction compared to the standard interpolation with and without "clipping". Figure 6d shows the target profile q_c and an approximated profile that uses PCHIP for mapping solution values between the advection and reaction meshes. The total cloud mixing ratio is now 27.21% less than the target, with a peak max(q_c) = 0.21 g/kg. In the BOMEX test case, NEPTUNE, and similar codes, using PCHIP for mapping data values from one mesh to another can degrade the high-order accuracy obtained from the high-order methods used for the dynamics calculations; PCHIP is only third-order accurate, whereas the dynamics calculations use a fifth-order method. This limitation can be addressed via high-order DBI and PPI. Here, the MQSI method is not used because it oversmoothes the state variable q at each time step and leads to no production of cloud mixing ratio.
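The order-of-accuracy gap can be observed on a smooth positive profile. The sketch below (illustrative, not the BOMEX setup) estimates the convergence rate of SciPy's PCHIP against a quintic spline by halving the mesh spacing; the specific test function is an assumption.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, make_interp_spline

def f(x):
    """Smooth, strictly positive test profile."""
    return np.exp(np.sin(np.pi * x))

xq = np.linspace(-1.0, 1.0, 4001)        # fine evaluation grid

def max_err(n, kind):
    x = np.linspace(-1.0, 1.0, n)
    if kind == "pchip":
        g = PchipInterpolator(x, f(x))
    else:                                 # quintic spline (not-a-knot ends)
        g = make_interp_spline(x, f(x), k=5)
    return float(np.max(np.abs(g(xq) - f(xq))))

def observed_rate(kind):
    # halving h multiplies the error by roughly 2**(-rate)
    return float(np.log2(max_err(33, kind) / max_err(65, kind)))

rate_pchip = observed_rate("pchip")       # roughly third order
rate_quintic = observed_rate("quintic")   # roughly sixth order
```

On refinement the quintic interpolant gains accuracy much faster than PCHIP, which is why a third-order mapping can become the accuracy bottleneck behind a fifth-order dynamics scheme.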
Figures 7a-7f show cloud mixing ratio profiles for the target and approximated solutions that use the DBI and PPI methods to map the solution values between meshes. The maximum polynomial degree for the DBI and PPI methods is set to 5 and 7, and the parameters ε_0 and ε_1 are both set to 10⁻⁵. For larger values of ε_0 and ε_1, the PPI approach introduces oscillations that lead to a positive bias in the prediction of the cloud mixing ratio. These oscillations are caused by the relaxed nature of the PPI approach, which still allows the interpolants to oscillate while remaining positive. The positive bias and oscillations can be removed using the DBI or PPI method with small values for ε_0 and ε_1. When using the PPI method for mapping, the total amount of the cloud mixing ratio is less than the target for st = 1 and more than the target for st = 2 and st = 3. Figures 7a-7f show that using the DBI and PPI methods with ε_0 = ε_1 = 10⁻⁵ to map data values between the advection and reaction meshes eliminates the negative values, removes the oscillations, and significantly reduces the positive bias in the cloud mixing ratio prediction. Using the DBI and PPI methods leads to a better approximation of the peak value and of the total cloud mixing ratio compared to using the standard interpolation and PCHIP approaches. The best approximation of the total amount of the cloud mixing ratio is obtained with the DBI method, which is 7.57% more than the target with a peak of max(q_c) = 0.28 g/kg.
In summary, using the DBI and PPI methods to map data values between the advection and reaction meshes produces better approximation results compared to the standard interpolation and PCHIP methods. Tables 9 and 10 provide a summary of the maximum values and the total amounts of the cloud mixing ratio for each case. The DBI and PPI methods with a target polynomial degree of d = 7 lead to a better approximation of the peak and the total cloud mixing ratio compared to the standard interpolation and PCHIP approaches. The results in Tables 9 and 10 indicate that the DBI method is the most suitable approach for mapping data values between meshes for the BOMEX test case. This study provided an example demonstrating how to use the DBI and PPI methods for mapping data values between meshes in the context of NWP. The BOMEX example also demonstrated that positivity alone may not be sufficient to remove oscillations in the solution, and the interpolants may need to be constrained to be between the data values for a better approximation.
Fig. 6. Cloud mixing ratio q_c profile from the BOMEX test case at t = 5 hours with N = 600 points. A fifth-order WENO scheme and a third-order Runge-Kutta scheme with Δt = 0.1 are used for the dynamics (advection). The target profile is shown in 6a. The black plot in 6b, 6c, and 6d represents the target profile, where the same mesh is used for the dynamics and physics calculations. In 6b, 6c, and 6d, the profiles in blue use different meshes for the dynamics and physics calculations, which require mapping the solution values between both meshes. A standard polynomial interpolation, a standard polynomial interpolation with "clipping", and the PCHIP method are used for the mapping in 6b, 6c, and 6d, respectively.
Fig. 7.
Cloud mixing ratio q_c profile from the BOMEX test case at t = 5 hours with N = 600 points and ε_0 = ε_1 = 10⁻⁵. The profile in black is the target solution. The profiles on the left (7a, 7c, 7e) and right (7b, 7d, 7f) are obtained using the DBI and PPI methods, respectively, to map solution values between meshes. The maximum polynomial degrees are set to d = 5 and d = 7 for the blue and red plots, respectively. The input parameter st is set to 1, 2, and 3 for the first (7a, 7b), second (7c, 7d), and third (7e, 7f) rows, respectively. A fifth-order WENO scheme and a third-order Runge-Kutta scheme with Δt = 0.1 are used for the dynamics (advection).
Table 10. Maximum values of q_c and the total amount of the cloud mixing ratio at t = 5 hours with N = 600 points and ε_0 = ε_1 = 10⁻⁵. The total amount of the cloud mixing ratio is calculated by estimating the integral of q_c. The units of q_c are g/kg.

DISCUSSION AND CONCLUDING REMARKS
In this work we introduced HiPPIS: a high-order 1D, 2D, and 3D data-bounded and positivity-preserving interpolation software for structured meshes. The software implementation is based on the mathematical framework in Section 2 and the algorithms in Section 3. The software is self-contained and can be incorporated into larger codes that require data-bounded or positivity-preserving interpolation. The interface is designed to be similar to commonly used PCHIP and spline interfaces. The algorithms used in the software extend the DBI and PPI methods introduced in [29] by adding more options for the stencil construction process that can be set by the user with the parameter st. For a given interval I_i, the algorithm starts with the stencil V_0 = {x_i, x_{i+1}} and successively appends points to the left and/or right of V_0 to form the final stencil. The stencil construction is done in accordance with the DBI and PPI conditions outlined in Equations (21a) and (21b). In addition to the different options for the stencil selection process, the software introduces a parameter ε_1 that can be used to adjust the bounds of the interpolants in the intervals where extrema are detected. The results in Tables 7 and 8 show that using small values for the parameters ε_0 and ε_1 improves the approximation in cases where the input data are coarse. Small values of ε_0 and ε_1 further restrict how much the interpolant is allowed to grow beyond the data values. The parameters ε_0 and ε_1 are used to adjust the lower and upper bounds on each interpolant according to Equations (12) and (13). The study of the modified Runge example in Section 5.2 and the TWP-ICE example in Section 5.3 demonstrated that for a profile with steep gradients or fronts, more points are required to take full advantage of the DBI and PPI algorithms. If there are not sufficiently many points in the regions with steep gradients or fronts, increasing the polynomial degree may not improve the accuracy. The results in Tables 7 and 8 show that once the resolution is sufficiently increased, the approximations improve as the polynomial degree increases.
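To make the stencil-growth idea concrete, here is a heavily simplified sketch. It grows a stencil from V_0 = {x_i, x_{i+1}} up to a target degree d, choosing the next point either ENO-style by the smaller new divided difference (mimicking st = 1) or by alternating sides for symmetry (mimicking st = 2); the actual DBI/PPI admissibility tests of Equations (21a) and (21b) and the locality option st = 3 are omitted here.

```python
import numpy as np

def newton_dd(xs, ys):
    """Highest-order Newton divided difference over the points xs."""
    ys = np.array(ys, dtype=float)
    for j in range(1, len(xs)):
        ys[j:] = (ys[j:] - ys[j - 1:-1]) / (xs[j:] - xs[:-j])
    return ys[-1]

def grow_stencil(x, y, i, d, st):
    """Grow a stencil for interval [x_i, x_{i+1}] to degree d (simplified sketch)."""
    lo, hi = i, i + 1                        # V_0 = {x_i, x_{i+1}}
    while hi - lo < d:
        can_left, can_right = lo > 0, hi < len(x) - 1
        if can_left and can_right:
            if st == 2:                      # prioritize symmetry: alternate sides
                go_left = (i - lo) <= (hi - (i + 1))
            else:                            # ENO-like: smaller divided difference wins
                go_left = abs(newton_dd(x[lo - 1:hi + 1], y[lo - 1:hi + 1])) \
                          < abs(newton_dd(x[lo:hi + 2], y[lo:hi + 2]))
        else:
            go_left = can_left               # at the boundary, grow toward the interior
        if go_left:
            lo -= 1
        else:
            hi += 1
    return lo, hi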
In the BOMEX test case, prioritizing a symmetry ( = 2) or locality ( = 3) leads to better approximations compared to the ENO stencil ( = 1) using the DBI and PPI methods.Using the ENO stencil ( = 1) produces significantly less cloud mixing ratio compared to both the prioritizing symmetry and locality.In the BOMEX example with parameters  0 and  1 greater than 10 −5 , the PPI method allows for oscillations that degrade the approximation compared to the DBI and PCHIP approaches.The MQSI method is not used for the BOMEX example because it oversmoothes the different variables at each time step and leads to no production of cloud mixing ratio.
In summary, this work provided (1) a high-order DBI and PPI software for 1D, 2D, and 3D structured meshes; (2) an analysis of the mapping error when using the DBI or PPI to map data values between meshes; and (3) an evaluation of the DBI and PPI methods in the context of function approximation and interpolating data values between different meshes.As this work continues, we plan to investigate different approaches for extending the DBI and PPI methods to unstructured 2D and 3D meshes.

( 7 )
Evaluate the final interpolant   () (for DBI) or   () (for PPI) at the output points x that are in   .(8)Repeat Steps 1-7 for each interval in the input 1D mesh.

Fig. 1 .
Fig. 1.Diagram showing the components of the main module used to build the HiPPIS software.

Figure 3
Figure 3 shows approximation plots of  3 () using  = 17 uniformly spaced points with different values of  0 .The target polynomial degree is set to  = 4, 8.The bottom right part of Figure 3 shows oscillations at the left boundary Manuscript submitted to ACM

Fig. 2 .
Fig. 2. Approximation of  2 ( ) from  = 17 uniformly spaced points with different interpolation methods.The top row (Figure 2a) shows approximation results using PCHIP, MQSI, and DBI.The bottom row (Figure 2b) shows approximation results using PPI with  = 4, 8 and  0 = 1.0, 0.01.An enlarged version of the region in the red rectangle is shown on the right of each row.The value of  1 is set to 1.0.

Fig. 3 .
Fig. 3. Approximation of  3 ( ) from  = 17 uniformly spaced points with different interpolation methods.The top row (Figure 3a) shows approximation results using PCHIP, MQSI, and DBI.The bottom row (Figure 3b) shows approximation results using PPI with  = 4, 8 and  0 = 1.0, 0.01.An enlarged version of the region in the red rectangle is shown on the right of each row.The value of  1 is set to 1.0.
experiment (BOMEX) examples are used to evaluate the use of the PPI and DBI to map data values between two meshes.The Runge and TWP-ICE examples use meshes that emulate the advection and reaction meshes used in NEPTUNE.These meshes are constructed by linearly scaling the NEPTUNE vertical mesh points to the desired interval for the Manuscript submitted to ACM(a) PCHIP (b) MQSI (c) PPI with P 4 ,  0 =  1 = 1.0(d) PPI with P 4 ,  0 =  1 = 10 −4 (e) PPI with P 8 ,  0 =  1 = 1.0 (f) PPI with P 8 ,  0 =  1 = 10 −4

Figures 6b -
Figures 6b -7f are used to investigate different interpolation methods for mapping the solution values between meshes in the case where the dynamics and physics are calculated using different meshes.
Various 1D and 2D examples are employed to evaluate the use of the DBI and PPI software in different contexts.The results in Section 4 show that for Examples I, II, IV, and V, which are based on underlying smooth functions, the DBI and PPI methods produce more accurate results compared to PCHIP and MQSI.For Example III, and VI which are obtained from discontinuous and  0 -continuous functions, all the methods have comparable errors.Figures2-5show that the parameters  0 and  1 can be used to reduce undesired oscillations from the PPI method.We only report the results for  = 3 because the differences between  = 1,  = 2, and  = 3 are negligible for the examples in Section 4.Section 5 provides an analysis and numerical comparison of the mapping error when the DBI and PPI methods are used to map data values between different meshes.The error analysis in Section 5.1 shows that it is important to keep mapping errors smaller than the already existing global errors from other calculations.Removing negative values and spurious oscillations can help reduce the mapping error.Manuscript submitted to ACM and  1 .Output: { ũ,, }

Table 1 .
2 -errors when using the PCHIP, MQSI, DBI, and PPI methods to approximate the function  1 ( ). represents the number of input points used to build the approximation.The parameters  0 ,  1 , and  are set to 0.01, 1.0, and 3, respectively.

Table 2 shows
2 -error norms when using the PCHIP, MQSI, DBI, and PPI methods to approximate the smoothed logistic function  2 ().The MQSI method has larger errors compared to the other methods.In this case, the algorithms Manuscript submitted to ACM employed by MQSI to approximate and adjust derivatives values used to construct the monotonic quintic splines are less accurate than the one used in PCHIP.For a target polynomial degree  = 3, the approximation errors using PCHIP, DBI, and PPI are comparable.Increasing the target polynomial degree improves the approximations for DBI and PPI, as shown in Table2.The errors from both the DBI and PPI methods are similar because the logistic example has no hidden extrema, and the stencils used for both methods are the same, around  = 0.The global error is dominated by the local errors in the region with steep gradients around  = 0.Figure2shows approximation plots of  2 () using  = 17 uniformly spaced points with different values of  0 and  1 = 1.The target polynomial degree is set to  = 8.For  0 = 1, we observe oscillations, as shown in the right part of Figure2.As  0 decreases, the oscillations decrease.For  0 ≤ 0.01, the errors and oscillations are negligible compared to errors in the region with the steep gradient.The oscillations are completely removed for  0 = 0.0.

Table 2 .
2 -errors when using the PCHIP, MQSI , DBI, and PPI methods to approximate the function  2 ( ). represents the number of input points used to build the approximation.The parameters  0 ,  1 , and  are set to 0.01, 1, and 3, respectively.

Table 3 .
2 -errors when using the PCHIP, MQSI DBI, and PPI methods to approximate the function  3 ( ). represents the number of input points used to build the approximation.The parameters  0 ,  1 , and  are set to 0.01, 1, and 3, respectively.

Table 4 .
2 -errors when using the PCHIP, MQSI, DBI, and PPI methods to approximate the function  4 (, ). represents the number of input points used to build the approximation.The parameters  0 ,  1 , and  are set to 0.01, 1, and 3, respectively.

Table 5 .
2 -errors when using the PCHIP, MQSI, DBI, and PPI methods to approximate the function  5 (, ). represents the number of input points used to build the approximation.The parameters  0 ,  1 , and  are set to 0.01, 1, and 3, respectively.

Table 6 .
2 -errors when using the PCHIP, MQSI, DBI, and PPI methods to approximate the function  6 (, ). represents the number of input points used to build the approximation.The parameters  0 ,  1 , and  are set to 0.01, 1, and 3, respectively.

Table 9 .
Maximum values of   and the total amount of the cloud mixing ratio at  = 5 hours with  = 600 points.The total amount of the cloud mixing ratio is calculated by estimating the integral   .The units of   are /. = 1  = 2  = 3  = 1  = 2  = 3