Domain Generalization in Time Series Forecasting

Domain generalization aims to design models that can effectively generalize to unseen target domains by learning from observed source domains. Domain generalization poses a significant challenge for time series data, due to varying data distributions and temporal dependencies. Existing approaches to domain generalization are not designed for time series data, which often results in suboptimal or unstable performance when confronted with diverse temporal patterns and complex data characteristics. We propose a novel approach to tackle the problem of domain generalization in time series forecasting. We focus on a scenario where time series domains share certain common attributes and exhibit no abrupt distribution shifts. Our method revolves around the incorporation of a key regularization term into an existing time series forecasting model: domain discrepancy regularization. In this way, we aim to enforce consistent performance across different domains that exhibit distinct patterns. We calibrate the regularization term by investigating the performance within individual domains and propose the domain discrepancy regularization with domain difficulty awareness. We demonstrate the effectiveness of our method on multiple datasets, including synthetic and real-world time series datasets from diverse domains such as retail, transportation, and finance. Our method is compared against traditional methods, deep learning models, and domain generalization approaches to provide comprehensive insights into its performance. In these experiments, our method showcases superior performance, surpassing both the base model and competing domain generalization models across all datasets. Furthermore, our method is highly general and can be applied to various time series models.


INTRODUCTION
Time series data are ubiquitous, and forecasting such data plays a crucial role in many applications such as financial forecasting [27,31], meteorology prediction [19,60], healthcare analysis [10], and demand estimation [11]. The goal of time series forecasting is to predict future values based on historical observations. A major challenge in time series forecasting is the presence of domain shifts, where the underlying data distribution may vary due to data collection from various sources, locations, or conditions. Traditional forecasting models trained on data from a specific context often struggle to generalize well to unseen data. This issue arises in scenarios such as demand forecasting for new products, where accurate predictions are essential for effective planning and decision-making.
Domain generalization seeks to address this challenge by developing models that have consistent performance across different domains [63]. Significant progress has been made in domain generalization, yet mainly in computer vision [22,65] and natural language processing [7]. Many existing methods encounter challenges when applied to time series forecasting, primarily due to two factors: (1) they require categorical label information in their problem settings, making them better suited for classification tasks [7,22,65], and (2) the inherent complexity and dynamic nature of time series data introduce significant stochasticity, complicating generalization efforts [15]. Existing domain generalization approaches fall short in effectively addressing the underlying temporal dependencies and distribution shifts in time series data.
In this article, we propose an approach to tackle the problem of domain generalization in time series forecasting. Our method focuses on addressing the challenges posed by the diverse patterns and complex characteristics of time series data from different domains. We base our approach on carefully considered assumptions concerning the presence of common patterns across domains and restrictions on data shifts within each domain. We introduce two regularization terms that enhance a model's ability to generalize effectively in time series forecasting: a basic version that ensures consistent performance across various domains, named domain discrepancy regularization (Section 3.2.1), and an extension that takes into account the difficulty of individual domains, named domain discrepancy regularization with domain difficulty awareness (Section 3.2.2). These regularization terms control the model's learning and fitting process across diverse domains, ensuring consistent forecast performance and generalizability to unseen domains.
In summary, the main contributions of this work are:
- We introduce a novel domain generalization problem in the context of time series forecasting. We formalize the time series forecasting task under specific assumptions, laying the groundwork for further exploration (Section 3.1).
- We propose a novel regularization term that improves a forecasting model's generalization capabilities by regulating cross-domain performance differences weighted by domain discrepancies (Section 3.2.1).
- We present an extended version of the regularization term that incorporates a notion of domain difficulty awareness, assigning a smaller penalty to challenging domains and allowing the model to learn more complex patterns (Section 3.2.2).
- We conduct extensive experiments on diverse synthetic and real-world time series datasets to demonstrate the effectiveness of our method (Section 4). Our method achieves higher accuracy in domain generalization tasks than existing approaches and has low training overhead, making it applicable on top of an existing forecasting model in real-world scenarios.

RELATED WORK
We survey work in time series forecasting and domain generalization on time series data.
For better modeling of nonlinear relationships and capturing more complex temporal dependencies, machine learning (e.g., regression [55] and tree-based models [30]) and deep learning models have received increased attention. Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) [25] and gated recurrent units (GRUs) [12], have demonstrated strong performance in capturing long-term dependencies and nonlinear patterns in time series data. Temporal convolutional networks (TCNs) [6] and WaveNet [44] have gained popularity for their fast training, which allows for easy parallelization. Attention-based models [4], such as the transformer [58] and TransformerConv [36], have achieved state-of-the-art performance in many prediction tasks for sequential data. Recently, many researchers have explored graph neural networks [64] in spatial-temporal forecasting [23,60] and in learning dynamic relationships among potential factors of the predictions [14,27]. Time series forecasting can be classified into point estimate and probabilistic forecasting [21,38], which provide a single predicted value and a probability distribution, respectively. Probabilistic time series forecasting plays a critical role in decision-making due to its ability to quantify uncertainties [21]. In this work, we focus on probabilistic forecasting.
Domain generalization (DG) [63] refers to the problem of learning a model that can generalize well to unseen target domains that differ from the training domains. DG may help to reduce labeling efforts, handle distribution shifts, and facilitate transferability to new tasks [63]. Existing approaches to DG can be classified into three main categories: (i) data manipulation, (ii) learning strategy, and (iii) representation learning. Data manipulation techniques typically enhance generalization through randomization [56], augmentation of input data [52,59], or generation of diverse samples [46]. Learning strategy-based methods mainly focus on meta-learning [7,34,37] and gradient-based techniques [53]. Representation learning involves learning domain-invariant representations through techniques such as kernel methods, adversarial training, feature alignment, or invariant risk minimization [2,35]. Most existing research has focused on particular types of data such as images, texts, or observation data for reinforcement learning.

Domain generalization (DG)
Recently, DG has been explored in time series classification. These studies highlight the use of class information to learn domain invariance using distribution matching [72], data augmentation [72], contrastive learning [26,47], and adversarial learning [39]. An empirical framework for DG has been explored in the context of time series classification within the clinical domain [66]. Additionally, researchers have introduced various time series benchmarks covering a diverse range of data modalities, such as videos and brain recordings [16]. Yet, these benchmarks are highly biased towards classification tasks. There have also been notable contributions to temporal domain generalization [5,43]: the ability of a model to generalize well across different time periods. There are a few studies on domain adaptation for time series forecasting [29], which assume that data from the target domain is partially seen during training. Due to the complexity and stochasticity of time series, there is limited research on DG for time series forecasting. This is where we contribute: methods that enable domain generalization in time series forecasting.

METHODOLOGY
We formalize the domain generalization problem for time series forecasting and introduce our proposed method. The necessary mathematical notation is given in Table 1.

Problem Formulation
We first introduce the problem of time series forecasting and extend it to domain generalization.
Time series forecasting. We denote time series variables as $y_{1:T} = \{y_1, y_2, \dots, y_T\}$, where $y_t$ represents the value at time $t$ (e.g., sales in retail), and $T$ is the number of timesteps in the history window. Usually, we assume the timestep $t$ to be constant (e.g., a day or an hour). The goal of time series forecasting is to estimate the future values $y_{T+1:T+h} = \{y_{T+1}, y_{T+2}, \dots, y_{T+h}\}$, where $h$ is the forecasting horizon. We focus on multi-step prediction with $h > 1$, because it offers more valuable insights by providing longer prediction horizons, which are more relevant and informative in real-world scenarios. A time series dataset with $N$ samples can be denoted by $D = \{(y_{i,1:T}, a_{i,1:T}), y_{i,T+1:T+h}\}_{i=1}^{N}$, where the variable $a$ represents possible exogenous attributes (e.g., day of the week, categorical features). For simplicity, we write $D = \{X, Y\}$. We are interested in modeling the conditional distribution $p(Y \mid X; \theta)$, where $\theta$ denotes the learnable parameters of a model. For probabilistic forecasting, the model outputs $(\mu_\theta, \sigma_\theta)$, the location and scale parameters of a distribution (e.g., a normal distribution) parameterized by $\theta$.
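To make the probabilistic objective above concrete, the Gaussian negative log-likelihood over a forecast horizon can be sketched as follows. This is an illustrative sketch; the function name and interface are ours, not part of the article.

```python
import math

def gaussian_nll(y, mu, sigma):
    """Average Gaussian negative log-likelihood of observations y under
    per-step location mu and scale sigma (the model's mu_theta, sigma_theta)."""
    assert len(y) == len(mu) == len(sigma)
    total = 0.0
    for y_t, m, s in zip(y, mu, sigma):
        # NLL of a single observation under N(m, s^2)
        total += 0.5 * math.log(2 * math.pi * s * s) + (y_t - m) ** 2 / (2 * s * s)
    return total / len(y)
```

Minimizing this quantity over the training samples fits the location and scale parameters jointly, which is what enables the model to output a full predictive distribution rather than a point estimate.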

Domain generalization for time series forecasting. We extend the concept of time series forecasting to the scenario where the model needs to generalize well across multiple domains. We consider $\mathcal{D} = \{D_1, D_2, \dots, D_K\}$ as the set of $K$ domains, where each domain $D_k$ consists of a time series dataset. In the domain generalization problem, we assume access to $M$ training domains $\mathcal{D}_{train} = \{D_k\}_{k=1}^{M}$, where $M < K$. The domain generalization task is to learn a time series forecaster $F: X \to Y$ that generalizes well to unseen test domains $\mathcal{D}_{test} = \{D_k\}_{k=M+1}^{K}$ that cannot be accessed during training.
Data assumptions. Given the inherent complexity of real-world time series data, we limit the scope of domain generalization for time series forecasting. We employ the following assumptions regarding the characteristics of time series data within and across domains. These assumptions are widely used or have similar implications in general studies on domain generalization [2,15,39,63].

Assumption 1 (Common underlying patterns).
There exist common underlying patterns among different domains, despite their individual idiosyncrasies. The dissimilarity between the joint distributions of two domains falls within a range defined by a lower bound $\epsilon_l$ and an upper bound $\epsilon_u$. This range captures the extent of variation allowed between the shared patterns across different domains. $\epsilon_l$ should also be sufficiently large to prevent identical data across domains. Mathematically, it can be expressed as $\epsilon_l \le d\big(P(D_{k_1}), P(D_{k_2})\big) \le \epsilon_u$ for all $1 \le k_1 \ne k_2 \le K$, where $d(\cdot,\cdot)$ measures the divergence between the joint distributions of two domains.
Assumption 1 imposes constraints on the common patterns/invariance that can be leveraged to improve domain generalization. In practice, we can leverage prior knowledge or domain expertise to define the domains of interest that align with the assumed common patterns. For instance, retail, meteorology, and environment-related data often show recurring seasonal patterns. To quantitatively measure this assumption, metrics such as the Pearson correlation coefficient or Dynamic Time Warping [42] can be employed to compare time series across different domains.
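As a minimal illustration of how cross-domain similarity might be checked in practice, the Pearson correlation between two domain-level series can be computed as below. The function name is ours and the check is only a crude proxy for the distributional bounds in Assumption 1.

```python
def pearson_corr(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A correlation close to +1 between per-domain series would suggest a shared underlying pattern (e.g., common seasonality), while a correlation near 0 would suggest the domains share little structure.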

Assumption 2 (No abrupt distribution shifts).
There are no sudden or abrupt distribution shifts within each domain of the time series data, while gradual changes may be present. Sudden changes imply large uncertainty in the unseen time series domain and raise concerns about the efficacy of developing generalization methods. Suppose $\Delta D_k(t)$ denotes the distribution shift indicator for domain $k$ at time $t$. We expect that $|\Delta D_k(t)|$ remains within the bounds of a potential threshold $\epsilon_s$ for all timesteps in each domain, i.e., $|\Delta D_k(t)| \le \epsilon_s$ for all $1 < t \le T$ and $1 \le k \le K$.
Assumption 2 focuses on scenarios where the within-domain data distributions maintain relative stability over time. This assumption is well-suited to certain fields such as meteorology, transportation, and environmental monitoring, where data tend to exhibit gradual changes rather than abrupt shifts. However, there are fields where abrupt distribution shifts are common, such as financial markets, natural disaster data, and social media activity, which may not conform to this assumption. Nevertheless, in such cases, it is still possible to adapt to this assumption by limiting the length of the time sequence within each domain, thereby ensuring that within-domain distributions remain acceptably stable. To measure this, distribution shift detection methods can be applied, such as comparing statistical properties of various time series data and using visualization techniques.
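One simple way to compare statistical properties over time, as suggested above, is to compare the means of consecutive windows. The sketch below flags a window boundary as a candidate abrupt shift when the mean jumps by more than a multiple of the preceding window's standard deviation; the function name, window size, and threshold are our illustrative choices, not the article's.

```python
def detect_abrupt_shifts(series, window=30, factor=2.0):
    """Flag window boundaries where the window mean jumps by more than
    `factor` times the preceding window's standard deviation: a crude
    proxy for the shift indicator |Delta D_k(t)| exceeding a threshold."""
    flags = []
    for t in range(window, len(series) - window + 1, window):
        prev = series[t - window:t]
        curr = series[t:t + window]
        mean_prev = sum(prev) / window
        mean_curr = sum(curr) / window
        std_prev = (sum((x - mean_prev) ** 2 for x in prev) / window) ** 0.5
        std_prev = max(std_prev, 1e-8)  # guard against zero variance
        if abs(mean_curr - mean_prev) > factor * std_prev:
            flags.append(t)
    return flags
```

A domain whose series triggers many such flags would be a candidate for shortening its sequence length, as discussed above.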
These data assumptions offer a practical starting point for devising effective domain generalization methods in time series forecasting. Nonetheless, we acknowledge the difficulties associated with assessing the degree to which real-world datasets conform to our assumptions, e.g., defining precise thresholds and comparison procedures. These challenges arise because time series data are inherently complex and exhibit diverse characteristics across different domains. We believe this requires further breaking down the problem in various application areas, and we leave in-depth theoretical studies for future work.

Proposed Method
Next, we present our proposed method, domain generalization in time series forecasting using cross-domain regularizations with difficulty awareness (Cedar). It consists of two novel regularization terms: (1) domain discrepancy regularization, which ensures consistent performance across various domains with distinct patterns, and (2) domain discrepancy regularization with domain difficulty awareness, which extends the first term by considering the performance within each domain, accounting for the difficulty of training on different domains.

Domain Discrepancy Regularization.
In domain generalization, where information about the target domains is unknown, our objective is to learn a generalized model that exhibits consistent and stable performance across diverse temporal patterns in different domains. To achieve this, we propose a domain discrepancy regularization to prevent severe overfitting to the seen source domains, ensuring the robustness and generalization capability of the model when applied to new domains. It builds on the insight that dissimilar domains should not exhibit significant variations in forecasting performance. This regularization term is straightforward: it calculates the difference in forecasting performance between domain pairs, weighted by the discrepancy of the respective domain pair. We express the regularization term $R_{DD}$ as:

$R_{DD} = \sum_{1 \le k_1 < k_2 \le M} d_H(D_{k_1}, D_{k_2}) \cdot d_{L_{fcst}}(D_{k_1}, D_{k_2})$.    (2)

We use $d_H(\cdot,\cdot)$ to represent the distribution divergence between two domains. We use maximum mean discrepancy (MMD) as the difference metric in our experiments due to its easy implementation, widespread popularity, and kernel-based theoretical foundation supporting its use to capture complex relationships. Other distance metrics can also be applied (e.g., Euclidean distance and KL divergence). MMD has been used in distribution matching regularization in domain adaptation [15,61] and generalization [72]. The definition of $d_H(\cdot,\cdot)$ is:

$d_H(D_{k_1}, D_{k_2}) = \mathrm{MMD}\big(H(D_{k_1}), H(D_{k_2})\big)$,    (3)

where $H(D_k)$ represents the high-level representation of time series from domain $k$, which is learned from a forecasting model (e.g., hidden states of an RNN or convolutional vectors of a CNN) and evaluated on a batch of samples. Such a high-level representation captures the temporal dependencies inherent in time sequence data. We compute the mean value of these representations across all samples in a batch that belong to the specific domain.
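A minimal version of the squared linear MMD between two batches of domain representations (the distance between their mean embeddings) might look as follows; the function name and interface are ours.

```python
def linear_mmd(h1, h2):
    """Squared linear MMD: squared Euclidean distance between the mean
    representations of two batches of feature vectors (one batch per domain)."""
    dim = len(h1[0])
    mean1 = [sum(v[j] for v in h1) / len(h1) for j in range(dim)]
    mean2 = [sum(v[j] for v in h2) / len(h2) for j in range(dim)]
    return sum((a - b) ** 2 for a, b in zip(mean1, mean2))
```

In a real model the batches would be hidden states produced by the forecaster; here plain lists of floats stand in for those representations.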
The second term $d_{L_{fcst}}(\cdot,\cdot)$ quantifies the difference in time series forecasting performance between two domains, computed as the distance between the average loss values for batches of samples in each respective domain. Formally, we express the calculation as:

$d_{L_{fcst}}(D_{k_1}, D_{k_2}) = |L_{fcst}(D_{k_1}) - L_{fcst}(D_{k_2})|^p$.    (4)

Here, $L_{fcst}(D_k)$ denotes the forecasting loss (e.g., Gaussian negative log-likelihood or L2 loss) of the batch samples in domain $k$; $|\cdot|$ denotes the absolute value, and the parameter $p$ can be set to 1 or 2. When $p = 2$, larger differences in mean losses between domains incur a higher penalty. We can use matrix operations to efficiently calculate the product of the two terms across all domain pairs. By minimizing the regularization term (Equation (2)), we aim to prevent significant disparities in forecasting performance between different domains, which would otherwise hinder the model's ability to generalize effectively to new patterns. Our approach differs from existing methods primarily tailored for classification tasks [1,61,72]. Unlike these approaches, which concentrate on aligning distributions within the same class across different domains to capture domain invariance, our method operates without class/label information. Class/label information provides a more straightforward way to group similar samples across domains. Our focus is on promoting consistency in the model's predictive capacities across diverse domains while penalizing substantial differences in forecasting performance.
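Putting the two terms together, the pairwise regularization can be sketched as follows. This is a minimal sketch with our own function names: each domain is summarized by one mean representation vector and one mean batch loss, and the representation discrepancy is approximated by a squared mean-embedding distance.

```python
def domain_discrepancy_reg(domain_reprs, domain_losses, p=2):
    """R_DD: sum over domain pairs of representation discrepancy times
    the p-th power of the absolute difference in mean forecasting loss.
    domain_reprs: one mean representation vector per domain;
    domain_losses: one mean batch loss per domain."""
    K = len(domain_losses)
    reg = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            d_h = sum((a - b) ** 2
                      for a, b in zip(domain_reprs[i], domain_reprs[j]))
            d_loss = abs(domain_losses[i] - domain_losses[j]) ** p
            reg += d_h * d_loss
    return reg
```

In practice the double loop over pairs would be replaced by the matrix operations mentioned above, but the pairwise form makes the structure of the penalty explicit.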

Domain Discrepancy Regularization with Domain Difficulty Awareness.
In the regularization term $R_{DD}$ (Equation (2)), we use the mean loss of each domain to gauge its performance under the current model, and the difference in mean losses is used to evaluate the performance disparity. However, time series data often exhibit irregular patterns or contain abnormal points and outliers, which may lead to inaccuracies when assessing the true performance difference based solely on the mean loss; e.g., large stock price fluctuations in financial markets can make the mean loss a poor summary of a time segment's true performance.
Starting from this motivation, we adjust the penalty to account for the difficulty of the domains.If a domain exhibits higher variance in its loss values, then it implies greater challenges in training.Thus, we consider applying a milder penalty to that domain to give the model more flexibility in learning from that domain's data.By reducing the penalty for difficult domains, we allow the model to concentrate more on adapting to the complex aspects of those domains, which may lead to improved generalization performance.
To incorporate domain difficulty into the pairwise regularization term $R_{DD}$ (Equation (2)), we propose a simple variance-based approach that takes into account both domains in a pair:

$R_{DDD} = \sum_{1 \le k_1 < k_2 \le M} \omega(D_{k_1}, D_{k_2}) \cdot d_H(D_{k_1}, D_{k_2}) \cdot d_{L_{fcst}}(D_{k_1}, D_{k_2})$.    (5)

Here, $\omega(D_{k_1}, D_{k_2})$ is a scaling factor that modulates the penalty based on the difficulty of each domain in a pair, defined as:

$\omega(D_{k_1}, D_{k_2}) = \dfrac{1}{\varepsilon + \max\big(\mathrm{Std}(L_{fcst}(D_{k_1})),\, \mathrm{Std}(L_{fcst}(D_{k_2}))\big)}$,    (6)

where $\mathrm{Std}(L_{fcst}(D_k))$ denotes the standard deviation of the losses of batch samples in domain $k$. If any domain in the pair exhibits poor performance (a large standard deviation of loss values), then we consider the domain discrepancy regularization to be less reliable. We introduce $\varepsilon = 1$ to prevent very small standard deviations from causing very large or undefined values of $\omega$; the maximum value of $\omega$ is 1. Other methods could be used to quantify the difficulty of a domain, such as expert knowledge, exogenous feature analysis, or more advanced domain-specific metrics. We leave this to future work.

Training and optimization. Given a time series forecaster $F$ and the proposed regularization terms for domain generalization, we train the forecaster $F$ by minimizing the following total loss:

$L = L_{fcst} + \gamma R$,    (7)

where $L_{fcst}$ is the empirical loss of the base model $F$, which uses empirical risk minimization (ERM) [57], $R \in \{R_{DD}, R_{DDD}\}$, and $\gamma$ is a regularization coefficient.
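The difficulty-aware variant and the total training objective can be sketched as below. The exact functional form of `difficulty_weight` is our assumption, chosen only to match the properties described above (ω is at most 1, and a high loss standard deviation in either domain yields a milder penalty); all names are ours.

```python
def difficulty_weight(std1, std2, eps=1.0):
    """Scaling factor omega for a domain pair: the larger the loss standard
    deviation of either domain (i.e., the harder the domain), the milder
    the penalty. Assumed form; eps = 1 caps omega at 1."""
    return 1.0 / (eps + max(std1, std2))

def domain_difficulty_reg(domain_reprs, domain_losses, loss_stds, p=2):
    """R_DDD: pairwise discrepancy-weighted loss differences, rescaled
    by the difficulty weight omega of each pair."""
    K = len(domain_losses)
    reg = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            d_h = sum((a - b) ** 2
                      for a, b in zip(domain_reprs[i], domain_reprs[j]))
            d_loss = abs(domain_losses[i] - domain_losses[j]) ** p
            reg += difficulty_weight(loss_stds[i], loss_stds[j]) * d_h * d_loss
    return reg

def total_loss(fcst_loss, reg, gamma=1e-3):
    """Total objective: empirical forecasting loss plus gamma times the
    chosen regularization term (R_DD or R_DDD)."""
    return fcst_loss + gamma * reg
```

With zero loss variance in both domains the weight is 1 and the term reduces to the plain discrepancy regularization; as either variance grows, the pair's penalty shrinks.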

EXPERIMENTS

Experimental Settings
Datasets. We evaluate the performance of our proposed method on both synthetic and real-world time series datasets.
Synthetic data. To assess the efficacy of Cedar in achieving generalization across diverse scenarios, we construct multiple synthetic time series datasets that manifest distinct patterns of invariance across domains; see Table 2 for a summary. These datasets satisfy the data assumptions from Section 3.1. We assume that there are no abrupt distribution shifts within each domain, and all domains share a common characteristic, i.e., periodicity. Periodicity is a prevalent pattern in various real-world time series data, such as sales data in retail and temperature fluctuations. We use the sinusoidal function to generate periodic signals and apply Gaussian noise to those signals. We use no trend (i.e., a horizontal trend) when trend is a common attribute. We use the first 270 timestamps for training, the subsequent 90 timestamps for validation, and the remaining timestamps for testing. No exogenous attributes are used.
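A generator in this spirit (shared sinusoidal periodicity, optional domain-specific trend, Gaussian noise) can be sketched as follows; the parameter values are illustrative, not the paper's actual settings.

```python
import math
import random

def make_domain_series(length=360, period=30, amp=1.0, trend=0.0,
                       noise_std=0.1, seed=0):
    """One domain's synthetic series: a sinusoidal (periodic) signal shared
    across domains, an optional domain-specific linear trend, and
    Gaussian noise. All parameter values are illustrative."""
    rng = random.Random(seed)
    return [amp * math.sin(2 * math.pi * t / period) + trend * t
            + rng.gauss(0.0, noise_std) for t in range(length)]
```

Varying `trend` (or `amp`) per domain while keeping `period` fixed yields a family of domains with a common periodic pattern and distinct idiosyncrasies, matching the assumptions in Section 3.1.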
Real-world data. We also assess the domain generalization ability of Cedar using real-world time series data, focusing on retail, transportation, and finance. The dataset summary is in Table 3.
- Retail. We use Favorita [41], a Kaggle dataset that contains five years (2013-2017) of daily sales for store-product combinations from a retail chain. We construct two datasets based on Favorita, namely Favorita-cat and Favorita-store. In Favorita-cat, we focus on category-level sales in the same store, treating each category as a separate domain. Favorita-store comprises the time series data for a single category (Grocery I) across multiple stores. We use data from the year 2015.
- Finance. We use Stock Exchange Data, which contains daily price data for indexes tracking stock exchanges, collected from Yahoo! Finance. We use the stock trading volume as the target variable, representing the number of shares of a security traded between its daily open and close. The training data span from 2020-01-01 to 2020-08-30, the validation data extend up to 2020-11-30, and the remaining data up to 2021-05-31 are used for testing. Since some stocks have large averages, we use a simple normalization, i.e., dividing all trading volumes by $10^7$.
Data assumption validation. We assess the alignment of real-world datasets with our assumptions primarily through prior knowledge and visualization methods. For instance, retail and transportation datasets demonstrate consistent seasonal patterns, and stock volume data accounts for overall market influence (Assumption 1). To mitigate abrupt distribution shifts (Assumption 2), we restrict the time sequence length for each domain and use category-level sales for retail, daily traffic volume, and daily stock volume data.
Exogenous attributes. We add numerical covariates consisting of time indicators (e.g., day of the week) to the real-world datasets. The time indicators are encoded with two Fourier terms [28] to capture the periodic nature of time [54].
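A pair of Fourier terms for a periodic time index can be computed as below; the function name and the day-of-week default period are our illustrative choices.

```python
import math

def fourier_time_features(t, period=7):
    """Two Fourier terms encoding a periodic time index
    (e.g., day of the week with period 7): [sin, cos] of the phase."""
    angle = 2 * math.pi * (t % period) / period
    return [math.sin(angle), math.cos(angle)]
```

Unlike a raw integer index, this encoding makes the last and first steps of a cycle (e.g., Sunday and Monday) numerically close, which reflects their actual proximity in time.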
Data visualization. We visualize some datasets in Figure 1 to illustrate the disparity in patterns across domains. For the synthetic dataset PN-T, time series from different domains exhibit varying trends while sharing the same period and Gaussian noise applied to the periodic signals. For the real-world datasets, we observe that the data have diverse cross-domain and within-domain patterns but roughly follow a pattern similar to the synthetic dataset.
Methods used for comparison. To evaluate the domain generalization performance of Cedar, we consider several state-of-the-art and popular domain generalization methods that are modality-agnostic (i.e., can be applied to time series data) and compatible with forecasting tasks:
- DANN [17] is an adversarial learning method that learns features that cannot discriminate between training and test domains.
- GroupDRO [49] is the Group Distributionally Robust Optimization approach, which aims to improve the robustness of models in the presence of domain shifts or changes in the data distribution.
- MLDG [34] is a meta-learning algorithm for domain generalization. It trains for domain generalization by meta-optimization on simulated train/test splits with domain shift.
- IDGM [53] is a gradient matching approach that learns invariance by maximizing the inner product between gradients from different domains.
- wERM is a weighted empirical risk minimization method adapted from the time series prediction model under distribution shift using differentiable forgetting [9]. wERM-exp and wERM-mix are variants using two forgetting mechanisms, corresponding to exponential decay and a mixture of various functional forms of decay, respectively.
- MMD [61,62] is a distribution matching method that matches the distributions between representations of data from two domains.

Cedar and the above domain generalization methods can be applied to different base models that forecast time series. Specifically, we consider two popular and representative time series models as the base model:
- DeepAR [50] is an RNN-based probabilistic forecasting model.
- WaveNet [44] is a CNN-based forecasting model.

Apart from the base models with domain generalization methods, we also consider the following two types of methods as baselines:
- Traditional time series models: Seasonal Naive (SN) and Exponential Smoothing (ES).
- Latest deep learning models: the variational recurrent autoencoder VRNN [13], which learns latent variables that capture temporal dependencies, and an adaptive time series model, AdaRNN [15], which can be fit to our domain generalization setting for time series forecasting.

Evaluation metrics. We employ both point accuracy and range accuracy metrics to evaluate probabilistic forecasting performance, following prior studies [50,54]. For point accuracy, we use the normalized root mean squared error (NRMSE) and the symmetric mean absolute percentage error (sMAPE) [3]. For range accuracy, we use the normalized quantile loss function [50]: $Q(q) = 2 \sum_t P_q(y_t, \hat{y}^q_t) / \sum_t |y_t|$, where $P_q(y, \hat{y}^q) = q\,(y - \hat{y}^q)\,\mathbb{1}[y > \hat{y}^q] + (1 - q)\,(\hat{y}^q - y)\,\mathbb{1}[y \le \hat{y}^q]$, $q$ is the quantile value, $\hat{y}^q_t$ is the prediction for quantile $q$, and $\mathbb{1}$ is the indicator function. We report the scores at $q = 0.5$, denoted by Q(0.5), and the mean performance over the nine quantiles $q \in \{0.1, 0.2, \dots, 0.9\}$, denoted by Q(mean). The evaluation scores are computed across all training/test domains.
Implementation details. We implemented, trained, and evaluated all methods using PyTorch [45] 1.7.1 with CUDA 10.2 on a TITAN Xp GPU. For all datasets, we use the scaling mechanism from prior studies [50,54]. All parameters are initialized with Glorot initialization [20] and trained using the Adam optimizer [32]; the dropout rate is 0.3. The learning rate is searched from {0.0005, 0.001, 0.005}. The batch size is 64 for all models and datasets. The hidden state size is consistent across layers for all models and searched from {16, 32, 64}. We adopt specific configurations for the number of hidden layers and the kernel size of the convolution operation based on prior work [54]. For RNN-based models (DeepAR and VRNN), the number of hidden layers is 3. For CNN-based models (e.g., WaveNet), the number of hidden layers is 5 and the kernel size is 9. For the GroupDRO and MLDG baselines, we adopt the hyperparameter selection suggestions from previous work [24]. For wERM, we use the hyperparameter initialization method from the original code [9]. For models that employ maximum mean discrepancy (MMD) as the discrepancy measure, we utilize the squared linear MMD [51] due to its efficiency and effectiveness. The coefficient multiplied with the MMD term is searched from $\{10^i\}_{i=-7}^{0}$. For AdaRNN, the number of RNN layers is 2, following the original paper's suggestion. Since the MMD coefficient has to be tuned along with other parameters, we set its range to {0.001, 0.0001} to control the search space.
For Cedar, we grid-search the hyperparameter $\gamma$ applied to $L_{DD}$ or $L_{DDD}$ from the range $\{10^i\}_{i=-7}^{0}$ and $p$ in Equation (4) from {1, 2}. We do not tune $\gamma$ in conjunction with other hyperparameters, such as the learning rate; we directly use the optimal settings of the other hyperparameters learned for the base model. This allows us to focus on the impact of $\gamma$ on the model's performance without introducing additional variation from tuning other hyperparameters.
Model selection. We use 80% of domains for training and 20% for testing and adopt the training-domain validation approach [24]. We partition the available domains into training and test domains. Within the training domains, we further split the data into training and validation subsets in chronological order. We then train models on the training subsets and choose the model with the lowest loss on the validation subsets. We run each experiment for 150 epochs with an early stopping criterion of 10 epochs. After selecting the best model parameters based on this criterion, we evaluate with 5 random seeds for all models and report the average results. For each seed, we shuffle the training and test domains, ensuring that the model's generalization performance is assessed under varying conditions.
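The normalized quantile (pinball) loss used for range accuracy can be sketched as below; the function name is ours, and the form follows the standard normalized quantile loss of [50].

```python
def normalized_quantile_loss(y_true, y_pred_q, q):
    """Normalized quantile (pinball) loss at quantile q, normalized by the
    sum of absolute target values."""
    num = 0.0
    for y, y_hat in zip(y_true, y_pred_q):
        diff = y - y_hat
        # max(q*diff, (q-1)*diff) equals q*diff if y > y_hat,
        # and (1-q)*|diff| otherwise: the pinball loss.
        num += 2.0 * max(q * diff, (q - 1.0) * diff)
    return num / sum(abs(y) for y in y_true)
```

At $q = 0.5$ this reduces to a normalized absolute error, while higher quantiles penalize under-prediction more heavily than over-prediction.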

Results on Synthetic Data
Table 4 presents the results for the normalized quantile loss metrics Q(0.5) and Q(mean) on the four synthetic datasets. Table 5 shows the results for the point accuracy metrics NRMSE and sMAPE. We apply Cedar to DeepAR and WaveNet and compare both with traditional time series models and deep learning models.
Among the traditional methods (SN and ES), ES performs well on datasets with fixed seasonality (PT-N and PN-T) but falls behind some of the other methods. Among the deep learning baselines, the latent variable model and AdaRNN show poor performance and exhibit instability in different cases. This might be due to the complexity of their model structures, which makes it difficult to learn diverse time series patterns. The original AdaRNN paper reports that the model's performance drops significantly when the number of time series periods/domains increases beyond 10, which explains its performance in our experiments. We also notice that its training time becomes extremely long when the number of domains is large (e.g., 30).
For domain generalization methods, we notice that widely used methods such as GroupDRO and MLDG do not yield good performance on time series forecasting tasks. This can be attributed to their limited ability to account for inherent characteristics of time series data. The baselines DANN, IDGM, and MMD consistently deliver better results. The methods designed for distribution shifts, i.e., wERM-exp and wERM-mix, also perform well. Cedar and the variant model Cedar(-D) achieve favorable and stable performance compared to the others when applied to both DeepAR and WaveNet. The performance of Cedar based on DeepAR sometimes surpasses that of WaveNet and vice versa; the effectiveness of the regularization is influenced by the temporal representation learned by the base model. The MMD method can be regarded as a naive variant of our approach, as it simply makes all domain representations similar (removing $d_{L_{fcst}}$ in Equation (2)). Its results are inferior to ours, indicating the efficacy of leveraging cross-domain forecasting performance to enhance domain generalization. For the two proposed regularization terms, we observe that in most cases (except on PN-T and T-PN), Cedar consistently outperforms or performs on par with Cedar(-D). Hence, for the patterns observed in the synthetic datasets, considering the prediction performance within individual domains is effective when applying cross-domain regularization.

Results on Real-world Data
Tables 6 and 7 list the results on the four real-world datasets. The traditional models achieve very good performance on the Favorita-cat and US-traffic datasets, which can be attributed to the clear (weekly) seasonality and lower fluctuation in the category-level retail sales and daily traffic volume data. Cedar consistently outperforms the base models (DeepAR and WaveNet), demonstrating its effectiveness in real-world scenarios compared to naive empirical risk minimization. Compared to other domain generalization methods, Cedar exhibits the best overall performance. GroupDRO and MLDG show unfavorable results on both the point accuracy and range accuracy metrics. MMD, IDGM, and DANN show promising performance in certain scenarios, but they do not consistently achieve satisfactory results across all of them. Cedar achieves superior results on US-traffic, while on the other datasets, Cedar(-D) performs exceptionally well or comes close. This indicates that accounting for the difficulty of each domain by examining within-domain performance becomes less effective on these datasets, possibly because all domains exhibit highly regular intra-domain patterns. It also suggests that there might be better ways to quantify domain difficulty than loss variance alone.

Sensitivity Analysis
Ratio of training domains (M/K). We investigate the impact of different training domain ratios on our approach's performance. Figure 2 shows the Q(0.5) results for the base model DeepAR and for Cedar as we vary the training domain ratio from 0.2 to 0.8; the other metrics show a similar pattern. As the number of training domains increases, models tend to achieve better performance on the test domains. This could be because more training domains expose the models to a greater variety of data, which enhances their ability to generalize to unseen domains. With fewer test domains, the model may also have already seen similar data during training, leading to better generalization performance. Cedar generally performs better or shows competitive results in most cases. However, when the number of training domains is smaller (ratios 0.2 and 0.4), the base model performs more stably across datasets. This might be because a reduced number of training domains makes generalizing to unseen domains harder; in such scenarios, starting from the base model could be a safer option.

Values of γ. In Figure 3, we illustrate the impact of different values of γ on the regularization term R_DDD in Cedar by plotting the corresponding validation loss for all datasets. We observe that the optimal value of γ varies across datasets (typically achieving better results when less than 1), depending on the scale of the loss and the magnitude of the differences between domain losses. In datasets with substantial differences in losses between domains, a smaller γ is preferred to avoid optimizing the regularization term excessively at the expense of forecasting performance. Choosing an appropriate γ is crucial to strike a balance between domain generalization and accurate forecasting at the sample level.
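The trade-off controlled by γ can be written down directly; a minimal sketch (the function names are ours, and in practice evaluating each candidate γ requires retraining the model):

```python
def combined_loss(fcst_loss, reg_term, gamma):
    """Overall training objective: forecasting loss plus gamma-scaled regularizer."""
    return fcst_loss + gamma * reg_term

def select_gamma(candidates, val_loss_fn):
    """Pick the gamma whose trained model attains the lowest validation loss.

    val_loss_fn(gamma) -> validation forecast loss after training with that gamma.
    """
    return min(candidates, key=val_loss_fn)
```

A large γ on a dataset with big inter-domain loss gaps would let the regularizer dominate `combined_loss`, which matches the observation above that values below 1 tend to work better.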

Convergence Analysis
We analyze the convergence of Cedar applied to DeepAR by recording the forecast losses on the training and validation sets, along with the regularization term. In Figure 4, we plot the curves (R_DDD is multiplied by the hyperparameter γ) for the US-traffic and Favorita-cat datasets. Cedar trains easily with smooth convergence, as witnessed by the decrease in training and validation loss. On the US-traffic dataset, the regularization term continues to decrease, indicating that the regularization effectively guides the model to reduce performance discrepancies between domains and thus promotes better generalization. It also implies that the domains have enough similarity or overlap in their data distributions for the model to generalize across them. On Favorita-cat, we observe an increasing pattern in the regularization term, although Cedar still outperforms the other methods. This suggests that the regularization effectively guides the model to adapt to the differences between domains to some extent, even if it does not entirely eliminate the performance discrepancy (possibly due to the large differences between domains). The increasing pattern may also occur when the variance of within-domain losses becomes smaller during training.

Domain Performance Analysis
Cedar regulates the performance across domains during training, preventing overfitting within individual domains and ensuring good generalization to unseen domains. To assess the within-domain performance distributions, we analyze the Stock-volume dataset, which has the smallest number of domains and thus allows for better visualization. The results are presented in a boxplot (Figure 5), where the y-axis represents the loss value and the x-axis corresponds to different domains. Cedar achieves more even distributions for some training domains (e.g., domains 6 to 9), resulting in a more uniform loss distribution across domains, i.e., less underfitting and overfitting. Moreover, on the test domains, Cedar demonstrates notable performance improvements across all domains.
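The within-domain distributions summarized in Figure 5's boxplot can be computed from recorded per-sample losses; a small sketch (the dictionary layout is our assumption):

```python
import numpy as np

def per_domain_loss_stats(losses_by_domain):
    """Summarize within-domain loss distributions, as in the boxplot analysis.

    losses_by_domain: dict mapping domain id -> list of per-sample losses.
    Returns median and interquartile range per domain; a flat spread of
    medians across domains suggests less under-/over-fitting of any one
    domain.
    """
    stats = {}
    for dom, losses in losses_by_domain.items():
        arr = np.asarray(losses, dtype=float)
        q1, med, q3 = np.percentile(arr, [25, 50, 75])
        stats[dom] = {"median": med, "iqr": q3 - q1}
    return stats
```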

Computational Analysis
We conduct two experiments to evaluate the computational efficiency of Cedar, taking into account variations in both base forecasting model size and dataset size.
In the first experiment, we analyze the training time of Cedar in comparison with the base models DeepAR and WaveNet while varying the base model size. These experiments are performed on the synthetic dataset NT-P with default settings, e.g., a batch size of 64 and a training domain ratio of 0.8. Figure 6 displays the increase in training time (per epoch) of Cedar relative to the base model, using the average training time over three epochs as the final measurement. The additional overhead incurred by Cedar is very small, amounting to only a few seconds. Furthermore, this overhead exhibits a slight decreasing trend as the model size (i.e., the hidden state size) increases. Our generalization approach incurs nearly constant time overhead, demonstrating its suitability for larger models. These findings indicate that Cedar is well-suited for real-world applications with more complex data, which typically require large models for accurate forecasting.
In the second experiment, we investigate how Cedar performs on larger time series datasets. We extend the synthetic dataset NT-P by increasing the length of each domain's time series from 500 data points to 1k, 10k, and 100k data points. The training domain ratio (0.8) and the ratios of training, validation, and testing timestamps remain the same as in our main experiments. To accommodate the larger datasets, we increase the batch size to 1,024 and the hidden state size of the base model to 1,024. The results in Figure 7 show the percentage increase in training time (per epoch) for Cedar compared to the base model, again averaged over three epochs. Notably, when the base model is DeepAR, the percentage increase is consistently less than 9%. For WaveNet, there is some randomness, but the largest increase remains under 10%. In some cases, Cedar adds almost no additional time to the base model. These results highlight the efficiency of Cedar when dealing with larger datasets.
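The timing protocol used in both experiments (average over three epochs, percentage increase relative to the base model) is straightforward to script; a sketch with hypothetical `train_one_epoch` callables standing in for the actual training loops:

```python
import time

def avg_epoch_time(train_one_epoch, n_epochs=3):
    """Average wall-clock training time per epoch over n_epochs runs."""
    elapsed = []
    for _ in range(n_epochs):
        start = time.perf_counter()
        train_one_epoch()
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

def overhead_pct(base_epoch_time, cedar_epoch_time):
    """Percentage increase of the regularized model's epoch time over the base model's."""
    return 100.0 * (cedar_epoch_time - base_epoch_time) / base_epoch_time
```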

Beyond Our Assumptions
Cedar may not work effectively in scenarios that violate our data assumptions. To illustrate this, we present two negative examples where our method could encounter challenges.
Violating Assumption 1. We conduct an experiment using the UCI_air dataset. This dataset is built from the Beijing Multi-site Air-quality Dataset, which contains hourly air pollutant data from 12 nationally controlled air-quality monitoring sites from 2013 to 2017. Each monitoring site is considered a domain, and we use PM2.5 data from 2016-01-01 to 2016-09-10 for training, data from 2016-09-11 to 2016-12-01 for validation, and the rest of the data up to 2017-02-28 for testing. Due to a large number of missing values, we convert the data to daily values by averaging over the hours of each day. The historical window size is 28, and the prediction window is 7. In Figure 8(a), we observe a high degree of similarity and overlap across different domains, which violates Assumption 1 (ϵ_l is close to 0). The presence of these highly similar patterns also raises concerns about the suitability of applying a domain generalization model in this particular context. We show the comparison of our method with the two base models in Figure 8(b); the base models perform better. This finding emphasizes the importance of considering the shared patterns and differences between domains when designing effective domain generalization models (Assumption 1).
Violating Assumption 2.
Assumption 2 states that the time series data should not exhibit abrupt distribution shifts within each domain. To test this assumption, we apply a straightforward manipulation to the synthetic dataset NT-P, introducing random mean shifts within different segments while keeping other properties unchanged. A portion of the manipulated data, NT-P*, is presented in Figure 9(a), and the comparison results are shown in Figure 9(b). When the base model is DeepAR, the performance differences between Cedar and the base model are relatively minor, with the base model performing slightly better. However, with WaveNet as the base model, Cedar struggles to produce favorable results. When domain difficulties are considered in the regularization, the results become much less stable. This outcome can be attributed to the fact that the difficulty factors estimated during training do not transfer to unseen domains that exhibit substantial disparities. When time series data involve abrupt changes or significant distribution shifts (i.e., scenarios that contradict Assumption 2), some researchers [15,39] propose characterizing temporal distributions to segment time series based on distribution differences, thus establishing distinct domains. Their characterization method presents a potential solution to the limitations of our current approach. We leave further analysis of this potential enhancement for future research.
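The NT-P* manipulation as described (a random constant mean shift per segment, other properties unchanged) might be sketched as follows; the segment count and shift scale are our assumptions, not values from the paper:

```python
import numpy as np

def inject_mean_shifts(series, n_segments=5, shift_scale=3.0, seed=0):
    """Add a random constant offset to each contiguous segment of a series,
    creating abrupt within-domain distribution shifts (violating Assumption 2)."""
    rng = np.random.default_rng(seed)
    out = np.asarray(series, dtype=float).copy()
    # split the series into n_segments contiguous chunks
    bounds = np.linspace(0, len(out), n_segments + 1, dtype=int)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        out[lo:hi] += rng.normal(0.0, shift_scale)
    return out
```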

CONCLUSION
We have presented Cedar, a novel approach to the problem of domain generalization in time series forecasting. By incorporating predefined assumptions about cross- and within-domain patterns, we introduce two novel regularization methods to improve forecasting performance across different time series domains. Through comprehensive experiments on both synthetic and real-world time series datasets, we have systematically compared our method against several state-of-the-art approaches, demonstrating its effectiveness. Cedar is applicable to various forecasting models and potentially to non-time series regression problems. By improving overall forecasting accuracy across diverse domains, it can enhance decision-making processes and optimize resource allocation, offering practical benefits in various fields.
Cedar has demonstrated its effectiveness under two important assumptions about the data. When these assumptions are violated, however, its performance drops (Section 4.8). How can we overcome this limitation and achieve generalization in such cases? Another important direction for future work is to develop adaptive regularization techniques that can dynamically adjust to different time series characteristics and domain shifts.

REPRODUCIBILITY
The code and data used to produce the results in this article are available at https://github.com/songgaojundeng/cedar-dg

Fig. 1. Data visualization on partial domain data in the PN-T, Favorita-cat, US-traffic, and Stock-volume datasets. Each time sequence represents a domain.
Bolded values are the best scores for a column. Underlining indicates the method achieves the best performance when the base model is DeepAR or WaveNet. * indicates that our methods are significantly better than the base model, and indicates the method is significantly better than the second-best baseline given the base model (paired t-test based on Equation (10) with the N_i and denominator term removed, p-value < 0.05). ACM Trans. Knowl. Discov. Data, Vol. 18, No. 5, Article 113. Publication date: February 2024.

Fig. 2. Prediction performance on synthetic datasets with varying ratios of training domains.

Fig. 5. Forecast performance on training domains and test domains between the base model and Cedar on Stock-volume.

Fig. 6. Training time analysis on the NT-P dataset with varying model sizes.

Fig. 8. (a) Visualization of partial domain data on the UCI-air dataset, where each sequence represents a domain, and (b) comparison results of our method with the base model.

Fig. 9. (a) Visualization of partial domain data on the NT-P* dataset, where each sequence represents a domain, and (b) comparison results of our method with the base model.

Table 1. Important Notations and Descriptions

Table 2. Summary of Synthetic Datasets

Table 3. Summary of Real-world Datasets. (T, h) denotes the sizes of the historical and prediction windows.
Transportation. We use the U.S. Traffic Volume Data, which is openly available on the official website of the U.S. Department of Transportation. The traffic volume data are collected by state highway agencies. We use data in California from 2022-04 to 2022-11. The traffic volume is aggregated on a daily basis, considering both directions of travel, and the final values are divided by 1,000 to reduce the range of values. We use traffic volume data from 2022-04-01 to 2022-07-15 for training, data from 2022-07-16 to 2022-08-20 for validation, and the rest for testing.

Table 4. Forecasting Results of Range Accuracy Metrics on Synthetic Datasets

Table 5. Forecasting Results of Point Accuracy Metrics on Synthetic Datasets

Bolded values are the best scores for a column. Underlining indicates the method achieves the best performance when the base model is DeepAR or WaveNet. No pairwise significance tests are performed, because the point accuracy metrics are calculated at the group level.

Table 6. Forecasting Results of Range Accuracy Metrics on Real-world Datasets

Table 7. Forecasting Results of Point Accuracy Metrics on Real-world Datasets
Bolded values are the best scores for a column. Underlining indicates the method achieves the best performance when the base model is DeepAR or WaveNet. No pairwise significance tests are performed, because the point accuracy metrics are calculated at the group level.