Language family engineering with product lines of multi-level models

Abstract. Modelling is an essential activity in software engineering. It typically involves two meta-levels: one includes meta-models that describe modelling languages, and the other contains models built by instantiating those meta-models. Multi-level modelling generalizes this approach by allowing models to span an arbitrary number of meta-levels. A scenario that profits from multi-level modelling is the definition of language families that can be specialized (e.g., for different domains) by successive refinements at subsequent meta-levels, hence promoting language reuse. This enables an open set of variability options, given by all possible specializations of the language family. However, multi-level modelling lacks the ability to express closed variability regarding the availability of language primitives, or the possibility to opt between alternative realizations of a primitive. This limits the reuse opportunities of a language family. To improve this situation, we propose a novel combination of product lines with multi-level modelling to cover both open and closed variability. Our proposal is backed by a formal theory that guarantees correctness, enables top-down and bottom-up language variability design, and is implemented atop the MetaDepth multi-level modelling tool.


Introduction
Modelling is intrinsic to most engineering disciplines. Within software engineering, it plays a pivotal role in model-driven engineering (MDE) [Sch06]. This is a software construction paradigm where models are actively used to describe, analyse, validate, verify, synthesize and maintain the application to be built, among other activities [BCW17].
Models are built using modelling languages, which can be either general-purpose, like the UML [UML17], or domain-specific languages (DSLs) tailored to a specific concern [KT08, VBD+13]. In MDE, the abstract syntax of modelling languages is defined through a meta-model that describes the primitives that can be used in models one meta-level below. This modelling approach, which is the standard nowadays, constrains engineers to confine their models within one meta-level (the "model" level). Several researchers have observed that domain modelling can benefit from the use of more than one meta-level [AK08, dLGS14, FAGdC18, Fra14, IGSS18, GPHS06, MRS+18]. This way of modelling, called multi-level modelling [AK01] or deep meta-modelling [dLG10], yields simpler models in scenarios that involve the type-object pattern [AK08, dLGS14, MRB97]. Moreover, it permits defining language families (e.g., for process modelling) that can be specialized to specific domains (e.g., software process modelling, industrial process modelling) via instantiation at lower meta-levels [dLGC15]. Instantiation is an open variability mechanism that permits the customization of a language by specializing its primitives for a domain, or by adding new ones via so-called linguistic extensions [dLG10]. As an illustration, Fig. 1a shows a tiny process modelling language that defines the primitive TaskType, which is customized by instantiation in the lower meta-level for the software process modelling domain (Coding and Design). In turn, these two primitives could be instantiated in the meta-level below. However, multi-level modelling lacks support for expressing optionality of language primitives or alternative primitive realizations. This prevents wider language reuse and customization possibilities.
Software product lines (SPLs) encompass methods, tools and techniques to engineer collections of similar software systems using a common means of production [NC02, PBL05]. SPLs support closed variability, where a concrete software product is obtained by selecting among a finite set of available features (i.e., by setting a configuration). SPL techniques have been applied to language engineering to define product lines of languages representing a closed set of predefined language variants [GdLCS20, PAA+16, WHG+09]. As an example, Fig. 1b shows a process modelling language product line with two configurable features: actors and initial tasks. Selecting a configuration of features (in the figure, initial tasks but no actors) yields a language variant. Languages defined via a product line permit configuring the language primitives and their realization, but cannot be specialized for specific domains, because this requires open variability mechanisms.
To improve current language reuse techniques, we propose combining multi-level modelling and product lines. This allows the definition of highly configurable language families that profit from both open variability (as given by instantiation) and closed variability (as given by configuration). This way, this paper makes the following contributions: (i) a novel notion of multi-level model product line; (ii) a theory that enables deferring variability resolution to lower meta-levels in a flexible way, guaranteeing the correctness of interleavings of instantiation and configuration steps; (iii) techniques supporting both top-down and bottom-up variability design, based on the possibility of advancing variability extension to both instantiation and configuration; and (iv) an implementation of these ideas on top of the MetaDepth tool [dLG10].
This work builds on our FASE'20 article [dLG20], expanding it in three main ways, covering both usage and design of language product lines. First, we complete the presented theory with required definitions and lemmas for the composition of specialization steps (Definition 3 and Lemma 1; Definition 9 and Lemma 2; Definition 12 and Lemma 3). Then, we expand the theory to calculate the fully configured language (i.e., a language definition with no variability) that is equivalent to the language resulting from an arbitrary chain of language instantiations and partial configurations, as shown by Theorem 5.3. This is an important result, which shows that, in order to use a language family, users do not need to provide a full configuration before using the family. Instead, they can directly use it by instantiation, while variability can be resolved at later steps by providing partial configurations as needed. The second main extension of this paper facilitates designing language product lines. In [dLG20], we assumed that a language family needed to be constructed in a top-down way, where all variability is designed up-front. This was because the theory had no way to characterize extensions of the feature models and the associated model product lines (instead, the theory only covered specializations). However, in practice, product lines can be constructed both top-down and bottom-up [KC16]. Supporting these two options in our setting requires enabling exploratory modelling, where the language is instantiated, possibly partially specialized, and then new variants can be added, which the designer might like to include in the original language definition. For this purpose, we introduce new notions of extension morphisms (Definitions 14 and 15), along with compatibility conditions ensuring that variability extensions can be advanced to both instantiation and specialization (Lemma 4, Theorem 5.4 and Corollary 1).
Accordingly, we use the new concepts and the theory to describe flexible processes for using language families (Sect. 5.1), and for their top-down and bottom-up construction (Sect. 5.2). As a third main extension, we have expanded the tool with new functionality to support extension, and provide a walkthrough of its use for three activities: top-down creation of language families (Sect. 6.1), use of language families (Sect. 6.2) and bottom-up extension of language families (Sect. 6.3). Finally, we provide additional examples and explanations, as well as a more thorough comparison with related work.
The rest of this paper is organized as follows. Section 2 introduces multi-level modelling and identifies the challenges tackled in this paper. Section 3 provides a light formalization of multi-level modelling, which is extended with product line techniques in Sect. 4. Section 5 exploits the introduced concepts for: (a) the flexible use of language families, by proving that variability configuration can be deferred to model instantiation; and (b) the bottom-up and top-down construction of language families, by showing that variability extension can be advanced to both configuration and instantiation. Section 6 describes tool support. Section 7 discusses related research, and Sect. 8 ends with the conclusions and future work. An appendix includes the proofs of the theorems and lemmas in the paper.

Multi-level modelling: intuition and challenges
In this section, we first introduce the main concepts of multi-level modelling guided by an example (Sect. 2.1), and then we discuss some challenges when applying multi-level modelling to language engineering (Sect. 2.2).

Multi-level modelling by example
Multi-level modelling supports the definition of models using multiple meta-levels [AK08,dLGS14]. To understand its rationale, assume we would like to create a language to define commerce information systems (a standard example often used in the multi-level modelling literature [AK08,dLGS14]). The language should allow defining product types (like books or food) which have a tax, as well as products of the defined types (like Othello or banana) which have a price. Moreover, some product types may need to define specific properties, like the number of pages in books. Figure 2a shows a solution for this scenario using two meta-levels. In this solution, the meta-model uses the type-object pattern [MRB97] to emulate the typing relationship between Product and ProductType. In addition, classes Attribute and Slot permit defining properties in ProductTypes and assigning them a value in Products (called dynamic features pattern in [dLGS14]). The model in the bottom meta-level represents an information system for Kiosks. It defines the product types Book and Food, as well as the products sold by a particular kiosk: the Othello book and Bananas.
On reflection, one can realize that this solution emulates two meta-levels within one, as we convey with the dashed line in Fig. 2a. Therefore, we show an alternative multi-level solution using three meta-levels in Fig. 2b. The top level defines just ProductType, which is instantiated at the next level to create Book and Food product types, which in turn are instantiated at the bottom level to create specific products. Hence, elements in this approach are uniformly called clabjects [Atk97] across meta-levels (from the contraction of the words class and object), since they are types for the elements in the level below, and instances of the elements in the level above. For example, clabject Book is a type for Othello and an instance of ProductType. This multi-level solution leads to a simpler model (with fewer elements) because a clabject suffices to represent both ProductType and Product.
Clabjects may need to control the properties of their instances beyond the next meta-level. For example, the direct instances of ProductType need to have a tax, and the instances of its instances (which we call indirect instances) have a price. This is possible by the use of a deep characterization mechanism called potency [Atk97,AK01]. The potency is a natural number, or zero, which governs the instantiation depth of models, clabjects and features. Fig. 2b depicts the potency after the "@" symbol, and the elements that do not declare potency take it from their container. As an example, attribute ProductType.price takes its potency from ProductType, and this from the Commerce model, which declares potency 2. When an element is instantiated, the instance receives the potency of the element minus 1. Elements with potency 0 are pure instances and cannot be instantiated. For example, attribute ProductType.tax with potency 1 is instantiated into Book.tax and Food.tax, which therefore have potency 0 and can receive values. As model Commerce has potency 2, it can be instantiated at the two subsequent meta-levels.
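The potency mechanism can be made concrete with a small executable sketch. The `Clabject` class and its `instantiate` method below are illustrative assumptions (not MetaDepth's API): each instantiation decrements the potency of the clabject and of its still-instantiable attributes, while attributes that reached potency 0 hold values and are not passed on.

```python
# Illustrative sketch (not MetaDepth): potency-driven instantiation.
from dataclasses import dataclass, field

@dataclass
class Clabject:
    name: str
    potency: int
    attrs: dict = field(default_factory=dict)  # attribute name -> potency

    def instantiate(self, name: str) -> "Clabject":
        if self.potency == 0:
            raise ValueError(f"{self.name} has potency 0: pure instance")
        # The instance receives the potency of its type minus 1. Attributes
        # that already reached potency 0 hold values and are not passed on.
        attrs = {a: p - 1 for a, p in self.attrs.items() if p > 0}
        return Clabject(name, self.potency - 1, attrs)

# ProductType@2 with tax@1 and price@2, as in Fig. 2b
product_type = Clabject("ProductType", 2, {"tax": 1, "price": 2})
book = product_type.instantiate("Book")  # Book@1: tax@0 (valued), price@1
othello = book.instantiate("Othello")    # Othello@0: price@0 (valued)
```

Running the sketch reproduces the example: Book receives tax with potency 0 (ready to receive a value) and price with potency 1, while Othello, with potency 0, cannot be instantiated further.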
The potency of a model is often called its level [AK08]. Sometimes, it is not possible to foresee every possible property required by clabject instances several meta-levels below (like the number of pages in books), or we may need to introduce new primitives at lower levels (like a new clabject to model the authors of books). To handle those cases, multi-level modelling supports linguistic extensions [dLGC15]. These are elements (clabjects or features) with no ontological type, but with a linguistic type that corresponds to the meta-modelling primitive used to create them (see the Orthogonal Classification Architecture in [AK02] for more details). As an example, Book.numPages is a linguistic extension modelling a property specific to Book but not to other product types. Instead, in the two-level solution in Fig. 2a, the properties of specific ProductTypes need to be explicitly modelled by classes Attribute and Slot, leading to more complexity.

Improving reuse in multi-level modelling: some challenges
Multi-level modelling enables language reuse by supporting the definition of language families. For example, Fig. 3 shows at the top a generic process modelling language that can be used to define process modelling languages for different domains, like education, software engineering, or production engineering. The language is designed to span three levels. Level 2, at the top, contains the language definition, consisting of primitives (i.e., clabjects) to define task and gateway types. Level 1 contains language specializations for specific domains. The figure shows the case for the software engineering domain, which defines the task types Requirements and Design, and two gateway types: ReqDep, to transition from requirement tasks to either design or requirement tasks, and DesignDep, to declare dependencies between design tasks. Finally, level 0 contains domain-specific processes. The one in the figure declares two requirement tasks, one design task and one gateway. This example shows how instantiation permits customizing the language primitives offered at the top level for particular domains, and how linguistic extensions (e.g., attribute Design.style at level 1 in Fig. 3) allow adding domain-specific primitives and properties to language specializations. However, the following scenarios require further facilities that enable a better fit for particular domains, increase language reuse, and facilitate language family engineering.
• Alternative realizations. A language primitive may be realised in different ways, each more adequate than the others depending on the domain. For example, in Fig. 3, dependencies between task types are modelled by GatewayType. However, in domains that do not require distinguishing types of gateways or n-ary dependencies, a simpler representation of dependencies as a binary reference between TaskTypes is enough (see Fig. 4a). Unfortunately, multi-level modelling does not support this kind of variability, which enables alternative realizations of available language primitives.
• Primitive excess. Some offered language primitives may be unnecessary in simple domains. This can be controlled by not instantiating the primitive. However, withdrawing the needless primitives may be a better option, because it simplifies the language usage and avoids some problematic situations. First, if the needless primitive is an attribute (like initial in Fig. 4), then it becomes instantiated by force, polluting the model with unnecessary information. Second, some mandatory primitives may not be needed in certain domains. For example, in Fig. 4b, the language designer assumes that any TaskType (e.g., Requirements) will be performed by one ActorKind (e.g., Analyst or DomainExpert). However, there may be domains that do not involve actors (e.g., if tasks are automated), but the mandatory relation perfBy still forces having instances of ActorKind associated to instances of TaskType.
• Deferred variability resolution and exploratory modelling. The decision about the inclusion or not of a primitive may not be clear when the language is instantiated for a domain, but may be determined later at lower meta-levels. For example, in Fig. 4c, an engineer might hesitate whether, in addition to the expected task duration (attribute duration), tasks should store their real duration as well (attribute rDuration with potency 2); in such a case, the engineer may prefer deferring the decision to level 1 or 0, where the need becomes evident. In general, resolving all variability in a language family at the top level may be hasty in some cases, since the suitability of a primitive may become apparent only when a language has reached a certain specificity (i.e., at lower meta-levels). Moreover, enabling modelling before resolving every possible language variability option may be good for exploratory purposes.
• Top-down and bottom-up variability design. Language variability can be designed up-front, following a top-down process. However, some language variants may emerge when working in a specific domain, making it desirable to lift the discovered variant bottom-up [CdLG12] from a lower meta-level to the top one. Fig. 4d shows an example. The left side shows a process modelling language that does not support actors (level 2), but its refinement for software engineering requires software engineers (a kind of actor), so it defines the linguistic extension SoftwareEngineer (level 1). Since other domains may need to support actors, SoftwareEngineer could be lifted to the top level and renamed more generically to ActorKind, as the right side of Fig. 4d shows.

(Fig. 5 caption: Illustrating the violation of the well-formedness rules in Definition 1 for potency. a A clabject with higher potency than the model level. b A slot with higher potency than the one of its container clabject. c A reference with higher potency than its target clabject.)
To tackle these challenges, we incorporate variability into multi-level models taking ideas from SPLs. As a first step, in the next section we formalize multi-level models.

A formal foundation for multi-level modelling
We start by defining the structure of models equipped with deep characterization, which we call deep models. We represent models at different meta-levels in a uniform way, in order to cope with an arbitrary number of meta-levels. For simplicity of presentation, and because they are not essential to demonstrate our ideas, we omit inheritance, cardinalities and integrity constraints in our formalization.

Def. 1 (Deep model). A deep model is a tuple M = ⟨p, C, S, R, src, tar, pot⟩, where:

• p ∈ N₀ is called the model potency, or level.
• C, S and R are disjoint sets of clabjects, slots and references, respectively.
• src : S ∪ R → C is a function assigning the owner clabject to slots and references.
• tar : R → C is a function assigning the target clabject to each reference.
• pot : C ∪ S ∪ R → N₀ is a function assigning a potency to clabjects, slots and references, s.t.:
  1. ∀e ∈ C ∪ S ∪ R • pot(e) ≤ p
  2. ∀s ∈ S ∪ R • pot(s) ≤ pot(src(s))
  3. ∀r ∈ R • pot(r) ≤ pot(tar(r))

In the previous definition, we assign a level p to deep models. Elements in a deep model have a potency via function pot, which must satisfy three conditions: (1) the potency of an element should not be larger than the model level, (2) the potency of slots and references should not be larger than the one of their container clabject, and (3) the potency of references should not be larger than the one of the clabjects they point to. Please note that we use the term slot to refer uniformly to attributes (or their instances) at any meta-level.
Example Figure 5 illustrates the rationale of the three well-formedness rules in Definition 1 concerning potency. Specifically, each deep model violates one of the rules. Figure 5a depicts a deep model where the level (2) is lower than the potency of its contained clabject (3). This would be problematic two levels below, where the indirect instances of TaskType would have potency 1 but could not be instantiated, because their container model would have potency 0 and hence be non-instantiable. Similarly, Fig. 5b shows a slot with higher potency (3) than its container clabject (2). In this case, the indirect instances of the clabject would have potency 0 but contain a slot rDuration with potency 1. The slot should receive a value one level below, when reaching potency 0, but this is not possible because its container clabject cannot be instantiated further. Finally, in Fig. 5c, reference budget with potency 2 points to clabject Budget with potency 1. As a consequence, the reference can only be instantiated at the next level, regardless of its potency 2, due to the lower potency of Budget.
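The three rules translate directly into an executable check. The sketch below encodes a deep model's potencies and its src/tar maps as plain dictionaries; this encoding is an assumption for illustration, not part of the formalization.

```python
# Sketch of Definition 1's three potency rules; the encoding of a deep
# model as plain dictionaries is an assumption for illustration.

def well_formed(level, pot, src, tar):
    """pot: element -> potency; src: slot/reference -> owner clabject;
    tar: reference -> target clabject."""
    return (
        all(p <= level for p in pot.values())              # rule 1
        and all(pot[e] <= pot[c] for e, c in src.items())  # rule 2
        and all(pot[r] <= pot[c] for r, c in tar.items())  # rule 3
    )

# Fig. 5c: reference budget@2 points to clabject Budget@1 -> ill-formed
pot = {"TaskType": 2, "Budget": 1, "budget": 2}
assert not well_formed(2, pot, {"budget": "TaskType"}, {"budget": "Budget"})
```

The same function rejects the other two ill-formed models of Fig. 5 (a clabject exceeding the model level, and a slot exceeding its container clabject).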
Next, we define a general notion of mapping (a morphism) between deep models as a tuple of three (total) functions between the sets of clabjects, slots and references. Each morphism has a depth (a natural number or zero) controlling the distance between the levels of the involved models. We use two particular types of mappings: one represents the type relation between deep models at adjacent meta-levels (when the morphism depth is 1), and the other represents extensions of a deep model with linguistic extensions (when the depth is 0).

Def. 2 (D-morphism, type and extension). Given two deep models Mᵢ = ⟨pᵢ, Cᵢ, Sᵢ, Rᵢ, srcᵢ, tarᵢ, potᵢ⟩ (for i ∈ {0, 1}), a D-morphism of depth d ∈ N₀, written m : M₀ → M₁, is a tuple ⟨m_C : C₀ → C₁, m_S : S₀ → S₁, m_R : R₀ → R₁⟩ of total functions s.t.:

1. p₀ + d = p₁
2. ∀e ∈ X₀ • pot₀(e) + d = pot₁(m_X(e)) (for X ∈ {C, S, R})
3. Each function m_C, m_S, m_R commutes with functions srcᵢ and tarᵢ (see Fig. 6)

We call m a type D-morphism when d = 1, and an extension when d = 0 and it maps M₀ injectively into the (possibly larger) model M₁.

In Definition 2, condition 1 ensures that the D-morphism connects models of suitable levels (at a distance of d levels), condition 2 checks that the potency of elements in M₀ decreases according to the depth of the D-morphism, and condition 3 ensures that the D-morphism is coherent with the source and target of slots and references (just like in standard graph morphisms [EEPT06]). We use total functions to represent the type, which ensures that each element in a deep model has a type. Linguistic extensions are not typed, but they are modelled as an extension D-morphism of a (typed) deep model into a larger model. This avoids resorting to partial functions to represent the type, which would complicate the formalization [RdLG+14, WMR20]. Identity extensions map isomorphic deep models.
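The three conditions can likewise be checked mechanically. The sketch below reuses the dictionary-based model encoding (an assumption for illustration) and verifies a candidate depth-1 D-morphism; `duration1` stands in for the primed slot notation of the running example.

```python
# Sketch checking Definition 2's three conditions for a candidate
# D-morphism of depth d; models are (level, pot, src, tar) tuples of
# dictionaries, an encoding assumed for illustration only.

def is_d_morphism(m, d, M0, M1):
    lvl0, pot0, src0, tar0 = M0
    lvl1, pot1, src1, tar1 = M1
    if lvl0 + d != lvl1:                                  # condition 1
        return False
    if any(pot0[e] + d != pot1[m[e]] for e in m):         # condition 2
        return False
    if any(m[c] != src1[m[s]] for s, c in src0.items()):  # condition 3 (src)
        return False
    if any(m[c] != tar1[m[r]] for r, c in tar0.items()):  # condition 3 (tar)
        return False
    return True

# TaskType@2 with slot duration@2, and an instance Design@1 with duration1@1
M1 = (2, {"TaskType": 2, "duration": 2}, {"duration": "TaskType"}, {})
M0 = (1, {"Design": 1, "duration1": 1}, {"duration1": "Design"}, {})
m = {"Design": "TaskType", "duration1": "duration"}
assert is_d_morphism(m, 1, M0, M1)  # a valid type D-morphism (depth 1)
```

With a wrong depth, or with a slot potency that does not decrease by exactly d, the check fails.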
D-morphisms can be composed by composing the three mappings and adding their depths, as the next definition and lemma show. Composition of D-morphisms is necessary as a basis for the results of Sect. 5.

Def. 3 (D-morphism composition). Given two D-morphisms m : M₀ → M₁ of depth d and m' : M₁ → M₂ of depth d', their composition m' ◦ m : M₀ → M₂ is the tuple ⟨m'_C ◦ m_C, m'_S ◦ m_S, m'_R ◦ m_R⟩, with depth d + d'.

Lemma 1 (D-morphism composition yields a D-morphism). The composition m' ◦ m of two D-morphisms m and m' is a D-morphism.

Remark The composition of two (indirect) type D-morphisms is an indirect type D-morphism. The composition of two level-preserving D-morphisms is level-preserving, and it is an extension (resp. identity) if both D-morphisms are extensions (resp. identities).

Def. 4 (Multi-level model). A multi-level model is made of a root deep model and a sequence of pairs of instantiations and extensions (i.e., of type and extension D-morphisms). The length of this sequence is equal to the root model level, and the extensions are allowed to be identity extensions.

Example Figure 7 shows a multi-level model (a small excerpt of the one in Fig. 3) according to Definition 4. Slots are represented as rounded nodes, instead of inside the owner clabject box. Figure 3 hides the typed slots with potency bigger than 0, like Design.duration at level 1, but such instances do exist and are explicitly shown in Fig. 7 (see slot duration' in models M₁ and M'₁). The figure shows a clabject TaskType in the root model M'₀, its instance called Design in model M₁, a subsequent extension that adds a style slot to Design (model M'₁), an instantiation of it (model M₂), and an identity extension (model M'₂). Whenever a model does not include linguistic extensions, like M₂, we use the identity extension D-morphism. Since slot initial' in model M'₁ has potency 0, it is not instantiated in the model with level 0 (M₂). Finally, it would be possible to derive the (indirect) type of M₂ with respect to M'₀ by defining a construction akin to a pullback (in a category made of multi-level models and D-morphisms) that yields the part of M₂ typed by M₁ [Lan71].

Multi-level model product lines
In order to solve the challenges identified in Sect. 2.2, we extend deep models with closed variability options by borrowing concepts from product lines. We use feature models [KCH + 90] to represent the allowed variability.

Def. 5 (Feature model). A feature model FM = ⟨F, Φ⟩ consists of a set F of propositional variables, called features, and a satisfiable propositional formula Φ over the variables in F, specifying the valid feature configurations.
Example Figure 8 shows the feature model for the running example using (a) the standard feature diagram notation [KCH + 90], and (b) Definition 5. The feature model permits choosing if the process modelling language will have primitives to define actors (feature actors, cf. Fig. 4b), initial tasks and their enactment at level 0 (features initial and enactment, cf. Fig. 4c), as well as selecting whether gateways are to be represented either as references or objects (features simple and object, cf. Fig. 4a). The feature model includes the mandatory features ProcessLanguage, Gateways and Tasks as syntactic sugar to obtain a tree representation, but they are not needed in our formalization.
The selection of one option within the variability space offered by a feature model is done through a configuration. A configuration specifies sets of selected and discarded features, assigning the value true to the former and false to the latter. To enhance flexibility of use, we also support partial configurations where some features are not given any value (i.e., they are neither selected nor discarded). We will use partial configurations to allow deferring the resolution of some variability options to lower meta-levels.
Def. 6 (Configuration). Given a feature model FM = ⟨F, Φ⟩, a configuration of FM is a tuple C = ⟨F⁺, F⁻⟩, with F⁺, F⁻ ⊆ F and F⁺ ∩ F⁻ = ∅, s.t. Φ[F⁺/true, F⁻/false] is not equivalent to false. The configuration C is total if F⁺ ∪ F⁻ = F, and partial otherwise. The set of all configurations of FM is denoted by CFG(FM). Given two configurations C₀ = ⟨F₀⁺, F₀⁻⟩ and C₁ = ⟨F₁⁺, F₁⁻⟩, C₀ is smaller than or equal to C₁ (written C₀ ≤ C₁) if F₀⁺ ⊆ F₁⁺ and F₀⁻ ⊆ F₁⁻.

In the previous definition, F⁺ contains the selected features (i.e., those given the value true), F⁻ the discarded features (i.e., those given the value false), and F \ (F⁺ ∪ F⁻) is the set of features whose value has not been set. A configuration must be compatible with the feature model, so Definition 6 demands that the formula Φ of the feature model is not false after substituting the features in F⁺ by true and the features in F⁻ by false. If the configuration is total, then the condition entails that Φ must evaluate to true. The relation ≤ between configurations defines a partial order where total configurations are maximal elements, and the empty configuration (i.e., the configuration that does not select or discard any feature) is the minimal element.
Remark We sometimes use the term invalid configuration for a tuple C = ⟨F⁺, F⁻⟩ that does not satisfy the conditions in Definition 6.

Example Figure 8c shows an example of configuration, which selects the features ProcessLanguage, Gateways, Tasks and initial, and discards the feature actors. Since the features simple, object and enactment remain undefined, it is a partial configuration. The result of substituting the selected and discarded features by their values in the feature model formula is the following: Φ[{ProcessLanguage, Gateways, Tasks, initial}/true, {actors}/false] ≡ (¬simple ∧ object) ∨ (simple ∧ ¬object).

Next, we assign a level to feature models, and potencies to features, in order to control the level at which features should be assigned a truth value.
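Validity of a partial configuration amounts to a satisfiability check of the substituted formula, which for small feature sets can be decided by brute force. The sketch below makes that concrete; encoding the feature model formula as a Python predicate is an assumption made for illustration.

```python
# Brute-force sketch of Definition 6: a (partial) configuration is valid
# if the feature model formula is satisfiable once selected features are
# fixed to true and discarded ones to false.
from itertools import product

def valid_configuration(features, formula, selected, discarded):
    if selected & discarded:                      # F+ and F- must be disjoint
        return False
    free = sorted(features - selected - discarded)
    for values in product([True, False], repeat=len(free)):
        env = {f: True for f in selected}
        env.update({f: False for f in discarded})
        env.update(zip(free, values))
        if formula(env):                          # some completion satisfies it
            return True
    return False

# Gateway fragment of the running example: exactly one of simple/object
features = {"simple", "object"}
xor = lambda env: env["simple"] != env["object"]

assert valid_configuration(features, xor, {"object"}, {"simple"})  # total, valid
assert valid_configuration(features, xor, set(), set())            # empty (partial)
assert not valid_configuration(features, xor, {"simple", "object"}, set())
```

Note that the empty configuration is always valid for a satisfiable formula, matching its role as the minimal element of the partial order.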

Def. 7 (Deep feature model). A deep feature model is a tuple DFM = ⟨p, F, Φ, pot⟩, where ⟨F, Φ⟩ is a feature model, p ∈ N₀ is the model level, and pot : F → N₀ is a function assigning a potency to each feature, s.t. pot(f) ≤ p for each f ∈ F.
Next, we define a mapping between deep feature models, called F-morphism. Similar to D-morphisms (cf. Definition 2), F-morphisms have a depth that can be positive or 0. In addition, they include a configuration, and a mapping for the features excluded from the configuration (i.e., those without a value). This is necessary, since we want to represent specialization relations between the two feature models [TBK09] by means of the (partial) configuration of the morphism. We identify two special kinds of F-morphisms: one representing a type relationship between two feature models (where the morphism depth is 1 and the configuration empty), and the other expressing a specialization relationship between two feature models via a total or partial configuration (where the morphism depth is 0).

Def. 8 (F-morphism, type and specialization). Given two deep feature models DFMᵢ = ⟨pᵢ, Fᵢ, Φᵢ, potᵢ⟩ (for i ∈ {0, 1}), an F-morphism of depth d ∈ N₀, written m : DFM₀ → DFM₁, is a tuple m = ⟨C, m_F⟩, where C = ⟨F₁⁺, F₁⁻⟩ ∈ CFG(⟨F₁, Φ₁⟩) is a configuration, and m_F : F₀ → F₁ is an injective function with p₀ + d = p₁ and pot₀(f) + d = pot₁(m_F(f)) for each f ∈ F₀, s.t.:

1. m_F(F₀) = F₁ \ (F₁⁺ ∪ F₁⁻)
2. Φ₁[F₁⁺/true, F₁⁻/false] ≡ Φ₀[F₀/m_F(F₀)]

We call m a type F-morphism when d = 1 and C is the empty configuration, and a specialization when d = 0.
Definition 8 requires that the F-morphism depth fills the gap between the feature model levels, and between the potencies of the mapped features. FM₀ may have fewer features than FM₁, in case the configuration C assigns a value to the missing features with respect to FM₁. In particular, the injectivity condition of m_F, together with the requirement m_F(F₀) = F₁ \ (F₁⁺ ∪ F₁⁻), ensures that only the features left undefined by C are mapped from FM₀. Moreover, when the configuration C assigns a value to some feature, the definition requires that the formula Φ₁, after replacing the features in C by their value true or false, is equivalent to Φ₀ after replacing the features in F₀ by their mapping in F₁. This corresponds to a (partial) evaluation of the formula Φ₁ as a result of a feature model specialization.
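Both conditions can be checked by brute force over truth assignments. The sketch below uses the gateway features of the running example; encoding the formulas as Python predicates is an assumption for illustration.

```python
# Sketch of Definition 8's conditions 1 and 2 for a specialization
# (depth 0) with configuration <{object}, {simple}>.
from itertools import product

def equivalent(f, g, variables):
    """Check f(env) == g(env) for every assignment of the listed variables."""
    return all(
        f(env) == g(env)
        for values in product([True, False], repeat=len(variables))
        for env in [dict(zip(variables, values))]
    )

F1 = {"Gateways", "simple", "object"}
selected, discarded = {"object"}, {"simple"}

# Condition 1: the specialized feature set keeps exactly the undecided features.
F0 = F1 - (selected | discarded)
assert F0 == {"Gateways"}

# Condition 2: Phi1 with object=true, simple=false must be equivalent to Phi0.
phi1 = lambda env: env["Gateways"] and (env["simple"] != env["object"])
phi1_fixed = lambda env: phi1({**env, "object": True, "simple": False})
phi0 = lambda env: env["Gateways"]
assert equivalent(phi0, phi1_fixed, ["Gateways"])
```

The equivalence check performs exactly the partial evaluation the definition describes: fixing the configured features and comparing the residual formulas over the remaining ones.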
Example Figure 9 shows two F-morphisms, with tp a type and sp a specialization. F-morphism tp : FM₁ → FM₂ relates two deep feature models FM₁ and FM₂, where the level and potencies of FM₁ are one less than those in FM₂, and the formulae are the same modulo feature renaming. Specialization sp : FM₀ → FM₁ has depth 0 and partial configuration C = ⟨F⁺ = {object}, F⁻ = {simple}⟩. Hence, the levels and potencies are maintained, but the feature set F₀ is decreased by removing from F₁ the features that appear in C. According to condition 1 in Definition 8, {Gateways} = {Gateways, simple, object} \ ({object} ∪ {simple}). According to condition 2 in the definition, the formula Φ₀ is equivalent to replacing object by true and simple by false in Φ₁.

F-morphisms are composable by adding their depths and constructing the union of the positive (resp. negative) features in the configurations, as the next definition and lemma show. Composition of F-morphisms is necessary as a basis for the results of Sect. 5.1.

Def. 9 (F-morphism composition). Given two F-morphisms m = ⟨C = ⟨F⁺, F⁻⟩, m_F⟩ : DFM₀ → DFM₁ of depth d and m' = ⟨C' = ⟨F'⁺, F'⁻⟩, m'_F⟩ : DFM₁ → DFM₂ of depth d', their composition m' ◦ m : DFM₀ → DFM₂ has mapping m'_F ◦ m_F, depth d + d', and configuration ⟨m'_F(F⁺) ∪ F'⁺, m'_F(F⁻) ∪ F'⁻⟩.

Lemma 2 (F-morphism composition yields an F-morphism). The composition m' ◦ m of two F-morphisms m and m' is an F-morphism.
Proof. By checking that the composition satisfies the five requisites for F-morphisms in Definition 8: correct depth, injectivity of the composed mapping, validity of the composed configuration, and conditions 1 and 2. See the proof details in the "Appendix".
Example In Fig. 9, the composition of sp with tp results in the F-morphism tp ◦ sp, which has depth 1 and configuration C = ⟨F⁺ = {object}, F⁻ = {simple}⟩. This F-morphism models the combined action of instantiation and variability specialization, but it is neither a type nor a specialization according to Definition 8.

Next, we combine deep models and deep feature models into product lines of deep models, where each element of the deep model receives a presence condition (PC): a propositional formula over the features of the deep feature model.

Def. 10 (Deep model product line). A deep model product line (PL) is a tuple PL = ⟨DFM, M, φ⟩, where DFM = ⟨p, F, Φ, pot_F⟩ is a deep feature model, M = ⟨p, C, S, R, src, tar, pot⟩ is a deep model with the same level p, and φ is a function assigning a PC (a propositional formula over the features in F) to each element of M, s.t.:

1. φ(s) ⇒ φ(src(s)) for each s ∈ S ∪ R
2. φ(r) ⇒ φ(tar(r)) for each r ∈ R
3. ∀e ∈ C ∪ S ∪ R, ∀f ∈ Var(φ(e)) • pot_F(f) ≤ pot(e)

In the previous definition, we use function Var to return all variables (i.e., all features) within a propositional formula. Intuitively, given a configuration, we can derive a product (a deep model) of the PL by deleting the model elements whose PC evaluates to false when substituting its variables by their value true or false. To avoid dangling edges in product deep models, Definition 10 requires the PC of slots and references not to be weaker than the PC of their owning clabject (condition 1), and the PC of references not to be weaker than the one of their target clabject (condition 2). In addition, the variability of an element must be resolved in a level that contains the element or an instance of it. To this aim, condition 3 requires the potency of an element not to be smaller than the potency of the variables within its PC.

Example Figure 10 shows a deep model PL for process modelling languages. The left compartment contains the deep feature model, the one to the right contains the deep model, and the PCs are represented between square brackets close to the deep model elements they are mapped into. If an element does not specify a PC (like TaskType), then its PC is assumed to be true. This deep model PL permits two alternative realizations for gateways: either as the reference next, if the feature simple is selected, or as the clabject GatewayType, if the feature object is selected instead. This variability needs to be resolved before instantiating the language for a specific domain, as features simple and object have potency 0.
The PL also offers the choice of adding or not the primitive ActorKind to the language by means of the feature actors; since this feature has potency 1, this decision can be taken either before specializing the language, or at level 1 to enable exploratory modelling. Finally, the PL allows selecting whether tasks can be initial or hold enactment information. By condition 3 in Definition 10, feature initial in the feature model cannot have potency 2, because the feature is used in the PC of attribute TaskType.initial, which has potency 1. The feature model depicts features ProcessLanguage, Gateways and Tasks in colour and without a potency; this is because these features are mandatory (i.e., true in any valid configuration), and while the figure shows them to obtain a tree-like feature model, the formalization of the example does not include them.
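The derivation intuition behind Definition 10 can be sketched in a few lines of Python. The encoding (PCs as boolean functions, elements as a name-to-(PC, owner) map) is ours for illustration, not MetaDepth's:

```python
# Minimal sketch: deriving a product of a deep model PL by evaluating
# presence conditions (PCs) under a total configuration, in the spirit
# of Definition 10. Names are illustrative.

def eval_pc(pc, config):
    """Evaluate a PC (a function over a feature assignment) under a
    configuration mapping feature names to True/False."""
    return pc(config)

def derive_product(elements, config):
    """Keep only the elements whose PC evaluates to True.
    `elements` maps element names to (pc, owner) pairs; an element
    whose owner was dropped is dropped too (no dangling edges)."""
    kept = {name for name, (pc, _) in elements.items() if eval_pc(pc, config)}
    # Conditions 1-2 of Definition 10 guarantee that an owner is never
    # dropped while one of its slots/references is kept; we re-check here.
    return {name for name in kept
            if elements[name][1] is None or elements[name][1] in kept}

# The process-language example of Fig. 10: gateways realized either as
# a reference `next` (feature simple) or a clabject GatewayType (object).
language = {
    "TaskType":    (lambda c: True,        None),
    "next":        (lambda c: c["simple"], "TaskType"),
    "GatewayType": (lambda c: c["object"], None),
    "ActorKind":   (lambda c: c["actors"], None),
}
cfg = {"simple": False, "object": True, "actors": True}
print(sorted(derive_product(language, cfg)))
# → ['ActorKind', 'GatewayType', 'TaskType']
```

Selecting object instead of simple drops the next reference; its owner TaskType survives, so no dangling edges arise.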
Next, we introduce mappings between deep model PLs (called PL-morphisms) as a tuple of morphisms between their constituent deep models and deep feature models. As in the previous cases, we are interested in type morphisms, linguistic extensions, and specializations of deep model PLs via a (partial) configuration.

Def. 11 (PL-morphism, type, extension, specialization). Given two deep model PLs
Remark There is no condition on the equality of depths of m_D and m_F, as M0 and DFM0 have the same level, and in turn, the levels of M1 and DFM1 are equal. The condition for PL-morphisms demands that the PCs in the deep model M0 are modified according to the selection of features in the configuration C of m_F. In addition, in specialization PL-morphisms, M0 should contain just the elements whose PC is not false after substituting the features in F+ by true, and those in F− by false. Therefore, the definition of specialization PL-morphism requires that only the elements in M1 whose PC is not false after substituting the features by their values are mapped from M0, and that this mapping m_D is injective. Moreover, by Definition 10 of deep model PL, no element in M0 can have a PC that is false.
Other kinds of PL-morphisms are possible, for example, adding features to a feature model in the same or lower levels to increase its variability. We will introduce this possibility in Sect. 5.2.

Example Figure 11 shows four PL-morphisms (tp, tp′, sp, sp′) and a function ex, which fails to be a PL-morphism. Both tp and tp′ are types: they relate models at adjacent levels, where one is an instance of (typed by) the other. Types always have depth 1 and use the empty configuration ⟨F+ = ∅, F− = ∅⟩ (cf. Definition 8), and so, a model element and its instances have the same PC (see, e.g., ActorKind and its instances SoftEng and UIDsgner). Both sp and sp′ are specialization PL-morphisms. This is so because they preserve levels and potencies, and map injectively the model elements whose PC does not evaluate to false when applying the configuration C. Actually, since the configuration C of both PL-morphisms is total, the PC of the elements in DM3 and DM2 evaluates to true, and hence, these models have no more closed variability options to configure (i.e., they are final products of the PL).
The figure also shows a deep model DM4, which is an attempt to extend DM1 by a linguistic extension made of the clabject Skill and its incoming reference exp. However, the result is not a valid deep model PL, as the PC of exp (true) is weaker than the PC of its owner clabject SoftEng (actors). This makes ex fail to be a PL-morphism. DM4 can become a deep model PL by adding the PC actors to exp and Skill. In such a case, morphism ex (with empty configuration) would become a PL-morphism.
Finally, we define PL-morphism composition, and show that it leads to a valid PL-morphism.

Lemma 3 (PL-morphism composition yields a PL-morphism). Given PL-morphisms
Proof. For proving the first part of the lemma, we use Lemmas 1 and 2 (composition of D-and F-morphisms), and then check the condition in Definition 11. For the second part, we check that the resulting morphism is injective, level-preserving and that the co-domain keeps only the elements with non-false PC. See proof details in "Appendix".

Engineering and using language families via multi-level model product lines
In this section, we apply and extend the theory presented so far to cover two scenarios in language family engineering. The first one (Sect. 5.1) is the use of a deep model PL to represent a language family with variability, which is specialized via instantiation and (partial) configurations. The second one (Sect. 5.2) covers the creation process of a language family with variability, which can be done either top-down (by working at the top level to incrementally extend the language variability) or bottom-up (by pulling up linguistic extensions and variability options to upper levels).

Usage of a language family
A deep model PL can be used as a language family with variability. Figure 12 illustrates this usage scenario. The deep model with variability at the top describes a language family. This language is created by a language family designer in step 1. While the figure depicts a language family with depth 3, our framework is general and supports any depth.
Then, the figure depicts two scenarios to the left and right. In the branch (a) to the right, a DSL designer (labelled DSL-1 designer) customizes the language family by giving a total configuration (step 2a). This resolves all "closed" variability options offered by the language family definition before using the language. Then, the DSL designer instantiates the language to create the DSL-1 meta-model (step 3a), which DSL-1 users can instantiate to build models (step 4a). Compared with a standard software language engineering process, here the DSL designer uses a language family meta-model as a basis to create DSL-1, instead of using a meta-modelling language like the OMG's Meta-Object Facility [MOF16]. The advantage is that the language family meta-model contains relevant primitives for the DSL scope (e.g., TaskType, ActorKind), which do not need to be invented anew, but specialized for the domain via instantiation. Moreover, the language family meta-model can define services like transformations or code generators, which can be reused for every DSL of the family [dLGC15].

The branch (b) to the left illustrates a more flexible usage scenario, where some "closed" variability options are not resolved at the top level, but later at lower levels, including level 0. In this case, a DSL designer (labelled DSL-2 designer) specializes the language family definition by providing a partial configuration (step 2b), and then instantiates the resulting language to define the DSL-2 meta-model (step 3b). However, in this case, some variability options remain open at level 1. Hence, the DSL users can specialize the language further according to their needs (step 4b), and then use it to create models (step 5b). In the figure, some variability remains open at level 0, meaning that the DSL users can create models with variability and resolve this variability later (step 6b).
We need to tackle two issues to properly realize both scenarios. The first one relates to derivation. Since our theory models configurations by means of specialization morphisms, the question is whether, for any (total or partial) configuration, we can find a corresponding valid specialization morphism. This enables steps 2a, 2b, 4b, and 6b in Fig. 12. The second one pertains to scenario (b). Since we want to allow deferring the variability resolution to lower levels, the question is whether there is always an equivalent scenario (a) where all the variability is resolved in a first step. If so, this entails that the expressiveness of a language family with variability is independent of how it is used.
We first tackle the issue concerning derivation. When the configuration C of a specialization PL-morphism sp : DM0 → DM1 is total, DM0 is a product of DM1 with no variability options to choose from, being equivalent to a deep model (without the PL part, cf. Definition 1). This is so because the feature model would be empty, and all PCs of the model elements would be true. However, the question remains whether, for any valid configuration, such a specialization PL-morphism exists and is unique; Theorem 5.1 answers this in the affirmative.

Proof. We construct a deep model PL DM′, where its model M′ has the same level as M, and contains the elements of M with non-false PC. Similarly, the feature model of DM′ is restricted to the features that have not been set by C (i.e., those remaining undefined), and its formula is the result of evaluating DFM's formula on configuration C. Then, we prove that such a deep model PL is valid according to Definition 10. Finally, we build a specialization PL-morphism from DM′ to DM, showing that it fulfils Definition 11 and is unique. See proof details in the "Appendix".

We are now ready to characterize the process of derivation from a deep model PL via a configuration, using specialization PL-morphisms and the results of Theorem 5.1.
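The construction in this proof can be illustrated with a small Python sketch (the encoding is ours, not part of the formalization): restrict the feature model to the features left unset by the partial configuration, and partially evaluate its formula.

```python
# Hedged sketch of the construction behind Theorem 5.1: given a partial
# configuration, the derived deep model PL keeps only the undefined
# features, and the feature-model formula is partially evaluated.

def partial_eval(formula, features, config):
    """Return the features not fixed by `config`, plus the residual
    formula of `formula` (a function over a full feature assignment)
    that depends only on those remaining features."""
    remaining = [f for f in features if f not in config]
    def residual(assignment):
        full = dict(config)      # fixed features from the configuration
        full.update(assignment)  # plus a choice for the remaining ones
        return formula(full)
    return remaining, residual

# Feature model of the running example: simple and object are alternatives.
features = ["simple", "object", "actors"]
fm = lambda a: a["simple"] != a["object"]   # exactly one gateway style
remaining, res = partial_eval(fm, features, {"simple": False, "object": True})
# Every completion of this partial configuration remains valid:
print(remaining, all(res({"actors": v}) for v in (True, False)))
# → ['actors'] True
```

With the gateway choice already made, only actors remains open, and both of its completions satisfy the residual formula.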

Def. 13 (Derivation). Given a deep model PL
there is a specialization PL-morphism sp : DM′ → DM using any (total, partial) configuration C of DFM. DM′ is called a (total, partial) product of DM.
Next, we look into the second issue, which is the soundness of deferring the configuration of an element after it is instantiated. The question is whether, in every situation that allows configuring an element after its instantiation, we obtain the same result by resolving the element variability first and then instantiating. This result is important as, regardless of the order in which configurations and instantiation are performed, we can calculate the language that results from applying the configurations as the first step, by advancing the configuration steps over the instantiations.
The next theorem captures the fact that if we can instantiate and then configure, then we obtain the same result if we configure first and then instantiate, i.e., the diagram in Fig. 13 commutes.

Theorem 5.2 (Specialization can be advanced to instantiation). Given three deep model PLs
Proof. We use Theorem 5.1 to construct a deep model PL DM3 and a specialization morphism sp′ : DM3 → DM0, using the configuration of the specialization morphism sp. Then, a well-defined, unique type PL-morphism from DM2 to DM3 can be constructed by restricting tp. See proof details in "Appendix".

Remark
The converse is not true in general, that is, instantiation cannot always be advanced to specialization. The reason is that type morphisms to features with potency 0 are disallowed, and so they must be configured first.
Example Figure 11 shows a deferred configuration. Deep model PL DM0 is instantiated into DM1, and then specialized using the configuration C = ⟨F+ = {}, F− = {actors}⟩ to yield DM2. As the figure shows, we obtain the same result by first specializing DM0 using C, which yields DM3, and then instantiating DM3 into DM2. Deep model PL DM3 is relevant since it corresponds to the fully configured language (i.e., with no unresolved variability) employed to build DM2.
Finally, we use Theorem 5.3 to ensure that, given any arbitrary chain of specializations (via partial configurations) and instantiations, we can calculate the fully configured language by applying all configurations in one step. This allows going from scenario (b) to scenario (a) in Fig. 12 by iterating the construction of Theorem 5.2 (cf. Fig. 13).

Theorem 5.3 (Equivalent fully configured language). Given a chain of PL-morphisms:
where each tp i is a unique type PL-morphism and each sp i is a specialization PL-morphism, there is: 1. a unique deep model PL DM FC ∈ Der (DM 0 ), called fully configured language, and 2. a specialization PL-morphism sp 0 : DM FC → DM 0 with a configuration that selects and discards the same features as the configurations of sp 2 , ..., sp n+1 , such that DM n+1 is an (indirect) instance of DM FC .
Proof. This proof uses Theorem 5.2 to advance specializations over type PL-morphisms, and the fact that PL-morphisms can be composed. See proof details in "Appendix".

Example Figure 14 shows an example of calculating the equivalent fully configured language (DM FC) of the chain DM0 ← DM1 ← DM2 ← DM3 ← DM4. The diagram depicts a scenario where the language family meta-model (DM0) is instantiated for the industrial process domain (DM1), and two instances of TaskType (Laminate and Mill) and one of ActorKind (Operator) are defined. Subsequently, the designer decides to discard ActorKinds from the language, since both task types are automated. This is done by the specialization morphism sp2, whose configuration sets feature actors to false, and the result is DM2. Then, this industrial process modelling language description DM2 is instantiated into DM3. At this point, the modeller realizes that the real task duration (attribute rDuration) is always the same as the expected task duration (attribute duration), since tasks are automated. Hence, a specialization sp4 that sets feature enactment to false is performed, yielding model DM4. DM4 does not have any variability options left to configure.

Now, we use Theorem 5.3 to calculate the equivalent fully configured language (DM FC) in the figure. This is the indirect deep model type that results from performing all the specializations before any instantiation, and corresponds to the language definition without variability needed to create DM4. First, we use Theorem 5.2 to build DM FC 1. This is the deep model type of DM4 without variability options at level 1. Then, we use the composed morphism sp2 • sp4 to iterate the same construction and yield DM FC. Finally, we compose tp1 • tp3 to obtain an indirect type morphism DM4 → DM FC.
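The essence of this construction is that the configurations of a chain of specializations can be merged into one and applied to the top-level PL in a single step. A minimal sketch, with our own encoding of configurations as pairs of feature sets:

```python
# Illustrative sketch: composing the configurations of a chain of
# specializations (union of selected and discarded feature sets), as
# when computing the fully configured language of Theorem 5.3.

def compose_configs(configs):
    """Union the ⟨F+, F−⟩ pairs of a chain of specializations; the
    chain is only well formed if no feature ends up both selected
    and discarded."""
    f_plus, f_minus = set(), set()
    for plus, minus in configs:
        f_plus |= plus
        f_minus |= minus
    assert not (f_plus & f_minus), "inconsistent chain of specializations"
    return f_plus, f_minus

# sp2 discards actors, sp4 discards enactment (as in Fig. 14):
fp, fm = compose_configs([(set(), {"actors"}), (set(), {"enactment"})])
print(sorted(fp), sorted(fm))   # → [] ['actors', 'enactment']
```

Applying the composed configuration to DM0 in one step yields the same fully configured language as interleaving the two specializations with the instantiations.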
For clarity, note that models DM 1 , DM 2 and DM FC 1 explicitly show the attributes rDuration and/or duration, but these are normally hidden in practical tools.

Engineering a language family with open and closed variability
In this subsection, we turn our attention to how language families are constructed with our approach, based on the notion of extension. As is common in standard SPL practice [KC16], we consider two processes, called top-down and bottom-up, depicted in Fig. 15.
In the top-down approach, shown in branch (a) to the right of the figure, the variability options are defined up-front together with the language meta-model. This requires support for extension to increase both the feature model and the meta-model of the language family (step with label 2a). For this purpose, we will define a new kind of morphism between deep model PLs to represent such extensions.
In the bottom-up approach, shown in branch (b) to the left of the figure, concrete language instantiations at low levels guide the creation or extension of the language family definition at the top level. For example, the language family designer may instantiate a draft version of the language family meta-model for exploratory modelling (step 2b), and then realize that the given domain requires adding linguistic extensions to the instance or extending the feature model with additional variability options (step 3b). If such extensions are deemed general, a lifting process can promote them one level up, in the figure from level 1 to level 2 (step 4b). This way, the original language family definition becomes extended. Moreover, if the designer specializes the language family via a configuration before performing the extension at level 1, then we need to advance the extension to the specialization to incorporate the new variability to the top level. For this purpose, we will have to characterize compatibility conditions between extension and specialization. While the proposed bottom-up techniques may enable the creation of a language family from scratch out of a set of existing individual sample languages, we will tackle this scenario in future work.
In the remainder of this section, we first introduce extension morphisms to enable increasing the variability of deep model PLs. Then, we define mechanisms to advance extensions over specializations and instantiations, in order to enable the bottom-up construction of deep model PLs.

We start by defining extension morphisms (EF-morphisms) between deep feature models. Since we aim to interleave specializations and extensions, we allow feature model extensions only if they preserve all (partial) configurations of the feature model. Definition 14 formalizes this intuition.
Def. 14 (EF-morphism). Given two deep feature models DFM0 and DFM1 with the same level, a variability extension deep feature model morphism (EF-morphism for short), written m_e : DFM0 ⇒ DFM1, is an injective set morphism m_e : F0 → F1 subject to two conditions. Condition 1 preserves the potencies. Condition 2 ensures that any (partial) configuration ⟨F0+, F0−⟩ of FM0 can be extended to a (partial) configuration of FM1 by expanding the sets F0+ and F0−. Hence, modulo feature renaming by m_e, we have C0 ≤ C1. In addition, condition 2 prevents extending invalid configurations of FM0 to valid configurations of FM1.
Example Figure 16 shows examples of allowed and disallowed extensions of deep feature models. In Fig. 16a, the deep feature model DFM0 is extended with a new optional feature called enactment. Morphism m1 is a valid extension, since no invalid configuration of DFM0 can be extended to a valid one of DFM1, and every valid configuration of DFM0 can be extended to a valid one of DFM1. However, morphism m2 in Fig. 16b is not a valid EF-morphism. In this case, the extension attempt adds an optional feature called enactment and makes the existing feature initial mandatory. This is not a valid EF-morphism since configurations of DFM2 making initial false (⟨F+ = {}, F− = {initial}⟩) are valid, but cannot be extended to valid configurations of DFM3.

Fig. 18. Examples of advancing EF-morphisms to F-morphisms. a Advancing through a specialization F-morphism. b Advancing through a type F-morphism.
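Since deep feature models have finitely many features, the condition of Definition 14 can be checked by brute force over total configurations. A hedged sketch, with our own encoding of feature-model formulas as boolean functions:

```python
from itertools import product

# Brute-force check of the EF-morphism condition of Definition 14:
# every valid configuration of the old feature model must extend to a
# valid one of the extended model, and no invalid one may become valid.

def valid_extension(old_feats, old_fm, new_feats, new_fm):
    added = [f for f in new_feats if f not in old_feats]
    for values in product([True, False], repeat=len(old_feats)):
        old_cfg = dict(zip(old_feats, values))
        # all ways of completing old_cfg with the added features:
        extensions = [dict(old_cfg, **dict(zip(added, vs)))
                      for vs in product([True, False], repeat=len(added))]
        if old_fm(old_cfg) and not any(new_fm(c) for c in extensions):
            return False          # a valid configuration was lost
        if not old_fm(old_cfg) and any(new_fm(c) for c in extensions):
            return False          # an invalid configuration became valid
    return True

# Fig. 16a: adding an optional feature enactment is a valid extension...
m1 = valid_extension(["initial"], lambda c: True,
                     ["initial", "enactment"], lambda c: True)
# Fig. 16b: ...but additionally making initial mandatory is not:
m2 = valid_extension(["initial"], lambda c: True,
                     ["initial", "enactment"], lambda c: c["initial"])
print(m1, m2)   # → True False
```

The check is exponential in the number of features, so it is only a specification of the condition, not an efficient decision procedure.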

Remark
We cannot use the F-morphisms from Definition 8 to represent extensions, since F-morphisms include a (partial) configuration modelling a specialization. For example, in the EF-morphism m1 in Fig. 16a, this would imply being able to assign a true or false value to feature enactment, which is not possible because the feature does not exist in DFM0. Moreover, condition 1 in Definition 8 of F-morphisms requires DFM0 and DFM1 to have the same feature set, and condition 2 requires them to have equivalent formulae modulo their partial evaluation using the morphism configuration.

Next, we need to check that EF-morphisms can be advanced over specialization F-morphisms (to enable advancing variability extensions over configurations) and over type F-morphisms (to allow lifting variability extensions to upper levels). Lemma 4 formalizes this. The advancement of extensions through type F-morphisms is also possible. Figure 18b shows an example, where the potency of feature enactment in DFM ext 0 is obtained by adding the depth of morphism t to the potency of the feature in DFM ext.

Def. 15 (EPL-morphism). Given two deep model PLs
Example Figure 19 shows an example EPL-morphism m. On the one hand, its constituent EF-morphism expands the feature model with an additional optional feature enactment. On the other hand, its D-morphism adds a new attribute rDuration to clabject TaskType. The PC of attribute initial is refined to initial∧enactment, which the condition in Definition 15 allows since initial∧enactment ⇒ initial. The new attribute rDuration is assigned a PC, which is also allowed since the attribute is not mapped from DM0.
Remark We cannot use standard PL-morphisms to represent EPL-morphisms, as the former, but not the latter, include a configuration. In the example of Fig. 19, the configuration could assign the value false to enactment to emulate that DM0 does not define attribute rDuration, but this attribute could define any PC (e.g., enactment∨initial) that may not evaluate to false. Moreover, the PC of initial in DM1 is initial∧enactment, which does not evaluate to initial when substituting enactment by false.
We are now ready to state the main results of this subsection. First, Theorem 5.4 states that EPL-morphisms can be advanced to (both type and specialization) PL-morphisms under certain conditions. Then, Corollary 1 shows that these conditions suffice for type PL-morphisms, while specialization PL-morphisms require an additional condition guaranteeing the compatibility of the extension with the configuration chosen for the specialization.

Theorem 5.4 (EPL-morphisms can be advanced to PL-morphisms)
Let m = ⟨m_D, m_F⟩ : DM0 → DM1 be a PL-morphism and m_e = ⟨m_De, m_Fe⟩ : DM0 → DM ext an EPL-morphism satisfying the two conditions explained below.

Proof. We first construct the deep model PL DM ext 0, where the model part is built by a construction similar to a pushout in graphs, and the deep feature model is built as in the proof of Lemma 4. Then, the advanced morphisms are constructed and shown to commute. See proof details in "Appendix".
In the previous theorem, condition 1 requires that the elements in DM0 that are mapped to the same element in DM1 have the same PC in DM ext. In addition, condition 2 forbids the extension from modifying the PC of clabjects, to avoid dangling references and slots when advancing the extension morphism.

Proof. Regarding 1), by construction m and m′ have the same depth, and so if m is a type, so is m′. For 2), in addition, we need to check injectivity and the condition on the PCs as per Definition 11. See details in "Appendix".

Corollary 1 (Preservation of type and specialization PL-morphisms). Let
Example Figure 21 illustrates the advancement of EPL-morphisms to PL-morphisms. In particular, Fig. 21a shows an example of an extension that is advanced over an instantiation. We start from a situation where DM1 has been instantiated into DM0, which has two instances of TaskType; next, DM0 has been extended into DM ext by adding a new feature enactment and a linguistic extension rDuration to both task types. Then, by using Theorem 5.4, we build model DM ext 0 and morphisms tp′ and m′. This means that we can first extend DM1 (via m′) to yield DM ext 0, and then instantiate DM ext 0 (via tp′) to yield DM ext. However, not every DM ext permits this advancement, as the conditions of Theorem 5.4 state. For example, should we assign the PC enactment∧initial to Laminate.rDuration, we would not obtain a correct type PL-morphism m′. This is so because Laminate.rDuration and Mill.rDuration are both mapped to the same clabject TaskType.rDuration in DM ext 0, but they would have different PCs.

Figure 21b shows that Theorem 5.4 can also be applied to advance an extension over a specialization. In the figure, we start from a situation where DM1 has been specialized via the PL-morphism sp with configuration C = ⟨F+ = {}, F− = {initial}⟩ to yield DM0, and then DM0 has been extended via the EPL-morphism m to yield DM ext. DM ext contains an extra feature enactment and a linguistic extension rDuration. We can use Theorem 5.4 to obtain DM ext 0 and morphisms m′ and sp′. This way, we can first extend DM1 (via m′) and then specialize it (via sp′). The PC of rDuration in DM ext satisfies the condition of Corollary 1, since enactment[initial/false] = enactment ≠ false. Should the PC be enactment∧initial, then sp′ would not be a proper specialization. This is so because enactment∧initial[initial/false] = false, and so the field TaskType.rDuration should not be present in DM ext. In this case, the extension would collide with the specialization, precluding the advancement.

Tool support
We have implemented the notions presented so far atop MetaDepth [dLG10]. This is a textual multi-level modelling tool which supports an arbitrary number of meta-levels and deep characterization through potency. It integrates the Epsilon family of languages for model management [PKR + 09], which permits defining code generators and model transformations for multi-level models.
MetaDepth was used to define language families via multi-level modelling in [dLGC15], but it did not support the definition of closed sets of variability options by means of PLs. For this work, we have extended the tool to allow creating deep feature models and multi-level models with PCs, specializing deep model PLs via configurations, extending them with new features, and advancing those extensions to instantiations. The extended tool is available at http://metadepth.org/pls, together with examples of use.
In the following we showcase the use of the tool for three different activities: top-down creation of language families (Sect. 6.1), use of language families (Sect. 6.2) and bottom-up extension of language families (Sect. 6.3). Overall, these activities cover the scenarios explained in Sects. 5.1 and 5.2.

Top-down creation of language families
MetaDepth has a uniform textual syntax to specify models at any meta-level, similar to the UML Human-Usable Textual Notation [OMG04]. In addition, the tool supports the specification of domain-specific textual syntaxes for deep language families [dLGC15]. While the tool fosters a textual style of modelling, it can produce read-only graphical views of the models being built, which may help in their comprehension.
Listing 1 specifies the deep model in the right part of Fig. 10, using MetaDepth's syntax. First, line 1 states the name of the deep feature model (defined in Listing 2) associated to the deep model. Then, line 2 declares a deep model with level 2, named ProcessModel. This contains three clabjects: TaskType (lines 3-13), ActorKind (lines 15-16) and GatewayType (lines 18-22).

PCs are specified as @Presence annotations. This is possible because, similar to Java [CdL16], MetaDepth permits defining annotation types by providing their syntax, parameters, and the kind of elements they can annotate (models, clabjects or fields) [CdL18]. This definition is a meta-model, and so, when annotations are parsed, they are transformed into a model conforming to such meta-model. The model with the parsed annotations contains references to the annotated model (e.g., ProcessModel in this case). Representing annotations as models allows well-formedness checking of the specific annotations with respect to their definition (i.e., the annotation values are not just uninterpreted strings).

Regarding the PC of fields, for usability reasons, our implementation internally conjoins the PC of fields with the PC of their owner clabject. For example, the PC of reference Gateway.src is object, because the PC of Gateway is object. This guarantees that condition 1 in Definition 10 is satisfied, while conditions 2 and 3 are checked by constraints. Finally, please note that, while the condition parameter of the Presence annotation is a String, we internally check that it is a well-formed boolean formula that uses the features of the feature model identified in the Variability annotation of the model.
We have created a meta-model for deep feature models, and designed a domain-specific textual syntax for it, similar to the FAMILIAR tool [ACLF13]. Listing 2 shows the MetaDepth definition of the deep feature model in Fig. 10 (but we changed the potency of features simple and object to 1). Line 1 declares a feature model called ProcessOptions with level 2. Line 2 declares the root feature ProcessLanguage, and its child features Gateways, Tasks and actors. Child features can specify a potency after the "@" symbol, and be declared optional using the "?" symbol. Line 3 declares the children of Gateways, which are alternative as specified by the keyword alt. Line 4 declares the children of Tasks, which are optional.

Using language families
To use a deep model with PCs, like the one in Listing 1, it needs to be instantiated. Annotations in MetaDepth can attach actions to be triggered upon certain modelling events, like instantiation or value assignment. These actions are defined via a meta-object protocol (MOP) [CdL18, KR91]. This way, we have defined a MOP with actions for the PC annotations to help instantiate deep model PLs. Specifically, when an element of a deep model with variability is instantiated (like ProcessModel in Listing 1), its PC is copied to the instance. Moreover, a constraint forbids instantiating a deep model PL if the associated deep feature model has features with potency 0.

Listing 3 displays a small instance of the deep model PL of Listing 1, as would be created by a modeller. Line 1 declares the model type (ProcessModel) and the model name (SoftwarePM). Then, the model declares two instances of TaskType called Requirements and Design. The former sets the value of field initial to true, while the latter sets it to false and declares a linguistic extension called description. Please note that this model has potency 1, since its type has potency 2. This potency does not need to be explicitly specified, but is calculated by the tool.

In addition to instantiation, the user of the language family needs to configure the language. For this purpose, we have created a command called config to specialize a deep model PL via a configuration (see Listing 5). When the command is executed, the PCs attached to model elements are evaluated (partially if the configuration is partial), and the elements are removed if their PC evaluates to false. The applied configuration (i.e., the boolean values assigned to the features) is stored in the deep feature model itself (cf. model ProcessOptions in Fig. 22). The config command, when used without the with argument, can also be used to obtain the current configuration of a deep model PL.
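The behaviour of the config command on PCs can be emulated as follows. This is a rough sketch with an ad hoc encoding of PCs as conjunctions of literals, not MetaDepth's actual implementation:

```python
# Rough emulation of the `config` command: partially evaluate each
# element's PC under a (possibly partial) configuration, dropping the
# elements whose PC becomes false and keeping residual PCs otherwise.

def apply_config(pcs, config):
    """pcs maps element names to sets of literals ('f' selects feature
    f, '!f' requires it deselected), read as a conjunction. Returns the
    surviving elements with their residual PCs."""
    result = {}
    for elem, literals in pcs.items():
        residual = set()
        dead = False
        for lit in literals:
            feat, want = (lit[1:], False) if lit.startswith("!") else (lit, True)
            if feat in config:
                if config[feat] != want:
                    dead = True      # PC evaluates to false: drop element
                    break
            else:
                residual.add(lit)    # feature still open: keep the literal
        if not dead:
            result[elem] = residual
    return result

# Setting object=True (hence simple=False, as they are alternatives)
# removes TaskType.next and keeps GatewayType with an empty (true) PC:
pcs = {"TaskType": set(), "next": {"simple"}, "GatewayType": {"object"}}
print(apply_config(pcs, {"simple": False, "object": True}))
# → {'TaskType': set(), 'GatewayType': set()}
```

With a partial configuration, the untouched literals survive as residual PCs, which is how variability can remain open at lower levels.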
Overall, this simple example language already admits 16 total configurations, which can be succinctly represented as a PL, increasing its reuse possibilities.
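The count can be double-checked by enumeration. In this sketch we assume the five configurable features of the example, with simple and object as alternatives and the remaining features optional:

```python
from itertools import product

# Count the total configurations of a feature model given as a boolean
# function over full feature assignments (assumed encoding).

def count_valid(features, fm):
    return sum(fm(dict(zip(features, vs)))
               for vs in product([True, False], repeat=len(features)))

feats = ["simple", "object", "actors", "initial", "enactment"]
fm = lambda c: c["simple"] != c["object"]   # exactly one gateway style
print(count_valid(feats, fm))               # → 16
```

The alternative pair contributes a factor of 2 (out of 4 assignments), and each of the three optional features another factor of 2, giving 2 × 2 × 2 × 2 = 16.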
Applying the configuration of Listing 5 eliminates the initial and next fields of TaskType in Listing 1, which are therefore removed from the instance models, like the one in Listing 4. Please note that by setting object to true, we are implicitly setting simple to false, since they are alternative features. Overall, the resulting deep model PL is shown in Listing 6.

Bottom-up extension of language families
As described in Sect. 5.2, another scenario of interest is the bottom-up creation of language families. In this scenario, extensions to both the feature model and the model are done at lower meta-levels, and then promoted to the top level. For this purpose, we have created a command add feat, which adds features to the feature model. Listing 7 shows an example, which adds an optional feature named details, with potency 2, under the Tasks feature. The command updates the feature model, checking that the extension is possible according to Definition 14.

Then, the new features can be used within the PC formulae of the model elements. For example, we can use the introduced optional feature details to tag modelling elements that are useful to provide detailed insights into the different tasks of specific software process models, as Listing 8 depicts. In the listing, we have added the PC details to Design.description and to clabject Issue. The latter is a linguistic extension that allows attaching comments to design or requirements tasks at the level below.

At this point, the designer may realize that such an extension might be useful for other domains beyond software process modelling. Hence, we have created a command called promote to advance such extensions over instantiation, pulling them up to the upper meta-level. This process checks the pre-conditions in Theorem 5.4, required for the advancement to be possible. Listing 9 displays the resulting top-level model. In this model, field TaskType.description (line 10) was created, together with a new clabject IssueType (lines 17-20). The latter serves as a type for clabject Issue in Listing 8, and field IssueType.tasks is the type for both Issue.dsgnTasks and Issue.reqTasks. Technically, the command promote uses two types of multi-level refactorings [dLG18]: createClabjectType and createFeatureType.
The former creates a clabject type at the upper meta-level for a clabject linguistic extension, while the latter creates a field type at the upper meta-level for a field linguistic extension. Please note that we follow a naming convention for introduced clabject types (the name of the lower-level clabject followed by "Type"), while names of introduced reference types (like IssueType.tasks) need to be given by the modeller.
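The naming step of promote can be sketched as follows. This is an illustrative approximation, not MetaDepth's implementation: the data structures and the promote_clabject helper are invented, and only the "Type" naming convention and the modeller-supplied reference-type names come from the text.

```python
# Hypothetical sketch of the 'promote' command's naming step: a clabject
# linguistic extension defined at a lower level gets a newly created
# type one level up, named by the convention <name> + "Type".

def promote_clabject(extension_name, level, ref_type_names):
    """Create a type for a clabject linguistic extension.

    extension_name: name of the lower-level clabject (e.g. "Issue").
    level:          meta-level where the extension lives.
    ref_type_names: modeller-given names for promoted reference types,
                    keyed by the lower-level reference name.
    """
    type_name = extension_name + "Type"      # naming convention
    return {
        "name": type_name,
        "level": level + 1,                  # pulled up one meta-level
        "references": dict(ref_type_names),  # e.g. {"dsgnTasks": "tasks"}
    }

# Both lower-level references get the same modeller-chosen type name.
issue_type = promote_clabject("Issue", 1,
                              {"dsgnTasks": "tasks", "reqTasks": "tasks"})
```

Here both Issue.dsgnTasks and Issue.reqTasks are typed by the single reference IssueType.tasks, as in Listing 9.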

Related work
Next, we review related research coming from language product lines (Sect. 7.1), variability in multi-level modelling (Sect. 7.2) and software product lines (Sect. 7.3).

Language product lines
Some researchers have proposed increasing the reusability of modelling languages by incorporating SPL techniques (see [MGD + 16] for a survey). For example, in [WHG + 09], DSL meta-models can be configured using a feature model. In [PAA + 16], the authors propose featured model types: meta-models whose elements have PCs, and with operations that are offered depending on the chosen variant. In [GdLCS20], meta-models can have variability, and their instantiability is analysed at the PL level. However, all these works only consider closed variability, while our work also supports open variability through instantiation (since we consider multi-level models).
In [BPRW20], the authors propose a framework-based on MontiCore [KRV10] and the principles of concern-oriented language development [CKM + 18]-for defining language families with support for both open and closed variability. The framework relies on language components encapsulating syntax (through a grammar) and semantics (via code generators). Closed variability is achieved via a feature configuration that selects the components to be composed to form a language. For open variability, the composed language may contain extension points (e.g., expressing the need for an expression language) that other components need to satisfy, and parameters that can be assigned values. Other approaches follow similar ideas. For example, language definitions are modularized via roles in [WTZ09], while Melange [DCB + 15] and Neverlang [VC15] also support modularization. In the first case, this is done via an algebra of operators for extending, restricting, and assembling separate language artefacts. In the second, by providing syntax definitions with placeholders, and modules that may implement language features.
Similar to the previous approaches, the closed variability in our approach is also achieved by configuring a feature model. However, instead of relying on required interfaces, extension points, or roles, we enable open variability via instantiation and linguistic extensions. We believe that both styles for open variability are complementary and suited for different scenarios. Our notion of open variability via refinement is better suited to specialize a generic language (e.g., a process modelling language) to specific domains (e.g., software process modelling, industrial process modelling). Instead, open variability via replacement of components is better suited to express alternative realizations for a concept (e.g., different types of expression languages).
The previous approaches [BPRW20, WTZ09, DCB + 15, VC15] also consider the language semantics. MetaDepth is integrated with the Epsilon family of languages [PKR + 09], which have been extended to work in a multi-level setting [dLGC15]. However, making these languages aware of variability via PLs is up to future work. Our plan in this aspect is to build on the ideas reported in [dLGCS18].
Reinhartz-Berger and collaborators [RSC15] present a preliminary proposal to support the configuration of classes with optional attributes. It is based on a kernel language with support for multiple meta-levels but lacking deep characterization. The proposal is at an early stage, as it is neither formalized nor implemented. In [CFRS17], the authors analyse the limitations of feature models alone to describe a set of assets, and propose using multi-level models instead. As multi-level models have limitations to express variability-as described in Sect. 2.2-we propose to combine feature models and multi-level models.
Nesic and collaborators [NNG17] explore the use of MLT [FAGdC18] to reverse engineer sets of related legacy assets into PLs. MLT is a multi-level modelling approach based on powertypes and first order logic. In their work, the authors represent variability concepts like PCs and product groups within MLT models. This embedding may result in complex models where elements can represent either variability concepts or domain concepts. Instead, we separate PCs and feature models to avoid cluttering the multi-level model. Our goal is to define highly reusable language families, for which we provide feature models to describe variability options, and offer the possibility to defer configurations; instead, the approach in [NNG17] lacks an explicit representation of feature models. Finally, we provide both a theory and a working implementation.
Other formalizations of potency-based multi-level modelling exist, like [RdLG + 14] or the more recent [WMR20]. Those theories do not account for variability, but they could be extended with feature models, in a similar way as we do.

Software product lines
Our deferred configurations can be seen as a particular case of staged configurations [CHE05]. These permit selecting a member of the PL in stages, where each stage removes some choices. In our approach, the potency controls the level where the variability can be resolved. Staged configurations are also useful in software design reuse. In this setting, Kienzle and collaborators [KMCA16] propose Concern-Oriented Reuse, a paradigm where reusable modules (called concerns) define variability interfaces as feature models. The variability of a reused concern can be resolved partially, in which case, the undefined features are re-exposed in the interface of the resulting concern. We also support deferring the variability resolution, but composing deep model PLs is future work.
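The staged-configuration idea can be illustrated with a small sketch that represents a feature model as a predicate over full configurations; each stage fixes some features and shrinks the remaining product set. The encoding is ours, and the feature names follow the running example.

```python
# Minimal sketch of staged configuration [CHE05]: each stage fixes some
# features, narrowing the set of remaining valid products.
from itertools import product

def products(features, formula, partial):
    """All full configurations consistent with 'partial' that satisfy
    the feature-model formula (given as a predicate over dicts)."""
    free = [f for f in features if f not in partial]
    result = []
    for values in product([False, True], repeat=len(free)):
        cfg = dict(partial, **dict(zip(free, values)))
        if formula(cfg):
            result.append(cfg)
    return result

features = ["Tasks", "simple", "object"]
# Root feature mandatory; 'simple' and 'object' are alternatives.
fm = lambda c: c["Tasks"] and (c["simple"] != c["object"])

stage0 = products(features, fm, {})                # nothing decided yet
stage1 = products(features, fm, {"object": True})  # one stage later
```

In our setting, the potency attached to features additionally controls at which meta-level each such stage may take place.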
Taentzer and collaborators formalized model-based SPLs using category theory [TSSC17]. Different from ours, their formalization does not capture typing (it is within a single meta-level), while their morphisms can expand the feature model, but cannot be used to model partial configurations. Borba and collaborators have studied PL refinement, which adds new products maintaining the behaviour of existing ones [BTG12]. In our case, our variability extension morphisms preserve partial configurations.
To cope with large variability spaces, partitioning techniques can be applied to feature models to yield so-called multi-level feature models [CHE05, RPG + 18]. However, there the term multi-level does not refer to multiple levels of classification (as in our case), but to multiple partitions of a feature model.
In our work, we use F-morphisms (cf. Definition 8) to represent a partial configuration relation between two (deep) feature models, and EF-morphisms (cf. Definition 14) to represent an extension of a (deep) feature model. Related to this, syntactic and semantic differences between feature models have been studied in the PL community [AHC + 12, DKMR19, TBK09]. In [TBK09], the authors present different types of relations between feature models, like refactoring (the products in both models remain the same), specialization (the set of products is reduced), and generalization (the set of products increases), along with algorithms based on SAT solving to compute them. In our case, F-morphisms correspond to specializations, while EF-morphisms are similar to generalizations (in addition, they demand that invalid configurations cannot be extended to valid ones). Furthermore, our interest is in understanding whether extensions can be advanced to configurations (and to typing). In [AHC + 12], the authors propose additional techniques for both syntactic and semantic differencing, to help in understanding and reasoning about differences. These differences can be combined with composition and decomposition operators. Open-world semantics for feature models, together with semantic diffs based on those semantics, are introduced in [DKMR19]. Such semantics include all configurations that do not contradict the feature model formula, even those containing features not belonging to the original feature model. Hence, this notion is similar to our allowed extensions, and to the generalization relations in [TBK09].
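The relations of [TBK09] can be illustrated by brute-force enumeration of product sets over a common feature set (real tools use SAT solving instead; the helper names below are ours).

```python
# Brute-force sketch of the feature-model relations in [TBK09]:
# compare the product sets of two feature models to decide whether an
# edit is a refactoring, a specialization or a generalization.
from itertools import product as cartesian

def product_set(features, formula):
    """Valid full configurations, as frozensets of selected features."""
    out = set()
    for values in cartesian([False, True], repeat=len(features)):
        cfg = dict(zip(features, values))
        if formula(cfg):
            out.add(frozenset(f for f, v in cfg.items() if v))
    return out

def classify(features, fm_before, fm_after):
    before = product_set(features, fm_before)
    after = product_set(features, fm_after)
    if after == before:
        return "refactoring"
    if after < before:          # proper subset: fewer products
        return "specialization"
    if after > before:          # proper superset: more products
        return "generalization"
    return "arbitrary edit"

fs = ["A", "B"]
fm1 = lambda c: c["A"]             # mandatory A, optional B
fm2 = lambda c: c["A"] and c["B"]  # B made mandatory
relation = classify(fs, fm1, fm2)
```

Making an optional feature mandatory removes products, so the edit is classified as a specialization; the reverse edit would be a generalization.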
Other modelling notations support variability. For example, Clafer [JSM + 19] is an approach that unifies feature and class modelling. It supports class and (partial) object models, feature models and their (partial) configurations, and logic constraints. However, it does not support multi-level modelling or deep characterization. Similar to delta-oriented programming [SBB + 10], in delta modelling, a core model (representing one product) is enriched with a set of changes (with application conditions) to capture further products [Sch10]. The approach has been proposed in combination with MDE, showing that model configuration and refinement (e.g., a component being refined by a set of classes) commute. This is in line with our Theorems 5.2 and 5.4, but we are interested in instantiation (instead of refinement), and need to incorporate potency for deep characterization. Therefore, in our case, instantiation and specialization (configuration) do not commute, but the latter can be advanced to the former. In addition, we have also studied the advancement of extensions to both instantiation and specialization.
Within model-driven software product line engineering [CAK + 05], some researchers have analysed techniques to manage variability across multiple models and artefacts [GW21, SPJ18]. In [GW21], the authors compare how different tools and approaches deal with the propagation of PCs across different models. They report that automated propagation is scarcely supported. A multi-level model can be seen as a megamodel [BJRV05] made of a set of models related via instantiation relations. In our case, we do support automated propagation, for example, when instantiating a deep model PL (cf. Sect. 6.2), as well as when advancing extensions to instantiation (cf. Sect. 6.3).

In the programming world, Batory [Bat06, SB02] proposes mixin layers, a composition mechanism to add features to sets of base classes (so-called two-level designs). Higher-level designs can be obtained by applying the same techniques. In [Bat06], these higher-level designs are called multi-level models. Again, this use of the term multi-level differs from ours, which refers to models related by classification relations.

Table 1. Classification of the reviewed approaches.

  Approach                    Variability     Mechanism                             Style                           Meta-levels
  …                           Closed          MM + FM                               Annotative                      1 (meta-)
  Meta-model PLs [GdLCS20]    Closed          MM + FM                               Annotative                      1 (meta-)
  MontiCore [BPRW20]          Open & closed   MM with extension points + FM         Compositional                   1 (meta-)
  Wendel et al. [WTZ09]       Open & closed   MM with roles                         Compositional                   1 (meta-)
  Neverlang [VC15]            Open & closed   Language modules                      Compositional                   1 (meta-)
  DeepTelos [JN16]            Open            MLM based on most general instances   Instantiation                   arbitrary
  FMMLx [Fra14]               Open            MLM based on instantiation levels     Instantiation                   arbitrary
  Melanee [AG16]              Open            MLM based on potency                  Instantiation                   arbitrary
  MultEcore [MRS + 18]        Open            MLM based on potency                  Instantiation                   arbitrary
  MLT [FAGdC18]               Open            MLM based on powertypes               Instantiation + classification  arbitrary
  OMLM [IGSS18]               Open            MLM based on potency                  Instantiation                   arbitrary
  TOTEM [JdL20]               Open            MLM based on potency                  Instantiation                   arbitrary
  Our approach                Open & closed   MLM based on potency + FM             Instantiation + annotative      arbitrary
As a summary, Table 1 classifies the approaches along their variability support (open, closed), the mechanisms involved (e.g., meta-models, feature models), the style (annotative, compositional, via instantiation or classification relations), and the meta-levels at which the variability takes place. The upper part of the table classifies approaches for language product lines, while the lower part contains approaches for multi-level modelling. Overall, our proposal is the first one adding variability to multi-level models with support for deep characterization via potency.

Conclusions and future work
In this paper, we have proposed a new notion of multi-level model PL to improve current reuse techniques for modelling languages. It permits both open variability (by successive instantiations leading to language refinements for specific domains) and closed variability (by selecting among a set of variants). We have presented a theory for the proper construction and use of language families. The theory contains results ensuring the proper interleaving of instantiation, configuration and extension steps. The ideas have been implemented on top of the multi-level modelling tool MetaDepth.
In the future, we plan to provide a categorical formalization of the theory, which would bring operations like intersection via common parts (pullbacks) and merging (pushouts) of deep model PLs. We would also like to develop analysis techniques for multi-level model PLs, e.g., to check instantiability properties in the line of [GdLCS20]. Our goal is to make multi-level model PLs ready for MDE. This entails the ability to define MDE services like transformations and code generators on multi-level model PLs. Technically, our plan is to use the Epsilon languages supported by MetaDepth, and to follow ideas from existing works on PLs of transformations [dLGCS18] and transformation of PLs [SFR + 14]. We would also like to develop mechanisms for the assisted derivation of deep language families out of existing DSL meta-models, using as a basis the techniques for bottom-up modelling presented in Sect. 5.2. Finally, to properly model language families, we need to consider the concrete syntax as well. For this purpose, we plan to build on approaches to define graphical and textual syntaxes for multi-level models [Ger17, dLGC15], making them aware of closed variability through feature models (e.g., in the style of [GWG + 20]).

Proof of Lemma 2 (F-morphism composition yields an F-morphism):
In this lemma, we need to prove that, given two F-morphisms m1 and m2, their composition m2 ∘ m1 is an F-morphism. For this purpose, we prove the three conditions for F-morphisms of Definition 8, as follows:
• We need to prove that l0 + (d1 + d2) ≤ l2. Since both m1 and m2 are F-morphisms, we have that l0 + d1 ≤ l1 and l1 + d2 ≤ l2. Therefore l0 + d1 ≤ l2 − d2, and so l0 + (d1 + d2) ≤ l2 as required.
• We need to show that m2F ∘ m1F is injective, and it is, since both m2F and m1F are injective and the composition of injective set functions is injective. In addition, we need to show that ∀ f ∈ F0 • pot0(f) + d1 + d2 ≤ pot2(m2F ∘ m1F(f)). Since m1 and m2 are F-morphisms, we have that ∀ f ∈ F0 • pot0(f) + d1 ≤ pot1(m1F(f)) and ∀ f ∈ F1 • pot1(f) + d2 ≤ pot2(m2F(f)). Since the last statement applies to all f ∈ F1, it applies in particular to m1F(f), and so ∀ f ∈ F0 • pot0(f) + d1 + d2 ≤ pot1(m1F(f)) + d2 ≤ pot2(m2F(m1F(f))) as required.
• Finally, we need to check the equivalence of formulae under the composed configuration. Since both m1 and m2 are F-morphisms, we have that Φ1[F1+/true, F1−/false] ≡ Φ0[m1F(F0)/F0] and Φ2[F2+/true, F2−/false] ≡ Φ1[m2F(F1)/F1]. From the first statement, we have that F1 ⊆ m1F(F0) ∪ F1+ ∪ F1−, and substituting in the second statement we have that Φ2[F2+/true, F2−/false] ≡ Φ1[m2F(m1F(F0) ∪ F1+ ∪ F1−)/F1]. In the latter statement, performing an additional substitution for F1+, F1− on both sides preserves the equivalence, and so Φ2[F2+ ∪ m2F(F1+)/true, F2− ∪ m2F(F1−)/false] ≡ Φ0[m2F(m1F(F0))/F0] as required.
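For readability, the level condition of the composition can be spelled out as the following inequality chain (a sketch using the levels l_i and depths d_i of F-morphisms; notation as in Definition 8):

```latex
% Composition of F-morphisms m_1 : DFM_0 -> DFM_1 and m_2 : DFM_1 -> DFM_2,
% with depths d_1 and d_2. The level condition for m_2 \circ m_1:
\begin{align*}
  l_0 + d_1 &\leq l_1 && \text{($m_1$ is an F-morphism)}\\
  l_1 + d_2 &\leq l_2 && \text{($m_2$ is an F-morphism)}\\
\Rightarrow\quad
  l_0 + d_1 &\leq l_2 - d_2
  &&\Rightarrow\quad l_0 + (d_1 + d_2) \leq l_2.
\end{align*}
```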

Proof of Lemma 3 (PL-morphism composition yields a PL-morphism):
This lemma has two parts. In the first one, we need to prove that, given two PL-morphisms m1 and m2, their composition m2 ∘ m1 is a PL-morphism. The D-morphism and F-morphism components compose as in Lemma 2; for the condition on the formulae, since both m1 and m2 are PL-morphisms, the required equivalences hold for each of them, and, since both terms are equivalent, performing an extra substitution for the features configured by m2 preserves the equivalence for the composition. For the second part of the lemma, we need to prove that, if m1 and m2 are specializations, so is m2 ∘ m1. This means proving three conditions:
• Injectivity: Since both mD1 and mD2 are injective, so is mD2 ∘ mD1.
• Level-preserving: Since both m1 and m2 are level-preserving, so is m2 ∘ m1, since d1 = d2 = 0 implies d1 + d2 = 0.
• Keeping elements with non-false PC: We require that m2 ∘ m1's co-domain contains exactly the elements e ∈ C2 ∪ S2 ∪ R2 s.t. the PC of e under the composed configuration is not equivalent to false, which holds since this property holds for m1 and m2 separately. • M and DFM have the same level (l).
• The three conditions over φ′ and pot′ hold, since they hold for φ and pot.
Finally, we build a specialization PL-morphism sp = ⟨mM, spF⟩ : DM′ → DM as follows. For mF, according to Definition 8, we need to show that any extra clabject added to DM′ should have received a mapping from another node, since sp was a correct specialization. This means that we would have to map sp non-injectively, which is not possible by the definition of specialization. Adding new slots or references follows the same reasoning. Similarly, we could delete an element from DM′, but in that case an element from DM with φ(mMC(e))[F+/true, F−/false] ≢ false would not receive a mapping, which is not allowed by specialization morphisms. Finally, we cannot change the source or target of slots or references in DM′, since then they would not commute properly, as required by correct D-morphisms. Please note that, while DM′ is unique, there might be several (equivalent) ways to map it to DM.
Proof of Theorem 5.2 (Specialization can be advanced to instantiation): Let C = ⟨F+, F−⟩ be the configuration of the specialization PL-morphism sp : DM2 → DM1. From DM0 and C, we construct (uniquely) a deep model DM3 and a specialization PL-morphism sp′ : DM3 → DM0 as described in the proof of Theorem 5.1. Then, we build a type PL-morphism tp = ⟨tpD, tpF⟩ : DM2 → DM3 as follows: each clabject e ∈ C2 is mapped to the element c of DM3 derived from its type in DM0. This is so as φ1(spDC(e))[F+/true, F−/false] ≢ false due to Definition 11 of specialization PL-morphism. And now, since the configuration of tp is empty, we have φ0(tpDC(spDC(e)))[F+/true, F−/false] ≢ false. This means that, according to Definition 11, this element is in the co-domain of sp′DC, and is assigned to c by tpDC. The same reasoning applies to sets S2 and F2. Function tpF restricted to F2 is also well formed, since the same configuration C was used to derive DM2 and DM3. This reasoning also shows that the resulting square commutes (composing sp with the typing of DM1 by DM0 equals sp′ ∘ tp), as Theorem 5.2 demands.
Finally, please note that, once sp′ is constructed, tp is unique (while there can be several ways to build sp′).

Proof of Theorem 5.3 (Equivalent fully configured language):
We need to check that, given a chain of PL-morphisms, an equivalent fully configured language can be obtained. We use the diagram in Fig. 24 for the proof. We start by applying Theorem 5.2 to advance sp n+1 to tp n, obtaining the (unique) deep model PL DM FC n−2 and the PL-morphisms sp′ n−1 : DM FC n−2 → DM n−1 (a specialization PL-morphism) and tp n : DM n+1 → DM FC n−2 (a type morphism). Now, by Lemma 3, we can compose sp n−1 and sp′ n−1, which yields a specialization PL-morphism. Then, we apply Theorem 5.2 again to advance sp n−1 ∘ sp′ n−1 to tp n (cf. Fig. 24). This yields the (unique) deep model PL DM FC n−3 and the PL-morphisms sp′ n−3 : DM FC n−3 → DM n−3 (a specialization PL-morphism) and tp n−2 : DM FC n−2 → DM FC n−3 (a type morphism). We can iterate this procedure as needed. In the final result, we can obtain an indirect type morphism by composing the calculated type PL-morphisms (tp n, tp n−2, ...).

Proof of Lemma 4 (EF-morphisms can be advanced to F-morphisms):
We need to prove that, given an F-morphism m : DFM0 → DFM1 and an EF-morphism m e : DFM0 → DFM ext, we can build a deep feature model DFM ext 0 as follows:
• The level is set to l0 + d (the level of DFM1).
• The set of features F ext 0 is the disjoint union of F1 and F ext where the elements sharing a preimage in F0 are identified, which is a pushout in the category of sets [EEPT06].
• The potency is defined as: ∀ e ∈ F0 • pot ext 0([e]) = pot1(mF(e)), ∀ e ∈ F1 \ mF(F0) • pot ext 0(e) = pot1(e), and ∀ e ∈ F ext \ m e(F0) • pot ext 0(e) = pot ext(e) + d. (The potency is taken from DFM1 for those elements in F0 or F1, and from DFM ext otherwise. Please note that the potency of mapped features in DFM0 and DFM ext is the same, since m e is an EF-morphism.)
Now, we need to show that FM ext 0 is correct according to Definition 5, for which we need to show that Φ ext 0 is satisfiable. This holds since Φ ext 0 ≡ Φ1 ∨ Φ ext and both Φ1 and Φ ext are satisfiable. Then, we need to show that DFM ext 0 is correct according to Definition 7. This requires showing that the potency of each element is less than or equal to DFM ext 0's level. This is so for those elements receiving their potency from pot ext, since the level of DFM0 and DFM ext is the same by Definition 15, and the level of DFM ext 0 is equal or larger (l0 + d). The potency is less than or equal to the level also for elements receiving their potency from pot1, because the level of DFM1 is l0 + d, which is the level of DFM ext 0. Next, we need to show that there are commuting morphisms; we build m′ = ⟨d′, m′F, C′⟩ : DFM ext → DFM ext 0 as follows:
• The depth d′ is equal to the depth d of F-morphism m.
• The mapping m′F sends each feature of F ext to its equivalence class in F ext 0 (for the features in F ext \ m e(F0), we build an identity mapping).
• The configuration C′ is equal to the configuration C of m.
The F-morphism m′ so constructed is valid according to Definition 8, since:
• The morphism depth fills the gap between the levels: l ext + d ≤ l ext 0. This holds since l ext = l0 and l ext 0 = l0 + d.
• Morphism m′F is injective, since m e is injective, and we built an identity for features in F ext \ mF(F0).
• The morphism depth d fills the gap between the potencies: ∀ f ∈ F ext • pot ext(f) + d ≤ pot ext 0(m′F(f)). To show this, we split F ext into m e(F0) and F ext \ m e(F0), and check the property on these two sets:
– On the one hand, ∀ f ∈ F0 • pot ext 0([f]) = pot1(mF(f)) (by the way we have constructed DFM ext 0), and ∀ f ∈ F0 • pot0(f) + d ≤ pot1(mF(f)) (since m is an F-morphism). Therefore we have ∀ f ∈ F0 • pot0(f) + d ≤ pot ext 0([f]). Because m e is an EF-morphism, by Definition 14, we have that ∀ f ∈ F0 • pot0(f) = pot ext(m e(f)), and so ∀ f ∈ m e(F0) • pot ext(f) + d ≤ pot ext 0(m′F(f)) as required.
– On the other, by construction of DFM ext 0, we have ∀ f ∈ F ext \ m e(F0) • pot ext 0(f) = pot ext(f) + d, and because m′F is an identity on those elements, ∀ f ∈ F ext \ m e(F0) • pot ext(f) + d ≤ pot ext 0(m′F(f)) as required.
• Regarding the condition on the formulae, recall that Φ ext 0 ≡ Φ1 ∨ Φ ext. By the second term of the disjunction, any configuration C″ ∈ CFG(DFM ext) belongs to CFG(DFM ext 0). Hence, any configuration C″ ∈ CFG(DFM ext) s.t. C ≤ C″ of DFM ext is valid in DFM ext 0. Conversely, any non-valid configuration C″ of DFM ext s.t. C ≤ C″ makes the second term false. However, the first term must also be false, since such a configuration would make Φ0 false (by the second part of condition 2 in Definition 14) and hence not equivalent to Φ1, as required by m being an F-morphism.
Second, we build m′e as follows: f ∈ mF(F0) ⇒ m′e(f) = mF −1(f), and f ∈ F1 \ mF(F0) ⇒ m′e(f) = f. Morphism m′e so constructed is valid according to Definition 14, since (1) the potency pot ext 0 of all elements e ∈ F0 ∪ F1 is taken from pot1(e), and (2) every configuration C of DFM1 makes Φ1[mF(F0)/F0] true and hence is a configuration of DFM ext 0. In addition, any invalid configuration C of DFM1 makes Φ1[mF(F0)/F0] false, but also Φ0 false (since mF is an F-morphism, and Φ1[F+/true, F−/false] and Φ0 are equivalent). Hence, Φ ext should also be false, since m e is an EF-morphism, and therefore Φ ext 0 should also be false as required. • The presence condition function is constructed as follows: ∀ e ∈ X1 \ mD(X0) • φ ext 0([e]) = φ1(e); ∀ e ∈ X ext \ mD e(X0) • φ ext 0([e]) = φ ext(e); ∀ e ∈ X0 • φ ext 0([mD(e)]) = φ1(mD(e)) ∧ φ ext(mD e(e)) (for X ∈ {C, S, R}). Now, we need to show that DM ext 0 is correct according to Definition 10, for which we need to check:
• DFM ext 0 and M ext 0 have the same level, which is the level of DM0 plus the morphism's depth d.
• The function φ maps elements to a (non-false) propositional formula. This is so as neither φ1 nor φ ext map elements to false propositional formulae. • The function φ satisfies the following three conditions: 1. ∀ s ∈ S ext 0 • φ ext 0(s) ⇒ φ ext 0(src ext 0(s)). This holds since (a) it holds for both DFM1 and DFM ext, (b) the PC of elements in C0 that are common in DFM0 is not changed by m or m e, according to condition 2 in this theorem, and (c) the PC of elements in S ext ∪ R ext can be strengthened (since m e is an EPL-morphism). However, strengthening the premise of an implication preserves the implication (e.g., if we have φ ext(s) ⇒ φ0(s) and φ0(s) ⇒ φ0(src(s)), then φ ext(s) ⇒ φ0(src(s))). 2. ∀ r ∈ R ext 0 • φ ext 0(r) ⇒ φ ext 0(tar ext 0(r)). This holds for the same reason as the previous property. 3. ∀ e ∈ C ext 0 ∪ S ext 0 ∪ R ext 0, ∀ v ∈ Var(φ ext 0(e)) • pot ext 0(v) ≤ pot ext 0(e). For the elements mapped from DM1, this holds as it holds for DM1, and the potency is copied in those cases. For those elements mapped from DM ext and not common in DM0, the potency of the elements in C ext 0 ∪ S ext 0 ∪ R ext 0 is increased by d, just like that of the feature variables. Therefore, if it holds for DM ext, it holds for DM ext 0. Next, we construct the PL-morphism m′ = ⟨m′D, m′F⟩ : DM ext → DM ext 0 as follows: • The D-morphism m′D = ⟨d′, m′C, m′S, m′R⟩ is built as follows: -The depth d′ is taken as the depth of mD (which is d).
-Each element is mapped to its equivalence class under ≡: ∀ e ∈ X ext • m′X(e) = [e] (for X ∈ {C, S, R}).
• m F is constructed as in the proof of Lemma 4.
We need to show the three PL-morphism well-formedness conditions in Definition 2: 1. Property p ext + d ≤ p ext 0 holds since p ext 0 = p0 + d and p ext = p0. 2. Property ∀ e ∈ X ext • pot ext(e) + d ≤ pot ext 0(m′X(e)) (for X ∈ {C, S, R}) holds by construction for those elements e ∈ X ext \ mD e(X0). For those elements in mD e(X0), their potency is taken from pot1. However, that potency is pot0 + d, and since pot ext = pot0 and d′ = d, we obtain the desired result. 3. Each function m′C, m′S, m′R commutes with the functions src i and tar i, which holds since DM ext 0 has been constructed as a pushout in the category of graphs [EEPT06]. Then, according to Definition 15 of EPL-morphism, we need to show that ∀ e ∈ C1 ∪ S1 ∪ R1 • φ ext 0(m′e(e)) ⇒ φ1(e)[F1/m′e(F1)]. This holds for elements e in X1 \ mD(X0), since the PC is φ ext 0([e]) = φ1(e), and therefore φ1(e) ⇒ φ1(e). It also holds for elements e in mD(X0), since their PC is φ ext 0([mD(e)]) = φ1(mD(e)) ∧ φ ext(mD e(e)), and so φ1(mD(e)) ∧ φ ext(mD e(e)) ⇒ φ1(mD(e)) as required.
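The pushout-of-sets construction used in these proofs (the disjoint union of F1 and Fext in which elements sharing a preimage in F0 are identified) can be sketched as follows; the encoding with origin tags is ours, and the concrete feature sets reuse names from the running example.

```python
# Sketch of a pushout in the category of sets: given m: F0 -> F1 and
# m_e: F0 -> Fext, build the disjoint union of F1 and Fext, identifying
# the two images of every element of F0.

def pushout(f0, f1, fext, m, m_e):
    """Returns the pushout as a set of frozensets, each frozenset being
    one equivalence class of origin-tagged elements."""
    # Tag elements by origin to form the disjoint union.
    classes = {("F1", x): {("F1", x)} for x in f1}
    classes.update({("Fext", x): {("Fext", x)} for x in fext})
    # Identify the two images of each element of F0.
    for x in f0:
        a, b = ("F1", m[x]), ("Fext", m_e[x])
        merged = classes[a] | classes[b]
        for tagged in merged:
            classes[tagged] = merged
    return {frozenset(c) for c in classes.values()}

F0 = {"Tasks"}
F1 = {"Tasks", "simple", "object"}
Fext = {"Tasks", "details"}
P = pushout(F0, F1, Fext, {"Tasks": "Tasks"}, {"Tasks": "Tasks"})
```

The shared feature Tasks is identified into a single class, while simple, object and details stay as distinct classes.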

Proof of Corollary 1 (Preservation of type and specialization PL-morphisms):
First, we prove that if m is a type PL-morphism, so is m′. PL-morphism m = ⟨mD, mF⟩ is a type morphism if mD and mF are type morphisms. Since the depths of m′D and m′F are those of mD and mF, m′ is also a type morphism.
Then, we assume m is a specialization. According to Definition 11, mF is then a specialization, and mD is injective, level-preserving, and satisfies ∀ e ∈ C1 ∪ S1 ∪ R1 • (φ1(e)[F1+/true, F1−/false] ≢ false ⇔ ∃ e′ ∈ C0 ∪ S0 ∪ R0 • mD(e′) = e). If mF is a specialization, its depth is 0, and so is the depth of m′F, and hence m′F is a specialization as well. If mD is injective, so is m′D, because injectivity is preserved by pushouts in graphs (cf. Fact 2.17 in [EEPT06]). If mD is level-preserving, so is m′D, since they have the same depth.
Finally, regarding the property on the PC, let us assume that, for some element e ∈ X ext 0, φ ext 0(e)[F+/true, F−/false] ≢ false (for X ∈ {C, S, R}). If ∃ e′ ∈ X1 with m′e(e′) = e, then, since m is a specialization morphism, ∃ e″ ∈ X0 with mD(e″) = e′. In such a case, by construction, ∃ e‴ ∈ X ext such that mD e(e″) = e‴ and m′D(e‴) = e, as required. If instead there is no e′ ∈ X1 with m′e(e′) = e, then by construction ∃ e‴ ∈ X ext with m′D(e‴) = e.