Sustainable placement of VNF chains in Intent-based Networking

Intent-based networking (IBN) aims at automatically driving network configurations and operations to fulfil high-level intents specified by infrastructure providers and application operators. Deploying Virtual Network Function (VNF) chains onto the available network resources is a way to model and fulfil intents. Focussing on the Cloud-Edge continuum, this article describes and compares a mathematical programming and a declarative heuristic methodology to solve the problem of placing VNF chains to fulfil multiple intents in IBN settings. Taking the infrastructure operators' viewpoint, our approach aims at balancing the trade-off between environmental sustainability and profit. We release an open-source prototype of the proposed methodology and assess its scalability and functioning via simulation over lifelike data.


INTRODUCTION
With the development of pervasive Cloud-Edge networks, the demand for modern network services characterised by the availability of multimedia services and automatic resource management has grown [22]. Intent-based networking (IBN) is a technology designed to automate the management of networks based on the concept of intent, defined as a set of operational goals and desired outcomes of the network [5]. The purpose of IBN is to allow service providers and application developers to define a desired network behaviour in a high-level language, freeing them from interacting with low-level network configurations. In IBN, intents are translated into suitable configurations to achieve the desired goals and to assure them over time.
Intents can be fulfilled by deploying a chain of Virtual Network Functions (VNFs) that achieves the objectives of the considered intent by implementing a set of requested functionalities [38]. VNFs can run on commercial hardware instead of dedicated proprietary hardware, allowing better use of network resources and improved scalability. On the one hand, the concatenation of VNFs makes it possible to compose arbitrarily complex network services that enable different modern applications (e.g. video streaming, virtual reality, remote surgery). On the other hand, such VNFs need to be placed in a QoS- and context-aware manner, e.g. considering network latency/bandwidth, available hardware capabilities, and operational costs [8]. In [2], we proposed precisely such a declarative solution and its open-source prototype, Dips, which supports the modelling of one input intent into a VNF chain, subsequently placed onto Cloud-Edge resources in a QoS- and context-aware manner.
Environmental sustainability is another critical concern, as the energy needed to power up data centres is expected to reach 3% of the world's electricity demand by 2030 [18]. Considering also the energy consumption of networks, the production of electronic devices and end-user usage, this value could increase by up to 21% [16], causing a dramatic growth of the carbon emissions due to Information and Communication Technologies (ICT). Naturally, next-generation networks like IBN will need to deal with sustainability aspects to reduce their energy consumption and, more importantly, their carbon footprint [15]. To this end, some works have recently investigated the environmental sustainability of software running on Cloud-Edge resources, e.g. [6,12].
This article introduces and evaluates a heuristic declarative methodology as an alternative to a Mixed-Integer Linear Programming (MILP) solution to fulfil the intents of IBN-enabled Cloud-Edge networks. This is achieved by placing VNF chains while considering sustainability trade-offs from the perspective of the infrastructure provider. The new methodology builds upon and extends our previous work [2] and has been prototyped into a new open-source prototype named MultiDips 1. Our efforts go beyond the current state of the art since MultiDips:
• enables estimating the energy consumption and carbon emissions related to the deployment of VNF chains, along with the infrastructure provider's profit, thus allowing suitable trade-offs between profit and carbon emissions to be determined,
• handles the fulfilment of multiple intents through VNF chains, having both application and infrastructure operators express their intents and enabling the latter to set global desiderata over the infrastructure status, and
• defines a set of heuristic strategies that speed up decision-making times while balancing profit, carbon emissions and overall hardware usage, and assesses them against the optimal MILP baseline.
The rest of this article is organised as follows. Sect. 2 illustrates the considered problem through a motivating scenario, and Sect. 3 defines a MILP formulation of such a problem. Sect. 4 describes a declarative heuristic solution for the considered problem. Sect. 5 assesses the two proposed approaches in MultiDips. Sect. 6 and Sect. 7 discuss closely related work and conclude by pointing to some directions for future work, respectively.
MOTIVATING SCENARIO

We consider three stakeholders, streamAppOp, cloudStorageOp and infraOp, that express one intent each. The following scenario epitomises our considered problem by relying on lifelike data.
streamAppOp works for a startup specialising in video streaming services, and he intends to efficiently distribute an application that enables users to enjoy streaming video content, as sketched in Fig. 1. The application is provided via a streamingService VNF chain consisting of two services, including:
• a stateless edgeStreamVF service situated at the network's edge, closer to end-users or devices, whose primary responsibility is to manage content delivery and quality optimisation.
Each service is described by its processing time and Cloud-Edge computing layer constraint, as depicted in Fig. 1. streamAppOp can also detail the expressed intent with some expected properties: I want to distribute an application that allows users to enjoy streaming video content while ensuring that the content is appropriately compressed to minimise the requested bandwidth consumption.
The compression property requested by the stakeholder can be achieved by adding a new function to the original VNF chain, which will conduct all the required (de)compression operations. Instead, the bandwidth requirement will be considered during the chain placement on the given infrastructure. The chain enriched with compression features is sketched in Fig. 2.
Similarly, the second stakeholder cloudStorageOp intends to store her company's data in the Cloud securely. To do so, she leverages an application made of a single network function, i.e. cloudStorageVF, hosted in the Cloud, which provides secure and efficient data storage services (Fig. 3). In this case, the expressed intent is enriched with an optional latency requirement, leading to the VNF chain sketched in Fig. 4: I want to store my company's data in the Cloud securely. Preferably, the service should be reachable in less than 100 ms.
Assume the described VNF chains are to be deployed onto the Cloud-Edge infrastructure sketched in Fig. 5. Such an infrastructure consists of 4 heterogeneous nodes connected via end-to-end links, for the sake of model simplicity. Each node is characterised by its energy cost, the amount of available hardware resources, expressed as a triple considering RAM, vCPU and storage capacity, and its Power Usage Effectiveness (PUE) 2. End-to-end links are described by the latency and the bandwidth they feature. For instance, edge1 is an edge node with a PUE of 1.15, 16GB of RAM, 7 vCPUs and 900GB of free storage, and an energy cost of 0.383 €/kWh. It has a connection to the cloud1 node with an average latency of 70 ms and a bandwidth of 200 Mbps.
We address the problem of satisfying intents on IBN networks by placing chains of VNFs. Given the topology (nodes and links) of a Cloud-Edge infrastructure and a set of intents to be satisfied, solving the problem entails: (1) defining a chain that satisfies the intent, and (2) determining the placement of each VNF of the chain on one of the infrastructure's nodes. In our solution, we also address sustainability issues, with a particular emphasis on lowering the infrastructure's CO2-eq emissions 3.
Finally, the infrastructure provider infraOp wants to limit the environmental impact of its infrastructure by imposing a cap on carbon emissions: I want to ensure that the carbon emissions from our infrastructure do not exceed 0.300 kg/month, as part of our dedication to environmental sustainability.

MATHEMATICAL FORMULATION
In this section, we illustrate a MILP solution for the described problem, summarising the notation in Table 1. Given a set of intents $I$, their associated VNF chains flattened into the set $V$, the set $H$ of required and featured hardware resources, and an infrastructure made of a set of nodes $N$, we want to assign every VNF in $V$ to a node in $N$. Let $X$ be the matrix that assigns each VNF $v_i$ to the node $n_j$ that will host it: the binary variable $x_{ij} = 1$ indicates that $v_i$ is hosted on $n_j$, otherwise $x_{ij} = 0$. The matrix $E$ contains the coefficients $e_{ij}$ that quantify the energy used for placing $v_i$ on $n_j$.
Constraints. To ensure the validity and sustainability of a placement, we consider the following constraints:
• Each $v_i$ must be assigned to at most one $n_j$. Thus, the sum of the values of each row of $X$ must be at most 1:
$$\sum_{n_j \in N} x_{ij} \le 1 \quad \forall v_i \in V \quad (1)$$
• For every intent $\iota \in I$, its chain $C_\iota$ only works properly if placed in its entirety. Therefore, all or none of the VNFs belonging to its target chain must be placed; in the former case $y_\iota = 1$, otherwise $y_\iota = 0$:
$$\sum_{v_i \in C_\iota} \sum_{n_j \in N} x_{ij} = |C_\iota| \cdot y_\iota \quad \forall \iota \in I \quad (2)$$
• The total amount of each resource $k \in H$ (i.e. RAM, vCPUs and storage) required by the VNFs placed on node $n_j$, where $r_{ik}$ is the amount of resource $k$ required by $v_i$, must be lower than the node's free resources $f_{jk}$:
$$\sum_{v_i \in V} r_{ik} \cdot x_{ij} \le f_{jk} \quad \forall n_j \in N, \, k \in H \quad (3)$$
• The layer required by $v_i$ (i.e. $\ell_i$, edge if $\ell_i = 0$, cloud otherwise) and the layer $\ell_j$ of the node on which it is placed must be the same:
$$x_{ij} \cdot (\ell_i - \ell_j) = 0 \quad \forall v_i \in V, \, n_j \in N \quad (4)$$
• The latency featured by each link between nodes $n_j$ and $n_k$ (i.e. $\lambda_{jk}$), plus the processing time of $v_i$ (i.e. $p_i$), must be lower than the latency requested by the interaction between VNFs $v_i$ and $v_h$ (i.e. $\lambda_{ih}$):
$$x_{ij} \cdot x_{hk} \cdot (\lambda_{jk} + p_i) \le \lambda_{ih} \quad \forall v_i, v_h \in V, \, n_j, n_k \in N \quad (5)$$
• The total amount of bandwidth required by each pair of VNFs $v_i$ and $v_h$ (i.e. $b_{ih}$) flowing through the link between nodes $n_j$ and $n_k$ must be lower than the bandwidth $B_{jk}$ featured by the link:
$$\sum_{v_i, v_h \in V} b_{ih} \cdot x_{ij} \cdot x_{hk} \le B_{jk} \quad \forall n_j, n_k \in N \quad (6)$$
• The emissions $c_{ij}$ of $v_i$ placed on $n_j$ depend on the energy $e_{ij}$ used for the placement and on the average carbon intensity $CI_j$ of $n_j$:
$$c_{ij} = e_{ij} \cdot CI_j \quad (7)$$
Moreover, we must also consider the carbon emissions $\hat{c}_{ij}$ on each link used by $v_i$ placed on $n_j$ to communicate with the other VNFs. These emissions depend on the requested bandwidth and on a factor $m_{jk}$ proportional to the MB of bandwidth used on the link between $n_j$ and $n_k$:
$$\hat{c}_{ij} = \sum_{v_h \in V} \sum_{n_k \in N} b_{ih} \cdot m_{jk} \cdot x_{ij} \cdot x_{hk} \quad (8)$$
The total carbon emissions must be lower than the threshold value $CO_2$ defined in the global intent by the infrastructure provider:
$$\sum_{v_i \in V} \sum_{n_j \in N} (c_{ij} \cdot x_{ij} + \hat{c}_{ij}) \le CO_2 \quad (9)$$
Eqs. (5), (6), (8) and (9) contain the product of two binary variables $x_{ij} \cdot x_{hk}$, which implies that our considered problem is a Mixed Integer Quadratic Problem (MIQP). We can linearise it into a MILP by exploiting new binary variables $w_{ijhk} = x_{ij} \cdot x_{hk}$ (i.e. the logical AND between $x_{ij}$ and $x_{hk}$) and the following additional constraints on them:
$$w_{ijhk} \le x_{ij}, \quad w_{ijhk} \le x_{hk}, \quad w_{ijhk} \ge x_{ij} + x_{hk} - 1$$
The first two inequalities force $w_{ijhk} = 0$ when at least one of the other two variables is 0; the third one imposes $w_{ijhk} = 1$ when both are 1. Since we aim to maximise the profit, this formulation leads to the following optimisation problem:
$$\max \sum_{v_i \in V} \sum_{n_j \in N} \pi_{ij} \cdot x_{ij} \quad (10)$$
where the profit $\pi_{ij}$ of placing $v_i$ on $n_j$ depends on the gain $g_k$ per unit of hardware type $k$ minus the cost of the energy needed for placing $v_i$ on $n_j$. This energy is related to the Power Usage Effectiveness ($PUE_j$) of $n_j$ and to the power consumption $P_{jk}$ at full load of all its resources:
$$e_{ij} = PUE_j \cdot \sum_{k \in H} r_{ik} \cdot P_{jk} \quad (11)$$
Thus, the profit can be calculated as follows, with $\epsilon_j$ the energy cost of $n_j$:
$$\pi_{ij} = \sum_{k \in H} g_k \cdot r_{ik} - e_{ij} \cdot \epsilon_j \quad (12)$$
The goal of our placement strategy is to find the best assignment of $V$ to $N$ that maximises the overall profit while respecting hardware, latency, bandwidth and energy consumption constraints. This means finding an assignment for the variables in $X$ which maximises Eq. (10), knowing the profits computed via Eq. (12). Eqs. (1) to (12) constitute the formulation of our problem, with $x_{ij}, w_{ijhk} \in \{0, 1\}$.
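The effect of the linearisation can be checked exhaustively: for every binary pair (x, y), the three inequalities admit exactly one feasible w, equal to the product x·y. A minimal check in plain Python (the function name is ours, for illustration):

```python
from itertools import product

def feasible_w(x, y):
    """Return the values of w in {0, 1} satisfying the linearisation
    constraints w <= x, w <= y, w >= x + y - 1."""
    return [w for w in (0, 1) if w <= x and w <= y and w >= x + y - 1]

# For every binary assignment, the only feasible w equals the product x * y,
# so the linearised constraints encode exactly the logical AND.
for x, y in product((0, 1), repeat=2):
    assert feasible_w(x, y) == [x * y]
```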
Prolog programs consist of clauses of the form a :- b1, ..., bn., meaning that a holds when b1 ∧ ... ∧ bn hold. When a clause lacks premises (n = 0), it is a fact. Moreover, predicate definitions in Prolog can incorporate disjunctions (represented by ;) and negations (\+). Variable names start with uppercase letters, and lists are denoted by square brackets (e.g. [L|Ls], where L denotes the first element and Ls the remainder of the list). Prolog programs can be queried, and the Prolog interpreter tries to answer each query by applying SLD resolution [42]. The outcome of a query is a computed answer substitution that instantiates the variables in the query. As an example, consider the query ?- nice(W). in the context of the following logic program:

nice(X) :- honest(X), gentle(X).
honest(alice). honest(barbara).
gentle(barbara).
In this case, the computed answer substitution is {barbara/W}, obtained by first rewriting the query by applying the first clause for honest/1 and failing, and then applying the second clause for honest/1 and then the clause defining gentle/1. This approach is called backtracking: the reasoner goes back when a partial solution does not lead to a complete solution and tries other options until a solution is found or all possibilities are exhausted.
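The backtracking behaviour of the query above can be mimicked in a few lines of Python (a rough analogue for illustration only, not how SLD resolution is actually implemented):

```python
honest = {"alice", "barbara"}
gentle = {"barbara"}

def nice():
    """Enumerate, via backtracking, every W such that nice(W) holds,
    i.e. honest(W) and gentle(W), trying honest candidates in clause order."""
    for w in ("alice", "barbara"):   # clause order for honest/1
        if w in honest and w in gentle:
            yield w                  # success; the generator resumes on demand

print(list(nice()))  # ['barbara']: alice fails gentle/1, so the search backtracks
```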
Being declarative, Prolog programs provide a more concise, readable, and extensible solution in comparison with procedural or mathematical approaches. Besides, being based on SLD resolution, they are intrinsically explainable, viz. they provide proofs for any found solution. In our model, an intent is denoted by a fact specifying the StakeHolder that issued the intent, a unique IntentId, and a TargetId specifying the target service chain to be delivered.

Declarative solution
Target chains are denoted by facts associating each TargetId with a Chain, where Chain is a list of VNFs. Each VNF is denoted by facts like vnf(VNFId, Layer, ProcessingTime).
where VNFId uniquely identifies the VNF, characterised by its Layer, i.e. the requested type of deployment node (cloud or edge), and its average ProcessingTime. It is possible to define several VNF versions that scale according to the number of users they will serve, via facts like vnfXUser(VNFId, Size, UsersRange, HWReqs).
where the version of the given Size can serve the specified UsersRange, represented as a pair (LowerBound, UpperBound), and necessitates HWReqs hardware resources. Hardware requirements are described by a triple specifying the GB of RAM, the number of vCPUs and the GB of storage needed by the sized VNF (e.g. (3GB, 4, 256GB)). Moreover, an intent is described as a set of properties that must or should hold; such properties refer to a given IntentId. We split properties into two subsets. The first subset considers the properties altering the initial VNF chain, described by facts like propertyExpectation(IntentId, Property, Level, From, To).
declaring that a certain Property must be guaranteed for all data traffic through edge or Cloud (i.e. the Layer) nodes, over the sub-chain identified by From and To. The terms From and To are not mandatory: if To is not defined, the sub-chain from the From VNF to the end of the chain is considered; if From is absent, the sub-chain from the chain's origin to the To VNF is considered; if both terms are undefined, the whole chain is considered. A VNF needed to satisfy the required Property, which will then be added to the chain in its assembly phase, is denoted as in changingProperty(Property, VNF).
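The From/To semantics just described can be sketched as a small Python helper (chain contents and names are hypothetical):

```python
def subchain(chain, frm=None, to=None):
    """Return the portion of the VNF chain a property applies to:
    missing To -> from the From VNF to the end of the chain,
    missing From -> from the chain's origin to the To VNF,
    both missing -> the whole chain."""
    start = chain.index(frm) if frm is not None else 0
    end = chain.index(to) + 1 if to is not None else len(chain)
    return chain[start:end]

chain = ["vfA", "vfB", "vfC", "vfD"]          # hypothetical VNF identifiers
assert subchain(chain) == chain                # both terms undefined
assert subchain(chain, frm="vfB") == ["vfB", "vfC", "vfD"]
assert subchain(chain, to="vfC") == ["vfA", "vfB", "vfC"]
assert subchain(chain, "vfB", "vfC") == ["vfB", "vfC"]
```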
The second subset contains the properties that the placement of the final VNF chain must respect. Infrastructure nodes, in turn, are described by facts where a node has a unique NodeId, a Layer (i.e. cloud or edge), and the amount of currently available hardware resources HWCaps, always described by the triple (GB of RAM, number of vCPUs, GB of storage).
A node is also characterised by its total hardware resources TotHW and its PUE. To compute the profit made by the infrastructure operator and the environmental impact of the infrastructure itself, we also associate with a node its EnergyCost, which varies according to its geographical location, and the list of EnergySources with which it is supplied. Since the energy produced by a renewable source is not continuous over time, each energy source is associated with the probability of its use, expressed as a percentage. Moreover, the hourly Energy for each hardware resource type is modelled based on the Load and the constant <val>, corresponding to the power absorbed by the node component.
Lastly, end-to-end links between infrastructure nodes N1 and N2, with their featured Latency and Bandwidth, are declared as in link(N1, N2, Latency, Bandwidth).
For instance, facts like totHW(edge1, (32, 12, 2000)) and pue(edge1, 1.15) describe the total hardware resources and the PUE of node edge1.
Placement. The declarative solution to the considered problem implements a backtracking search approach. Several heuristics have been introduced to sort intents and infrastructure nodes, in an attempt to drive the search towards the most promising paths in the search space and to speed up execution times. The main logic 4 of our prototype, listed in Fig. 6, works in three steps: (1) the multiDips/2 predicate (lines 1-5) first collects all the intents expressed by the application provider(s), whose target chains need to be placed 5 (lines 2-3). (2) The rankIntents/3 predicate (line 4) then assigns a score to each intent in the IntentsList, based on the chosen heuristic.
It then sorts the intents by increasing score, thus defining the order in which they will be processed. (3) Finally, the callDips/3 predicate (lines 6, 7-11) incrementally builds the solution, starting from the OrderedIntentsList and an empty placement. The callDips predicate uses dips/3 [2] (line 8) to search for a valid solution for each intent in the given list, driven by the ordering of nodes chosen by the infrastructure provider; the empty placement is retrieved if no eligible placement is found (line 8). Afterwards, mergePlacements/3 (line 9) merges the partial solution found for the previous intents with the placement obtained for the last intent. This process is performed recursively over the whole intent list (lines 10-11).
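The three-step loop can be paraphrased in Python as follows (a loose sketch of the Prolog predicates, with place_one standing in for dips/3 and rank for the chosen intent heuristic):

```python
def multi_dips(intents, place_one, rank):
    """Sketch of the main loop: rank intents, then incrementally place each
    chain, merging partial placements. `place_one(intent, placement)` returns
    a placement dict for the intent, or None if no eligible placement exists."""
    ordered = sorted(intents, key=rank)          # step (2): rankIntents/3
    placement = {}                               # empty initial placement
    for intent in ordered:                       # step (3): callDips/3
        partial = place_one(intent, placement)
        if partial is None:
            return None                          # no eligible placement found
        placement = {**placement, **partial}     # mergePlacements/3
    return placement
```

Unlike the Prolog version, this sketch does not backtrack across intents; it only mirrors the ordering and merging steps.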
Heuristics. The central point of the placement process is the choice of the heuristics for sorting intents and nodes (lines 4 and 8 in Fig. 6). These orderings aim at ensuring that the first valid placement is found quickly and guarantees a good profit, low carbon emissions, or the use of as few nodes as possible, according to the chosen heuristic. For intent ordering, we propose two alternatives, described in Defs. 4.1 and 4.2.
Definition 4.1 (Hungriest). Given two intents $\iota$ and $\iota'$, with VNF chains $C$ and $C'$, and being $H_v$ the set of hardware resources requested by a VNF $v$, we define the following ordering among $\iota$ and $\iota'$:
$$\iota \preceq \iota' \iff \sum_{v \in C} \sum_{h \in H_v} h \ge \sum_{v \in C'} \sum_{h \in H_v} h \quad (13)$$
In Def. 4.1, intents are sorted according to the total amount of hardware resources requested by the VNFs in their assembled target chains, placing the most demanding chains first. This heuristic represents a classical "fail-fast" selection, often employed in addressing bin-packing problems [34]. In this context, the objective is to allocate a set of objects of varying sizes (i.e., VNFs) to a finite number of bins, each with predefined capacities (i.e., infrastructure nodes), with the aim of minimising the overall number of utilised bins. Experiments conducted by [21] further support the effectiveness of this heuristic for the specific problem at hand.
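Under the assumption that the most demanding chains are placed first, the Hungriest ordering can be sketched in Python (the intent representation is hypothetical):

```python
def hw_demand(intent):
    """Total hardware requested by an intent's chain: sum the RAM, vCPU and
    storage requirements over all its VNFs (hypothetical representation)."""
    return sum(sum(vnf["hw"]) for vnf in intent["chain"])

def hungriest_first(intents):
    """Sort intents by decreasing total hardware demand (fail-fast ordering)."""
    return sorted(intents, key=hw_demand, reverse=True)

small = {"id": "i1", "chain": [{"hw": (2, 1, 50)}]}
big = {"id": "i2", "chain": [{"hw": (16, 8, 500)}, {"hw": (4, 2, 100)}]}
assert [i["id"] for i in hungriest_first([small, big])] == ["i2", "i1"]
```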
Definition 4.2 (Longest). Given two intents $\iota$ and $\iota'$, with VNF chains $C$ and $C'$, we define the following ordering among $\iota$ and $\iota'$:
$$\iota \preceq \iota' \iff |C| \ge |C'| \quad (14)$$
In Def. 4.2, intents are sorted according to the length of their VNF chains, giving precedence to those with more VNFs to be managed. This methodology enhances contextual awareness: each intent imposes bandwidth and latency requirements between pairs of VNFs, and infrastructure links have limited resources. Thus, priority is given to the placement of longer chains, which are more demanding on these requirements as they require more data to pass between VNFs.
For sorting nodes, we implemented one heuristic whose behaviour varies according to the value taken by three weights, which govern the node's achievable profit, its carbon footprint, and its amount of free hardware, respectively. These three values are computed for each node and normalised using the Min-Max normalisation technique [36] before using their weighted sum to compute the score assigned to nodes, thus deciding their order.
Definition 4.3 (Node score). Being $w_p$, $w_c$ and $w_h$ three constants, $\|x\|$ the Min-Max normalised value of $x$, $p_n$ the achievable profit of node $n$, $c_n$ the carbon emissions of $n$, and $f_n$ the amount of available resources on $n$, we compute the score $s_n$ as:
$$s_n = w_p \cdot \|p_n\| - w_c \cdot \|c_n\| + w_h \cdot \|f_n\| \quad (15)$$
Thus, given two nodes $n$ and $n'$, with scores $s_n$ and $s_{n'}$, we have the following order between $n$ and $n'$:
$$n \preceq n' \iff s_n \ge s_{n'}$$
Considering Eq. (15), we can distinguish three limit cases:
• (Profit) If $w_p = 1$, we favour during the exploration those nodes on which it is most cost-effective to place a VNF, based on their energy consumption and energy cost. Doing so makes the first valid placement found more likely to guarantee a good profit.
• (Carbon) If $w_c = 1$, we prefer the least polluting nodes, so that the intent fulfilment does not cause an excessive increase in the CO2-eq emissions of the infrastructure.
• (FreeHW) If $w_h = 1$, we prioritise nodes with more free resources, so that VNF chains are concentrated on fewer nodes instead of being scattered throughout the infrastructure.
It is also possible to balance these three weights, considering the infrastructure capabilities, the limit imposed by the global intent, and the properties required by the various intents. We call such a heuristic Balanced.
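One plausible reading of the node score, with profit and free hardware raising the score and carbon emissions lowering it, can be sketched as follows (the sign conventions and names are our assumption):

```python
def minmax(values):
    """Min-Max normalise a list of values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def node_scores(profit, carbon, free_hw, wp, wc, wh):
    """Weighted score per node: normalise each metric across nodes, then
    combine with weights wp (profit), wc (carbon) and wh (free hardware)."""
    p, c, f = minmax(profit), minmax(carbon), minmax(free_hw)
    return [wp * pi - wc * ci + wh * fi for pi, ci, fi in zip(p, c, f)]

# Carbon-only weights (wc = 1): the least polluting node scores highest.
scores = node_scores([5, 9, 7], [0.2, 0.8, 0.5], [10, 4, 6], 0, 1, 0)
assert scores.index(max(scores)) == 0
```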

EXPERIMENTAL ASSESSMENT
In this section, we provide an experimental assessment 6 to compare the two proposed approaches. These experiments compare the optimal solution found with the MILP model described in Sect. 3, executed via a Python script exploiting the Gurobi solver [1], with the Prolog heuristic combinations described in Sect. 4.2 and implemented with SWI-Prolog 9.0.4 [40].

Setup
Intents. In the experimental setup, we incorporate the two intents outlined in Sect. 2, alongside three additional intents; the requirements of the five are briefly described below 7:
• gsIntent, which requires high-bandwidth, low-latency links between VNFs and high RAM capacity in the used Cloud nodes,
• ssIntent, which requires high-bandwidth connections between Edge and Cloud, and a balance of memory, processing and storage capacity for the nodes hosting the VNFs,
• hbIntent, in which, in addition to requiring low latency to the Cloud, many properties impose the addition of VNFs, increasing the length of the associated VNF chain to be placed, but with little demand on hardware resources,
• csIntent, which has a chain made of only 2 VNFs but requires a very high storage capacity,
• srIntent, where again a balance of memory, processing and storage capacity is needed for the nodes chosen to place its chain, but with reduced bandwidth utilisation and an optional low-latency requirement.
As input for the experiments, we used sets of intents of various sizes (5, 10, 25, 50 and 100), randomly duplicating the intents and, thus, the requirements described above.
Infrastructure. VNF chains associated with intents are placed onto heterogeneous random infrastructures of 25, 50 and 100 nodes and the whole set of end-to-end links. For each node, its layer is chosen between edge (75%) and cloud (25%). According to the chosen layer, for each node, we generate its total resources, PUE 8, and the power absorbed by RAM, CPU and storage at maximum load. All values are inspired by the most powerful instances of the Amazon EC2 9 Cloud computing service and the Amazon EBS 10 Cloud storage service. For the PUE, we referred to the study by Pegus et al. [37]. To estimate the power consumption of each component under maximum load, we relied on data provided by Jiang et al. [24]. For the Edge nodes, we considered characteristics akin to high-end personal computers, referring to Amazon EC2 as a benchmark. The available resources of each node were calculated from the total resources, accounting for a random occupancy rate of up to 35%.
The random generation of the energy sources powering a node is independent of its type. Two or three randomly selected sources power each node, each associated with a utilisation percentage of up to 100%. Finally, we generate the cost incurred per kilowatt-hour, choosing it randomly from the energy costs for the second half of 2022 of 35 countries, reported by the European Community 11.
Regarding links, bandwidth and latency are chosen randomly within ranges that vary according to the type of the connected nodes. Specifically, the bandwidth between Edge-Edge pairs is selected in the range [100, 200] Mbps and the latency in the range [10, 50] ms; for Edge-Cloud pairs, in the ranges [200, 300] Mbps and [50, 150] ms; and for Cloud-Cloud pairs, in the ranges [300, 1000] Mbps and [5, 25] ms. To simulate hypothetical geographical distances, links connecting two Edge nodes have a lower maximum latency than an Edge-Cloud link, while data centres are connected with high-capacity links guaranteeing low latency. After this generation, we recomputed latencies using the Floyd-Warshall algorithm [41] to determine all the shortest paths between pairs of nodes, using the previously generated latencies as weights.
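The latency recomputation step can be sketched with a textbook Floyd-Warshall over the latency matrix (the values below are illustrative):

```python
def floyd_warshall(lat):
    """All-pairs shortest latencies over an adjacency matrix of link
    latencies (math.inf where no direct link exists); each pairwise
    latency is replaced with that of the best multi-hop path."""
    n = len(lat)
    d = [row[:] for row in lat]          # do not mutate the input matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

inf = float("inf")
lat = [[0, 70, inf],
       [70, 0, 10],
       [inf, 10, 0]]
assert floyd_warshall(lat)[0][2] == 80   # 0 -> 1 -> 2 beats the missing direct link
```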
Lastly, we consider a global intent to limit the used energy or the infrastructure's carbon emissions. For each experiment, we distinguish two global intents. The first is a high limit, calculated from the worst-case scenario: we determine the maximum energy consumption when all nodes operate at full capacity and multiply it by the CO2-eq emissions per kWh of the most environmentally polluting available energy source. The outcome is a fixed upper limit that serves as a strict threshold, ensuring it does not impact the search for placements within the infrastructure. The second is a low limit, obtained by dividing the high one by ten. In practical terms, this requirement implies that the infrastructure's carbon emissions should be one-tenth of what it would emit if solely powered by a highly polluting energy source. Alternatively, when energy consumption is constrained, the infrastructure should operate at just one-tenth of its full-load energy consumption capacity.
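The two limits can be sketched as follows (function and parameter names are ours; units are illustrative, kWh and kgCO2-eq/kWh):

```python
def emission_limits(full_load_kwh, worst_ci):
    """High limit: maximum energy at full load times the CO2-eq intensity of
    the dirtiest available source; low limit: one tenth of the high limit."""
    high = full_load_kwh * worst_ci
    return high, high / 10

# E.g. 120 kWh at full load, dirtiest source emitting 0.9 kgCO2-eq/kWh.
high, low = emission_limits(full_load_kwh=120.0, worst_ci=0.9)
assert high == 108.0 and low == 10.8
```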
Placement strategies & metrics.We tested all combinations of the heuristics for sorting intents with those for sorting nodes.Thus, the Hungriest and Longest heuristics for intents are paired with the Profit, Carbon and FreeHW heuristics for nodes, where 100% of the weight is set on the respective considered component, and the Balanced heuristic with 33% of the weight on each of the three components.
Experimental results include the hourly profit obtained by the infrastructure provider, the hourly carbon dioxide equivalent emissions, and the execution time of each proposed strategy. Results are averaged over a number of independent runs, varying the infrastructure at each iteration and sharing a 24-hour timeout, after which the experiment is aborted even if not terminated. The number of variations for each experiment is 50 for high emission-limit infrastructures and 25 for low emission-limit ones, due to execution times approaching or exceeding the 24-hour timeout.

Results
We discuss only the results for the largest infrastructure size (i.e. 100 nodes), since the other cases exhibit very similar behaviour 12. Moreover, we focus on the low emission limit for carbon footprint, as it is the only one that significantly influences the search for eligible and sustainable placements.
In Figs. 7a and 7b, we observe the hourly profit and carbon emissions for the examined scenario as the number of input intents varies. Notably, the MILP solution consistently achieves the best profit, as expected by design. However, with 50 intents, it provides only a marginal profit advantage of 0.4% over Hungriest-Balanced, demonstrating the strong performance of this heuristic combination. Even with 100 intents, the Hungriest-Balanced heuristic retains its competitive edge, with a profit deficit of just 11.23% compared to MILP, a gap comparable to that of Hungriest-Carbon (10.45%).
Concerning carbon footprint, we observe how, even with a low number of intents, the Carbon heuristic for exploring nodes enables a marked reduction in CO2-eq emissions. The difference is even more striking when compared to the profit-driven heuristic, which performs markedly worse in this respect. The Balanced heuristic also performs well, ranking second best. Note how, in the 50- and 100-intent cases, all the other proposed strategies reach the emission limit imposed by the global intent, which further explains the profit results obtained in this configuration: when the limit was reached, the most sustainable heuristics succeeded in satisfying more intents, guaranteeing a higher profit.
Regarding the execution times in Fig. 8, they increase proportionally to the number of intents for all approaches. We can quickly notice that the Prolog heuristics are 2 to 4 orders of magnitude faster than the mathematical approach, because the former stop after finding the first eligible solution, while the MILP looks for the optimal one. However, there is a higher peak for the Hungriest-Balanced and Hungriest-Profit combinations with 100 intents because, in the worst case, many VNF chains cannot be placed entirely without exceeding the carbon emission limit. In this situation, Prolog repeatedly backtracks, changing the placement in search of a non-existing solution.
It is worth noticing that in the cases with 50 and 100 intents, the solution search using the MILP solver reached the timeout of 24 hours (also for the other infrastructure sizes), so the obtained profit is suboptimal.

Lessons learnt
Upon reviewing the results, we can identify four key scenarios: (1) When available resources exceed requirements, the Hungriest-Carbon combination offers a near-optimal solution, closely aligning with the MILP model: it maintains profits within a 2% range of the optimum, runs swiftly in under 0.3 seconds, and significantly reduces CO2-eq emissions compared to other strategies. (2) In scenarios where required and available resources are balanced, Hungriest-Carbon still proves advantageous. While the profit may fall up to 11.23% short of the optimum, it offers faster execution (0.1 seconds) and reduces CO2-eq emissions by 20.7%. (3) In cases where the emission limits imposed by the global intent restrict placements more than resource availability, the exploratory heuristics exert a substantial influence. Only Balanced and Carbon keep pace with the optimal solution. However, given Balanced's variable performance and smaller CO2-eq advantage, the preferred choice remains Hungriest-Carbon, especially as ordering intents by resource requirements (Hungriest) consistently yields superior results. (4) In scenarios with significantly fewer available resources than required, or where bandwidth and latency do not meet intent requirements, the MILP model outperforms Hungriest-Carbon by up to 18%.
It becomes evident that, over the long term, the Hungriest-Carbon heuristic strategy consistently outperforms the MILP strategy. To illustrate this, let us consider a queueing theory analysis, specifically the M/M/1 model [39], where 50 intents are received within a 30-minute timeframe. When we factor in the timing and profitability trends depicted in the graphs above, it is evident that the heuristic approach delivers results swiftly and sustains a competitive edge in terms of profitability. Due to the time invested in solution discovery and the associated waiting period, the MILP strategy shifts from an hourly advantage of 7.66% to a monthly disadvantage of 48.5% compared to the heuristic approach. These results indicate that the Hungriest-Carbon heuristic combination outperforms the other heuristics and proves to be the superior choice compared to the MILP-based strategy. This underscores the practical benefits of opting for heuristic methods in addressing IBN challenges, where long-term profitability and operational efficiency are paramount.
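The M/M/1 argument can be made concrete with the standard mean sojourn time W = 1/(μ − λ): as the solver's service rate approaches the arrival rate, waiting time dominates and erodes the per-solution profit advantage. A minimal sketch (the rates below are purely illustrative, not the paper's measurements):

```python
def mm1_sojourn(arrival_rate, service_rate):
    """Mean time a request spends in an M/M/1 system (waiting + service),
    W = 1 / (mu - lambda); requires a stable queue (lambda < mu)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

# Illustrative rates only: 50 intents per 30 min ~ 100 arrivals/hour.
lam = 100.0                              # intents per hour
heuristic = mm1_sojourn(lam, 36000.0)    # ~0.1 s per heuristic solution
milp = mm1_sojourn(lam, 101.0)           # a slow solver, nearly saturated
assert milp > heuristic                  # waiting time dwarfs the MILP's edge
```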

RELATED WORK
Due to the inherent complexity and variability of high-level business policies, intent translation and resolution in IBN systems present significant challenges. Similarly, various factors such as resource constraints, service requirements, and network conditions influence the complex problem of VNF placement in service chains. Several survey papers investigate these issues, which form the core of ongoing research lines [3,9,25].
Jacobs et al. [17] proposed a language for representing intents, close to natural language, called NILE (Network Intent LanguagE). NILE's grammar offers constructs describing various network components, actions, service types, traffic and protocols. Lumi, an intent management system based on Natural Language Processing (NLP), uses NILE as an intermediate language. Other studies [4,14] also follow the NLP path, pushed by major IT players such as Google, Amazon and Meta.
Leivadeas and Falkner [19] use an alternative, template-based method that allows users to express their intents by customising a template with attributes. The advantage of this approach is the ease of translation to network policies and configurations, at the expense of flexibility, which is reduced by the requirement to follow a given template.
Tuncer et al. [32] describe a more adaptable method for turning intents into practical activities. They specify intents as actions on a given item (e.g., network traffic, video content) subject to specified constraints, using a key-value syntax. A service mapper takes an action as input and pulls the relevant service description from a library, comparable to a template. The description is detailed enough to uniquely identify a service and concise enough to facilitate rapid matching based on action attributes. The selected descriptor's fields are then instantiated using the values specified in the intent.
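As a rough illustration of this action-to-descriptor matching, the following sketch uses a hypothetical key-value syntax and service library; the action names, descriptor fields, and values are invented for the example and do not reflect Tuncer et al.'s actual notation.

```python
# Hypothetical library of service descriptors, indexed by action.
# Fields left as None are open and get filled from the intent.
SERVICE_LIBRARY = {
    "limit": {"service": "rate-limiter", "target": None, "threshold": None},
    "cache": {"service": "content-cache", "target": None, "size": None},
}

def map_intent(intent):
    """Pull the descriptor matching the intent's action and instantiate
    its open fields from the intent's key-value pairs."""
    descriptor = dict(SERVICE_LIBRARY[intent["action"]])
    for key in descriptor:
        if descriptor[key] is None:
            descriptor[key] = intent.get(key)
    return descriptor

intent = {"action": "limit", "target": "video-traffic", "threshold": "10Mbps"}
print(map_intent(intent))
# -> {'service': 'rate-limiter', 'target': 'video-traffic', 'threshold': '10Mbps'}
```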
Lastly, in our previous work [2], we proposed a declarative methodology to support intent modelling and the subsequent translation of intents related to VNF service provisioning in a distributed Cloud-Edge infrastructure. Recently, we have also exploited declarative solutions [11,23] to handle application placement constraints, e.g., assessing security policies or data characteristics, and to determine a probabilistic placement and network routing of VNF chains in Cloud-IoT scenarios [7]. In a different vein, Pianini et al. [20] and Casadei and Viroli [26] introduced a novel service coordination approach rooted in aggregate computing with a declarative methodology. This approach is geared toward the efficient management of opportunistic resources. It blends elements of centralised and decentralised solutions and leverages a self-organising peer-to-peer architecture to manage churn and mobility effectively.
Related to VNF chain placement, Kuo et al. [30] have demonstrated that finding a valid placement is an NP-hard problem, especially when considering the availability of resources in the network and not only the Quality of Service [35]. Among the surveyed solutions [13], the authors pursue several different objectives.
Chen et al. [29] aimed to minimise placement costs while considering service quality constraints. They employed Integer Linear Programming (ILP) and a Hidden Markov Model (HMM)-based heuristic for large scenarios. Similarly, Pei et al. [31] pursued the objective of minimising costs in geographically distributed clouds, factoring in resource and placement costs, including computing power and network usage.
Mohamad and Hassanein [27] suggested a novel approach for placement in edge computing scenarios. Instead of deploying new VNFs, they advocated reusing already operational but underutilised VNFs. They constructed an ILP model to minimise placement costs by reducing resource usage while ensuring compliance with Quality of Service (QoS) requirements.
Zhang et al. [28] and Xu et al. [33] partitioned the energy consumption of VNF chains into two key components: server energy, encompassing the energy supporting VNF instances and resources like CPUs, and physical link energy, associated with data transmission. Server energy, in turn, consists of the energy to keep the server operational and the energy to power the VNF-related resources. Link energy usage varies based on the link state (transmission or standby) and the bandwidth consumption.
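The two-component energy model just described can be sketched as below. The decomposition mirrors the text (idle server energy plus load-dependent VNF energy, standby link energy plus bandwidth-proportional transmission energy), while the field names and wattage figures are illustrative assumptions rather than values from the cited works.

```python
def chain_energy(nodes, links):
    """Total power draw of a VNF chain under the two-component model:
    server energy plus physical-link energy (all values in watts)."""
    # Server side: baseline to keep the server on, plus a dynamic term
    # proportional to the utilisation of the VNF-related resources.
    server = sum(n["idle_w"] + n["cpu_util"] * n["dynamic_w"] for n in nodes)
    # Link side: standby draw always applies; transmitting links add a
    # bandwidth-proportional term.
    link = sum(
        l["standby_w"] + (l["bw_gbps"] * l["w_per_gbps"] if l["active"] else 0.0)
        for l in links
    )
    return server + link

nodes = [{"idle_w": 100.0, "cpu_util": 0.5, "dynamic_w": 80.0}]
links = [
    {"standby_w": 5.0, "active": True, "bw_gbps": 2.0, "w_per_gbps": 3.0},
    {"standby_w": 5.0, "active": False, "bw_gbps": 0.0, "w_per_gbps": 3.0},
]
print(chain_energy(nodes, links))  # -> 156.0
```

Weighting such an energy figure by the emission factor of each node's energy source is what turns an energy-aware model into the carbon-aware one pursued in this article.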
Liu et al. [10] approached the placement problem by modelling it as a Markov decision process, a method for strategic decision-making under uncertainty. They applied a reinforcement learning (RL) algorithm to identify a resource-efficient placement that adheres to user-defined Service Level Agreements (SLAs) and minimises network latency.
Overall, the strategies presented in this article go beyond the energy-aware approach by incorporating the environmental impact of nodes. They comprehensively consider the energy consumption of a node's primary components, the energy sources powering it, and their associated emissions. This approach results in a carbon-aware solution that can identify viable and sustainable placements for VNF chains. Additionally, the declarative heuristic strategy efficiently manages multiple intents and complex scenarios involving extensive infrastructures with numerous nodes and intents.

CONCLUDING REMARKS
In this paper, we prototyped a Prolog declarative tool, MultiDips, and a Mixed-Integer Linear Programming (MILP) solution to satisfy multiple intents on Cloud-Edge IBN infrastructures by placing VNF chains. In particular, in solving the problem, we considered sustainability aspects (profit vs. CO2-eq emissions). These solutions were then experimentally evaluated, showing how the MILP finds the optimal placement, i.e. the one that maximises profit, at the cost, however, of impractical execution times.
Concerning the declarative heuristic strategy, the Hungriest-Carbon configuration excels over the others. It reliably delivers solutions with profits closely matching the optimum in all examined contexts, while simultaneously reducing the carbon footprint of placements and expediting search times. This consistency holds even in complex scenarios featuring many intents and large infrastructures.
In our future work pursuing this line, we intend to:
• refine heuristic strategies: fine-tuning the weight distribution of Balanced could make it outperform the winning Carbon heuristic; moreover, considering VNF interactions during intent ordering can bring us closer to optimal profits;
• implement intent assurance, particularly regarding continuous intent satisfaction: addressing VNF migration due to infrastructure changes can be efficiently tackled with continuous reasoning [6], focusing on the affected segments only; and
• implement intent activation by integrating MultiDips on top of an NFV Management and Orchestration stack.
• an edgeStreamVF service that ensures smooth and low-latency video streaming by dynamically adapting content delivery based on users' available bandwidth and device capabilities, and
• a stateful cloudStreamVF service that operates in a Cloud-based data centre. It manages the central video content repository and the coordination with the edgeStreamVF service. It stores the original high-quality video content and seamlessly transitions between different quality levels based on user preferences and network conditions.

Definition 4.2 (Longest). Given two intents i and i′, with VNF chains C and C′, and their lengths denoted by |C| and |C′|, we define the following ordering among i and i′:
A Property is required at a certain Level, hard if strictly necessary, soft otherwise, between VNFs From and To. The property is fulfilled if it meets the threshold Value, expressed in Units, based on the comparison via a given Operator (e.g. larger, smaller, ...).