Sustainable Adaptive Security

With software systems permeating our lives, we are entitled to expect that such systems are secure by design, and that such security endures throughout the use of these systems and their subsequent evolution. During my PhD, I aim to engineer sustainable adaptive security solutions that reflect such enduring protection in the dynamically changing security theatre of cyber-physical systems. I have chosen the example of a smart home as a cyber-physical system to motivate and illustrate sustainable adaptive security, discuss challenges for sustainably secure systems, and present my research plan for engineering them.


INTRODUCTION & RESEARCH CHALLENGES
Security threats are on the rise. Many recent critical cyber security incidents arose from newly discovered threats [9], and the number of zero-day vulnerabilities reported is also increasing [34,36]. There is a need to build systems that are not only secure by design, but that can also detect and mitigate newly discovered security threats over extended periods of time. We live in a world where technology influences how we interact with the physical space around us. This interplay between the cyber and the physical worlds [38] allows threats to extend over a wider attack surface [19,42]. The dynamicity of cyber-physical systems such as smart homes results in evolving security goals [25] and requirements [15] caused by new assets, changes to existing assets (e.g., new features due to software updates), or changing domain assumptions (e.g., changing network interconnects). While such evolution is not unique to cyber-physical systems, it is more apparent in them [4,38]. Identifying and managing such evolution is required for securing systems in the long term. Further, the security vulnerabilities of a system may be new [9,34,36] or known but unmitigated by the stakeholder responsible for securing the system. Consequently, the system must endure against known attacks, new variants of known attacks, and previously unknown attacks. There is a need for techniques that detect new and unknown attacks and reason about them in terms that enable the enactment of suitable security controls. Further, the security controls themselves must be selected with both the immediate need to mitigate the symptoms of an attack (short term) and the need to prevent its recurrence (long term) in mind.
Recent research has also shown that human intervention [22] can improve security decision making (i.e., threat detection), support security monitoring (e.g., of physical phenomena such as heating appliances), and enable the enactment of effective security controls when fully automated solutions cannot (e.g., enabling voice recognition). Thus, the three main research challenges that motivate my PhD and illustrate the need for sustainable security are: the evolving security goals and requirements of cyber-physical systems, new attacks that arise from the constantly changing threat landscape, and the limitations of fully automated solutions in threat detection and mitigation.
We define sustainable security as "the ability to preserve the evolving security goals & requirements of a system" [30]. Sustainable adaptive security systems are ones that dynamically modify security controls to preserve the evolving security goals and requirements of a system and mitigate newly identified threats. During my PhD, I aim to engineer sustainable adaptive security solutions that reflect enduring protection by implementing techniques to (1) detect new and previously unknown attacks, (2) identify sustainable mitigation strategies, and (3) manage evolving security requirements, while incorporating human intervention as needed in each of these techniques.

RELATED WORK
The notion of sustainable security proposed in this PhD is novel, and it is difficult to identify work that addresses its requirements. However, some aspects of the research challenges discussed earlier are addressed in the literature on adaptive security, requirements engineering and threat modelling, cyber-physical and/or IoT security, and network security.
Works in requirements engineering provide techniques to specify security requirements [10,15,23], analyse them [29], and adapt them at runtime [6,39]. However, they require complete system models, and none of them manage evolving security requirements. The works on managing evolving security requirements [5,32] do not provide any methods to infer them at runtime for cyber-physical systems, whose changes are dynamic and, in the case of smart homes, potentially unique to each household.
Adaptive security approaches detect and mitigate threats by dynamically applying security controls to a system to continuously satisfy its security goals [35]. However, recent surveys have shown that adaptive security approaches are better suited to managing known vulnerabilities, and that there is a need for behavioural analysis to identify new vulnerabilities [40]. Works on uncertainty and perpetual assurances [7,41] do not address the challenges in providing enduring security to cyber-physical systems.
Surveys in cyber-physical systems and IoT security find the need for techniques that identify new vulnerabilities [20], mitigate the threats detected [26], and diagnose the root cause of anomalies [24]. While many intrusion detection techniques exist, they do not detect new vulnerabilities [3,13,21,44], diagnose the malicious behaviours detected [28], or mitigate the attacks detected. Even the anomaly detection techniques that identify new malicious behaviours in cyber-physical systems [16,43] do not diagnose them, apply security controls, or seek human intervention when required.
The literature on network security has extensively explored techniques to detect new vulnerabilities using various machine learning [8,45] and deep learning [1] techniques. However, none of them diagnose the anomalies identified, manage evolving security requirements, or seek human intervention when needed.
There is an opportunity to address these gaps in the literature using logic-based learning techniques [11], which are expressive and have good tooling support for automation. Abductive reasoning, a process that maps effect to cause, has long been used for diagnosis and for generating explanations [31]. When coupled with behavioural anomaly detection techniques, which have been shown to be effective against new vulnerabilities [26], abductive reasoning can identify a plausible explanation for an anomaly trace. Inductive learning techniques, which have been used in the adaptive systems and requirements engineering literature to generate reactive plans when a system changes [2,37], can be used to identify sustainable mitigation strategies, i.e., ones that are suitable in the short and long term, taking the security requirements of the system and users' usability preferences into consideration. Further, neuro-symbolic learning techniques [12] that have been used to learn security policies at runtime can be extended to identify evolving security requirements and, by extension, manage them.
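To give a flavour of abduction as effect-to-cause mapping, the toy Python sketch below enumerates hypothetical causes whose predicted effects cover an observed anomaly. The rule base, cause names, and effect names are invented for illustration; they are not from any actual implementation, which would use a logic-based solver rather than plain Python.

```python
# Toy illustration of abductive reasoning: given observed effects and a
# hypothetical rule base mapping causes to effects, find the causes that
# would explain the observation. All names here are illustrative only.

RULES = {  # cause -> set of effects that cause produces
    "ddos_attack": {"traffic_spike", "device_unresponsive"},
    "firmware_update": {"traffic_spike", "device_reboot"},
    "ultrasonic_voice_attack": {"unexpected_voice_command"},
}

def abduce(observations):
    """Return candidate causes whose predicted effects cover all observations."""
    return sorted(
        cause for cause, effects in RULES.items()
        if observations <= effects  # every observed effect is explained
    )

explanations = abduce({"traffic_spike", "device_unresponsive"})
```

A real abductive framework would additionally rank competing explanations (e.g., by minimality), which this sketch omits.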

RESEARCH QUESTIONS & CONTRIBUTIONS
To address the challenges discussed in the introduction, I would like to focus on the following research questions.
Research Question 1 (RQ1): How can abductive reasoning be used to improve the threat detection and diagnosis of adaptive security systems? A sample hypothesis is that applying abductive reasoning to the network anomalies identified will enable the threat detection and diagnosis of new attacks in a cyber-physical system.
I have developed an attack diagnosis technique using (1) a machine learning based anomaly detector, and (2) an abductive reasoning technique in refutation mode [33] using the clingo Answer Set Programming (ASP) tool [18]. Here, diagnosis refers to identifying the class of an attack and the violated security requirement. Multiple anomaly detection algorithms were evaluated, and iForest was the most suitable. Abductive reasoning was chosen because it generates diagnoses in the desired format and has been shown to be tractable using clingo ASP. Reasoning in refutation mode encodes anomalies in the clingo notation and excludes security requirements of the system one at a time until the model is satisfied, thus identifying the violated security requirement. The novelty of this work is that it enables the detection of new attacks by identifying anomalous behaviours in the usage of the devices and the violated security requirements, in contrast to existing techniques that use transfer learning or supervised learning to identify variations of known attacks.
Research Question 2 (RQ2): How can inductive learning be used to select sustainable threat mitigation strategies (short and long-term security measures)? A sample hypothesis is that applying inductive learning techniques to the selection of threat mitigation strategies will result in more sustainable security outcomes in the short and long term. The expected contribution of this work is an inductive learning technique that adaptively generates the most sustainable security control in the short term (e.g., sandboxing a compromised device) and long term (e.g., enabling voice recognition to prevent ultrasonic voice attacks) without the need for hardcoded security controls. Here, user intervention can be sought in the enactment of security controls that cannot be automated.
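The selection problem behind RQ2 can be sketched as below. The hand-written scoring over a small catalogue of controls stands in for the hypotheses an inductive learner would produce; the control names, horizons, and usability costs are all invented for illustration.

```python
# Hypothetical sketch of mitigation-strategy selection: pick an effective
# control for a given time horizon while respecting a usability budget.
# In the envisioned technique, an inductively learned hypothesis would
# replace this hand-written catalogue and scoring.

CONTROLS = {
    "sandbox_device":           {"horizon": "short", "usability_cost": 1},
    "enable_voice_recognition": {"horizon": "long",  "usability_cost": 2},
    "power_off_device":         {"horizon": "short", "usability_cost": 5},
}

def select_control(horizon, max_usability_cost):
    """Return the matching control with the lowest usability cost, or None."""
    candidates = [
        (attrs["usability_cost"], name)
        for name, attrs in CONTROLS.items()
        if attrs["horizon"] == horizon
        and attrs["usability_cost"] <= max_usability_cost
    ]
    return min(candidates)[1] if candidates else None

choice = select_control("short", 3)  # -> "sandbox_device"
```

Returning `None` when no control fits the usability budget marks the point where the approach would fall back to user intervention.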
Research Question 3 (RQ3): Can security requirements be modelled and evolved using neuro-symbolic learning? A sample hypothesis is that neuro-symbolic learning, combined with adaptive selection of security requirements, can be used to identify the evolving security requirements of a cyber-physical system at runtime. The expected contribution of this work is a technique that adaptively selects security requirements from a set of device templates and improves them at runtime by extending previous works in neuro-symbolic learning [12], enabling the management of evolving security requirements. The device templates would be a base set of security requirements (e.g., a device does not communicate using unencrypted protocols) created for similar classes of devices from existing smart home attack taxonomies [17], and would be selected or removed from the system model depending on the devices' connection to the smart home network. The security requirements are then improved per device at runtime using neuro-symbolic learning. Here, user intervention is required to specify usability preferences that inform the selection of suitable security controls.

EVALUATION PLAN
For RQ1, I am currently using the Precision-Recall Area Under the Curve (PR-AUC) metric to evaluate the anomaly detector's ability to distinguish benign from malicious network traffic, with the model being tuned for a balanced F1-score. The precision metric is used to evaluate the abductive reasoning logic's ability to generate the same diagnoses as the ground truth dataset of labelled attacks. The two datasets used are (1) CICIoT 2023 [27], containing data from 105 real devices and 33 simulated attacks, and (2) IoT-23 [14], which simulates malicious IoT network traffic.
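As a simple reading of the diagnosis metric, the sketch below computes it as the fraction of generated diagnoses that match the ground-truth label; the labels shown are illustrative and not drawn from either dataset.

```python
# Sketch of the diagnosis evaluation: compare generated diagnoses against
# ground-truth labels and report the fraction that match. Labels are
# illustrative placeholders, not taken from CICIoT 2023 or IoT-23.

def diagnosis_precision(predicted, ground_truth):
    """predicted, ground_truth: parallel lists of diagnosis labels."""
    if not predicted:
        return 0.0
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(predicted)

precision = diagnosis_precision(
    ["ddos", "mirai", "ddos", "benign"],
    ["ddos", "mirai", "benign", "benign"],
)  # -> 0.75
```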
For RQ2, I intend to evaluate the correctness of the mitigation strategy selected by the system by creating a ground truth dataset of the security controls that the system should select when confronted with a chosen list of anomalies.
For RQ3, I intend to evaluate the correctness of the requirements generated by the technique at runtime when events that necessitate an evolution of those requirements are simulated. To achieve this, I intend to create a ground truth dataset of the security requirements for a smart home by extending previous attack taxonomies [17].
2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)