Configuring resource managers using model fuzzing: a case study of the .NET thread pool
Joseph L. Hellerstein
Pages: 1-8

Resource managers (RMs) often expose configuration parameters that have a significant impact on the performance of the systems they manage. Configuring RMs is challenging because it requires accurate estimates of performance for a large number of configuration settings and many workloads, which scales poorly if configuration assessment requires running performance benchmarks. We propose an approach to evaluating RM configurations called model fuzzing that combines measurement and simple models to provide accurate and scalable configuration evaluation. Based on model fuzzing, we develop a methodology for configuring RMs that considers multiple evaluation criteria (e.g., high throughput, low number of threads). Applying this methodology to the .NET thread pool, we find a configuration that increases throughput by 240% compared with a poorly chosen configuration. Using model fuzzing reduces the computational requirements to configure the .NET thread pool from machine-years to machine-hours.

Business-impact analysis and simulation of critical incidents in IT service management
C. Bartolini, C. Stefanelli, M. Tortonesi
Pages: 9-16

Service disruptions can have a considerable impact on the business operations of IT support organizations, calling for efficient incident management and service restoration processes. Evaluating and improving the incident management strategies currently in place, in order to minimize the business impact of major service disruptions, is an arduous task that goes beyond optimization with respect to IT-level metrics. This paper presents HANNIBAL, a decision support tool for business impact analysis and improvement of the incident management process. HANNIBAL evaluates possible strategies for an IT support organization to deal with major service disruptions, and then selects the strategy with the best alignment to the business objectives. Experimental results from applying HANNIBAL to a realistic case study show that business impact-driven optimization outperforms traditional performance-driven optimization.

A universal method for composing business transaction models using logs
Joel W. Branch, Chatschik Bisdikian, Ho Yin Starsky Wong, Dakshi Agrawal
Pages: 17-24

This paper presents a novel procedural framework for increasing the efficiency of composing models of the end-to-end transactional behavior of various classes of business-level computer applications. The objectives of the framework are to reduce composition time and the level of engagement of domain experts in deploying business management solutions. Starting with application footprints (i.e., raw log data of various types), and without the need for a priori understanding of their syntax and semantics, the framework permits analysis of the footprints based on an agnostic tokenization of log data. The framework then produces candidate states and relationships that form the basis for composing the transactional models via the framework's state and relationship manipulation utilities. The main architectural components of the framework are presented and their underlying principles discussed. A case study based on an implementation of the framework is given, along with a discussion of additional features of the framework and its relation to other research activities.

Probabilistic decentralized network management
Marcus Brunner, Dominique Dudkowski, Chiara Mingardi, Giorgio Nunzi
Pages: 25-32

This work proposes a probabilistic management paradigm for solving some major challenges of decentralized network management. Specifically, we show how to cope with 1) the overhead of redundant information gathering and processing, 2) decentralized management in dynamic and unpredictable environments, and 3) the considerable effort required for decentralized coordination of management functions. To this end, we describe a framework for probabilistic decentralized management in the context of In-Network Management (INM). We demonstrate how this framework can be applied to a network of information, a novel clean-slate approach towards an information-centric future Internet. We show, by means of a simulation study in the areas of performance and fault management, that we can significantly reduce the effort and resources dedicated to management while achieving a sound level of accuracy in the overall network view.

Robust and scalable trust management for collaborative intrusion detection
Carol J. Fung, Jie Zhang, Issam Aib, Raouf Boutaba
Pages: 33-40

The accuracy of detecting intrusions within an Intrusion Detection Network (IDN) depends on the efficiency of collaboration between the peer Intrusion Detection Systems (IDSes) as well as the security of the IDN itself against insider threats. In this paper, we study host-based IDNs and introduce a Dirichlet-based model to measure the level of trustworthiness among peer IDSes according to their mutual experience. The model has strong scalability properties and is robust against common insider threats, such as a compromised or malfunctioning peer. We evaluate our system on a simulated collaborative host-based IDS network. The experimental results demonstrate the improved robustness, efficiency, and scalability of our system in detecting intrusions in comparison with existing models.

A rule-based distributed system for self-optimization of constrained devices
Javier Baliosian, Jorge Visca, Eduardo Grampín, Leonardo Vidal, Martín Giachino
Pages: 41-48

In recent years there has been a strong research effort on the autonomic communications and self-management paradigms. Following this impulse, the academic community and industry have proposed several architectures and techniques to allow network devices to make their own configuration decisions. Those proposals often include resource-expensive technologies, such as complex inference machines, ontological modeling and probabilistic prediction, that may not be suitable for the most pervasive and inexpensive network-enabled devices. This paper addresses this facet of autonomic systems by introducing RAN, a complete rule-based, distributed system specially designed and implemented to enable autonomic behavior on very constrained devices, such as domestic wireless routers with as little as 16 MB of RAM and 4 MB of storage memory. The RAN system was developed to serve the objectives of Rural Ambient Networks, a project that targets the so-called Digital Divide by deploying low-cost wireless mesh infrastructure in rural communities. In this context, RAN, in an autonomic and distributed manner, optimizes the network configuration to minimize the monetary cost that the community has to pay for using the IT infrastructure. Finally, this work presents an evaluation of RAN showing that it makes it possible to perform sophisticated optimization decisions with very small CPU and memory overhead.

Policy control management for web services
Arlindo L. Marcon, Altair O. Santin, Luiz A. de Paula Lima, Rafael R. Obelheiro, Maicon Stihler
Pages: 49-56

Decentralizing corporate policy administration while maintaining unified management of user permissions is a hard task. The heterogeneity and complexity of corporate environments burden the security administrator with writing equally complex policies. This paper proposes an architecture based on Web Services, policy provisioning, and authorization certificates to build up loosely coupled, unified administrative control for corporate environments. A certificate-based permission management scheme is used to derive new policies in the local domains of each branch. These new policies update the corporate repository which, in turn, configures the corresponding policies in the local domains of each branch. The Web Services technology provides the underlying protocols for the development of a prototype, which shows the feasibility of our proposal.

Predictive routing of contexts in an overlay network
Hahnsang Kim, Kang G. Shin
Pages: 57-64

While mobile nodes (MNs) undergo handovers across inter-wireless access networks, their contexts must be propagated for seamless re-establishment of ongoing application sessions, including IP header compression, secure Mobile IP, and authentication, authorization, and accounting services, to name a few. Routing contexts via an overlay network, either on demand or based on prediction of an MN's mobility, introduces a challenging new requirement for context management. This paper proposes a context router (CXR) that manages contexts in an overlay network. A CXR is responsible for (1) monitoring MNs' cross-handovers, (2) analyzing MNs' movement patterns, and (3) routing contexts ahead of each MN's arrival at an AP or a network. The predictive routing of contexts is based on statistical learning of (dis)similarities between the patterns obtained from vector distance measurements. The proposed CXR has been evaluated on a prototypical implementation based on an MN mobility model in an emulated access network. Our evaluation results show that the prediction mechanisms applied in the CXR outperform a Kalman-filter-based method [34] with respect to both prediction accuracy and computation performance.

Planning-based configuration and management of distributed systems
Kyriaki Levanti, Anand Ranganathan
Pages: 65-72

The configuration and runtime management of distributed systems is often complex due to the presence of a large number of configuration options and dependencies between interacting sub-systems. Inexperienced users usually choose default configurations because they are not aware of the possible configurations and/or their effect on the systems' operation. In doing so, they are unable to take advantage of the potentially wide range of system capabilities. Furthermore, managing inter-dependent sub-systems frequently involves performing a set of actions to get the overall system to the desired final state. In this paper, we propose a new approach for configuring and managing distributed systems based on AI planning. We use a goal-driven, tag-based user interaction paradigm to shield users from the complexities of configuring and managing systems. The key idea behind our approach is to package different configuration options and system management actions into reusable modules that can be automatically composed into workflows based on the user's goals. It also allows capturing the inter-dependencies between different configuration options, management actions and system states. We evaluate our approach in a case study involving three interdependent sub-systems. Our initial experiences indicate that this planning-based approach holds great promise in simplifying configuration and management tasks.

Service management architecture and system capacity design for PhoneFactor™: a two-factor authentication service
Haiyang Qian, Chandra Sekhar Surapaneni, Stephen Dispensa, Deep Medhi
Pages: 73-80

PhoneFactor™ is a token-less two-factor authentication service for user remote logons [13]. This allows users of an organization to be authenticated through an automated phone call to the user's phone before access is allowed. In this paper, we present the service management architecture of PhoneFactor that depends on both the Internet and the public switched telephone network (PSTN), and we identify two key quality of service parameters, the system response time and call blocking probability, where the latter can impact the former. Furthermore, through traffic analysis of the measurement data from the deployed PhoneFactor service, we found that the inter-arrival time of requests follows the Generalized Pareto distribution while the system response time and the call duration (for the authentication part through the phone call) follow the log-normal distribution. Given these distributions, we then present system capacity design methodologies by comparing them to known results for systems that are analytically derivable.

Autonomic service hosting for large-scale distributed MOVE-services
Bruno Van Den Bossche, Filip De Turck, Bart Dhoedt, Piet Demeester
Pages: 81-88

Massively Online Virtual Environments (MOVEs) have been gaining popularity for several years. Today, these complex networked applications serve thousands of clients simultaneously. However, MOVEs are typically hosted on specialized server clusters and rely on internal knowledge of the services to optimize load balancing. This makes running MOVEs an expensive undertaking, as it cannot be outsourced to third-party hosting providers. This paper details two Integer Linear Programming approaches to optimize MOVE deployment through load balancing and minimizing the delay experienced by end-users. Optimization includes assigning MOVE components to resources and replicating components to increase scalability. One approach assumes full application knowledge of a dedicated MOVE; the other assumes no internal knowledge and is geared toward a generic MOVE hosting platform. For both cases an optimizing heuristic is evaluated and the obtained results are compared.

A systematic and practical approach to generating policies from service level objectives
Yuan Chen, Subu Iyer, Dejan Milojicic, Akhil Sahai
Pages: 89-96

In order to manage a service to meet the agreed upon SLA, it is important to design a service of the required capacity and to monitor the service thereafter for violations at runtime. This objective can be achieved by translating SLOs specified in the SLA into lower-level policies that can then be used for design and enforcement purposes. Such design and operational policies are often constraints on thresholds of lower level metrics. In this paper, we propose a systematic and practical approach that combines fine-grained performance modeling with regression analysis to translate service level objectives into design and operational policies for multi-tier applications. We demonstrate that our approach can handle both request-based and session-based workloads and deal with workload changes in terms of both request volume and transaction mix. We validate our approach using both the RUBiS e-commerce benchmark and a trace-driven simulation of a business-critical enterprise application. These results show the effectiveness of our approach.

CHANGEMINER: a solution for discovering IT change templates from past execution traces
Weverton Luis da Costa Cordeiro, Guilherme Sperb Machado, Fabrício Girardi Andreis, Juliano Araújo Wickboldt, Roben Castagna Lunardi, Alan Diego dos Santos, Cristiano Bonato Both, Luciano Paschoal Gaspary, Lisandro Zambenedetti Granville, David Trastour, Claudio Bartolini
Pages: 97-104

The main goal of change management is to ensure that standardized methods and procedures are used for the efficient and prompt handling of changes in IT systems, in order to minimize change-related incidents and service-delivery disruption. To meet this goal, it is of paramount importance to reuse the experience acquired from previous changes in the design of subsequent ones. Two distinct approaches may be usefully combined to this end. In a top-down approach, IT operators manually design change templates based on knowledge owned or acquired in the past. From a reverse, bottom-up perspective, these templates could be discovered from past execution traces gathered from IT provisioning tools. While the former has been satisfactorily explored in previous investigations, the latter - despite its undeniable potential to yield accurate templates on a reduced time scale - has not, as far as the authors are aware, been the subject of research by the service operations and management community. To fill this gap, this paper proposes a solution, inspired by process mining techniques, to discover change templates from past changes. The solution is analyzed through a prototypical implementation of a change template miner subsystem called CHANGEMINER and a set of experiments based on a real-life scenario.

DACS scheme as next generation policy-based network management scheme
Kazuya Odagiri, Rihito Yaegashi, Masaharu Tadauchi, Naohiro Ishii
Pages: 105-108

Policy-based network management (PBNM) aims to manage a whole network effectively, without being limited to a single purpose. PBNM has two structural problems: the concentration of communication from many clients at a communication control mechanism called the PEP (Policy Enforcement Point), and the need to update the network system when introducing PBNM into a LAN. Moreover, user support problems in campus-like computer networks, such as supporting users when updating a client's setup and coping with annoying communication, cannot be improved by PBNM. To address these problems, we present a next-generation PBNM, called the DACS (Destination Addressing Control System) Scheme, which overcomes these problems and provides functions that do not exist in existing PBNM. With the DACS Scheme, the concentration of communication from many clients at the PEP is resolved, and system updating becomes unnecessary. Moreover, under the DACS Scheme, user support when updating a client's setup and coping with annoying communication become much more effective.

A new approach for multi-sink environments in WSNs
Ricardo Silva, Jorge Sá Silva, Milan Simek, Fernando Boavida
Pages: 109-112

Wireless Sensor Networks (WSNs) are low-cost networks composed of modest devices with limited resources, whose main function is monitoring. Given the low price of these devices, it is cheap to deploy a large number of nodes to monitor a large area. However, to provide an efficient ad hoc network using these limited devices, new and optimized algorithms must be proposed. Most current work on WSNs is based on simulation studies and does not take engineering processes into consideration. This paper presents a Multi-Sink Node alternative to multi-hop solutions. The proposed solution also provides a new system for the discovery of devices and services over IPv6, allowing nodes to be automatically incorporated into the nearest WSN. This paper also presents a paradigm to efficiently provide mobility, granting a fast handover of nodes between different WSNs without losing the connection.

Towards an information model for ITIL and ISO/IEC 20000 processes
Michael Brenner, Thomas Schaaf, Alexander Scherer
Pages: 113-116

As IT service providers are adopting more comprehensive approaches towards IT Service Management (ITSM), they increasingly need to rely on ITSM software solutions in their day-to-day operations. However, when wishing to integrate ITSM software from one vendor with that of another, the lack of underlying standards becomes woefully apparent. Without any standardized information model for ITSM processes, efficient and integrated ITSM will remain a vision. While in the telecommunications sector, a lot of work has been invested into developing the Shared Information/Data Model (SID), a companion model for the industry-specific process framework enhanced Telecom Operations Map (eTOM), no equivalent for the more general process frameworks of ITIL and ISO/IEC 20000 is in sight. This paper introduces an approach towards an information model for ITSM processes. The presented method leverages work done for SID, by adapting and complementing SID concepts and content to produce an information model compliant to ISO/IEC 20000 requirements and ITIL recommendations.

Rapid service creation environment for service delivery platform based on service templates
Ling Jin, Ping Pan, Chun Ying, Jinhua Liu, Qiming Tian
Pages: 117-120

The ability to quickly create new value-added telecom services is increasingly becoming a business imperative. Telecom operators and service providers face the challenge of reducing the cost and time-to-market of creating new services; the expected reduction is one to two orders of magnitude, from months or weeks to days or even hours. In a traditional Service Creation Environment, a full lifecycle of requirements analysis, design, development, testing and deployment always has to be undergone. This way of creating services results in poor reuse of software assets and consumes substantial resources. Some technologies, such as BPEL and SCXML, have been developed to speed up steps in this lifecycle. These orchestration techniques and tools are already used to reduce cost in the design and development stages, but the creation procedure still relies on the involvement of IT specialists. This paper presents a way to shorten and simplify the whole service creation lifecycle, and to reach the reduction target, by building a template-based Service Creation Environment. A model is used to separate service workflow definition from service parameter configuration and to achieve rapid development for different roles. A prototype of the template-based service creation environment is also introduced in this paper.

Designing stand-by gateway for managing a waste of networked home-device power
Jungmee Yun, Jinwook Chung, Sanghak Lee
Pages: 121-124

The Internet protocols were designed when there were relatively few devices connected to the Internet and these devices were in use most of the time. Studies show that many of these computers, especially in homes, have their power management features disabled in order to maintain their network presence and network connections. Past research proposes using a low-power proxy to 'stand in' for a computer, allowing it to go to sleep and thus save power while still maintaining its network presence. This paper describes an experimental stand-by gateway that can be used to develop the requirements for such a proxy. With the stand-by gateway, we propose to develop a language to be used by applications to provide the necessary code to the proxy to maintain network connections and presence for a sleeping computer. We also investigate the possibility of putting various components of the home gateway to sleep during periods of low traffic activity. Our results show that sleeping is indeed feasible in the home network.

Rate-based SIP flow management for SLA satisfaction
Jing Sun, Ruixiong Tian, Jinfeng Hu, Bo Yang
Pages: 125-128

SIP flow management should respect the specific characteristics of the SIP protocol as applied in multimedia or telecom services in order to meet stringent quality of service (QoS) requirements. These characteristics include an explicit session structure for correlating a series of SIP messages, the stringent response times required by real-time applications, the extra overhead imposed by SIP message retransmission, and service differentiation for meeting different business demands. We designed a front-end flow management (FEFM) system for SIP application servers to address these issues. In this paper we present the overall architecture of FEFM, the functionalities of its primary modules, and the evaluation results. The evaluation results show that FEFM can achieve tradeoffs among overload protection, QoS assurance and service differentiation.

A generic end-to-end monitoring architecture for multimedia services
A. Cuadra, F. Garces, J. A. del Sol, G. Nieto
Pages: 129-132

This paper describes a generic architecture for monitoring multimedia services, such as IPTV, MobileTV or VoIP, by analyzing the traffic of real users from an end-to-end perspective. The main data source is the information gathered by different types of probes deployed all over the monitored network. These detailed records are correlated inside the core of the architecture in order to generate quality-of-service figures from a centralized point. Passive probes monitor users' services, while active probes emulate users to offer an end-to-end vision of the services. Furthermore, the information from the network is enriched with external databases, such as inventory tools, the service model and CRM (Customer Relationship Management). Finally, the main use cases for monitoring the quality of service of an IPTV platform are shown, using the proposed architecture under the designation OMEGAQ.

Secure interworking & roaming of WiMAX with 3G and Wi-Fi
Vamsi Krishna Gondi, Nazim Agoulmine
Pages: 133-136

Roaming between different WiMAX (Worldwide Interoperability for Microwave Access) networks, as well as interworking between WiMAX and other access technologies, will be a key enabler for global WiMAX deployment. To provide secure and seamless roaming for mobile users across different access network domains, belonging to the same or different operators, we propose a roaming and interworking solution using intermediary entities called Roaming Interworking Intermediaries (RIIs). A generic RII-based interworking and roaming architecture between WiMAX, 3GPP (Third Generation Partnership Project) and WLAN networks is presented. A test-bed has been set up, using real pre-WiMAX and Wi-Fi equipment and a real operational cellular network, to demonstrate and evaluate the proposed solutions. The robustness, feasibility and efficiency of the proposed architecture are demonstrated through different user scenarios.

Four questions that determine whether traffic management is reasonable
Scott Jordan
Pages: 137-140

As part of the wider debate over net neutrality, traffic management practices of Internet Service Providers have become an issue of public concern. The Federal Communications Commission has asked for public input on whether deep packet inspection and other traffic management practices are reasonable forms of network management. Little attention has been paid to this issue within the academic networking community, and most Internet policy researchers have recommended a case-by-case analysis. This paper proposes four questions that can be used to determine whether a traffic management practice is reasonable or unreasonable.
|
|
|
Real-time root cause analysis in OSS for a multilayer and multi-domain network using a hierarchical circuit model and scanning algorithm |
| |
Masanori Miyazawa,
Tomohiro Otani
|
|
Pages: 141-144 |
|
One of the major issues for telecom operators today is how to rapidly identify the cause of a failure and the affected services within a multi-layer and multi-domain network to achieve high-quality service on an end-to-end basis. To address this issue, this paper describes a real-time root cause analysis mechanism that can pinpoint an accurate root cause and identify the impact on services. We investigated an interworking mechanism, based on a Web service interface, between inventory and fault management systems, and developed prototypes of both as part of an operation support system (OSS) capable of managing not only a core network and a metro ring network but also a customer network. By introducing a hierarchical circuit model in the inventory management system and the proposed scanning algorithm over multiple layers and domains implemented in the fault management system, our root cause analysis was successfully verified in a testbed network environment, indicating relatively fast and scalable operation.
|
|
|
Extending the CIM-SPL policy language with RBAC for distributed management systems in the WBEM infrastructure |
| |
Li Pan,
Jorge Lobo,
Seraphin Calo
|
|
Pages: 145-148 |
|
In spite of the large effort behind the development of the WBEM and CIM standards for the management of distributed systems, there has been very little work addressing security in those standards. In this paper we present a Role-based Access Control (RBAC) policy language to render fine-grained access control policies for WBEM and CIM. The language is an extension of CIM-SPL, a preliminary DMTF policy language standard. The CIM-SPL RBAC extension fully complies with the WBEM standards. Access control policies can be specified for CIM object constructs according to the standard NIST RBAC model as well as with an extended model adapted for CIM. This extension provides a policy-based RBAC mechanism in the WBEM infrastructure.
|
|
|
Probabilistic fault diagnosis for IT services in noisy and dynamic environments |
| |
Lu Cheng,
Xue-song Qiu,
Luoming Meng,
Yan Qiao,
Zhi-qing Li
|
|
Pages: 149-156 |
|
Modern society has come to rely heavily on IT services. To improve the quality of IT services, it is important to quickly and accurately detect and diagnose their faults, which usually manifest as the disruption of a set of dependent logical services affected by the failed IT resources. This task, which depends on observed symptoms and knowledge about the IT services, is complicated by noise and dynamic changes in the managed environment. We present a tool for the analysis of IT service faults which, given a set of failed end-to-end services, discovers the underlying faulty resources. We demonstrate empirically that it applies in noisy and dynamically changing environments with bounded errors and high efficiency. We compare our algorithm with two prior approaches, Shrink and Maxcoverage, in two well-known types of network topologies. Experimental results show that our algorithm improves the overall performance.
|
|
|
Session resumption for the secure shell protocol |
| |
Jürgen Schönwälder,
Georgi Chulkov,
Elchin Asgarov,
Mihai Cretu
|
|
Pages: 157-163 |
|
The secure shell protocol (SSH) is widely deployed to access command line interfaces of network devices and host systems over an insecure network. Recently, the IETF has produced specifications describing how to run network management protocols such as NETCONF or SNMP over SSH. SSH computes new session keys whenever a new SSH session is established. This computationally expensive operation causes significant latency and processor load on low-end devices and thus makes short-lived connections very expensive. To address this problem, we describe a session resumption feature that allows clients to resume sessions without having to compute new session keys.
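The key-caching idea behind session resumption can be illustrated with a small sketch. This is not the paper's protocol extension: the class name, the TTL policy and keying entries by host:port are assumptions made purely for illustration.

```python
import time


class SessionCache:
    """Toy client-side cache of derived SSH session state, keyed by server.

    Illustrative only: real session resumption extends the SSH transport
    protocol itself; here we only show why reusing cached keys avoids the
    expensive key-exchange step on reconnect.
    """

    def __init__(self, ttl=600.0):
        self.ttl = ttl      # seconds a cached session stays resumable
        self._store = {}    # server_id -> (session_keys, save_time)

    def save(self, server_id, session_keys):
        """Remember the session keys negotiated with this server."""
        self._store[server_id] = (session_keys, time.monotonic())

    def resume(self, server_id):
        """Return cached keys if still fresh, else None (full key exchange)."""
        entry = self._store.get(server_id)
        if entry is None:
            return None
        keys, saved_at = entry
        if time.monotonic() - saved_at > self.ttl:
            del self._store[server_id]  # expired: force a new key exchange
            return None
        return keys


cache = SessionCache(ttl=600.0)
cache.save("router1.example.net:22", b"derived-session-keys")
cache.resume("router1.example.net:22")   # hit: skip key exchange
cache.resume("unknown-host:22")          # miss: fall back to full handshake
```

A short-lived management session (e.g. a single NETCONF poll) would first call `resume()` and only run the full key exchange on a miss, which is exactly where the latency savings on low-end devices come from.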
|
|
|
PCE-based hierarchical segment restoration |
| |
Mohamed Abouelela,
Mohamed El-Darieby
|
|
Pages: 164-171 |
|
Providing network QoS involves, among other things, ensuring network survivability in spite of network faults. Fault recovery mechanisms should reduce recovery time, especially for real-time and mission-critical applications, while guaranteeing QoS requirements in terms of bandwidth and delay constraints and maximizing network resource utilization. In this paper, we propose a scalable recovery mechanism based on hierarchical networks. The proposed mechanism is based on inter-domain segmental restoration and is performed by a recovery module (RM) introduced for each domain of the hierarchy. The RM cooperates with a Path Computation Element (PCE) to perform recovery while maintaining QoS. Segmental restoration ensures faster recovery by trying to recover failed paths as close as possible to where the fault occurred. The recovery mechanism aggregates fault notification messages to reduce the size of the signaling storm, and ranks failed paths to reduce recovery time for high-priority traffic. We present simulation results for different network sizes and hierarchy structures, considering two metrics: recovery time and signaling storm size. A significant decrease in recovery time is observed as the number of hierarchical levels increases for the same network size: the larger the number of hierarchy levels, the smaller the number of network nodes in each domain and, generally, the faster the routing computations and routing table search times. In addition, the recovery mechanism reduces recovery time for high-priority traffic by nearly 90% compared with lower-priority traffic. However, increasing the number of hierarchical levels results in a linear increase in signaling storm size.
|
|
|
SecSip: a stateful firewall for SIP-based networks |
| |
Abdelkader Lahmadi,
Olivier Festor
|
|
Pages: 172-179 |
|
SIP-based networks are becoming the de-facto standard for voice, video and instant messaging services. As they are exposed to many threats while playing a major role in the operation of essential services, the need for dedicated security management approaches is rapidly increasing. In this paper we present an original security management approach based on a vulnerability-aware, stateful SIP firewall. Through known attack descriptions, we illustrate the expressiveness of the firewall's configuration language, which can specify stateful objects that track data from multiple SIP elements over their lifetime. We demonstrate the firewall's efficiency and performance through measurements on a real implementation.
|
|
|
Using argumentation logic for firewall configuration management |
| |
Arosha K. Bandara,
Antonis C. Kakas,
Emil C. Lupu,
Alessandra Russo
|
|
Pages: 180-187 |
|
Firewalls remain the main perimeter security protection for corporate networks. However, network size and complexity make firewall configuration and maintenance notoriously difficult. Tools are needed to analyse firewall configurations for errors, to verify that they correctly implement security requirements, and to generate configurations from higher-level requirements. In this paper we extend our previous work on the use of formal argumentation and preference reasoning for firewall policy analysis and develop means to automatically generate firewall policies from higher-level requirements. This permits both analysis and generation to be done within the same framework, thus accommodating a wide variety of scenarios for authoring and maintaining firewall configurations. We validate our approach by applying it to both examples from the literature and real firewall configurations of moderate size (≈ 150 rules).
|
|
|
Evaluating WS-security and XACML in web services-based network management |
| |
Estêvão Miguel Zanette Rohr,
Lisandro Zambenedetti Granville,
Liane Margarida R. Tarouco
|
|
Pages: 188-194 |
|
The use of Web services in network management has become a reality following recent research and industry standardization efforts. Although performance is a critical issue, as is security support, no investigation so far has observed how secure Web services communications perform when employed for network management. In this paper we present a first investigation of this subject by evaluating the performance of WS-Security and XACML in a scenario where remote process information is retrieved. Our evaluation shows that encryption and access control increase the response time more than other aspects such as message signing or authentication. We also observe that messages carrying security information are 10 times larger than unsecured messages, which may prevent the retrieval of large amounts of information in short periods of time.
|
|
|
Performance management via adaptive thresholds with separate control of false positive and false negative errors |
| |
David Breitgand,
Maayan Goldstein,
Ealan Henis,
Onn Shehory
|
|
Pages: 195-202 |
|
Component-level performance thresholds are widely used as a basic means for performance management. As the complexity of managed systems increases, manual threshold maintenance becomes a difficult task. This may result from a) a large number of system components and their operational metrics, b) dynamically changing workloads, and c) complex dependencies between system components. To alleviate this problem, we advocate that component-level thresholds should be computed, managed and optimized automatically and autonomously. To this end, we have designed and implemented a performance threshold management sub-system that automatically and dynamically computes two separate component-level thresholds: one for controlling Type I (false positive) errors and another for controlling Type II (false negative) errors. We present the theoretical foundation for this autonomic threshold management system, describe a specific algorithm and its implementation, and evaluate it using real-life scenarios and production data sets. As our study shows, with proper parameter tuning, our on-line dynamic solution is capable of nearly optimal performance threshold calculation.
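The idea of maintaining one threshold per error type can be sketched as follows. This toy version simply derives each threshold from a percentile of labeled historical samples; the nearest-rank percentile and the 5% defaults are illustrative assumptions, not the paper's algorithm.

```python
def percentile(sorted_vals, q):
    """Nearest-rank percentile of a pre-sorted list, 0.0 <= q <= 1.0."""
    idx = min(len(sorted_vals) - 1, int(round(q * (len(sorted_vals) - 1))))
    return sorted_vals[idx]


def two_thresholds(normal, abnormal, alpha=0.05, beta=0.05):
    """Derive a false-positive and a false-negative threshold from history.

    t_fp: alarming only above it keeps the false-positive rate near alpha,
          since ~(1 - alpha) of the normal samples lie below it.
    t_fn: alarming above it keeps the false-negative rate near beta,
          since ~(1 - beta) of the abnormal samples lie above it.
    """
    t_fp = percentile(sorted(normal), 1.0 - alpha)
    t_fn = percentile(sorted(abnormal), beta)
    return t_fp, t_fn


# Response times under normal load vs. samples from known violations (ms).
normal_rt = list(range(100))          # 0..99
abnormal_rt = list(range(200, 300))   # 200..299
t_fp, t_fn = two_thresholds(normal_rt, abnormal_rt)
```

The gap between the two values on such data is the point of the separation: no single threshold can satisfy both error goals at once, so each is tracked and adapted independently.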
|
|
|
Optimizing correlation structure of event services considering time and capacity constraints |
| |
Bin Zhang,
Ehab Al-Shaer
|
|
Pages: 203-210 |
|
Constructing an optimal event correlation architecture is crucial to large-scale event services. It plays an instrumental role in detecting composite events requested by different subscribers in a scalable and timely manner. However, events generated from different sources might have different time and priority requirements. In addition, the network links and correlation servers might have different bandwidth and processing constraints, respectively. In this work, we address the problem of optimizing distributed event correlation to maximize the correlation profit (benefit minus shipping and processing cost) of detecting composite events, while at the same time satisfying the network bandwidth, node capacity, and correlation task time constraints. We show that this problem is NP-hard and provide a heuristic approximation algorithm. We evaluate our heuristic approach with different network sizes and topologies under different event delivery and detection requirements. Our simulation study shows that the results obtained by our heuristic are close to the upper bound.
|
|
|
Fault detection in IP-based process control networks using data mining |
| |
Byungchul Park,
Young J. Won,
Hwanjo Yu,
James Won-Ki Hong,
Hong-Sun Noh,
Jang Jin Lee
|
|
Pages: 211-217 |
|
Industrial process control IP networks support communications between process control applications and devices. Communication faults in any stage of these control networks can cause delays or even shutdown of the entire manufacturing process. The current process of detecting and diagnosing communication faults is mostly manual, cumbersome, and inefficient. Detecting early symptoms of potential problems is very important but automated solutions do not yet exist. Our research goal is to automate the process of detecting and diagnosing the communication faults as well as to prevent problems by detecting early symptoms of potential problems. To achieve our goal, we have first investigated real-world fault cases and summarized control network failures. We have also defined network metrics and their alarm conditions to detect early symptoms for communication failures between process control servers and devices. In particular, we leverage data mining techniques to train the system to learn the rules of network faults in control networks and our testing results show that these rules are very effective. In our earlier work, we presented a design of a process control network monitoring and fault diagnosis system. In this paper, we focus on how the fault detection part of this system can be improved using data mining techniques.
|
|
|
A user-centric network management framework for high-density wireless LANs |
| |
Yanfeng Zhu,
Qian Ma,
Chatschik Bisdikian,
Chun Ying
|
|
Pages: 218-225 |
|
With the ever increasing deployment density of Wireless Local Area Networks (WLANs), more and more access points (APs) are deployed within users' vicinity. The effective management of these APs to optimize users' eventual throughput becomes an important challenge in high-density deployment environments. In this paper, we propose a user-centric network management framework to optimize the throughput of users operating in high-density WLANs, taking into consideration the network conditions sensed by users and their access priorities. The proposed framework is built around an information pipeline that facilitates the sharing of the information needed for optimal management of communication resources. Theoretical analysis and extensive simulations of two major management activities, AP association and channel selection, demonstrate that the proposed user-centric framework significantly outperforms traditional network management frameworks in high-density deployment environments.
|
|
|
MeshMan: a management framework for wireless mesh networks |
| |
Vivek Aseeja,
Rong Zheng
|
|
Pages: 226-233 |
|
As wireless mesh networks become more popular, there is a need for centralized management solutions that enable network administrators to control, troubleshoot and collect statistics from their networks. Managing wireless mesh networks poses unique challenges due to limited bandwidth resources and dynamic channel quality. A robust management solution should function despite network layer failure. In this paper, we propose MeshMan, a network-layer-agnostic, low-overhead network management solution designed to cope with unreliable wireless channels and link- and network-level dynamics in wireless mesh networks. It combines the concepts of source routing with hierarchical addressing, and provides a native, efficient query interface. A prototype of MeshMan has been implemented as a user-space daemon on Linux and evaluated using a 12-node wireless mesh network testbed. Experimental studies demonstrate that MeshMan has comparable or better performance than the Simple Network Management Protocol (SNMP) in management overhead and response times when the network is stable, and much better performance in the presence of network dynamics.
|
|
|
A scalable PBNM framework for MANET management |
| |
Wang-Cheol Song,
Shafqat-Ur Rehman,
Hanan Lutfiyya
|
|
Pages: 234-241 |
|
Policy-based Network Management (PBNM) in Mobile Ad-hoc Networks (MANETs) requires additional reliable and efficient mechanisms beyond PBNM in wired networks. Thus, it is important that the management system in MANETs cluster the moving nodes and manage their movements in an effective manner. In this paper, a scalable framework is proposed for policy-based management in ad hoc networks, in which we use k-hop clustering with extended COPS-PR. We discuss methods for Policy Enforcement Points (PEPs) to autonomously discover the Policy Decision Point (PDP) and set the management area in the framework. In addition, three regions are suggested to effectively maintain PDP/PEP clusters in the PBNM system. Finally, we discuss the results achieved through simulations.
|
|
|
Adaptable misbehavior detection and isolation in wireless ad hoc networks using policies |
| |
Oscar F. Gonzalez Duque,
Antonis M. Hadjiantonis,
George Pavlou,
Michael Howarth
|
|
Pages: 242-250 |
|
Wireless ad hoc networks provide the communications platform for new technologies and applications, such as vehicular ad hoc networks or wireless mesh networks. However, their multihop wireless nature makes them inherently unreliable and vulnerable, since their overall performance depends on the cooperative packet forwarding behavior of each individual node. In this paper we present a role-based approach that uses a distributed management overlay and gathers information about the packet forwarding activities of each node in the network. Using policies to control an adaptive algorithmic method that monitors the individual behavior of each node, we show that it is possible to detect, accuse and punish misbehaving nodes with a high degree of confidence. Our evaluation results demonstrate that after the successful detection of misbehaving nodes, their punishment through network isolation can significantly improve network performance in terms of packet delivery and throughput.
|
|
|
Monitoring of SLA compliances for hosted streaming services |
| |
Hasan Peter Racz,
Burkhard Stiller
|
|
Pages: 251-258 |
|
Monitoring of Service Level Objectives (SLOs) constitutes an essential part of Service Level Agreement (SLA) management, since customers are to be reimbursed if a provider fails to fulfil them. By automating this process, timely detection of a violation is possible. The compliance approach must be flexible enough to adapt to potential changes, must be scalable with respect to the amount of data, and has to support multi-domain environments. This paper describes a Hosted Streaming Services scenario and defines relevant SLOs. Key requirements are derived, the respective architecture is designed, and the approach is implemented prototypically based on a generic auditing framework. Furthermore, a new scheme is proposed that considers the degree and duration of SLO violations in calculating reimbursements.
|
|
|
Gossiping for threshold detection |
| |
Fetahi Wuhib,
Rolf Stadler,
Mads Dam
|
|
Pages: 259-266 |
|
We investigate the use of gossip protocols to detect threshold crossings of network-wide aggregates. Aggregates are computed from local device variables using functions such as SUM, AVERAGE, COUNT, MAX and MIN. The process of aggregation and detection is performed using a standard gossiping scheme. A key design element is to let nodes dynamically adjust their neighbor interaction rates according to the distance between the nodes' local estimate of the global aggregate and the threshold itself. We show that this allows considerable savings in communication overhead. In particular, the overhead becomes negligible when the aggregate is sufficiently far above or far below the threshold. We present evaluation results from simulation studies regarding protocol efficiency, quality of threshold detection, scalability, and controllability.
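The distance-dependent interaction rate described above can be sketched with a simple rate function alongside a standard push-pull averaging step. The hyperbolic decay and the rate bounds are illustrative assumptions, not the protocol's actual schedule.

```python
import random


def gossip_rate(estimate, threshold, r_min=0.1, r_max=10.0, width=1.0):
    """Interactions per unit time for a node whose local estimate of the
    network-wide aggregate is `estimate`. Nodes near the threshold gossip
    at r_max; the rate decays toward r_min as the distance grows, so the
    overhead becomes negligible far from the threshold."""
    distance = abs(estimate - threshold)
    return max(r_min, r_max / (1.0 + distance / width))


def gossip_round(values):
    """One push-pull averaging exchange: two random nodes average their
    local values, driving all estimates toward the global AVERAGE while
    preserving the sum (and hence the true aggregate)."""
    nodes = list(values)
    i, j = random.sample(range(len(nodes)), 2)
    nodes[i] = nodes[j] = (nodes[i] + nodes[j]) / 2.0
    return nodes
```

Coupling the two, each node would schedule its next `gossip_round` exchange after an interval of `1 / gossip_rate(local_estimate, threshold)`, which is where the overhead savings far from the threshold come from.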
|
|
|
Monitoring and counter-profiling for voice over IP networks and services |
| |
Rémi Badonnel,
Olivier Festor,
Khaled Hamlaoui
|
|
Pages: 267-274 |
|
Voice over IP (VoIP) has become a major paradigm for providing lower operational costs and higher flexibility in networks and services. VoIP infrastructures are, however, facing multiple security issues. In particular, monitoring methods and techniques can be applied to VoIP traffic in order to profile and track network users. We present in this paper a countermeasure strategy for preventing VoIP profiling. We propose two functional architectures with different noise generation functions that dynamically generate fake VoIP messages to degrade profiling performance. We quantify the benefits and limits of our approach through an implementation prototype and the analysis of experimental results obtained in the scenario of profiling methods based on principal component analysis (PCA).
|
|
|
Event handling in clean-slate future internet management |
| |
C. Mingardi,
G. Nunzi,
D. Dudkowski,
M. Brunner
|
|
Pages: 275-278 |
|
Event handling is a management mechanism that provides the means for the network to react to changes in network conditions or performance. In the construction of a clean-slate management architecture, we consider this a main building block. This paper proposes event distribution in a fully distributed environment: unlike existing work, no configuration is required in advance, yet nodes are guaranteed that events are delivered and that certain delivery objectives are respected. The contributions of this paper are a generic system model for event handling and an analysis of event distribution mechanisms with respect to timeliness and traffic metrics. The paper describes and discusses in detail the results of simulations and provides guidelines for management functions of the Future Internet.
|
|
|
Security and mobility architecture for isolated wireless networks using WIMAX as an infrastructure |
| |
Vamsi Krishna Gondi,
Nazim Agoulmine
|
|
Pages: 279-282 |
|
The main aim of this paper is to define a security and mobility architecture that allows users to roam across isolated wireless networks. Due to the mobility of the users as well as the networks, key issues like security and mobility management are not properly addressed, owing to the unavailability of infrastructure to handle authentication and mobility management in the access networks. To provide services in isolated areas and to cover large areas, cellular networks offer an obvious solution, but their bandwidth, communication cost and service availability are limited. We therefore propose to integrate WiMAX (IEEE 802.16) based networks working in a mesh configuration with WLAN (IEEE 802.11) to provide different services. In this approach, a centralized system processes authentication and mobility management for both users and access networks. In the proposed architecture, a master node acts as a gateway for mesh and slave nodes. The gateway hosts an AAA server which acts as an authentication and accounting server for the mesh nodes. WLANs are interconnected to mesh and slave nodes, and users use the WLAN as an access network. A user authenticates to the network using EAP or a one-time password method to access the services in the network. We also propose mobility management in the architecture, so that users roam across different access networks in an efficient manner. We evaluated the architecture using a testbed, measuring authentication and reauthentication times during roaming, and the delay at the user level while networks are in mobile mode.
|
|
|
Investigating the role of a transmission initiator in private peering arrangements |
| |
Ruzana Davoyan,
Jörn Altmann
|
|
Pages: 283-286 |
|
This paper investigates the impact of determining the original initiator of a transmission on demand as well as on the profits of the providers. For that purpose we present a new model, called differentiated traffic-based interconnection agreement (DTIA), that differentiates traffic into two types, referred to as native and stranger, in order to determine the transmission initiator. In comparison with existing financial settlements, under which payments are based on the net traffic flow, the proposed model governs cost compensation according to the differentiated traffic flows. In addition, a traffic management mechanism that supports the presented approach is described. Analytical studies using the Nash bargaining solution explore how the proposed strategy affects the outcome of providers' negotiations. The key result is that determining the initiator of a transmission induces providers to receive higher profits.
|
|
|
Framework to achieve multi-domain service management |
| |
Anindo Bagchi,
Francesco Caruso,
Andrew Mayer,
Ronald Roman,
Prabha Kumar,
Sitaram Kowtha
|
|
Pages: 287-290 |
|
An ongoing trend within the telecommunications industry has been toward providing end-to-end (E2E) IP based services. Accordingly, the associated management perspective has shifted from management of services within a single service provider domain towards a multi-domain paradigm supporting an E2E service view in which relationships among service providers, suppliers and customers and the related service level agreements (SLAs) take on major significance. In order to achieve multi-domain management, it is essential that a service management framework be put in place that supports planning, fulfilling, and assuring E2E services, including management of Service Requests, Service Assurance, and SLAs. The collecting and fusing of information from multiple sources, many external to the service provider's domain, to support the required E2E perspective has become an essential process.
|
|
|
Analysing Joost peer to peer IPTV protocol |
| |
Mehdi Nafaa,
Nazim Agoulmine
|
|
Pages: 291-294 |
|
After Kazaa and Skype, Niklas Zennstrom and Janus Friis released Joost, a peer-to-peer TV client. Joost claims that it works seamlessly, offers better video quality than existing p2p video streaming applications, and approaches the number of channels of traditional broadcast TV. In this paper, we present an experimental analysis of the Joost p2p television protocol. After months of packet monitoring, we have gathered valuable traffic data, which we analyse. We present insights into Joost and describe its key components and networking model. Our objective is to analyse its management protocol, treating it as a closed box, without any a priori knowledge of its internal implementation. We also focus on the life of a peer in the Joost network.
|
|
|
Problem classification method to enhance the ITIL incident and problem |
| |
Yang Song,
Anca Sailer,
Hidayatullah Shaikh
|
|
Pages: 295-298 |
|
Problem determination and resolution (PDR) is the process of detecting anomalies in a monitored system, locating the problems responsible, determining the root cause and fixing it. The cost of PDR represents a substantial part of operational costs, and faster, more effective PDR can contribute to a substantial reduction in system administration costs. In this paper, we propose to automate PDR by leveraging machine learning methods. The main focus is to effectively categorize the problem a user experiences by recognizing its specificity, leveraging all available training data such as performance data and logs. Specifically, we organize the problem space into a hierarchy that can be determined in advance from an existing taxonomy. We then propose an efficient hierarchical incremental learning algorithm capable of adjusting its internal local classifier parameters in real time. Compared to traditional batch learning algorithms, this online learning framework can significantly decrease the computational complexity of training by learning from new instances in an incremental fashion. At the same time, it reduces the amount of memory required to store training instances. We demonstrate the efficiency of our approach by learning hierarchical problem patterns for several issues occurring in distributed web applications. Experimental results show that our approach substantially outperforms previous methods.
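The hierarchical incremental idea can be sketched as follows. This is a toy illustration only: the taxonomy, the feature vectors, and the per-node perceptron update are assumptions, not the paper's actual algorithm. Each taxonomy node keeps one online classifier per child; prediction routes a sample down the tree, and training updates only the classifiers along the true path, so no batch retraining is needed.

```python
# Toy hierarchical online classification (illustrative, not the paper's
# algorithm). Each internal node holds one perceptron per child.

DIM = 3  # toy feature dimension (e.g. CPU, latency, error-rate signals)

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        # one weight vector per child, updated online
        self.weights = {c.name: [0.0] * DIM for c in self.children}

    def score(self, child, x):
        return sum(w * xi for w, xi in zip(self.weights[child], x))

def predict(root, x):
    """Route x from the root to a leaf, returning the leaf label."""
    node = root
    while node.children:
        node = max(node.children, key=lambda c: node.score(c.name, x))
    return node.name

def learn(root, x, path, lr=1.0):
    """Incrementally update only the classifiers along the true path."""
    node = root
    for true_child in path:
        for c in node.children:
            target = 1.0 if c.name == true_child else -1.0
            if target * node.score(c.name, x) <= 0:  # perceptron step
                node.weights[c.name] = [
                    w + lr * target * xi
                    for w, xi in zip(node.weights[c.name], x)]
        node = next(c for c in node.children if c.name == true_child)

# Toy taxonomy: root -> {performance -> {cpu, io}, availability}
cpu = Node("cpu"); io = Node("io")
perf = Node("performance", [cpu, io])
avail = Node("availability")
root = Node("root", [perf, avail])

for _ in range(5):  # a few online passes over two labelled incidents
    learn(root, [1.0, 0.0, 0.0], ["performance", "cpu"])
    learn(root, [0.0, 1.0, 0.0], ["availability"])
```

Because each update touches only the nodes on one root-to-leaf path, the per-sample cost grows with tree height rather than with the number of classes, which is the complexity benefit the abstract alludes to.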
|
|
|
How much management is management enough? providing monitoring processes with online adaptation and learning capability |
| |
Josiane Ortolan Coelho,
Luciano Paschoal Gaspary,
Liane Margarida Rockenbach Tarouco
|
|
Pages: 299-302 |
|
Recent investigations of management traffic patterns in production networks suggest that only a small and static set of management data tends to be used, that the flow of management data is relatively constant, and that the operations used for manager-agent communication are reduced to a small, sometimes obsolete, set. This indicates a lack of progress in monitoring processes, considering their strategic role and their potential, for example, to anticipate and prevent faults, performance bottlenecks, and security problems. One of the main reasons for this limitation is that operators, who are still a fundamental element of the monitoring control loop, can no longer handle the rapidly increasing size and heterogeneity of the hardware and software components that comprise modern networked computing systems. This form of human-in-the-loop management hampers timely adaptation of monitoring processes. To tackle this issue, this paper presents a model, inspired by reinforcement learning theory, for adaptive network, service and application monitoring. The model is instantiated through a prototypical implementation of an autonomic element which, based on historical and even unexpected values retrieved for management objects, dynamically widens or restricts the set of management objects to be monitored.
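A minimal sketch of the widen/restrict behaviour, under loudly assumed details: the per-object score, its exponential update rule, and the "surprise" reward are inventions for illustration, not the paper's model. Each management object accumulates reinforcement when its readings change unexpectedly and decays when they are stable; objects whose score falls below a threshold drop out of the monitored set.

```python
# Illustrative sketch of RL-inspired adaptive monitoring (the scoring
# rule and threshold are assumptions, not the paper's actual model).

class AdaptiveMonitor:
    def __init__(self, objects, threshold=0.5, alpha=0.3):
        self.scores = {o: 1.0 for o in objects}  # start by watching all
        self.last = {}
        self.threshold = threshold
        self.alpha = alpha                       # learning rate

    def monitored(self):
        """The currently monitored subset of management objects."""
        return {o for o, s in self.scores.items() if s >= self.threshold}

    def observe(self, obj, value):
        # reward = 1 when the value changed "unexpectedly", else 0
        reward = 1.0 if self.last.get(obj) not in (None, value) else 0.0
        self.last[obj] = value
        # exponential (TD-like) update towards the observed reward
        self.scores[obj] += self.alpha * (reward - self.scores[obj])

mon = AdaptiveMonitor(["ifInOctets", "cpuLoad"])
for v in [10, 10, 10, 10, 10, 10]:
    mon.observe("cpuLoad", v)      # flat series: score decays, dropped
for v in [1, 5, 2, 9, 4, 7]:
    mon.observe("ifInOctets", v)   # volatile series: stays monitored
```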
|
|
|
Application-specific packet capturing using kernel probes |
| |
Byungjoon Lee,
Seong Moon,
Youngseok Lee
|
|
Pages: 303-306 |
|
When reverse-engineering unknown protocols or analyzing Internet traffic, it is critical to capture the complete traffic traces generated by a target application. Moreover, to validate the accuracy of the traffic classification algorithms used by traffic monitoring systems, which are usually located in the middle of the network, it is important to retain traffic traces associated with each application. In this paper, we therefore present an application-specific packet capturing method for end hosts based on the dynamic kernel probing technique. Experiments show that the proposed method is useful for creating complete per-application traffic traces without performance degradation.
|
|
|
RESERVOIR: management technologies and requirements for next generation service oriented infrastructures |
| |
B. Rochwerger,
A. Galis,
E. Levy,
J. A. Cáceres,
D. Breitgand,
Y. Wolfsthal,
I. M. Llorente,
M. Wusthoff,
R. S. Montero,
E. Elmroth
|
|
Pages: 307-310 |
|
The RESERVOIR project [16] is developing an advanced system and service management approach that will serve as the infrastructure for cloud computing and communications and the Future Internet of Services through a creative coupling of service virtualization, grid computing, networking and service management techniques. This paper presents work in progress on the integration and management of such systems into a new generation of managed service infrastructure.
|
|
|
Collaborative content caching algorithms in mobile ad hoc networks environment |
| |
Y. Abdelmalek,
A. Abd El Al,
T. Saadawi
|
|
Pages: 311-314 |
|
In this paper, we address the problem of collaborative video caching in mobile ad hoc networks. We consider a network in which a static video server is connected by a wired interface to a gateway node equipped with wireless interfaces; the other nodes require access to the video streams stored at the server. To reduce average access latency and enhance video accessibility, efficient video cache placement and replacement strategies at some of the distributed intermediate nodes across the network are crucial. Virtual-backbone caching nodes are elected by a cache placement algorithm executed after the routing protocol phase. Simulation results indicate that the proposed collaborative aggregate cache mechanism can significantly improve video QoS in terms of packet loss and average packet delay.
|
|
|
A policy based security management architecture for sensor networks |
| |
Sérgio de Oliveira,
Thiago Rodrigues de Oliveira,
José Marcos Nogueira
|
|
Pages: 315-318 |
|
Wireless sensor networks are subject to several types of attacks, especially denial-of-service (DoS) attacks. Several mechanisms and techniques have been proposed to secure wireless sensor networks, including cryptographic processes, key management protocols, intrusion detection systems, node revocation schemes, secure routing, and secure data fusion. A recent work proposes a security management framework that dynamically configures and reconfigures security components in sensor networks according to management information collected by sensor nodes and sent to decision-making management entities. It turns security components on or off only when they are necessary, saving power and extending network lifetime. The architecture is policy based, which enables rule configurations specific to each application. We evaluate this security management framework, showing opportunities to save power and how it can help extend network lifetime. We propose several scenarios to evaluate the framework's performance and estimate the cost of its security components.
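The policy-based on/off mechanism can be sketched roughly as follows. The policy format, the component names (`ids`, `crypto_strong`), and the condition fields are illustrative assumptions, not the framework's actual rule language: collected management information is matched against rules, and each matching rule sets a security component's desired state, so components draw power only when needed.

```python
# Illustrative policy engine (rule format and component names assumed).
# Each policy: (condition on collected management info, component, state).

policies = [
    (lambda info: info["intrusion_alerts"] > 0, "ids", True),
    (lambda info: info["intrusion_alerts"] == 0, "ids", False),
    (lambda info: info["battery"] < 0.2, "crypto_strong", False),
]

def apply_policies(info, components):
    """Turn components on/off so only the necessary ones consume power."""
    for cond, comp, state in policies:
        if cond(info):
            components[comp] = state
    return components

# Alerts present, battery healthy: IDS is switched on.
state = apply_policies(
    {"intrusion_alerts": 3, "battery": 0.9},
    {"ids": False, "crypto_strong": True})
```

When alerts disappear and the battery runs low, re-applying the policies disables both components again, which is the lifetime-saving behaviour the abstract describes.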
|
|
|
Managing responsiveness of virtual desktops using passive monitoring |
| |
Rajdeep Bhowmik,
Andrzej Kochut,
Kirk Beaty
|
|
Pages: 319-326 |
|
Desktop virtualization is a new computing approach to application delivery and management. It leverages OS virtualization and remoting protocols to provide users with remote access to virtual machines running in a centralized data center. It promises significant benefits in terms of improved data security, reduced management complexity, and more efficient and flexible resource usage. However, it brings many management challenges typical of centralized systems, with performance and quality-of-service management among the most important. This paper proposes a management algorithm for efficient resource allocation in virtualized desktop environments that takes application performance QoS into consideration. It proposes a novel, non-intrusive method for application- and remoting-protocol-agnostic desktop responsiveness monitoring. Moreover, it is based on studies of desktop workload usage, which enabled us to discover and leverage workload patterns that can increase efficiency both in terms of desktop responsiveness and resource usage. We have prototyped the system and discuss several case studies validating the approach and illustrating the most important features of the solution.
|
|
|
Shares and utilities based power consolidation in virtualized server environments |
| |
Michael Cardosa,
Madhukar R. Korupolu,
Aameek Singh
|
|
Pages: 327-334 |
|
Virtualization technologies like VMware and Xen provide features to specify the minimum and maximum amount of resources that can be allocated to a virtual machine (VM) and a shares-based mechanism for the hypervisor to distribute spare resources among contending VMs. However, much of the existing work on VM placement and power consolidation in data centers fails to take advantage of these features. One of our experiments on a real testbed shows that leveraging such features can improve the overall utility of the data center by 47% or more. Motivated by this, we present a novel suite of techniques for placement and power consolidation of VMs in data centers that takes advantage of the min-max and shares features inherent in virtualization technologies. Our techniques provide a smooth mechanism for power-performance trade-offs in modern data centers running heterogeneous applications, wherein the amount of resources allocated to a VM can be adjusted based on available resources, power costs, and application utilities. We evaluate our techniques on a range of large synthetic data center setups and a small real data center testbed comprising VMware ESX servers. Our experiments confirm the end-to-end validity of our approach and demonstrate that our final candidate algorithm, PowerExpandMinMax, consistently yields the best overall utility across a broad spectrum of inputs - varying VM sizes and utilities, varying server capacities and varying power costs - thus providing a practical solution for administrators.
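The min-max/shares mechanism the abstract builds on can be sketched as a water-filling allocation: every VM first receives its minimum, then spare capacity is handed out in proportion to shares, capping any VM at its maximum and redistributing what is left. This is a simplified sketch of the hypervisor-style distribution, not the paper's placement or utility-maximization algorithm.

```python
# Water-filling sketch of min-max + shares resource distribution
# (simplified; not the paper's PowerExpandMinMax algorithm).

def allocate(capacity, vms):
    """vms: {name: (min, max, shares)} -> {name: allocation}."""
    alloc = {n: mn for n, (mn, mx, sh) in vms.items()}  # grant minimums
    active = set(vms)
    spare = capacity - sum(alloc.values())
    while spare > 1e-9:
        # only VMs below their maximum still compete for spare capacity
        active = {n for n in active if alloc[n] < vms[n][1]}
        if not active:
            break
        total = sum(vms[n][2] for n in active)
        # proportional-to-shares grant, capped at each VM's maximum
        grant = {n: min(spare * vms[n][2] / total,
                        vms[n][1] - alloc[n]) for n in active}
        for n, g in grant.items():
            alloc[n] += g
        spare -= sum(grant.values())
    return alloc

# VM A: min 1, max 3, 1 share; VM B: min 1, max 10, 3 shares.
result = allocate(10, {"A": (1, 3, 1), "B": (1, 10, 3)})
```

With 8 units spare and a 1:3 share ratio, A receives 2 (reaching its cap of 3) and B receives 6, illustrating how shares steer spare capacity while min/max bounds stay respected.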
|
|
|
iMark: an identity management framework for network virtualization environment |
| |
N. M. Mosharaf Kabir Chowdhury,
Fida-E Zaheer,
Raouf Boutaba
|
|
Pages: 335-342 |
|
In recent years, network virtualization has been propounded as an open and flexible future internetworking paradigm that allows multiple virtual networks (VNs) to co-exist on a shared physical substrate. Each VN in a network virtualization environment (NVE) is free to implement its own naming, addressing, routing, and transport mechanisms. While such flexibility allows fast and easy deployment of diversified applications and services, ensuring end-to-end communication and universal connectivity poses a daunting challenge. This paper argues that effective and efficient management of heterogeneous identifier spaces is the key to solving the problem of end-to-end connectivity in an NVE. We propose iMark, an identity management framework based on a global identity space, which enables end hosts to communicate with each other within and outside of their own networks through a set of controllers, adapters, and well-placed mappings, without sacrificing the autonomy of the concerned VNs. We describe the procedures that manipulate the mappings between different identifier spaces and provide a performance evaluation of the proposed framework.
|
|
|
Enabling high-speed and extensible real-time communications monitoring |
| |
Francesco Fusco,
Felipe Huici,
Luca Deri,
Saverio Niccolini,
Thilo Ewald
|
|
Pages: 343-350 |
|
The use of the Internet as a medium for real-time communications has grown significantly over the past few years. However, the best-effort model of this network is not particularly well-suited to the demands of users who are familiar with the reliability, quality and security of the Public Switched Telephone Network. If the growth is to continue, monitoring and real-time analysis of communication data will be needed to ensure good call quality and, should degradation occur, to take corrective action. Writing this type of monitoring application is difficult and time-consuming: VoIP traffic not only tends to use dynamic ports, but its real-time nature, along with the fact that its packets tend to be small, imposes non-trivial performance requirements. In this paper we present RTC-Mon, the Real-Time Communications Monitoring framework, which provides an extensible platform for the rapid development of high-speed, real-time monitoring applications. While the focus is on VoIP traffic, the framework is general and capable of monitoring any type of real-time communications traffic. We present testbed performance results for the various components of RTC-Mon, showing that it can monitor a large number of concurrent flows without losing packets. In addition, we implemented a proof-of-concept application that tracks statistics about a large number of calls and their users in only 800 lines of code, showing that the framework is efficient and significantly reduces development time.
|
|
|
Monitoring the impact of P2P users on a broadband operator's network |
| |
H. J. Kolbe,
O. Kettig,
E. Golic
|
|
Pages: 351-358 |
|
Since their emergence, peer-to-peer (P2P) applications have generated a considerable fraction of the overall bandwidth transferred in broadband networks. Residential broadband service has moved from one geared towards technology enthusiasts and early adopters to a commodity for a large fraction of households. Thus, the question of whether P2P is still the dominant application in terms of bandwidth usage is highly relevant for broadband operators. In this work we present a method for classifying broadband users into a P2P and a non-P2P group based on the number of communication partners ("peers") they have within a dedicated timeframe. Based on this classification, we derive their impact on network characteristics such as the number of active users and their aggregate bandwidth. Privacy is assured by anonymizing the data and by not taking packet payloads into account. We apply our method to real operational data collected from a major German DSL provider's access link, which transported all traffic each user generated and received. We find that P2P users are still large contributors to the total amount of traffic seen. However, in comparison to data collected four years earlier, the impact of P2P on the bandwidth peaks in the busy hours has clearly decreased, while other applications have a growing impact. Further analysis also reveals that P2P users' traffic does not exhibit strong locality. We furthermore compare our findings to those available in the literature and propose areas for future work on network monitoring, P2P applications, and network design.
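The peer-count classification rule can be sketched in a few lines. The threshold value here is an assumption chosen for illustration; the paper derives its cut-off from operational data. A subscriber is labelled P2P when it contacts more than a threshold number of distinct remote peers within the observation timeframe.

```python
# Sketch of peer-count-based user classification (threshold assumed).

def classify_users(flows, threshold=50):
    """flows: iterable of (user, remote_peer) pairs in one timeframe."""
    peers = {}
    for user, remote in flows:
        peers.setdefault(user, set()).add(remote)  # distinct peers only
    return {u: ("p2p" if len(p) > threshold else "non-p2p")
            for u, p in peers.items()}

# A user talking to 120 distinct peers vs. one talking to 2 servers:
flows = [("alice", "peer%d" % i) for i in range(120)] + \
        [("bob", "www"), ("bob", "mail")]
labels = classify_users(flows)
```

Note that only the (anonymized) flow endpoints are needed, never the packet payloads, which matches the privacy property the abstract claims.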
|
|
|
Controlling performance trade-offs in adaptive network monitoring |
| |
Alberto Gonzalez Prieto,
Rolf Stadler
|
|
Pages: 359-366 |
|
A key requirement for autonomic (i.e., self-*) management systems is a short adaptation time to changes in the networking conditions. In this paper, we show that the adaptation time of a distributed monitoring protocol can be controlled. We show this for A-GAP, a protocol for continuous monitoring of global metrics with controllable accuracy. We demonstrate through simulations that, for the case of A-GAP, the choice of the topology of the aggregation tree controls the trade-off between adaptation time and protocol overhead in steady state. Generally, allowing a larger adaptation time permits reducing the protocol overhead. Our results suggest that the adaptation time primarily depends on the height of the aggregation tree and that the protocol overhead is strongly influenced by the number of internal nodes. We outline how A-GAP can be extended to dynamically self-configure and to continuously adapt its configuration to changing conditions, in order to meet a set of performance objectives, including adaptation time, protocol overhead, and estimation accuracy.
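The reported trade-off can be illustrated with the two tree metrics the abstract names. This toy sketch (tree encoding and example topologies are assumptions, not A-GAP's estimator) computes, for a candidate aggregation tree, the height that drives adaptation time and the internal-node count that drives protocol overhead.

```python
# Toy illustration of the height-vs-internal-nodes trade-off for
# aggregation trees (tree format and examples are illustrative).

def height(tree):
    """tree: {node: [children]}; leaves map to [] or are absent."""
    def h(n):
        kids = tree.get(n, [])
        return 0 if not kids else 1 + max(h(c) for c in kids)
    return h("root")

def internal_nodes(tree):
    """Nodes that aggregate children (drive steady-state overhead)."""
    return sum(1 for n, kids in tree.items() if kids)

# A flat star vs. a deeper two-level tree over the same four leaves:
star = {"root": ["a", "b", "c", "d"]}
deep = {"root": ["x", "y"], "x": ["a", "b"], "y": ["c", "d"]}
```

The star minimizes height (fast adaptation) at the cost of a single heavily loaded aggregator, while the deeper tree spreads aggregation over more internal nodes, the kind of topology choice the simulations explore.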
|
|
|
Computing histograms of local variables for real-time monitoring using aggregation trees |
| |
Dan Jurca,
Rolf Stadler
|
|
Pages: 367-374 |
|
In this paper we present a protocol for the continuous monitoring of a local network state variable. Our aim is to provide a management station with the value distribution of the local variables across the network, by means of partial histogram aggregation, with minimum protocol overhead. Our protocol is decentralized and asynchronous to achieve robustness and scalability, and it executes on an overlay interconnecting management processes in network devices. On this overlay, the protocol maintains a spanning tree and updates the histogram of the network state variables through incremental aggregation. The protocol allows control of the trade-off between protocol overhead and a global accuracy objective. This functionality is implemented through dynamic configuration of local error filters that control whether or not an update is sent towards the management station. We evaluate our protocol by means of simulations. Our results demonstrate the controllability of our method across a wide selection of scenarios, and the scalability of our protocol for large-scale networks.
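The local error-filter idea can be sketched as follows. The error metric (L1 distance between histograms) and the budget value are assumptions for illustration; the protocol's actual filters are configured dynamically against a global accuracy objective. A node pushes its local histogram up the tree only when it has drifted from the last reported one by more than its error budget, trading accuracy for overhead.

```python
# Sketch of a local error filter for histogram updates (error metric
# and budget are illustrative assumptions).
from collections import Counter

class FilteredNode:
    def __init__(self, error_budget):
        self.error_budget = error_budget
        self.reported = Counter()  # last histogram sent upstream

    def update(self, local_hist):
        """Return the new histogram if it must be pushed, else None."""
        hist = Counter(local_hist)
        # L1 distance between current and last-reported histograms
        drift = sum(abs(hist[b] - self.reported[b])
                    for b in set(hist) | set(self.reported))
        if drift > self.error_budget:
            self.reported = hist
            return hist     # send an incremental update upstream
        return None         # suppressed: within the error budget

node = FilteredNode(error_budget=2)
first = node.update({"0-10": 5, "10-20": 1})   # large change: pushed
second = node.update({"0-10": 6, "10-20": 1})  # drift 1 <= 2: filtered
```

Raising the budget suppresses more updates (less overhead, looser accuracy); setting it to zero reports every change, which is exactly the trade-off knob the abstract describes.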
|
|
|
Heteroscedastic models to track relationships between management metrics |
| |
Miao Jiang,
Mohammad A. Munawar,
Thomas Reidemeister,
Paul A. S. Ward
|
|
Pages: 375-381 |
|
Modern software systems expose management metrics to help track their health. Recently, it was demonstrated that correlations among these metrics allow faults to be detected and their causes localized. In particular, linear regression models have been used to capture metric correlations. We show that for many pairs of correlated metrics in software systems, such as those based on Java Enterprise Edition (JavaEE), the variance of the predicted variable is not constant. This behaviour violates the assumptions of linear regression, and we show that such models may produce inaccurate results. In this paper, leveraging insight into the system behaviour, we employ an efficient variant of linear regression to capture the non-constant variance. We show that this variant captures metric correlations while taking the changing residual variance into consideration. We explore potential causes underlying this behaviour, and we construct and validate our models using a realistic multi-tier enterprise application. Using a set of 50 fault-injection experiments, we show that we can detect all faults without any false alarms.
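One standard way to handle heteroscedasticity between a metric pair is weighted least squares, which down-weights high-variance observations; whether this is the exact variant the paper uses is not stated here, and the 1/x weight model and the data below are illustrative assumptions.

```python
# Weighted least squares for a heteroscedastic metric pair
# (weight model and data are illustrative, not the paper's).

def wls(xs, ys, ws):
    """Weighted least-squares fit y = a + b*x (closed form)."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw  # weighted means
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) /
         sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b  # intercept, slope

# Metric pair with residual variance growing with x (true relation y=2x):
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.8, 6.3, 7.9, 10.2]
a, b = wls(xs, ys, [1.0 / x for x in xs])  # weight ~ 1/variance proxy
```

Unlike ordinary least squares, the fit is dominated by the low-variance points, so the recovered slope stays close to the true relationship even as the residual spread changes across the metric's range.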
|
|
|
SIPFIX: a scheme for distributed SIP monitoring |
| |
Sven Anderson,
Saverio Niccolini,
Dieter Hogrefe
|
|
Pages: 382-389 |
|
Voice-over-IP (VoIP) is a key component of Next-Generation Networks and is rapidly becoming more common in the Internet in general. This also increases the demand of VoIP operators for scalable, distributed and flexible monitoring. But current monitoring architectures are either not designed to include application-layer protocol analysis and data acquisition, or they are very specific and static, interacting directly with a particular VoIP product without being integrated into general network monitoring. Fully based on the new IP Flow Information Export (IPFIX) standards and the concept of Mediators, we specify a monitoring scheme for the Session Initiation Protocol (SIP), the most common protocol used for VoIP. The scheme integrates the acquisition and processing of VoIP traffic measurements into general traffic monitoring, building a cost-effective cross-layer monitoring system. Through use-case examples we show how SIPFIX copes with many challenges and requirements of SIP monitoring, such as correlation of SIP and media traffic observed at different probes, quality-of-service evaluation, security and integrity checks, denial-of-service defense, and real-time status reports.
|
|
|
Embedded system management using WBEM |
| |
Michael Hutter,
Alexander Szekely,
Johannes Wolkerstorfer
|
|
Pages: 390-397 |
|
Web-based management solutions have become an increasingly important and promising approach, especially for small and embedded environments. This article presents the design and implementation of an embedded system that leverages Web-based Enterprise Management (WBEM). WBEM was designed to manage large heterogeneous environments but has not yet been deployed on small and embedded devices. First, we evaluate existing WBEM implementations with respect to their resource requirements. Second, we describe the design of an embedded network device realized on a system-on-chip prototyping platform, into which we integrated a small-footprint WBEM server requiring less than 900 kB of non-volatile memory. We provide performance measurements of our solution and compare the results with other Web-based management approaches. They show that WBEM is suitable for such resource-constrained devices and applicable in practice.
|
|
|
Control information description model and processing mechanism in the trustworthy and controllable network |
| |
Peng Wang,
Junzhou Luo,
Wei Li,
Yansheng Qu
|
|
Pages: 398-405 |
|
Traditional networks are surprisingly fragile and difficult to manage. The problem can partly be attributed to the exposure of too many details of the controlled objects, which leads to a deluge of complexity in the control plane, and to the absence of network-wide views, which leads to blind network management. To address these problems, this paper decomposes the necessary network management information into three parts: basic information, cross-layer associations, and global information. A new controlled-object description model is presented for the trustworthy and controllable network control architecture, which separates network control and management functionality from the data plane of the IP network and constructs a formal control and management plane. The new model identifies and abstracts the controlled objects with an object-oriented approach. Based on this model, a cross-layer database is built to store the controlled objects of different layers and to present a cross-layer association view; a processing mechanism for the original information provides a global network state view; and a control plane is constructed to realize network control. The control information description model restricts the complexity of the controlled objects to their own implementation through abstraction, and alleviates the difficulty of network management. The cross-layer association view and the global network state view compose the network-wide views, which realize visibility and improve the manageability of the network. Finally, we present three examples indicating that the model alleviates the complexity of configuration management.
|
|
|
Supporting communities in programmable grid networks: gTBN |
| |
Mihai Lucian Cristea,
Rudolf J. Strijkers,
Damien Marchal,
Leon Gommans,
Cees de Laat,
Robert J. Meijer
|
|
Pages: 406-413 |
|
This paper presents the generalised Token Based Networking (gTBN) architecture, which enables dynamic binding of communities and their applications to specialised network services. gTBN uses protocol-independent tokens to decouple authorisation from time of usage and to identify network traffic. The tokenised traffic allows specialised software components uploaded into network elements to execute services specific to communities. We propose a reference implementation of gTBN over IPv4 and present our experiments, which include validation tests on our testbed with common grid applications such as GridFTP, OpenMPI, and VLC. In addition, we present a firewalling use case based on gTBN.
|
|
|
A latency-aware algorithm for dynamic service placement in large-scale overlays |
| |
Jeroen Famaey,
Wouter De Cock,
Tim Wauters,
Filip De Turck,
Bart Dhoedt,
Piet Demeester
|
|
Pages: 414-421 |
|
A generic and self-managing service hosting infrastructure provides a means to offer a large variety of services to users across the Internet. Such an infrastructure provides mechanisms to automatically allocate resources to services, discover the location of these services, and route client requests to a suitable service instance. In this paper we propose a dynamic, latency-aware algorithm for assigning resources to services. Additionally, the proposed service hosting architecture and the protocols that support the service placement algorithm are described in detail. Extensive simulations were performed to compare our latency-aware algorithm with a latency-unaware variant in terms of system efficiency and scalability.
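A latency-aware assignment can be sketched greedily: place each service on the feasible node that minimizes total latency to that service's clients, subject to node capacity. This is a simplified sketch, not the paper's algorithm, and the node names, latency matrix, and capacities below are made up.

```python
# Greedy latency-aware service placement sketch (illustrative only).

def place(services, nodes, latency, demand):
    """Assign each service to the feasible node with the lowest total
    latency to that service's clients."""
    free = dict(nodes)                      # node -> remaining capacity
    placement = {}
    for svc, clients in services.items():
        best = min(
            (n for n in free if free[n] >= demand[svc]),
            key=lambda n: sum(latency[n][c] for c in clients))
        placement[svc] = best
        free[best] -= demand[svc]           # consume the node's capacity
    return placement

latency = {"eu": {"c1": 10, "c2": 90}, "us": {"c1": 80, "c2": 15}}
placement = place(
    services={"video": ["c1"], "chat": ["c2"]},
    nodes={"eu": 1, "us": 1},
    latency=latency,
    demand={"video": 1, "chat": 1})
```

A latency-unaware variant would pick any node with spare capacity; comparing the two on the same inputs mirrors the kind of evaluation the simulations perform.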
|
|
|
ITU-T RACF implementation for application-driven QoS control in MPLS networks |
| |
B. Martini,
F. Baroncelli,
V. Martini,
K. Torkman,
P. Castoldi
|
|
Pages: 422-429 |
|
Within the ITU-T Next Generation Network (NGN) architecture, the Resource and Admission Control Function (RACF) has been designated to perform application-driven QoS control across both access and core networks. However, no actual RACF implementation acting on MPLS metro-core networks exists, since RACF lacks the capability to configure QoS policies on MPLS network nodes. This prevents effective end-to-end QoS control in a metro-core scenario on a per-application basis. This work presents a specific implementation of RACF operating over an MPLS network domain. The implementation is applied to a testbed where a video client application requests a real-time video data transfer from a video server through an MPLS network. Admission control is performed upon service request, based on the video requirements and network resource availability. Differentiated traffic treatment on a per-flow basis is realized by setting MPLS DiffServ-aware Traffic Engineering (TE) capabilities using the NETCONF protocol. Effective traffic differentiation is achieved in a multi-service network scenario, which validates NETCONF as a candidate protocol for policy provisioning in MPLS networks.
|
|
|
Management of SOA based NGN service exposure, service discovery and service composition |
| |
N. Blum,
T. Magedanz,
F. Schreiner
|
|
Pages: 430-437 |
|
Next generation telecommunication network operators that securely open up their network capabilities and services to third-party service providers require flexible service delivery platforms. Policy-based service exposure, service discovery and service composition mechanisms are required to offer chargeable services and service building blocks to external entities in a customizable way. Based on research and development conducted while prototyping solutions for the Open SOA Telco Playground, a unique IMS-based NGN testbed for realizing SOA-based NGN service delivery platforms, this work proposes the eXtended POlicy based, Semantically enabled sErvice bRoker (XPOSER). XPOSER provides novel NGN service exposure mechanisms, enables intent-based NGN service discovery, and allows for user-centric, automated service composition. Utilizing SOA-based Operation Support Systems, this work explains requirements and solutions for dynamically managing NGN service compositions by tightly linking service creation, service fulfillment and service assurance mechanisms as early as service composition time.
|
|
|
Using heuristics to improve service portfolio selection in P2P grids |
| |
Álvaro Coêlho,
Francisco Brasileiro,
Paulo Ditarso Maciel
|
|
Pages: 438-444 |
|
In this paper we consider a peer-to-peer grid system which provides multiple services to its users, with an incentive mechanism that promotes collaboration among peers. It has been shown that the use of a reciprocation-based incentive mechanism in such a system prevents free-riding and, at the same time, promotes the clustering of peers that have mutually profitable interactions. An issue that has not been sufficiently studied in this context, however, is that of service portfolio selection. Peers are normally subject to resource limitations, which force them to provide only a subset of all services that could possibly be provided. Clearly, the subset of selected services impacts the profit that the grid yields to the peers, since each service has a different cost and returns a different utility. Moreover, the utility generated by a service is strongly influenced by the behavior of the other peers, which in turn may change over time. In this paper we explore the use of heuristics to select the portfolio of services to be offered by peers in such a grid. The main contributions of this work are the use of heuristics to improve the average profit of peers and a study of the impact of some system characteristics on the behavior of the heuristics.
|
|
|
A solution to support risk analysis on IT change management |
| |
Juliano Araújo Wickboldt,
Guilherme Sperb Machado,
Weverton Luis da Costa Cordeiro,
Roben Castagna Lunardi,
Alan Diego dos Santos,
Fabrício Girardi Andreis,
Cristiano Bonato Both,
Lisandro Zambenedetti Granville,
Luciano Paschoal Gaspary,
Claudio Bartolini,
David Trastour
|
|
Pages: 445-452 |
|
The growing reliance of organizations on technology to support their operations means that managing IT resources has become a mission-critical issue for the health of companies' primary businesses. Thus, in order to minimize problems in the IT infrastructure that could affect daily business operations, risks intrinsic to the change process have to be analyzed and assessed. Risk Management is a widely discussed subject in several areas, although for IT Change Management it is quite a new discipline. The Information Technology Infrastructure Library (ITIL) introduces a set of best practices to conduct the management of IT infrastructures. According to ITIL, risks should be investigated, measured, and mitigated before any change is approved. Even with these guidelines, there is no standard automated method for risk assessment in IT Change Management. In this paper we introduce a risk analysis method based on the execution history of past changes. In addition, we propose a failure representation model to capture feedback from the execution of changes over IT infrastructures.
|
|
|
Defensive configuration with game theory |
| |
Sheila Becker,
Radu State,
Thomas Engel
|
|
Pages: 453-459 |
|
This paper proposes a new model, based on mainstream game theory, for the optimal configuration of services. We consider the case of reliable real-time P2P communications and show how security mechanisms can be configured using game-theoretical concepts, in which the defender role is played by the management plane, which has to face adversaries playing the attacker role. Our main contribution lies in proposing a risk assessment framework and deriving optimal strategies - in terms of Nash equilibrium - for both the attacker and the defender. We consider the specific service of communications in autonomic networks and show how the optimal configuration can be determined within the proposed framework.
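The paper derives its equilibria within its own risk assessment framework; as an illustrative sketch only (the payoff matrices below are hypothetical, not taken from the paper), pure-strategy Nash equilibria of a small defender/attacker game can be found by exhaustive enumeration:

```python
import itertools

def pure_nash_equilibria(defender_payoff, attacker_payoff):
    """Enumerate pure-strategy Nash equilibria of a two-player game.

    defender_payoff[i][j] / attacker_payoff[i][j] are the payoffs when
    the defender plays row i and the attacker plays column j.
    """
    rows = range(len(defender_payoff))
    cols = range(len(defender_payoff[0]))
    equilibria = []
    for i, j in itertools.product(rows, cols):
        # The defender cannot gain by deviating from row i...
        best_row = all(defender_payoff[i][j] >= defender_payoff[k][j] for k in rows)
        # ...and the attacker cannot gain by deviating from column j.
        best_col = all(attacker_payoff[i][j] >= attacker_payoff[i][k] for k in cols)
        if best_row and best_col:
            equilibria.append((i, j))
    return equilibria

# Hypothetical 2x2 risk matrix: rows = security configurations,
# columns = attack types (zero-sum: the attacker's gain is the defender's loss).
D = [[3, 1],
     [2, 2]]
A = [[-3, -1],
     [-2, -2]]
print(pure_nash_equilibria(D, A))  # [(1, 1)]
```

Here the robust configuration (row 1) is the equilibrium choice: it avoids the worst case the aggressive attack (column 1) would inflict on row 0.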
|
|
|
Algorithms for SLA composition to provide inter-domain services |
| |
Nabil Bachir Djarallah,
Hélia Pouyllau
|
|
Pages: 460-467 |
|
Providing inter-domain QoS-guaranteed services is a huge challenge for operators and will bring them new revenues. However, establishing end-to-end services requires a level of resource management that does not yet exist, even within a single operator's network. For confidentiality and independence reasons, operators are reluctant to cooperate, yet cooperation cannot be avoided in the inter-domain context. We consider the existence of an alliance framework wherein operators would agree to cooperate. In this article, we address the issues of negotiating inter-domain services and propose efficient algorithms to determine the end-to-end QoS contract that will satisfy the QoS demand for a service.
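The paper's algorithms negotiate concrete end-to-end contracts; a minimal sketch of how per-domain offers might compose (the composition rules and field names below are illustrative assumptions, not the paper's algorithm): delays add along the chain, availabilities multiply, and bandwidth is capped by the tightest domain.

```python
import math

def compose_sla(contracts):
    """Compose per-domain SLA offers into one end-to-end contract.
    Illustrative rules: delays add, availabilities multiply,
    bandwidth is limited by the most restrictive domain."""
    return {
        "delay_ms": sum(c["delay_ms"] for c in contracts),
        "availability": math.prod(c["availability"] for c in contracts),
        "bandwidth_mbps": min(c["bandwidth_mbps"] for c in contracts),
    }

# Two hypothetical domains on the inter-domain path.
offers = [
    {"delay_ms": 10, "availability": 0.999, "bandwidth_mbps": 100},
    {"delay_ms": 5,  "availability": 0.995, "bandwidth_mbps": 50},
]
print(compose_sla(offers))
```

A negotiation algorithm can then compare such composed contracts against the customer's demand and discard candidate domain chains that violate it.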
|
|
|
On semantic and compliance of SNMP MIBs in IP/MPLS routers |
| |
K. Torkman,
B. Martini,
F. Baroncelli,
V. Martini,
P. Castoldi
|
|
Pages: 468-473 |
|
The Simple Network Management Protocol (SNMP) framework is one of the most widespread and successful protocols for network management and is generally available in current network devices. This work studies the dynamic evolution of the SNMP Management Information Bases (MIBs) of commercial IP/MPLS routers and evaluates the effectiveness of the relevant implementations. In addition, it characterizes the MIB modules involved in the provisioning of connectivity and investigates the compliance of MIB implementations with the standard specifications. The results were obtained by means of a testbed comprising a software tool developed by the authors. In particular, these results revealed that current MIB implementations do not respect the standard specifications in terms of accessibility of MIB objects. In addition, MIBs are often used as a database for storing implementation-specific information not foreseen by the standard specifications, rather than to support router operations. Finally, it was found that the configuration of simple connectivity requires the creation and modification of hundreds of different MIB objects, which limits the use of SNMP to the collection of management information and general notifications.
|
|
|
Monitoring probabilistic SLAs in web service orchestrations |
| |
Sidney Rosario,
Albert Benveniste,
Claude Jard
|
|
Pages: 474-481 |
|
Web services are software applications that are published over the Web and can be searched and invoked by other programs. New Web services can be formed by composing elementary services; such composite services are called Web service orchestrations. Quality of Service (QoS) issues for Web service orchestrations differ deeply from the corresponding QoS issues in network management. In an open world of Web services, service level agreements (SLAs) play an important role. They are contracts defining the obligations and rights between the provider of a Web service and a client with respect to the service's function and quality. In previous work we advocated using soft contracts of a probabilistic nature for the QoS part of contracts. Soft contracts have no hard bounds on QoS parameters, but rather probability distributions for them. An essential component of SLA management is the continuous monitoring of the performance of called Web services, to check for violation of the agreed SLA. In this paper we propose a statistical technique for QoS contract run-time monitoring. Our technique is compatible with the use of soft probabilistic contracts.
|
|
|
Queuing model based end-to-end performance evaluation for MPLS virtual private networks |
| |
Yanfeng Zhu,
Yibo Zhang,
Chun Ying,
Wei Lu
|
|
Pages: 482-488 |
|
Monitoring end-to-end Quality of Service (QoS) is an important task for service providers' Operation Support Systems (OSS), because it is the fundamental requirement for QoS provisioning. However, it is in fact a challenging task, and there are few efficient approaches to address it. In this paper, for Multi-Protocol Label Switching (MPLS) Virtual Private Networks (VPNs), we propose a queuing model based end-to-end performance evaluation scheme for the OSS to monitor the end-to-end delay, one of the most important QoS metrics. By means of a queuing model, we derive the relationship between the end-to-end delay and the information available in the Management Information Base (MIB) of routers, and then present an evaluation scheme which avoids costly per-packet measurement. The complexity of the proposed scheme is much lower than that of existing schemes. Extensive simulation results show that the proposed scheme can efficiently evaluate the end-to-end performance metrics (the estimation error is nearly 10%).
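The paper derives its own queuing model from router MIB counters; as a simplified, hypothetical illustration of the general idea (not the paper's actual model), per-hop delays can be approximated with M/M/1 sojourn times and summed along the VPN path:

```python
def mm1_sojourn_time(arrival_rate, service_rate):
    """Mean time a packet spends at one hop modeled as an M/M/1 queue:
    W = 1 / (mu - lambda). Rates would be estimated from MIB counters."""
    if arrival_rate >= service_rate:
        raise ValueError("hop is overloaded; the M/M/1 model is unstable")
    return 1.0 / (service_rate - arrival_rate)

def end_to_end_delay(hops):
    """Sum the per-hop delays along the label-switched path."""
    return sum(mm1_sojourn_time(lam, mu) for lam, mu in hops)

# Two hops, each given as (arrival_rate, service_rate) in packets/second.
print(end_to_end_delay([(50.0, 100.0), (80.0, 100.0)]))  # ~0.07 s
```

The appeal of such a model-based estimate is that it needs only aggregate counters, avoiding the per-packet measurement the paper sets out to eliminate.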
|
|
|
Enhanced cognitive resource management for QoS-guaranteed service provisioning in home/office network |
| |
Shahnaza Tursunova,
Son Tran Trong,
Bong-Kyun Lee,
Eun-Young Cho,
You-Hyeon Jeong,
Young-Tak Kim
|
|
Pages: 489-496 |
|
In this paper, we propose an enhanced cognitive resource management scheme for QoS-aware service provisioning in wired and wireless home/office networks. We enhance the QoS-aware customer network management (Q-CNM) system, which controls the overall management process in the home/office network, gathers network information, and processes incoming requests through QoS-aware resource allocation with a connection admission control (CAC) function. In particular, the cognitive management process at the Q-CNM provides load redistribution and optimized resource utilization for QoS-guaranteed differentiated service provisioning based on acquired knowledge about the network. The QoS-aware resource allocation and the cognitive management process at the Q-CNM are analyzed in detail. The network performance and QoS parameters are evaluated based on an experimental implementation of the proposed management scheme in a real testbed environment and in the ns-2 network simulator.
|
|
|
End to End session based bearer control for IP multimedia subsystems |
| |
Richard Good,
Neco Ventura
|
|
Pages: 497-504 |
|
The IP Multimedia Subsystem (IMS) is a converged service enabler that facilitates the rapid deployment of multimedia services in IP networks. The IMS defines a policy-based management model to ensure QoS-enabled connectivity by providing mediation and interaction between applications and transport layer resources. This paper reviews the current state of the art regarding the necessary mediation between QoS control elements and transport layer resources in the IMS. An enhancement to discover end-to-end signaling and media routes is proposed. This mechanism uses SIP routing information to discover origin, destination and transit administrative domains, and allows an application to effectively issue resource requests from its home network and enable QoS-enabled connectivity across all traversed transport segments. We present a prototype implementation that demonstrates the service-initiated QoS model, in addition to the proposed algorithm that allows home-network-based resource reservation across administrative domains. The testbed and proposed mechanism are subjected to validation and performance tests, and results are presented; in particular, the effects of the proposed enhancements on session setup delay and signaling overhead are examined.
|
|
|
Performance of distributed reservation control in wavelength-routed all-optical WDM networks with adaptive alternate routing |
| |
Iyad Katib,
Deep Medhi
|
|
Pages: 505-512 |
|
In this work, we consider the performance benefits of distributed reservation control schemes in partially-connected wavelength-routed all-optical WDM networks. In particular, we propose a new distributed reservation control scheme designed to work for general-topology wavelength-routed WDM networks. In this new scheme, capacity reservation is invoked by always using the first shortest path if a direct route does not exist between a node (demand) pair. Through our study, we show that our proposed scheme, in the presence of distributed adaptive routing, is much fairer in terms of pairwise blocking than the conventional reservation method in general-topology wavelength-routed WDM networks, especially in the presence of wavelength converters and under overload.
|
|
|
Techniques for better alias resolution in internet topology discovery |
| |
S. García-Jiménez,
E. Magaña,
D. Morató,
M. Izal
|
|
Pages: 513-520 |
|
One of the challenging problems related to network topology discovery in the Internet is the process of IP address alias identification. Topology information is usually obtained from a set of traceroutes that provide the IP addresses of routers on the path from a source to a destination. If these traceroutes are repeated between several source/destination pairs, we can obtain a sampling of all IP addresses of the traversed routers. In order to generate a topology graph in which each router is a node, it is necessary to identify all IP addresses that belong to the same router. In this work we propose improvements over existing alias identification methods, mainly related to the types and options of the probing packets.
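The paper refines probing-based alias resolution; as context, a heavily simplified sketch of one classic IP-ID test (in the spirit of the well-known Ally technique): interfaces that share a router's single global IP-ID counter return interleaved, nearly consecutive ID values, while unrelated routers do not. The threshold and the sample data below are illustrative assumptions.

```python
def likely_aliases(ids_a, ids_b, max_gap=10):
    """Heuristic: merge the IP-ID values probed from two candidate
    addresses; one shared counter yields small gaps throughout."""
    merged = sorted(ids_a + ids_b)
    return all(b - a <= max_gap for a, b in zip(merged, merged[1:]))

# Interleaved IDs -> probably two interfaces of the same router.
print(likely_aliases([100, 104, 108], [102, 106, 110]))  # True
# Distant ID ranges -> probably different routers.
print(likely_aliases([100, 104], [5000, 5004]))          # False
```

Real tools must additionally handle counter wraparound, non-monotonic ID generators, and unresponsive interfaces, which is where probe types and options matter.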
|
|
|
On the feasibility of static analysis for BGP convergence |
| |
Luca Cittadini,
Massimo Rimondini,
Matteo Corea,
Giuseppe Di Battista
|
|
Pages: 521-528 |
|
Internet Service Providers can enforce fine-grained control of interdomain routing by cleverly configuring the Border Gateway Protocol. However, the price to pay for the flexibility of BGP is the lack of convergence guarantees. The network protocol design literature has introduced several sufficient conditions that routing policies should satisfy to guarantee convergence. However, to our knowledge, none of these conditions has yet been exploited to automatically check BGP policies for convergence. This paper presents two fundamental contributions. First, we describe a heuristic algorithm that statically detects potential oscillations in a BGP network. We prove that our algorithm has several highly desirable properties: i) it exceeds state-of-the-art algorithms in that it is able to correctly report more configurations as stable, ii) it can be implemented efficiently enough to enable static analysis of Internet-scale BGP configurations, iii) it is free from false negatives, and iv) it can help in spotting the troublesome points in a detected oscillation. We also propose an architecture for a modular tool that exploits our heuristic algorithm to process native router configurations and return information about the potential presence of oscillations. Such a tool can effectively integrate syntactic checkers and assist operators in verifying configurations. We validate our approach using a prototype implementation and show that it scales well enough to enable Internet-scale convergence checks.
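The paper's heuristic is considerably more refined, but at the core of any such static check lies cycle detection over a dependency structure among routing policies (a "dispute" digraph): a cycle signals a potential oscillation. A generic, iterative sketch:

```python
def has_cycle(graph):
    """Iterative DFS cycle detection in a directed graph given as
    {node: [successors]}; a back edge means a cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {u: WHITE for u in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        color[start] = GRAY
        while stack:
            node, successors = stack[-1]
            for nxt in successors:
                if color.get(nxt, WHITE) == GRAY:
                    return True           # back edge: cycle found
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(graph.get(nxt, []))))
                    break
            else:
                color[node] = BLACK       # all successors explored
                stack.pop()
    return False

print(has_cycle({"a": ["b"], "b": ["c"], "c": ["a"]}))  # True
print(has_cycle({"a": ["b"], "b": ["c"], "c": []}))     # False
```

An absence of cycles in such a digraph corresponds to the kind of sufficient convergence condition the static analysis can certify; the hard part, which the paper addresses, is reducing false alarms while keeping zero false negatives.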
|
|
|
Architectural principles and elements of in-network management |
| |
Dominique Dudkowski,
Marcus Brunner,
Giorgio Nunzi,
Chiara Mingardi,
Chris Foley,
Miguel Ponce de Leon,
Catalin Meirosu,
Susanne Engberg
|
|
Pages: 529-536 |
|
Recent endeavors in addressing the challenges of the current and future Internet pursue a clean slate design methodology. Simultaneously, it is argued that the Internet is unlikely to be changed in one fell swoop and that its next generation requires an evolutionary design approach. Recognizing both positions, we claim that cleanness and evolution are not mutually exclusive, but rather complementary and indispensable properties for sustainable management in the future Internet. In this paper we propose the in-network management (INM) paradigm, which adopts a clean slate design approach to the management of future communication networks that is brought about by evolutionary design principles. The proposed paradigm builds on embedded management capabilities to address the intrinsic nature, and hence, close relationship between the network and its management. At the same time, INM assists in the gradual adoption of embedded self-managing processes to progressively achieve adequate and practical degrees of INM. We demonstrate how INM can be exploited in current and future network management by its application to P2P networks.
|
|
|
An evaluation of network management protocols |
| |
Pedro Gonçalves,
José Luís Oliveira,
Rui L. Aguiar
|
|
Pages: 537-544 |
|
During the last decade several network management solutions have been proposed or extended to cope with the growing complexity of networks, systems and services. Architectures, protocols, and information models have been proposed as a way to better respond to the new and different demands of global networks. However, this variety also leads to a growing complexity of management solutions and to an increase in systems' requirements. The current management landscape is populated with a multiplicity of protocols, initially developed as answers to different requirements. This paper presents a comparative study of management protocols currently common in All-IP networks: SNMP, COPS, Diameter, CIM/XML over HTTP and CIM/XML over SOAP. The assessment focuses on wireless-specific issues and as such includes measurements of bandwidth, packets, round-trip delays, and agents' requirements. We also analyze the advantages of compression in these protocols.
|
|
|
Adaptive management of connections to meet availability guarantees in SLAs |
| |
Anders Mykkeltveit,
Bjarne E. Helvik
|
|
Pages: 545-552 |
|
Today's backbone communication networks serve a wide range of services with different availability requirements. Each customer has a contract, denoted a Service Level Agreement (SLA), which specifies the availability requirement over the contract period. In the literature, different provisioning strategies have been proposed to establish connection arrangements capable of meeting a statistical asymptotic availability for the different customers. In reality, SLAs specify guarantees on the interval availability, which may deviate significantly from the asymptotic availability. This paper proposes an adaptive strategy to manage which connections are affected by failures and to maximize compliance with the SLAs. Different policies are proposed for the management of connections from the same class with equal requirements, and of connections with different requirements. These policies are evaluated and compared with the traditional provisioning policies in a simulation study. The results show that adaptive management can significantly reduce the risk of violating the SLAs in several scenarios.
|
|
|
A management scheme of SRLG-disjoint protection path |
| |
Alisson Barbosa de Souza,
Ana Luiza de B. de P. Barros,
Antônio Sérgio de S. Vieira,
Gustavo Augusto L. de Campos,
Jéssyca Alencar L. e Silva,
Joaquim Celestino,
Joel Uchôa,
Laure W. N. Mendouga
|
|
Pages: 553-560 |
|
With the emergence of new applications and requirements, it became necessary to create new monitoring and reactive configuration mechanisms to try to meet SLAs (Service Level Agreements). In WDM (Wavelength Division Multiplexing) optical networks, one way of trying to fulfill these agreements is by using pre-established protection paths. However, despite guaranteeing that traffic will be rapidly routed to its protection path in case of failure, there is no guarantee that the latter will be capable of meeting the contracted SLA, given the bit error rate of its links. In this article we propose a scheme for monitoring and selecting the SRLG (Shared Risk Link Group) protection path, disjoint from the main path, using Genetic Algorithms and Fuzzy Logic in a PBM (Policy Based Management) platform named GAFUDI.
|
|
|
Experiences in using MUWS for scalable distributed monitoring |
| |
Aimilios Chourmouziadis,
Oscar F. Gonzalez Duque,
George Pavlou
|
|
Pages: 561-568 |
|
Efficient Web Services (WS) based network monitoring of managed devices is a difficult task due to the relatively large overhead WS impose. In the past we proposed mechanisms to perform distributed monitoring efficiently, minimizing the relevant overhead. Standardization of WS operations is also important in order to achieve interoperability. The WS Resource Framework (WSRF) tries to standardize the messages exchanged with resources representing the state of a device. Adopting WSRF's concepts, the Management Using Web Services (MUWS) standard aims to support device management in an interoperable manner. In this paper we propose methods to combine the mechanisms introduced in our previous work with MUWS in order to retrieve management information efficiently while at the same time achieving interoperability. We also present our experiences in using custom as well as standardized solutions for monitoring devices that range from small to large resource-capable systems. We describe the motivations for this research and present ideas on techniques that need to be adopted for WS based monitoring, based on what we have learned in the process.
|
|
|
Towards an optimized model of incident ticket correlation |
| |
Patricia Marcu,
Genady Grabarnik,
Laura Luan,
Daniela Rosu,
Larisa Shwartz,
Chris Ward
|
|
Pages: 569-576 |
|
In recent years, IT Service Management (ITSM) has become one of the most researched areas of IT. Incident and Problem Management are two of the Service Operation processes in the IT Infrastructure Library (ITIL). These two processes aim to recognize, log, isolate and correct errors which occur in the environment and disrupt the delivery of services. Incident Management and Problem Management form the basis of the tooling provided by an Incident Ticket System (ITS). In an ITS, seemingly unrelated tickets created by end users and monitoring systems can coexist and have the same root cause. The connection between a failed resource and malfunctioning services is not established automatically, but often manually by means of human intervention. This need for human involvement reduces productivity. The introduction of automation would increase productivity and therefore reduce the cost of incident resolution. In this paper, we propose a model to correlate incident tickets based on three criteria. First, we employ a category-based correlation that relies on matching service identifiers with associated resource identifiers, using similarity rules. Second, we correlate the configuration items which are critical to the failed service with the earlier identified resource tickets in order to optimize the topological comparison. Finally, we augment scheduled resource data collection with constraint adaptive probing to minimize the correlation interval for temporally correlated tickets. We present experimental data in support of our proposed correlation model.
|
|
|
Self-management of hybrid networks: can we trust NetFlow data? |
| |
Tiago Fioreze,
Lisandro Zambenedetti Granville,
Aiko Pras,
Anna Sperotto,
Ramin Sadre
|
|
Pages: 577-584 |
|
Network measurement provides vital information on the health of managed networks. Collected network information can serve several purposes (e.g., accounting or security), depending on what the data will be used for. At the University of Twente (UT), an automatic decision process for hybrid networks that relies on collected network information has been investigated. This approach, called self-management of hybrid networks, requires information retrieved from measurement processes in order to automatically decide on establishing/releasing lambda-connections for IP flows that are long in duration and big in volume (known as elephant flows). Nonetheless, the employed measurement technique can break the self-management decisions if the reported information does not accurately describe the actual behavior and characteristics of the observed flows. Within this context, this paper presents an investigation of the trustworthiness of measurements performed using the popular NetFlow monitoring solution, with particular attention to elephant flows. We primarily focus on the use of NetFlow with sampling to collect network information, and investigate how reliable such information is for the self-management processes. This is important because the self-management approach decides which flows should be offloaded to the optical level based on the current state of the network and its running flows. We observe three specific flow metrics: octets, packets, and flow duration. Our analysis shows that NetFlow provides reliable information regarding octets and packets. On the other hand, the flow duration reported when sampling is employed tends to be shorter than the actual duration.
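The duration-underestimation effect can be reproduced qualitatively with a toy simulation (all numbers are illustrative, not from the paper): under 1-in-N packet sampling, the first and last packets of a flow are rarely the ones sampled, so the observed duration systematically undershoots the true one.

```python
import random

def sampled_duration(timestamps, rate):
    """Flow duration as seen when each packet survives sampling
    with probability 1/rate."""
    seen = [t for t in timestamps if random.random() < 1.0 / rate]
    return max(seen) - min(seen) if len(seen) >= 2 else 0.0

random.seed(7)
packets = list(range(101))            # a 100 s elephant flow, 1 packet/s
true_duration = packets[-1] - packets[0]
trials = [sampled_duration(packets, rate=10) for _ in range(1000)]
mean_estimate = sum(trials) / len(trials)
print(true_duration, round(mean_estimate, 1))  # the estimate falls short
```

Octet and packet counts, by contrast, can simply be scaled up by the sampling rate, which is consistent with the paper's finding that those two metrics remain reliable.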
|
|
|
Analyzing end-to-end network reachability |
| |
Sruthi Bandhakavi,
Sandeep Bhatt,
Cat Okita,
Prasad Rao
|
|
Pages: 585-590 |
|
Network security administrators cannot always accurately tell which end-to-end accesses are permitted within their network, and which ones are not. The problem is that every access is determined by the configurations of multiple, separately administered, components. As configurations evolve, a small change in one configuration file can have widespread impact on the end-to-end accesses. Short of exhaustive testing, which is impractical, there are no good solutions for analyzing end-to-end flows from network configurations. This paper presents a general technique to analyze all end-to-end accesses from the configuration files of network routers, switches and firewalls. We efficiently analyze certain state-dependent filter rules. Our goal is to help network security engineers and operators quickly determine configuration errors that may cause unexpected behavior such as unwanted accesses or unreachable services. Our technique can also be used as part of the change management process, to help prevent network misconfiguration.
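The paper works on real device configurations; the underlying principle can be sketched with sets (the flow tuples and device rules below are hypothetical): a flow reaches its destination only if every device along the path permits it, which is why one changed filter can silently cut an end-to-end access.

```python
def end_to_end_permitted(path_acls, flows):
    """An end-to-end access survives only if every device on the
    path permits it: intersect the per-device permit sets."""
    return {f for f in flows if all(f in acl for acl in path_acls)}

# Flows as (destination, port); per-device permit sets along one path.
flows = {("10.0.0.1", 80), ("10.0.0.1", 22), ("10.0.0.2", 80)}
router_permits = {("10.0.0.1", 80), ("10.0.0.1", 22)}
firewall_permits = {("10.0.0.1", 80), ("10.0.0.2", 80)}
print(end_to_end_permitted([router_permits, firewall_permits], flows))
```

A full analysis must additionally model NAT, routing, and the state-dependent rules the paper mentions, but the composition-along-a-path idea stays the same.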
|
|
|
Applying quorum role in network management |
| |
Edemilson da Silva,
Altair Olivo Santin,
Edgard Jamhour,
Carlos Maziero,
Emir Toktar
|
|
Pages: 591-597 |
|
This work presents a proposal for extending the Role-Based Access Control (RBAC) model to support activities that demand runtime mutability in their authorization attributes. Such activities cannot be subdivided into a set of subtasks executed sequentially, nor can they be accomplished by a single role. The presented approach allows the creation of quorum roles, which can only be activated in a session with the endorsement of a quorum of other roles. A prototype illustrates the application of our proposal in a network management scenario. In the illustrative scenario, a previously defined set of roles activates, by endorsement, a quorum role to perform a management task without the participation of the network administrator role.
|
|
|
Security management with scalable distributed IP traceback |
| |
Djakhongir Siradjev,
Laziz Yunusov,
Young-Tak Kim
|
|
Pages: 598-605 |
|
In this paper we propose an IP traceback mechanism based on deterministic packet marking and logging, using a protected nodes set to reduce the amount of logged data. The proposed scheme exploits the fact that the number of nodes that may be under attack is usually limited to a small fraction of all nodes in the Internet; by logging only the traffic destined to this fraction of nodes, it greatly reduces storage requirements and meets the hardware limitations of high-speed core routers. Before logging, every packet arriving at a traceback-enabled router is checked, using a Bloom filter, to determine whether it is destined to a host in the protected nodes set. The protected nodes set and the list of traceback-enabled routers are managed by a security management infrastructure, which can be mirrored to avoid introducing a single point of failure. Maintaining the list of traceback-enabled routers allows neighbor discovery in the overlay network, which is required to detect an identification field value in the IP header forged by an attacker. By adding an initialization stage and infrastructure, the proposed scheme can provide constant per-packet processing complexity and a much longer Bloom filter refresh period compared with other approaches that use the logging paradigm. Performance evaluation shows that the proposed IP traceback mechanism can be implemented in the real Internet with scalability and good deployment feasibility in terms of false positive ratio and memory usage.
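The per-packet membership test described above can be sketched with a minimal Bloom filter (the paper's filter size, hash count and hash functions are not given; everything below is illustrative). The key property for traceback is the asymmetry of errors: false positives merely log a few extra packets, while false negatives never occur for destinations actually in the protected set.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for membership tests on a protected-nodes set.
    Illustrative sketch only; parameters and hashing are assumptions."""

    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # May return a false positive (extra logging); never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

protected = BloomFilter()
protected.add("192.0.2.10")
# A traceback-enabled router would log a packet only if its destination passes:
print(protected.might_contain("192.0.2.10"))  # True
print(protected.might_contain("198.51.100.7"))
```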
|
|
|
Survivable keying for wireless ad hoc networks |
| |
Michele Nogueira,
Guy Pujolle,
Eduardo Silva,
Aldri Santos,
Luiz Albini
|
|
Pages: 606-613 |
|
Cryptographic techniques are at the center of security solutions for wireless ad hoc networks, and public key infrastructures (PKIs) are essential for their efficient operation. However, the fully distributed organization of these networks makes designing PKIs a challenge. Moreover, changes in network paradigms and the increasing dependency on technology require more dependable, survivable and scalable PKIs. This paper presents a survivable PKI whose goal is to preserve key management operations even in the face of attacks or intrusions. Our PKI is based on adaptive cooperation among preventive, reactive and tolerant defense lines. It employs different evidences to prove users' liability for their keys, as well as social relationships to help public key exchanges. Simulation results show the improvements achieved by our proposal in terms of effectiveness and survivability under different attacks.
|
|
|
EJB-based implementation of L1VPN NMS controlled by each customer |
| |
Hiroshi Matsuura,
Naotaka Morita
|
|
Pages: 614-621 |
|
We propose a new service for the L1VPN (layer-1 virtual private network), in which an L1VPN customer can manage and control its own L1VPN from an end-to-end point of view. In this service, a customer can change its routing policy on the basis of its own decisions and set the network notification policy for individual VPN users. These operations are conducted by an L1VPN NMS (network management system), which is distributed online by an L1VPN provider in EJB (Enterprise JavaBeans) format. In addition to the L1VPN NMS, EJB-based customer domain NMSs that manage individual customer domains are also delivered to individual customers. In cooperation with the provider NMS, which is for the L1VPN provider network, and the customer domain NMSs, an L1VPN NMS can update the L1VPN logical information from provider and customer domains. The L1VPN NMS receives alarm notifications from both NMSs and forwards them to IP users who are affected by the notifications. We evaluate the effect of an L1VPN on alarm notification time, because swift alarm notification is critical for IP users. In addition, we evaluate the effect of deploying multiple customer domain NMSs in one Linux NMS server.
|
|
|
DeskBench: flexible virtual desktop benchmarking toolkit |
| |
Junghwan Rhee,
Andrzej Kochut,
Kirk Beaty
|
|
Pages: 622-629 |
|
The thin-client computing model has recently been regaining popularity in a new form known as the virtual desktop, where the desktop is hosted on a virtualized platform. Even though interest in this computing paradigm is broad, there are relatively few tools and methods for benchmarking virtual client infrastructures. We believe that developing such tools and approaches is crucial for the future success of virtual client deployments, and also for objective evaluation of existing and new algorithms, communication protocols, and technologies. We present DeskBench, a virtual desktop benchmarking tool that allows fast and easy creation of benchmarks by simply recording the user's activity. It also allows replaying the recorded actions in a synchronized manner at maximum possible speed without compromising the correctness of the replay. The proposed approach relies only on the basic primitives of mouse and keyboard events, as well as screen region updates, which are common in window manager systems. We have implemented a prototype of the system and conducted a series of experiments measuring the responsiveness of virtual machine based desktops under various load conditions and network latencies. The experiments illustrate the flexibility and accuracy of the proposed method and also give some interesting insights into the scalability of virtual machine based desktops.
|
|
|
Memory overbooking and dynamic control of Xen virtual machines in consolidated environments |
| |
Jin Heo,
Xiaoyun Zhu,
Pradeep Padala,
Zhikui Wang
|
|
Pages: 630-637 |
|
Newly emergent cloud computing environments host hundreds to thousands of services on a shared resource pool. The sharing is enhanced by virtualization technologies that allow multiple services to run in different virtual machines (VMs) on a single physical node. Resource overbooking allows more services with time-varying demands to be consolidated, reducing operational costs. In the past, researchers have studied dynamic control mechanisms for allocating CPU to virtual machines when CPU is overbooked with respect to the sum of the peak demands of all the VMs. However, runtime re-allocation of memory among multiple VMs has not been widely studied, except on VMware platforms. In this paper, we present a case study where feedback control is used for dynamic memory allocation to Xen virtual machines in a consolidated environment. We illustrate how memory behaves differently from CPU in terms of its relationship to application-level performance, such as response times. We have built a prototype of a joint resource control system for allocating both CPU and memory resources to co-located VMs in real time. Experimental results show that our solution allows all the hosted applications to achieve the desired performance in spite of their time-varying CPU and memory demands, whereas a solution without memory control incurs significant service level violations.
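The feedback-control idea behind such a memory controller can be sketched in a few lines. This is not the paper's actual controller; the integral-style update rule, gain, and memory bounds below are assumptions chosen only to show how a measured response time can drive a VM's memory allocation toward a target.

```python
# Illustrative sketch (not the paper's controller): one step of a simple
# integral controller that grows a VM's memory when the measured response
# time exceeds its target and shrinks it otherwise, within [min_mem, max_mem].

def next_allocation(current_mem_mb: float, measured_rt_ms: float,
                    target_rt_ms: float, gain: float = 2.0,
                    min_mem: float = 256.0, max_mem: float = 4096.0) -> float:
    """Return the next memory allocation (MB) for one control interval."""
    error = measured_rt_ms - target_rt_ms        # positive => under-provisioned
    proposed = current_mem_mb + gain * error     # integral-style adjustment
    return max(min_mem, min(max_mem, proposed))  # clamp to feasible range

# VM is slow (300 ms vs a 200 ms target), so the controller grows its memory:
print(next_allocation(1024, measured_rt_ms=300, target_rt_ms=200))  # 1224.0
```

In a real Xen deployment the resulting target would be applied through the balloon driver each control interval; the clamping reflects that memory, unlike CPU shares, cannot be reclaimed below an application's working set without severe performance penalties.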
|
|
|
Refined failure remediation for IT change management systems |
| |
Guilherme Sperb Machado,
Weverton Luis da Costa Cordeiro,
Alan Diego dos Santos,
Juliano Wickboldt,
Roben Castagna Lunardi,
Fabrício Girardi Andreis,
Cristiano Bonato Both,
Luciano Paschoal Gaspary,
Lisandro Zambenedetti Granville,
David Trastour,
Claudio Bartolini
|
|
Pages: 638-645 |
|
In order to deal with failures in the deployment of IT changes and to always leave IT infrastructures in consistent states, we proposed, in previous work, a solution to automate the generation of rollback plans in IT change management systems. The solution was based on a mechanism that treats Requests for Change (RFCs), or parts of them, as single atomic transactions. In this work, we extend our previous investigation and present a more flexible and fine-grained treatment of failures. The paper first presents extensions to our conceptual model in order (i) to give IT operators some flexibility in defining rollback actions, for example, by allowing the rollback plan to be more than just a reversed change plan; and (ii) to execute different recovery activities depending on the cause and location of a problem. The paper then focuses on a refined manner of handling and treating failures in change deployments. We follow the ITIL version 3 best practices, which suggest that, depending on the RFC context, the human operator can classify activities as reversible or irreversible. Such classification allows change management systems to automatically generate more accurate remediation plans. The proposal takes into account not only a precise way to define how rollback plans will be generated, but also an intuitive method enabling the operator to define compensation activities in order to complete the RFC successfully, even when failures occur. To prove the concept and technical feasibility, we have materialized our solution in the CHANGELEDGE prototype that, using elements of the Business Process Execution Language (BPEL), is able to generate correct remediation plans to handle and treat failures in IT change management systems.
|
|
|
Mobile service-oriented content delivery in wireless mesh networks |
| |
Mohamed Elshenawy,
Mohamed El-Darieby,
Baher Abdulhai
|
|
Pages: 646-652 |
|
Wireless mesh networks (WMNs) are poised to be a cost-effective platform for many municipal applications in public safety, business and entertainment. In this paper we present a WMN-based platform for content delivery within Intelligent Transportation Systems (ITS) applications. An example ITS application would be to deliver content used for vehicle route guidance in emergency evacuation situations. The higher vehicle speeds characteristic of ITS applications and the limited coverage of WMN (Wi-Fi) mesh routers cause more frequent handoffs, which complicates content delivery over wireless mesh networks. Frequent handoffs mandate smaller handoff delays, and using standard IEEE handoffs may cause unacceptable interruptions in content delivery. We propose a service-oriented mobility management protocol (SMMP) for ITS content delivery. Within the protocol, content is treated as services described by XML metadata files. The protocol takes advantage of a hierarchical organization of WMN routers to reduce the handoff delay. The quasi-stationary nature of WMN mesh routers enables the detection of the sequence of routers that a vehicle will connect to, and SMMP provides traveling vehicles with cached MAC addresses of the WMN mesh routers to communicate with. We evaluate the benefits of the proposed protocol and compare it to traditional solutions using the OMNeT++ simulator. Our results show that using SMMP reduces handoff latencies and improves overall network throughput at both lower and higher vehicle speeds.
|
|
|
An efficient spectrum management mechanism for cognitive radio networks |
| |
Gülfem Isiklar Alptekin,
Ayse Basar Bener
|
|
Pages: 653-660 |
|
The traditional static spectrum access approach, which assigns a fixed portion of the spectrum to a specific license holder for exclusive use, is no longer able to manage the spectrum efficiently. In an effort to improve the efficiency of spectrum usage, alternative spectrum allocation scenarios are being proposed. One of these technologies is Dynamic Spectrum Access, which enables wireless users to share a wide range of available spectrum in an opportunistic manner. In this paper, we study an architecture for a competitive spectrum exchange marketplace, a theoretical basis, and empirical work on spectrum price formation. The competitive spectrum exchange marketplace architecture considers short-term sub-leases of unutilized spectrum bands to different service providers. Our proposed pricing model applies game theory as its mathematical base. The Nash equilibrium point tells spectrum holders the ideal price values at which profit is maximized at the highest level of customer satisfaction. Our empirical results show that service providers' demand depends on the price and QoS of a band, as well as on the price and QoS offerings of its competitors.
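The game-theoretic pricing the abstract refers to can be illustrated with a toy best-response iteration. This is not the paper's model: the linear demand form, its coefficients, and the two-provider setting below are all assumptions, used only to show how mutually dependent demand leads to a Nash equilibrium price pair.

```python
# Hypothetical two-provider price game (not the paper's model). Each provider's
# demand is assumed linear: d_i = a - b*p_i + c*p_j, so it falls with its own
# price and rises with the rival's. Profit p_i * d_i is maximized at the
# best response p_i = (a + c*p_j) / (2b); iterating converges to the Nash point.

def best_response(p_rival: float, a: float = 10.0, b: float = 2.0,
                  c: float = 1.0) -> float:
    """Price maximizing p * (a - b*p + c*p_rival) for fixed rival price."""
    return (a + c * p_rival) / (2 * b)

p1, p2 = 1.0, 1.0
for _ in range(50):                       # iterate best responses until stable
    p1, p2 = best_response(p2), best_response(p1)

# Fixed point of p = (a + c*p) / (2b): p* = a / (2b - c) = 10/3
print(round(p1, 4), round(p2, 4))  # 3.3333 3.3333
```

The fixed point is where neither provider can raise profit by unilaterally changing its price, which is the "ideal price value" property the abstract attributes to the Nash equilibrium.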
|
|
|
Building end-to-end management analytics for enterprise data centers |
| |
Hai Huang,
Yaoping Ruan,
Anees Shaikh,
Ramani Routray,
Chung-hao Tan,
Sandeep Gopisetty
|
|
Pages: 661-675 |
|
The complexity of modern data centers has evolved significantly in recent years. A typical data center comprises a large number and variety of middleware and applications hosted in a heterogeneous pool of both physical and virtual servers, connected by a complex web of virtual and physical networks. Therefore, to manage everything in a data center, system administrators usually need a plethora of management tools, since one tool often manages only one type of device. The boundaries between the different management tools can limit the productivity of system administrators in their daily tasks, as each tool offers only a partial view of the entire managed environment. As a result, advanced analytics such as impact analysis and problem determination are generally not achievable using traditional management tools, as they require a holistic view of the entire data center. In this paper, we describe an integrated management system for applications, servers, network and storage devices called DataGraph. Our system integrates data across heterogeneous point products and agents for management and monitoring to enable the above-mentioned management analytics capabilities. A common data model is introduced to federate data collected by the different tools in multiple database repositories, so no modifications are needed to existing management tools. A common integrated web user interface is implemented to facilitate management tasks that would otherwise require invoking multiple tools. We deployed this tool in a lab environment and demonstrated these analytics capabilities through several case studies.
|
|
|
SURFmap: a network monitoring tool based on the maps API |
| |
Rick Hofstede,
Tiago Fioreze
|
|
Pages: 676-690 |
|
Network monitoring allows network managers to get better insight into the network traffic transiting a managed network. To make the tasks of a network manager easier, many network monitoring tools are available for the wide range of purposes (e.g., traffic accounting, performance analysis, and so on) network managers may have. However, most of these tools fail to provide geographical information about network traffic. This paper presents a network monitoring tool prototype, called SURFmap, which provides network traffic information in a geographical dimension by using the Google Maps API. Through the Google Maps API's features, SURFmap provides different zoom levels when showing network information, which creates different levels of abstraction in the network data visualization. SURFmap has proven to be more intuitive when showing network traffic information, which makes network monitoring more engaging from the network manager's perspective.
|
|
|
Auto-connectivity and security setup for access network elements |
| |
Henning Sanneck,
Christoph Schmelz,
Eddy Troch,
Luc De Bie
|
|
Pages: 691-705 |
|
In access networks, the roll-out of new network elements (NEs) or changes to NE hardware and software cause considerable overhead. The total number of NEs is significant and is increasing for new radio access technologies like Long Term Evolution (LTE) due to decreasing cell sizes. Furthermore, for network scenarios like femto access points / home NEs, conventional network deployment and management approaches, in which the network is fully planned and NEs are tightly managed, can no longer be followed. In addition, the increased security requirements that operators place on such network deployments have to be observed. We propose an auto-connectivity scheme that incorporates the NE's security setup and tries to balance the trade-off between automation (avoiding any manual intervention) and security. This is achieved by shifting manufacturer and operator activities to a preparation phase (rather than the actual roll-out phase) and eliminating interaction between them as much as possible. The NE is delivered only with an "off-the-shelf" software and configuration installation. Only when the NE is placed on site is the NE hardware-to-site mapping executed. Together with mutual authentication between the NE and the Operation, Administration and Maintenance (OAM) system, this enables a very flexible and secure roll-out process.
|
|
|
Introducing process-oriented IT service management at an academic computing center: an interim report |
| |
Michael Brenner,
Heinz-Gerd Hegering,
Helmut Reiser,
Christian Richter,
Thomas Schaaf
|
|
Pages: 706-720 |
|
The Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) is a service provider for a variety of academic institutions, mainly in the Munich (Germany) area. The services provided range from network services, server hosting and application services to specialized supercomputing services. Even in academia, computing services are becoming ever more business-critical: IT services for university spin-offs, virtual labs provided to other universities as an application service, and an increasing number of industry cooperation projects require highly available and reliable services. As the scope, volume, complexity and required quality of services increase, the financial and personnel resources to provide them do not (at least not on the same scale). The only way to meet this challenge is to improve operational effectiveness and efficiency. Such improvements do not seem achievable just by purchasing or developing more management tools (the LRZ already uses an abundance of management software applications). Addressing the, in the past often somewhat neglected, organizational aspects of IT Service Management (ITSM), i.e. process-oriented ITSM, promises to yield much better gains in efficiency. A project to introduce process-oriented IT Service Management at the LRZ was started at the end of 2007. This presentation outlines the motivation and scope of this long-running (3-4 years), multi-faceted project, and presents an interim report on results and experiences in the introduction of process-oriented ITSM at a large academic computing center.
|
|
|
Multi-tenant solution for IT service management: a quantitative study of benefits |
| |
Larisa Shwartz,
Yixin Diao,
Genady Ya. Grabarnik
|
|
Pages: 721-731 |
|
The very competitive business climate dictates efficient and cost-effective delivery and support of IT services. Compounded by the complexity of IT environments and the criticality of IT to business success, IT service providers seek multi-tenant solutions to reduce operational cost and improve service quality. In this paper we consider a multi-tenant solution for IT service management and examine its critical aspects for realizing business benefits. We conduct a quantitative study of its benefits using a complexity-based value assessment methodology. By regarding complexity as a substitute for potential labor cost, we estimate the business value of the multi-tenant solution before its actual deployment.
|
|
|
Best practices for deploying a CMDB in large-scale environments |
| |
Alexander Keller,
Suraj Subramanian
|
|
Pages: 732-745 |
|
We describe best practices for deploying a Configuration Management Database (CMDB) that we have developed during several recent client engagements. Given the complexity and novelty of CMDB solutions that deal with discovering, storing and tracking actual Configuration Items (CIs), many enterprises rely on service delivery organizations - such as IBM Global Technology Services - to perform the configuration and roll-out of the system into production. This can be done either on the customer premises (within the scope of a so-called project-based service engagement), or by subscribing to a managed service and thus leveraging the IT service management environment that the service provider has already set up. Often, enterprises severely underestimate the effort involved in setting up IT service management infrastructures by mistakenly equating the setup of such a complex system with the mere installation of a shrink-wrapped, self-contained product. This, however, is not the case: the immense heterogeneity of data center resources means that no single vendor can cover the breadth of managed resource types when new product versions ship every 12 months, often by means of integrating acquisitions into the product portfolio. Consequently, today's IT Service Management systems rather resemble construction kits and frameworks that require a good deal of tailoring and customization to become usable and useful to the customer. The present paper attempts to provide an insider view into the issues that a CMDB deployment architecture needs to address. In our work, we found that the success of a CMDB deployment project can be attributed to a set of tradeoffs and best practices, especially when it comes to tuning the performance of the system and orchestrating the distributed components of a CMDB so that they work well together. 
By grounding our work in a concrete case study and by referring to real-life requirements, we demonstrate how to develop an operational architecture using an off-the-shelf CMDB product. We point out the key design points of our architecture and describe the tradeoffs we had to make, which we subsequently distill into a set of best practices that have been successfully applied in sizing, estimating and implementing subsequent CMDB deployment engagements.
|
|
|
Modeling remote desktop systems in utility environment with application to QoS management |
| |
Vanish Talwar,
Klara Nahrstedt,
Dejan Milojicic
|
|
Pages: 746-760 |
|
A remote desktop utility system is an emerging client/server networked model for enterprise desktops. In this model, a shared pool of consolidated compute and storage servers hosts users' desktop applications and data, respectively. End-users are allocated resources for a desktop session from the shared pool on demand, and they interact with their applications over the network using remote display technologies. Understanding the detailed behavior of applications in these remote desktop utilities is crucial for more effective QoS management. However, there are challenges due to hard-to-predict workloads, complexity, and scale. In this paper, we present a detailed modeling of a remote desktop system through a case study of an Office application - email. The characterization provides insights into the workload and user model, the effect of remote display technology, and the implications of shared infrastructure. We then apply these insights and modeling results to improve QoS resource management decisions - achieving over 90% improvement compared to state-of-the-art allocation mechanisms. We also discuss generalizing the methodology for broader applicability of model-driven resource management.
|
|
|
A model based approach to autonomic management of virtual networks |
| |
Steven Davy,
Claire Fahy,
Zohra Boudjemil,
Leigh Griffin,
John Strassner
|
|
Pages: 761-774 |
|
Enabling the automated deployment and maintenance of current and future services over the Internet is a difficult task and requires self-aware functions that can adapt the services offered by the network to changing customer demand, business goals, and/or environmental conditions using policies. This paper describes a process to realise static and dynamic service deployment, given an understanding of the available resources that exist in a communications network to build services. This process contributes to the realisation of the AutoI autonomic management architecture for the Internet, which aims to develop a self-managing virtual resource overlay that can span heterogeneous networks and supports service mobility, security, quality of service and reliability. The core of the process is to take advantage of the substantial DEN-ng information model, which decouples the definition and design of services from the resources available in the network. In this way, the creation and modification of services within a network can be planned, deployed, and even dynamically composed to meet context-aware demands. Providing such a service-aware process is central to the architecture being defined in the AutoI project, and information modelling is seen as a major facilitator of this task. In this respect, we introduce a method of analysing a description of a service (including its demands on resources) within the context of currently available and deployed resources and services, in order to make informed decisions as to whether the service can be effectively deployed. The specific scenario we investigate is that of a secure VPN service that requires a set of security-related functionality from the network in order to be effectively deployed. In accordance with the virtual resource overlay aspect of the AutoI architecture, virtual resources are modelled and used to realise the deployment of the new service. 
The scenario is implemented and validated in our test bed, where the service and resource characteristics are altered to determine the impact on the deployed service.
|
|
|
Web-based administration of grid credentials for identity and authority delegation |
| |
Songjie Wei,
Subrata Mazumdar
|
|
Pages: 775-789 |
|
Grid computing, as a technology to coordinate loosely-coupled computing resources for dynamic virtual organizations, has become prevalent in both industry and academia over the past decade. While providing or utilizing heterogeneous and distributed grids, people can never fully set aside their security concerns about the resources and data. The Globus Toolkit, an open-source grid environment, has implemented the public key infrastructure (PKI) and extended it for proxy-certificate-based delegation propagation with a series of separate, command-line-based components and services. We have built an integrated web service system to coordinate all of Globus's components and services that are needed for user credential management. Our system reduces the operations necessary for creating and maintaining user credentials in Globus. The system also simplifies the procedure of deploying or accessing Globus services for user authentication, authorization, and identity and authority delegation. We provide a lightweight Mozilla Firefox add-on on the client side to interact with our online system. On the server side, we implement web services for CA functionality, VOMS attribute certificate generation, and proxy delegation and retrieval, which satisfy the typical needs of most Globus users. Although our current solution is designed for integrating and automating all the credential-related operations for Globus users, it is portable to other online service platforms using similar PKI and delegation mechanisms.
|
|
|
Adaptive real-time monitoring for large-scale networked systems |
| |
Alberto Gonzalez Prieto,
Rolf Stadler
|
|
Pages: 790-795 |
|
The focus of this thesis is continuous real-time monitoring, which is essential for the realization of adaptive management systems in large-scale dynamic environments. Real-time monitoring provides the necessary input to the decision-making process of network management. We have developed, implemented, and evaluated a design for real-time continuous monitoring of global metrics with performance objectives, such as monitoring overhead and estimation accuracy. Global metrics describe the state of the system as a whole, in contrast to local metrics, such as device counters or local protocol states, which capture the state of a local entity. Global metrics are computed from local metrics using aggregation functions, such as SUM, AVERAGE and MAX. A key part of the design is a model for the distributed monitoring process that relates performance metrics to parameters that tune the behavior of a monitoring protocol. The model has been instrumental in designing a monitoring protocol that is controllable and achieves given performance objectives. Our design has proved to be effective in meeting performance objectives, efficient, adaptive to changes in networking conditions, controllable along different performance dimensions, and scalable. We have implemented a prototype on a testbed of commercial routers, which proves the feasibility of the design and, more generally, of effective and efficient real-time monitoring in large network environments.
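The aggregation of local metrics into global metrics described in this abstract can be sketched as follows. This is an illustrative example only, not the authors' protocol; the function and metric names are assumptions.

```python
# Illustrative sketch: global metrics computed from device-local metrics
# with aggregation functions such as SUM, AVERAGE and MAX, as described
# in the abstract. Names and values are hypothetical.

def aggregate(local_metrics, func):
    """Combine per-device local metrics into one global metric."""
    if func == "SUM":
        return sum(local_metrics)
    if func == "AVERAGE":
        return sum(local_metrics) / len(local_metrics)
    if func == "MAX":
        return max(local_metrics)
    raise ValueError(f"unsupported aggregation function: {func}")

# Local metrics, e.g. per-router packet counters.
counters = [120, 340, 95, 210]
print(aggregate(counters, "SUM"))   # 765
print(aggregate(counters, "MAX"))   # 340
```

In the actual design the aggregation is distributed across the network rather than computed centrally as here; the sketch only shows the semantics of the aggregation functions.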
|
|
|
Policy-based self-management of wireless ad hoc networks |
| |
Antonis M. Hadjiantonis,
George Pavlou
|
|
Pages: 796-802 |
|
The motivation for this thesis stems from the need for unrestricted wireless communication in a scalable and predictable manner, a need accentuated by users' increasing demand for spontaneous communication. The objective is to propose a management framework able to leverage the potential of wireless ad hoc networks as an alternative communication method, allowing them to coexist with other networks and to emerge as their flexible extension.
|
|
|
STACO: an accounting configuration architecture for multi-service mobile networks |
| |
Peter Racz,
Burkhard Stiller
|
|
Pages: 803-808 |
|
Accounting is a key task in commercial networks. With the increasing number of IP-based services and mobility support, accounting needs to evolve towards an integrated, service-oriented accounting approach in a mobile environment. Therefore, this dissertation digest paper presents the Service-oriented Tailored Accounting Configuration (STACO) architecture that enables a service-oriented accounting configuration management in a mobile, multi-domain networking environment. Additionally, it presents the Diameter flow accounting application as an extension to the Diameter protocol in order to integrate IP flow accounting into any Diameter-based infrastructure and to support an efficient transfer of IP flow records.
|
|
|
Adaptive response system for distributed denial-of-service attacks |
| |
Vrizlynn L. L. Thing,
Morris Sloman,
Naranker Dulay
|
|
Pages: 809-814 |
|
This dissertation presents a Distributed denial-of-service Adaptive ResponsE (DARE) system, capable of executing appropriate detection and mitigation responses automatically and adaptively according to the attacks. It supports easy integration of distributed modules for both signature-based and anomaly-based detection. Additionally, the innovative design of DARE's individual components takes into consideration the strengths and weaknesses of existing defence mechanisms, and the characteristics and possible future mutations of DDoS attacks. The distributed components work together interactively to adapt detection and response to the attack types. Experiments on DARE show that attack detection and mitigation were successfully completed within seconds, with about 60% to 86% of the attack traffic being dropped, while availability for legitimate and new legitimate requests was maintained. DARE is able to detect and trigger appropriate responses in accordance with the attacks being launched, with high accuracy, effectiveness and efficiency. The dissertation is available at http://pubs.doc.ic.ac.uk/VrizlynnThing-PhD-Thesis-2008/VrizlynnThing-PhD-Thesis-2008.pdf.
|
|
|
Performance of network and service monitoring frameworks |
| |
Abdelkader Lahmadi,
Laurent Andrey,
Olivier Festor
|
|
Pages: 815-820 |
|
The efficiency and performance of management systems is becoming a hot research topic within the network and service management community. This concern is due to the new challenges of large-scale managed systems, where the management plane is integrated within the functional plane and where management activities have to carry accurate and up-to-date information. We defined a set of primary and secondary metrics to measure the performance of a management approach. Secondary metrics are derived from the primary ones and mainly quantify the efficiency, the scalability and the impact of management activities. To validate our proposals, we designed and developed a benchmarking platform dedicated to measuring the performance of a JMX manager-agent based management system. The second part of our work deals with the collection of measurement data sets from our JMX benchmarking platform. We mainly studied the effect of both load and the number of agents on scalability, the impact of management activities on the user-perceived performance of a managed server, and the delays of JMX operations when carrying variable values. Our findings show that most of these delays follow a Weibull statistical distribution. We used this statistical model to study the behavior of a monitoring algorithm proposed in the literature under a heavy-tailed delay distribution. In this case, the view of the managed system on the manager side becomes noisy and out of date.
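The staleness effect of Weibull-distributed delays mentioned in this abstract can be illustrated with a small simulation. This is a hypothetical sketch, not the authors' benchmarking code; the shape, scale, and freshness-bound values are assumptions chosen only to show how a heavy tail produces late monitoring updates.

```python
# Illustrative sketch: JMX operation delays modelled with a Weibull
# distribution. A shape parameter below 1 gives a heavy tail, so a
# noticeable fraction of monitoring updates arrive very late and the
# manager's view of the managed system becomes out of date.
# All parameter values are assumptions for illustration.
import random

random.seed(42)

scale, shape = 50.0, 0.7          # delays in ms; heavy-tailed when shape < 1
delays = [random.weibullvariate(scale, shape) for _ in range(10_000)]

freshness_bound_ms = 200.0        # updates older than this count as "stale"
stale_fraction = sum(d > freshness_bound_ms for d in delays) / len(delays)
print(f"stale updates: {stale_fraction:.1%}")
```

With these parameters the tail probability exp(-(200/50)^0.7) is roughly 7%, so even a generous freshness bound leaves a non-negligible share of the manager's view out of date.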
|
|
|
On harnessing information models and ontologies for policy conflict analysis |
| |
Steven Davy,
Brendan Jennings,
John Strassner
|
|
Pages: 821-826 |
|
We present a policy conflict analysis process that makes use of pre-defined semantic models of an application to perform effective and efficient conflict analysis. The process is effective because it can analyse policy conflicts that may occur in different applications, owing to the separation of application-specific information and constraints from the algorithms into semantic models, such as information models and ontologies. The process is efficient because it incorporates a pre-analysis policy selection step that reduces the number of policies that need to be analysed more extensively. Experimental results show that this process yields a significant reduction in the number of policies that need to be analysed for potential conflict, and that it is flexible enough to detect policy conflicts both within many popular applications and between different applications.
|
|
|
An approach to measurement based quality of service control for communications networks |
| |
Alan Davy,
Dmitri Botvich,
Brendan Jennings
|
|
Pages: 827-832 |
|
This paper presents a purely empirical approach to estimating the effective bandwidth of aggregated traffic flows, independent of traffic model assumptions. The approach is shown to be robust in a variety of traffic scenarios, including both elastic and streaming traffic flows at varying degrees of aggregation. The method then forms the basis of two Quality of Service related traffic performance optimisation strategies. The paper presents a cost-efficient approach to supplying suitably accurate demand matrix input for QoS-related network planning, and a QoS-provisioning, revenue-maximising admission control algorithm for an IPTV services network. This paper summarises these approaches and discusses the major benefits of an appropriately accurate effective bandwidth estimation algorithm.
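The empirical estimation described in this abstract can be sketched with the standard effective-bandwidth estimator alpha(s, t) = (1/(s*t)) * log E[exp(s*A(t))], with the expectation replaced by a sample average over trace windows. This is an illustrative sketch of that textbook estimator, not necessarily the paper's algorithm; the trace and the (s, t) parameters are invented for the example.

```python
# Illustrative sketch: model-free effective-bandwidth estimation from an
# empirical trace. alpha(s, t) = (1/(s*t)) * log E[exp(s*A(t))], where
# A_i(t) is the traffic volume arriving in window i of length t and the
# expectation is replaced by a sample average. Values are hypothetical.
import math

def effective_bandwidth(window_volumes, s, t):
    """Estimate alpha(s, t) from per-window arrival volumes A_i(t)."""
    n = len(window_volumes)
    mgf = sum(math.exp(s * a) for a in window_volumes) / n  # empirical E[exp(sA)]
    return math.log(mgf) / (s * t)

# Bytes arriving in consecutive 1-second windows of a traffic trace.
trace = [100, 120, 90, 150, 110, 130, 95, 140]
s, t = 0.01, 1.0                      # space and time parameters
eb = effective_bandwidth(trace, s, t)
mean_rate = sum(trace) / len(trace)
print(eb >= mean_rate)                # effective bandwidth exceeds the mean rate
```

By Jensen's inequality the estimate always lies between the mean and the peak rate of the trace, which is what makes it a useful bandwidth figure for admission control and network planning.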
|