Is the new economy socially sustainable? (invited presentation) (abstract only)
Manuel Castells
Page: 2
doi: 10.1145/337180.337181
At the turn of the millennium, the revolution in information technology has ushered in a new economy. This economy, which originated in the United States, and more specifically on the American West Coast, is spreading throughout the world in an uneven yet dynamic pattern. It is essentially characterized by the key role of knowledge and information in spurring productivity and enhancing competitiveness; by its global reach; and by its networked form of business organization. Well managed, this new economy may yield an extraordinary harvest of human creativity and social well-being. However, several major contradictions threaten the stability of this new economy: the volatility of global financial markets; the institutional rigidity of business, legislation, and governments in many countries; increasing social inequality and social exclusion throughout the world, limiting market expansion and triggering social tensions; and the growing opposition to globalization without representation, on behalf of alternative values and legitimate concerns about the environmental and social costs of this model of growth. Information technology offers great potential in helping to supersede these contradictions at the dawn of an emerging socio-economic system. But the speed of technological innovation requires the parallel development of institutional and cultural innovation, away from bureaucracy and closer to people, to ensure the sustainability of the new economy and to spur the new wave of technological creativity.

The future of software (invited presentation) (abstract only)
Grady Booch
Page: 3
doi: 10.1145/337180.337182
Software is the fuel of the world's new economy. Software has been used to create new markets, heal the human body, explore distant worlds, and bring individuals into community. Software transcends all political boundaries, consumes few resources in its execution, and permits the creation of new worlds with new laws of physics. At its best, software extends the human experience; at its worst, it can amplify our basest faults. And yet, the activity of engineering software falls short of what we would expect to be possible. Software development and deployment remain labor-intensive and intellectually demanding, requiring the best from developers who must play a number of different roles. There is still much friction in the process of crafting complex software; the goal of creating quality software in a repeatable and sustainable manner remains elusive to many organizations, especially those who are driven to develop in Internet time. This problem is exacerbated by the reality that, worldwide, there exists a shortage of skilled developers. In this presentation, we examine the future of software and the future of engineering software. We begin by briefly considering the past and then level-setting where we are in the present, focusing especially on the state of the practice of software development in the world today. We continue with a consideration of the technological, theoretical, economic, and social trends that are shaping the nature of software development. We conclude with a challenge for what software and software engineering can be in a frictionless environment.

Dot com versus bricks and mortar — the impact of portal technology (invited presentation) (abstract only)
Chris Horn
Page: 4
doi: 10.1145/337180.337183
The "New Economy" is rapidly being adopted on a global scale as corporations vie for new competitive positions and defensive responses. Incumbents, the so-called "bricks'n'mortar corporations", are generally finding it challenging, but usually rewarding, to extend their business practices to the internet. New entrants, the so-called "dot com companies", are unfettered by institutional rigidity and thus have an enormous opportunity to gain market share, but at the same time are frequently challenged to provide the same levels of brand awareness, product and service as at least some of the incumbents. In this presentation we consider how internet infrastructure software is evolving, and its implications for both bricks'n'mortar and dot com organisations.

Requirements engineering in the year 00: a research perspective
Axel van Lamsweerde
Pages: 5-19
doi: 10.1145/337180.337184
Requirements engineering (RE) is concerned with the identification of the goals to be achieved by the envisioned system, the operationalization of such goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. The processes involved in RE include domain analysis, elicitation, specification, assessment, negotiation, documentation, and evolution. Getting high-quality requirements is difficult and critical. Recent surveys have confirmed the growing recognition of RE as an area of utmost importance in software engineering research and practice. The paper presents a brief history of the main concepts and techniques developed to date to support the RE task, with a special focus on modeling as a common denominator of all RE processes. The initial description of a complex safety-critical system is used to illustrate a number of current research trends in RE-specific areas such as goal-oriented requirements elaboration, conflict management, and the handling of abnormal agent behaviors. Opportunities for goal-based architecture derivation are also discussed, together with research directions to let the field move towards more disciplined habits.

A case study: demands on component-based development
Ivica Crnkovic, Magnus Larsson
Pages: 23-31
doi: 10.1145/337180.337185
Building software systems with reusable components brings many advantages. The development becomes more efficient, the reliability of the products is enhanced, and the maintenance requirement is significantly reduced. Designing, developing and maintaining components for reuse is, however, a very complex process which places high requirements not only on the component's functionality and flexibility, but also on the development organization. In this paper we discuss the different levels of component reuse, and certain aspects of component development, such as component generality and efficiency, compatibility problems, the demands on the development environment, maintenance, etc. The evolution of requirements for products generates new requirements for components if the components are not sufficiently general and mature. This dynamism determines the component life cycle, in which the component first reaches its stability and later degenerates into an asset that is difficult to use, difficult to adapt and maintain. When it reaches this stage, the component becomes an obstacle to efficient reuse and should be replaced. Questions related to the use of standard and de-facto standard components are addressed specifically. As an illustration of reuse issues, we present a successful implementation of a component-based system which is widely used for industrial process control.

Investigating and improving a COTS-based software development
M. Morisio, C. B. Seaman, A. T. Parra, V. R. Basili, S. E. Kraft, S. E. Condon
Pages: 32-41
doi: 10.1145/337180.337186
The work described in this paper is an investigation of COTS-based software development within a particular NASA environment, with an emphasis on the processes used. Fifteen projects using a COTS-based approach were studied and their actual process was documented. This process was evaluated to identify essential differences in comparison to traditional software development. The main differences, and the activities for which projects require more guidance, are requirements definition and COTS selection, high-level design, integration and testing. Starting from these empirical observations, a new process and guidelines for COTS-based development are developed and briefly presented. The new process is currently under experimentation.

PPT: a COTS integration case study
L. David Balk, Ann Kedia
Pages: 42-49
doi: 10.1145/337180.337187
T. Rowe Price Investment Technologies built the Product and Project Tracking System (PPT) to reduce the human resources needed to track and forecast Information Technology projects. Instead of developing or purchasing a new system, the need was met by integrating Commercial-off-the-Shelf (COTS) products already used and licensed by the company. We conclude that this approach reduces development costs while providing more flexibility than a single-vendor solution. This paper describes the process used and issues encountered in building a system from software products generally intended for stand-alone applications. It discusses the rationale behind the system, the choice of products, the software engineering process used, the handling of changes, and modifications made in business practice. It also discusses the initial return on investment and ongoing support requirements.

Supporting diversity with component frameworks as architectural elements
Jan Gerben Wijnstra
Pages: 51-60
doi: 10.1145/337180.337188
In this paper, we describe our experience with component frameworks within a family architecture for a medical imaging product family. The component frameworks are handled as an integral part of the architectural approach and are an important means to support diversity in the functionality provided by the individual family members. This paper focuses on a particular kind of component framework that has been applied throughout the medical imaging product family. This kind of framework is useful when the various family members are based on the same concepts and the diversity is formed by the differences in the specific instances of these concepts that are present in the family members. These component frameworks have a number of similarities, allowing a standardised approach to their development. They support the division of the system into a generic architectural skeleton, which can be extended with plug-ins to realise specific family members, each with their own set of features.
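
The skeleton-plus-plug-ins idea described here can be pictured with a small sketch. The Java fragment below is our own minimal, hypothetical illustration (the interface and class names are invented, not taken from the paper): the framework defines a plug-in contract and a generic skeleton, and each family member is realised by registering a different set of plug-ins.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical plug-in contract: each concrete instance of a shared
    // concept (e.g., an image-processing step) implements this interface.
    interface ImagingPlugin {
        String name();
        void process(int[] image);
    }

    // Generic architectural skeleton, common to all family members.
    class ImagingPipeline {
        private final List<ImagingPlugin> plugins = new ArrayList<>();

        void register(ImagingPlugin p) { plugins.add(p); }

        void run(int[] image) {
            for (ImagingPlugin p : plugins) {
                p.process(image);   // diversity lives entirely in the plug-ins
            }
        }
    }

    public class FamilyMemberDemo {
        public static void main(String[] args) {
            // One family member: the skeleton plus the plug-ins it needs.
            ImagingPipeline pipeline = new ImagingPipeline();
            pipeline.register(new ImagingPlugin() {
                public String name() { return "noise-filter"; }
                public void process(int[] image) { /* filter pixels */ }
            });
            pipeline.run(new int[256]);
        }
    }

A different family member would differ only in the list of registered plug-ins, not in the skeleton.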

Requirements engineering for product families
Juha Kuusela, Juha Savolainen
Pages: 61-69
doi: 10.1145/337180.337189
In the search for improved software quality and high productivity, software reuse has become a key research area. One of the most promising reuse approaches is product families. However, current practices in requirements engineering do not support product families. This paper describes a definition hierarchy method for requirements capture, structuring, analysis and documentation. This method helps to identify the architectural drivers of the product family and shows how the different products in the family vary.

Extending requirement specifications using analogy
Yusuf Pisan
Pages: 70-76
doi: 10.1145/337180.337190
Creating the specifications for a new system is a labour-intensive task. Analogical reasoning provides a flexible mechanism to retrieve and adapt past specifications. Previous work in applying analogical reasoning to requirement specifications has departed from the psychological foundations of analogical reasoning, introducing specific ontologies and abstract templates to constrain the reasoning process. We argue that similar results can be obtained without introducing domain-specific constraints, and that using analogical reasoning engines based on well-established psychological theories, such as the Structure-Mapping Engine, will lead to better results and scale up more effectively.

It's engineering Jim … but not as we know it: software engineering — solution to the software crisis, or part of the problem?
Antony Bryant
Pages: 78-87
doi: 10.1145/337180.337191
This paper considers the impact and role of the 'engineering' metaphor, and argues that it is time to reconsider its impact on software development practice.

Producing more reliable software: mature software engineering process vs. state-of-the-art technology?
James C. Widmaier
Pages: 88-93
doi: 10.1145/337180.337192
A customer of high-assurance software recently sponsored a software engineering experiment in which a real-time software system was developed concurrently using two popular software development methodologies. One company specialized in the state-of-the-practice waterfall method, rated at Capability Maturity Model Level 4. A second developer employed its mathematically based formal method with automatic code generation. As specified in separate contracts, C++ code plus development documentation and process and product metrics (errors) were to be delivered. Both companies were given identical functional specifications and agreed to generous and equal cost and schedule constraints and explicit functional reliability objectives. At the conclusion of the experiment, an independent third party determined through extensive statistical testing that neither methodology was able to meet the user's reliability objectives within the cost and schedule constraints. The metrics collected revealed the strengths and weaknesses of each methodology and why neither was able to reach the customer's reliability objectives. This paper explores the specification for the system under development, the two competing development processes, the products and metrics captured during development, the analysis tools and testing techniques used by the third party, and the results of a reliability and process analysis.

Improving problem-oriented mailing list archives with MCS
Robert S. Brewer
Pages: 95-104
doi: 10.1145/337180.337193
Developers often use electronic mailing lists when seeking assistance with a particular software application. The archives of these mailing lists provide a rich repository of problem-solving knowledge. Developers seeking a quick answer to a problem find these archives inconvenient, because they lack efficient searching mechanisms and retain the structure of the original conversational threads, which is rarely relevant to the knowledge seeker. We present a system called MCS which improves mailing list archives through a process called condensation. Condensation involves several tasks: extracting only messages of longer-term relevance, adding metadata to those messages to improve searching, and potentially editing the content of the messages, when appropriate, for clarity. The condensation process is performed by a human editor (assisted by a tool), rather than by an artificial intelligence (AI) system. We describe the design and implementation of MCS, and compare it to related systems. We also present our experiences condensing a 1428-message mailing list archive to an archive containing only 177 messages (an 88% reduction). The condensation required only 1.5 minutes of editor effort per message. The condensed archive was adopted by the users of the mailing list.

Broad-spectrum studies of log file analysis
James H. Andrews, Yingjun Zhang
Pages: 105-114
doi: 10.1145/337180.337194
This paper reports on research into applying the technique of log file analysis, used for checking test results, to a broad range of testing and other tasks. The studies undertaken included applying log file analysis to both unit- and system-level testing and to requirements of both safety-critical and non-critical systems, and using log file analysis in combination with other testing methods. The paper also reports on the technique of using log file analyzers to simulate the software under test, both to validate the analyzers and to clarify requirements. It also discusses practical issues concerning the completeness of the approach, and includes comparisons to other recently published approaches to log file analysis.
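
To make the basic technique concrete, here is a minimal Java sketch of a log file analyzer. The log format and the property being checked (every "open" event must eventually be matched by a "close") are hypothetical, chosen only to illustrate how an analyzer can check a test run's log against a requirement.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashSet;
    import java.util.Set;

    // Checks a hypothetical log whose lines look like "open <id>" or
    // "close <id>": every opened resource must be closed by end of log.
    public class LogAnalyzer {
        public static void main(String[] args) throws Exception {
            Set<String> open = new HashSet<>();
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] parts = line.trim().split("\\s+");
                    if (parts.length != 2) continue;          // ignore other events
                    if (parts[0].equals("open"))  open.add(parts[1]);
                    if (parts[0].equals("close")) open.remove(parts[1]);
                }
            }
            if (open.isEmpty()) System.out.println("PASS");
            else System.out.println("FAIL: never closed: " + open);
        }
    }

The test verdict comes from the log alone, which is what lets the same analyzer be reused across unit- and system-level testing.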

Multivariate visualization in observation-based testing
David Leon, Andy Podgurski, Lee J. White
Pages: 116-125
doi: 10.1145/337180.337195
We explore the use of multivariate visualization techniques to support a new approach to test data selection, called observation-based testing. Applications of multivariate visualization are described, including: evaluating and improving synthetic tests; filtering regression test suites; filtering captured operational executions; comparing test suites; and assessing bug reports. These applications are illustrated by the use of correspondence analysis to analyze test inputs for the GNU GCC compiler.

An empirical study of regression test application frequency
Jung-Min Kim, Adam Porter, Gregg Rothermel
Pages: 126-135
doi: 10.1145/337180.337196
Regression testing is an expensive maintenance process used to revalidate modified software. Regression test selection (RTS) techniques try to lower the cost of regression testing by selecting and running a subset of the existing test cases. Many such techniques have been proposed and initial studies show that they can produce savings. We believe, however, that issues such as the frequency with which testing is done have a strong effect on the behavior of these techniques. Therefore, we conducted an experiment to assess the effects of test application frequency on the costs and benefits of regression test selection techniques. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases.

Testing levels for object-oriented software
Y. Labiche, P. Thévenod-Fosse, H. Waeselynck, M.-H. Durand
Pages: 136-145
doi: 10.1145/337180.337197
One of the characteristics of object-oriented software is the complex dependency that may exist between classes due to inheritance, association and aggregation relationships. Hence, where to start testing and how to define an integration strategy are issues that require further investigation. This paper presents an approach to defining a test order by exploiting a model produced during the design stages (e.g., using OMT or UML), namely the class diagram. Our goal is to minimize the number of stubs to be constructed in order to decrease the cost of testing. This is done by testing a class after the classes it depends on. The novelty of the test order lies in the fact that it takes account of: (i) dynamic (polymorphism) dependencies; (ii) abstract classes that cannot be instantiated, making some testing levels infeasible. The test order is represented by a graph showing which testing levels must be done in sequence and which ones may be done independently. It also provides information about the classes involved in each level and how they are involved (e.g., instantiation or not). The approach is implemented in a tool called TOONS (Testing level generator for Object-OrieNted Software) and is applied to an industrial case study from the avionics domain.
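
The core idea, testing a class only after the classes it depends on, amounts to ordering a class dependency graph. The sketch below is our own simplification in Java, ignoring the paper's handling of polymorphic dependencies, abstract classes and cycles: a plain topological sort over a hypothetical dependency map, where an edge A -> B means "A depends on B", so B is tested first and no stub for B is needed when testing A.

    import java.util.*;

    // Minimal test-order computation: topological sort of an acyclic
    // class dependency graph. (The paper additionally treats cycles,
    // polymorphism and abstract classes, which this sketch omits.)
    public class TestOrder {
        public static void main(String[] args) {
            // Edge A -> B means "A depends on B" (hypothetical classes).
            Map<String, List<String>> deps = Map.of(
                "Display", List.of("Sensor", "Logger"),
                "Sensor",  List.of("Logger"),
                "Logger",  List.of());
            List<String> order = new ArrayList<>();
            Set<String> done = new HashSet<>();
            for (String c : deps.keySet()) visit(c, deps, done, order);
            System.out.println("Test order: " + order); // Logger, Sensor, Display
        }

        static void visit(String c, Map<String, List<String>> deps,
                          Set<String> done, List<String> order) {
            if (!done.add(c)) return;
            for (String d : deps.get(c)) visit(d, deps, done, order);
            order.add(c); // a class follows everything it depends on
        }
    }

Cyclic dependencies, common in real object-oriented code, are what make the paper's problem harder than this plain sort.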

Software evolution in componentware using requirements/assurances contracts
Andreas Rausch
Pages: 147-156
doi: 10.1145/337180.337198
In practice, pure top-down and refinement-based development processes are not sufficient. Usually, an iterative and incremental approach is applied instead. Existing methodologies, however, do not support such evolutionary development processes very well. In this paper, we present the basic concepts of an overall methodology based on componentware and software evolution. The foundation of our methodology is a novel, well-founded model for component-based systems. This model is sufficiently powerful to handle the fundamental structural and behavioral aspects of componentware and object-orientation. Based on the model, we are able to provide a clear definition of a software evolution step. During development, each evolution step implies changes to an appropriate set of development documents. In order to model and track the dependencies between these documents, we introduce the concept of Requirements/Assurances Contracts. These contracts can be rechecked whenever the specification of a component evolves, enabling us to determine the impact of the respective evolution step. Based on the proposed approach, developers are able to track and manage the software evolution process and to recognize and avoid failures due to software evolution. A short example shows the usefulness of the presented concepts and introduces a practical description technique for Requirements/Assurances Contracts.
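
As a rough illustration of the contract idea (our own sketch in Java, not the paper's notation), a Requirements/Assurances Contract can be thought of as a recheckable pairing of what one document requires with what another assures; when a component's specification evolves, rechecking the contract flags the documents affected by that evolution step.

    import java.util.function.Predicate;

    // Our own minimal rendering of a requirements/assurances pairing:
    // a contract between two development documents that can be
    // rechecked after every evolution step.
    class Contract<T> {
        private final String requirer, assurer;
        private final Predicate<T> requirement;

        Contract(String requirer, String assurer, Predicate<T> requirement) {
            this.requirer = requirer;
            this.assurer = assurer;
            this.requirement = requirement;
        }

        // Recheck against the evolved specification of the assuring side.
        boolean recheck(T evolvedSpec) {
            boolean ok = requirement.test(evolvedSpec);
            if (!ok) System.out.println(requirer + " impacted: " + assurer
                                        + " no longer assures the requirement");
            return ok;
        }
    }

    public class EvolutionStepDemo {
        public static void main(String[] args) {
            // Hypothetical: a design document requires that a component's
            // interface still offers at least 2 operations after evolution.
            Contract<Integer> c = new Contract<>("DesignDoc", "CompSpec",
                                                 ops -> ops >= 2);
            c.recheck(3);  // this evolution step keeps the assurance
            c.recheck(1);  // this one breaks it, so an impact is reported
        }
    }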

An integrated cost model for software reuse
A. Mili, S. Fowler Chmiel, R. Gottumukkala, L. Zhang
Pages: 157-166
doi: 10.1145/337180.337199
Several cost models have been proposed in the past for estimating, predicting, and analyzing the costs of software reuse. In this paper we analyze existing models, explain their variance, and propose a tool-supported comprehensive model that encompasses most of the existing models.

Data mining library reuse patterns using generalized association rules
Amir Michail
Pages: 167-176
doi: 10.1145/337180.337200
In this paper, we show how data mining can be used to discover library reuse patterns in existing applications. Specifically, we consider the problem of discovering library classes and member functions that are typically reused in combination by application classes. This paper improves upon our earlier research using "association rules" [8] by taking into account the inheritance hierarchy using "generalized association rules". This turns out to be a non-trivial but worthwhile endeavor. By browsing generalized association rules, a developer can discover patterns in library usage in a way that takes inheritance relationships into account. For example, such a rule might tell us that application classes that inherit from a particular library class often instantiate another class or one of its descendants. We illustrate the approach using our tool, CodeWeb, by demonstrating characteristic ways in which applications reuse classes in the KDE application framework.
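
A toy version of the underlying mining step can be sketched as follows (our own Java illustration; the item encoding, class names and thresholds are invented). Each application class becomes a "transaction" of library features it uses, and a rule such as {inherits KDialog} => {instantiates KButton} is reported when its support and confidence clear the thresholds; the generalized variant would additionally lift items to ancestor classes in the inheritance hierarchy.

    import java.util.*;

    // Toy association-rule check over class-usage "transactions".
    // Items like "inherits:KDialog" are invented for illustration.
    public class ReusePatternMiner {
        public static void main(String[] args) {
            List<Set<String>> transactions = List.of(
                Set.of("inherits:KDialog", "creates:KButton"),
                Set.of("inherits:KDialog", "creates:KButton", "creates:KLabel"),
                Set.of("inherits:KDialog", "creates:KLabel"),
                Set.of("creates:KButton"));

            String lhs = "inherits:KDialog", rhs = "creates:KButton";
            long both = transactions.stream()
                    .filter(t -> t.contains(lhs) && t.contains(rhs)).count();
            long lhsOnly = transactions.stream()
                    .filter(t -> t.contains(lhs)).count();

            double support = (double) both / transactions.size();
            double confidence = (double) both / lhsOnly;
            if (support >= 0.25 && confidence >= 0.5)   // invented thresholds
                System.out.printf("%s => %s (support %.2f, confidence %.2f)%n",
                                  lhs, rhs, support, confidence);
        }
    }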

Towards a taxonomy of software connectors
Nikunj R. Mehta, Nenad Medvidovic, Sandeep Phadke
Pages: 178-187
doi: 10.1145/337180.337201
Software systems of today are frequently composed from prefabricated, heterogeneous components that provide complex functionality and engage in complex interactions. Existing research on component-based development has mostly focused on component structure, interfaces, and functionality. Recently, software architecture has emerged as an area that also places significant importance on component interactions, embodied in the notion of software connectors. However, the current level of understanding and support for connectors has been insufficient. This has resulted in their inconsistent treatment and a notable lack of understanding of what the fundamental building blocks of software interaction are and how they can be composed into more complex interactions. This paper attempts to address this problem. It presents a comprehensive classification framework and taxonomy of software connectors. The taxonomy is obtained through an extensive analysis of existing component interactions. The taxonomy is used both to understand existing software connectors and to suggest new, unprecedented connectors. We demonstrate the use of the taxonomy on the architecture of a large, existing system.

A formal approach for designing CORBA based applications
Matteo Pradella, Matteo Rossi, Dino Mandrioli, Alberto Coen-Porisini
Pages: 188-197
doi: 10.1145/337180.337202
The design of distributed applications in a CORBA-based environment can be carried out by means of an incremental approach, which starts from the specification and leads to the high-level architectural design. This is done by introducing into the specification all the typical elements of CORBA and by providing methodological support to the designers. The paper discusses a methodology for transforming a formal specification written in TRIO into a high-level design document written using an extension of TRIO named TC. The TC language is suited to formally describing the high-level architecture of a CORBA-based application. The methodology and the associated language are presented by means of an example involving a real Supervision and Control System.
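
TRIO is a first-order temporal logic with metric operators for expressing time bounds. To give a flavor of the notation, here is a generic bounded-response property written with TRIO-style derived operators; this is our own example, not a formula from the paper:

    \mathit{Alw}(\mathit{request} \rightarrow \mathit{WithinF}(\mathit{response}, T))

read: at every time instant, an occurrence of request is followed by an occurrence of response within T time units. TC then adds CORBA-oriented architectural elements around specifications of this kind.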

Simulation in software engineering training
Anke Drappa, Jochen Ludewig
Pages: 199-208
doi: 10.1145/337180.337203
Simulation is frequently used for training in many application areas, like aviation and economics, but not in software engineering. We present the SESAM project, which focuses on software engineering education using simulation. In the SESAM project a simulator was developed. Using this simulator, a student can take the role of a software project manager. The simulated software project can be finished within a couple of hours because it is simulated in "quick-motion" mode. In this paper, the background and goals of the SESAM project are presented. A new simulation model, the so-called QA model, is introduced. The model behavior is demonstrated by investigating and comparing different strategies for software development. The results of experiments based on the QA model are reported. Finally, conclusions are drawn from the experiments and future work is outlined.

Twenty dirty tricks to train software engineers
Ray Dawson
Pages: 209-218
doi: 10.1145/337180.337204
Many employers find that graduates and sandwich students come to them poorly prepared for the everyday problems encountered at the workplace. Although many university students undertake team projects at their institutions, an education environment has limitations that prevent the participants from experiencing the full range of problems encountered in the real world. To overcome this, action was taken on courses at the Plessey Telecommunications company and Loughborough University to disrupt the students' software development progress. These actions appear mean and vindictive, and are labeled 'dirty tricks' in this paper, but their value has been appreciated by both the students and their employers. The experiences and learning provided by twenty 'dirty tricks' are described, and their contribution towards teaching essential workplace skills is identified. The feedback from both students and employers has been mostly informal, but the universally favourable comments received give strong indications that the courses achieved their aim of preparing the students for the workplace. The paper identifies some limitations on the number and types of 'dirty tricks' that can be employed at a university and concludes that companies would benefit if such dirty tricks were employed in company graduate induction programmes as well as in university courses.

Deriving test plans from architectural descriptions
A. Bertolino, F. Corradini, P. Inverardi, H. Muccini
Pages: 220-229
doi: 10.1145/337180.337205

WYSIWYT testing in the spreadsheet paradigm: an empirical evaluation
Karen J. Rothermel, Curtis R. Cook, Margaret M. Burnett, Justin Schonfeld, T. R. G. Green, Gregg Rothermel
Pages: 230-239
doi: 10.1145/337180.337206
Is it possible to achieve some of the benefits of formal testing within the informal programming conventions of the spreadsheet paradigm? We have been working on an approach that attempts to do so via the development of a testing methodology for this paradigm. Our "What You See Is What You Test" (WYSIWYT) methodology supplements the convention by which spreadsheets provide automatic, immediate visual feedback about values with automatic, immediate visual feedback about "testedness". In previous work we described this methodology; in this paper, we present empirical data about the methodology's effectiveness. Our results show that use of the methodology was associated with significant improvement in testing effectiveness and efficiency, even with no training on the theory of testing or test adequacy that the model implements. These results may be due, at least in part, to the fact that use of the methodology was associated with a significant reduction in overconfidence.

Integrating UML diagrams for production control systems
Hans J. Köhler, Ulrich Nickel, Jörg Niere, Albert Zündorf
Pages: 241-251
doi: 10.1145/337180.337207
This paper proposes to use SDL block diagrams, UML class diagrams, and UML behavior diagrams like collaboration diagrams, activity diagrams, and statecharts as a visual programming language. We describe a modeling approach for flexible, autonomous production agents, which are used for the decentralization of production control systems. In order to generate a (Java) implementation of a production control system from its specification, we define a precise semantics for the diagrams and we define how the different (kinds of) diagrams are combined into a complete executable specification. Generally, generating code from UML behavior diagrams is not well understood. Frequently, the semantics of a UML behavior diagram depends on the topic and the aspect that is modeled and on the designer who created it. In addition, UML behavior diagrams usually model only example scenarios and do not describe all possible cases and possible exceptions. We overcome these problems by restricting the UML notation to a subset of the language that has a precise semantics. In addition, we define which kind of diagram should be used for which purpose and how the different kinds of diagrams are integrated into a consistent overall view.
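
To suggest what generated code from a statechart can look like, here is a deliberately simple hand-written Java sketch (our own, not the paper's generator output; the states and events of the production agent are invented): each state becomes an enum constant and each transition a case in a switch over the current state.

    // Sketch of the kind of Java a statechart generator might emit
    // for a tiny production agent: states as an enum, transitions as
    // a switch over (state, event) pairs.
    public class AgentStatechart {
        enum State { IDLE, TRANSPORTING, ERROR }
        enum Event { JOB_ASSIGNED, JOB_DONE, FAULT }

        private State state = State.IDLE;

        void handle(Event e) {
            switch (state) {
                case IDLE:
                    if (e == Event.JOB_ASSIGNED) state = State.TRANSPORTING;
                    break;
                case TRANSPORTING:
                    if (e == Event.JOB_DONE) state = State.IDLE;
                    if (e == Event.FAULT)    state = State.ERROR;
                    break;
                case ERROR:
                    break; // needs an external reset; omitted here
            }
        }

        public static void main(String[] args) {
            AgentStatechart agent = new AgentStatechart();
            agent.handle(Event.JOB_ASSIGNED);
            agent.handle(Event.JOB_DONE);
            System.out.println(agent.state); // IDLE
        }
    }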

Dragonfly: linking conceptual and implementation architectures of multiuser interactive systems
Gary E. Anderson, T. C. Nicholas Graham, Timothy N. Wright
Pages: 252-261
doi: 10.1145/337180.337208
Software architecture styles for developing multiuser applications are usually defined at a conceptual level, abstracting such low-level issues of distributed implementation as code replication, caching strategies and concurrency control policies. Ultimately, such conceptual architectures must be cast into code. The iterative design inherent in interactive systems implies that significant evolution will take place at the conceptual level. Equally, however, evolution occurs at the implementation level in order to tune performance. This paper introduces Dragonfly, a software architecture style that maintains a tight, bidirectional link between conceptual and implementation software architectures, allowing evolution to be performed at either level. Dragonfly has been implemented in the Java-based TeleComputing Developer (TCD) toolkit.

A case study of open source software development: the Apache server
Audris Mockus, Roy T. Fielding, James Herbsleb
Pages: 263-272
doi: 10.1145/337180.337209
According to its proponents, open source style software development has the capacity to compete successfully with, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high-performance software processes.

Multiple mass-market applications as components
David Coppit, Kevin J. Sullivan
Pages: 273-282
doi: 10.1145/337180.337210
Truly successful models for component-based software development continue to prove elusive. One of the few is the use of operating system, database and similar programs as components in many systems. We address three related problems in this paper. First, we lack needed models. Second, we do not know the conditions under which such models can succeed. In particular, it is unclear whether the notable success with operating systems can be replicated. Third, we do not know whether certain specific models can succeed. We are addressing these problems by evaluating a particular model that shares important characteristics with the successful operating system example: using compatible PC packages as components. Our approach to evaluating such a model is to engage in a case study that aims to build an industrially successful system representative of an important class of systems. We report on our use of the model to develop a computational tool for reliability engineering. We draw two conclusions. First, this kind of model has the potential to succeed. Second, even today, the model can produce significant returns, but it clearly carries considerable risks.

Developing and deploying software engineering courseware in an adaptable curriculum framework
W. Richards Adrion
Pages: 284-292
doi: 10.1145/337180.337212
We describe an effort to design an adaptable framework for teaching and learning in software engineering. We are developing a repository of asynchronous, multimedia courseware that facilitates the rapid incorporation of new advances in research and technology, enables courses to be tailored to individual student needs and interests, leverages innovations in educational technology and encourages innovation in teaching and in student learning. Our emphasis is on developing composable multi-level "knowledge and topic units" (KU/TUs) that can be employed to tailor course content and depth to fit the needs of a diverse student population. We have developed "live" and on-line course material for KU/TUs in software engineering and taught courses using this material. The framework was deployed in three software engineering courses (previously taught concurrently) and provides quite different learning environments for the students in each course and, to some extent, tailors the courses to individual students within the classes based on their skills, objectives and backgrounds. We describe efforts at formative evaluation. Student satisfaction is high and available measures of success, e.g., student performance, have improved markedly. We also describe a project now beginning to build on this prototype that will be accompanied by more extensive formative and summative evaluation.

Achieving industrial relevance with academic excellence: lessons from the Oregon Master of Software Engineering
Stuart R. Faulk
Pages: 293-302
doi: 10.1145/337180.337214
Many educational institutions are developing graduate programs in software engineering targeted to working professionals. These educators face the dilemma of providing programs with both industrial relevance and academic excellence. This paper describes our experience and lessons learned in developing such a program, the Oregon Master of Software Engineering (OMSE). It describes a structured approach to curriculum design, curriculum design principles and methods that can be applied to develop a quality professional program.

Inference of message sequence charts
Rajeev Alur, Kousha Etessami, Mihalis Yannakakis
Pages: 304-313
doi: 10.1145/337180.337215
Software designers draw Message Sequence Charts for early modeling of the individual behaviors they expect from the concurrent system under design. Can they be sure that precisely the behaviors they have described are realizable by some implementation of the components of the concurrent system? If so, can one automatically synthesize concurrent state machines realizing the given MSCs? If, on the other hand, other unspecified and possibly unwanted scenarios are "implied" by their MSCs, can the software designer be automatically warned and provided with the implied MSCs? In this paper we provide a framework in which all these questions are answered positively. We first describe the formal framework within which one can derive implied MSCs, and we then provide polynomial-time algorithms for implication, realizability, and synthesis. In particular, we describe a novel algorithm for checking deadlock-free (safe) realizability.

Generating statechart designs from scenarios
Jon Whittle, Johann Schumann
Pages: 314-323
doi: 10.1145/337180.337217
This paper presents an algorithm for automatically generating UML statecharts from a collection of UML sequence diagrams. Computer support for this transition between requirements and design is important for a successful application of UML's highly iterative, distributed software development process. There are three main issues which must be addressed when generating statecharts from sequence diagrams. Firstly, conflicts arising from the merging of independently developed sequence diagrams must be detected and resolved. Secondly, different sequence diagrams often contain identical or similar behaviors. For a true interleaving of the sequence diagrams, these behaviors must be recognized and merged. Finally, generated statecharts usually are only an approximation of the system and thus must be hand-modified and refined by designers. As such, the generated artifact should be highly structured and readable. In terms of statecharts, this corresponds to the introduction of hierarchy. Our algorithm successfully tackles all three of these aspects and will be illustrated in this paper with a well-known ATM example.

Object model resurrection — an object oriented maintenance activity
Gokul V. Subramaniam
Pages: 324-333
doi: 10.1145/337180.337218
This paper addresses the problem of reengineering object-oriented systems that have incurred increased maintenance cost due to a long development time-span and project lifecycle. When an incremental approach is used to develop an object-oriented system, there is a risk that the class design and the overall object model will deteriorate in quality with each increment. A recent research work suggested a process activity (Class Deterioration Detection and Resurrection - CDDR) and a technique for the detection and resurrection of deteriorated classes [5]. That work focussed on one particular aspect of object-oriented software maintenance: class quality deterioration due to lack of cohesion induced by high coupling. This paper addresses the problem of deteriorating object-oriented design due to code and class growth (increase in the number of classes) within a system. A Code/Class Growth Control (CGC) process activity is suggested to avoid and eliminate repetitious code and classes within the evolving system. The CDDR and CGC process activities are used to build an evolving maintenance process model for object-oriented systems. The presented maintenance process model is an effective way to periodically assess and resurrect the quality of an object-oriented design during incremental development.

Action Language: a specification language for model checking reactive systems
Tevfik Bultan
Pages: 335-344
doi: 10.1145/337180.337219
We present a specification language called Action Language for model checking software specifications. Action Language forms an interface between the transition system models that a model checker generates and high-level specification languages such as Statecharts, RSML and SCR, much as an assembly language sits between a microprocessor and a programming language. We show that Action Language translations of Statecharts and SCR specifications are compact and preserve the structure of the original specification. Action Language allows specification of both synchronous and asynchronous systems. It also supports modular specifications to enable compositional model checking.

Three approximation techniques for ASTRAL symbolic model checking of infinite state real-time systems
Zhe Dang, Richard A. Kemmerer
Pages: 345-354
doi: 10.1145/337180.337220
ASTRAL is a high-level formal specification language for real-time systems. It has structuring mechanisms that allow one to build modularized specifications of complex real-time systems with layering. Based upon the ASTRAL symbolic model checker reported in [13], three approximation techniques to speed up the model checking process for use in debugging a specification are presented. The techniques are random walk, partial image and dynamic environment generation. Ten mutation tests on a railroad crossing benchmark are used to compare the performance of the techniques applied separately and in combination. The test results are presented and analyzed.
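
Of the three techniques, random walk is the easiest to convey in code. The sketch below is a generic Java illustration, not ASTRAL-specific; the transition system (a counter that must stay below 10) and the property are invented. Instead of exhaustively exploring the state space, the checker repeatedly follows random successor states for a bounded number of steps, looking for a violation; it can find bugs quickly, but unlike exhaustive model checking its silence proves nothing.

    import java.util.List;
    import java.util.Random;
    import java.util.function.Function;
    import java.util.function.Predicate;

    // Generic random-walk "debugging" search over a transition system.
    public class RandomWalkChecker {
        public static void main(String[] args) {
            Function<Integer, List<Integer>> successors =
                s -> List.of(s + 1, Math.max(0, s - 1)); // toy transitions
            Predicate<Integer> violates = s -> s >= 10;  // toy safety property

            Random rnd = new Random(42);
            for (int walk = 0; walk < 1000; walk++) {
                int state = 0;                            // initial state
                for (int step = 0; step < 50; step++) {
                    List<Integer> next = successors.apply(state);
                    state = next.get(rnd.nextInt(next.size()));
                    if (violates.test(state)) {
                        System.out.println("violation found on walk " + walk);
                        return;
                    }
                }
            }
            System.out.println("no violation found (NOT a proof of correctness)");
        }
    }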

Component design of retargetable program analysis tools that reuse intermediate representations
James Hayes, William G. Griswold, Stuart Moskovics
Pages: 356-365
doi: 10.1145/337180.337221
Interactive program analysis tools are often tailored to one particular representation of programs, making adaptation to a new language costly. One way to ease adaptability is to introduce an intermediate abstraction, an adaptation layer, between an existing language representation and the program analysis tool. This adaptation layer translates the tool's queries into queries on the particular representation. Our experiments with this approach on the StarTool program analysis tool resulted in low-cost retargets for C, Tcl/Tk, and Ada. Required adjustments to the approach, however, led to insights for improving a client's retargetability. First, retargeting was eased by having our tool import a tool-centric (i.e., client-centric) interface rather than a general-purpose, language-neutral representation interface. Second, our adaptation layer exports two interfaces: a representation interface supporting queries on the represented program, and a language interface that the client queries to configure itself suitably for the given language. Straightforward object-oriented extensions enhance reuse and ease the development of multi-language tools.
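
The two exported interfaces can be pictured with a small Java sketch. This is our own rendering of the design; all method and type names are invented, not StarTool's API. The tool programs against a tool-centric representation interface plus a language interface, and each retarget supplies one adapter implementing both.

    import java.util.List;

    // Representation interface: tool-centric queries on the program.
    interface ProgramRepresentation {
        List<String> declarationsIn(String file);
        List<String> referencesTo(String name);
    }

    // Language interface: lets the tool configure itself per language.
    interface LanguageInfo {
        String languageName();
        boolean isCaseSensitive();
    }

    // One retarget = one adapter implementing both interfaces,
    // translating tool queries onto an existing C front end's data.
    class CAdapter implements ProgramRepresentation, LanguageInfo {
        public List<String> declarationsIn(String file) { return List.of(); }
        public List<String> referencesTo(String name)   { return List.of(); }
        public String languageName()     { return "C"; }
        public boolean isCaseSensitive() { return true; }
    }

    public class AnalysisTool {
        public static void main(String[] args) {
            CAdapter c = new CAdapter();
            // The tool never sees the underlying representation directly.
            System.out.println(c.languageName() + " declarations: "
                               + c.declarationsIn("main.c"));
        }
    }

Retargeting to Tcl/Tk or Ada would then mean writing one more adapter rather than modifying the tool.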

Light-weight context recovery for efficient and accurate program analyses
Donglin Liang, Mary Jean Harrold
Pages: 366-375
doi: 10.1145/337180.337222
To compute accurate information efficiently for programs that use pointer variables, a program analysis must account for the fact that a procedure may access different sets of memory locations when the procedure is invoked under different callsites. This paper presents light-weight context recovery, a technique that can efficiently determine whether a memory location is accessed by a procedure under a specific callsite. The paper also presents a technique that uses this information to improve the precision and efficiency of program analyses. Our empirical studies show that (1) light-weight context recovery can be quite precise in identifying the memory locations accessed by a procedure under a specific callsite and (2) distinguishing the memory locations accessed by a procedure under different callsites can significantly improve the precision and the efficiency of program analyses on programs that use pointer variables.

A replicated assessment and comparison of common software cost modeling techniques
Lionel C. Briand, Tristen Langley, Isabella Wieczorek
Pages: 377-386
doi: 10.1145/337180.337223
Delivering a software product on time, within budget, and to an agreed level of quality is a critical concern for many software organizations. Underestimating software costs can have detrimental effects on the quality of the delivered software and thus on a company's business reputation and competitiveness. On the other hand, overestimation of software cost can result in missed opportunities to fund other projects. In response to industry demand, a myriad of estimation techniques has been proposed during the last three decades. In order to assess the suitability of a technique from such a diverse selection, its performance and relative merits must be compared. The current study replicates a comprehensive comparison of common estimation techniques within different organizational contexts, using data from the European Space Agency. Our study is motivated by the challenge of assessing the feasibility of using multi-organization data to build cost models, and the benefits gained from company-specific data collection. Using the European Space Agency data set, we investigated a yet unexplored application domain, including military and space projects. The results showed that traditional techniques, namely ordinary least-squares regression and analysis of variance, outperformed analogy-based estimation and regression trees. Consistent with the results of the replicated study, no significant difference was found in accuracy between estimates derived from company-specific data and estimates derived from multi-organizational data.
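
As background on the best-performing technique named here, ordinary least-squares regression cost models in this literature are typically fit in log-log form; a generic textbook specification (not the model fitted in the paper) is

    \ln(\mathit{Effort}) = \beta_0 + \beta_1 \ln(\mathit{Size}) + \varepsilon, \quad \text{i.e.} \quad \mathit{Effort} = e^{\beta_0}\,\mathit{Size}^{\beta_1}\,e^{\varepsilon}

where Size is measured in, for example, KLOC or function points, and the coefficients are estimated by least squares from the project data set.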

Characterization of risky projects based on project managers' evaluation
Osamu Mizuno, Tohru Kikuno, Yasunari Takagi, Keishi Sakamoto
Pages: 387-395
doi: 10.1145/337180.337226
During the process of software development, senior managers often find indications that projects are risky and take appropriate actions to recover them from this dangerous status. If senior managers fail to detect such risks, it is possible that such projects may collapse completely. In this paper, we propose a new scheme for the characterization of risky projects based on an evaluation by the project manager. In order to acquire the relevant data to make such an assessment, we first designed a questionnaire covering five viewpoints on the projects: requirements, estimations, team organization, planning capability and project management activities. Each of these viewpoints consisted of a number of concrete questions. We then analyzed the responses to the questionnaires as provided by project managers by applying a logistic regression analysis. That is, we determined the coefficients of the logistic model from a set of the questionnaire responses. The experimental results using actual project data in Company A showed that 27 projects out of 32 were predicted correctly. Thus we expect that the proposed characterization scheme is a first step toward predicting which projects are risky at an early phase of development.
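
The logistic model mentioned here has the standard form (this is the generic logistic regression equation; the particular predictor variables in the paper come from the questionnaire):

    P(\mathit{risky}) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n)}}

where each x_i encodes the project manager's answers for one group of questions and the \beta_i are the coefficients determined from past project data. A project is then flagged as risky when P exceeds a chosen threshold; the thresholding step is our illustrative detail, not necessarily the paper's decision rule.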
|
|
|
Implementing incremental code migration with XML |
| |
Wolfgang Emmerich,
Cecilia Mascolo,
Anthony Finkelstein
|
|
Pages: 397-406 |
|
doi>10.1145/337180.337227 |
|
Full text: PDF
|
|
We demonstrate how XML and related technologies can be used for code mobility at any granularity, thus overcoming the restrictions of existing approaches. By not fixing a particular granularity for mobile code, we enable complete programs as well as individual lines of code to be sent across the network. We define the concept of incremental code mobility as the ability to migrate and add, remove, or replace code fragments (i.e., increments) in a remote program. The combination of fine-grained and incremental migration achieves a previously unavailable degree of flexibility. We examine the application of incremental and fine-grained code migration to a variety of domains, including user interface management, application management on mobile thin clients (for example, PDAs), and management of distributed documents.
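A minimal sketch of the idea, assuming a hypothetical XML increment format (the paper's own schema is not reproduced here): a code fragment arrives as XML and replaces one named unit of a receiving program's code store, at a granularity of a single function.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML "code increment": replace one function in a remote
# program, in the spirit of incremental code mobility.
increment = """
<increment program="greeter" action="replace" unit="greet">
  <code>
def greet(name):
    return f"Hello, {name}!"
  </code>
</increment>
"""

# The receiving side's code store: unit name -> source fragment.
program = {"greet": "def greet(name):\n    return name\n"}

root = ET.fromstring(increment)
unit = root.get("unit")
source = root.find("code").text

if root.get("action") == "replace":
    program[unit] = source  # swap in the migrated fragment

# Activate the new fragment and call it.
namespace = {}
exec(program[unit], namespace)
print(namespace["greet"]("world"))  # Hello, world!
```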
|
|
|
Principled design of the modern Web architecture |
| |
Roy T. Fielding,
Richard N. Taylor
|
|
Pages: 407-416 |
|
doi>10.1145/337180.337228 |
|
Full text: PDF
|
|
The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. In this paper, we introduce the Representational State Transfer (REST) architectural style, developed as an abstract model of the Web architecture to guide our redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers. We describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. We then compare the abstract model to the currently deployed Web architecture in order to elicit mismatches between the existing protocols and the applications they are intended to support.
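As a small reminder of what REST's uniform interface looks like on the wire, the sketch below issues one stateless HTTP request and inspects the self-descriptive metadata that intermediaries such as caches rely on. It assumes network access to www.example.com; any public host would do.

```python
import http.client

# Each REST interaction is stateless and self-descriptive: the request names
# a resource by URI, and the response carries representation metadata that
# intermediaries (caches, proxies) can act on without knowing the application.
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/", headers={"Accept": "text/html"})
response = conn.getresponse()

print(response.status)                     # e.g. 200
print(response.getheader("Content-Type"))  # representation format
print(response.getheader("Cache-Control")) # guidance for layered caches
body = response.read()                     # the representation itself
conn.close()
```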
|
|
|
A study on exception detection and handling using aspect-oriented programming |
| |
Martin Lippert,
Cristina Videira Lopes
|
|
Pages: 418-427 |
|
doi>10.1145/337180.337229 |
|
Full text: PDF
|
|
Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java. We found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.
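AspectJ expresses this as advice woven into Java code; as a language-neutral sketch of the same separation, the decorator below plays the role of an around advice that concentrates exception detection and handling in one place, keeping the core logic untangled. The names (handles_io_errors, read_config) and the handled exception class are illustrative, not taken from the JWAM study.

```python
import functools

# The "aspect": one place that detects and handles a class of exceptions,
# instead of try/except blocks tangled through every method. The decorator
# plays the role that around advice plays in AspectJ.
def handles_io_errors(default=None):
    def aspect(func):
        @functools.wraps(func)
        def around_advice(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except OSError as exc:
                print(f"[aspect] {func.__name__} failed: {exc}")
                return default
        return around_advice
    return aspect

# The core logic stays free of exception-handling code.
@handles_io_errors(default="")
def read_config(path):
    with open(path) as f:
        return f.read()

print(repr(read_config("missing.cfg")))  # handled by the aspect -> ''
```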
|
|
|
A case study in root cause defect analysis |
| |
Marek Leszak,
Dewayne E. Perry,
Dieter Stoll
|
|
Pages: 428-437 |
|
doi>10.1145/337180.337232 |
|
Full text: PDF
|
|
There are three interdependent factors that drive our software development processes: interval, quality and cost. As market pressures continue to demand new features ever more rapidly, the challenge is to meet those demands while increasing, or at least not sacrificing, quality. One advantage of defect prevention as an upstream quality improvement practice is the beneficial effect it can have on interval: higher quality early in the process results in fewer defects to be found and repaired in the later parts of the process, thus causing an indirect interval reduction. We report a retrospective root cause defect analysis study of the defect Modification Requests (MRs) discovered while building, testing, and deploying a release of a transmission network element product. We subsequently introduced this analysis methodology into new development projects as an in-process measurement collection requirement for each major defect MR. We present the experimental design of our case study, discussing the novel approach we have taken to defect and root cause classification and the mechanisms we have used for randomly selecting the MRs to analyze and collecting the analyses via a web interface. We then present the results of our analyses of the MRs and describe the defects and root causes that we found, and delineate the countermeasures created to either prevent those defects and their root causes or detect them at the earliest possible point in the development process. We conclude with lessons learned from the case study and resulting ongoing improvement activities.
|
|
|
Bandera: extracting finite-state models from Java source code |
| |
James C. Corbett,
Matthew B. Dwyer,
John Hatcliff,
Shawn Laubach,
Corina S. Păsăreanu,
Robby,
Hongjun Zheng
|
|
Pages: 439-448 |
|
doi>10.1145/337180.337234 |
|
Full text: PDF
|
|
Finite-state verification techniques, such as model checking, have shown promise as a cost-effective means for finding defects in hardware designs. To date, the application of these techniques to software has been hindered by several obstacles. Chief among these is the problem of constructing a finite-state model that approximates the executable behavior of the software system of interest. Current best-practice involves hand-construction of models, which is expensive (prohibitive for all but the smallest systems), prone to errors (which can result in misleading verification results), and difficult to optimize (which is necessary to combat the exponential complexity of verification algorithms). In this paper, we describe an integrated collection of program analysis and transformation components, called Bandera, that enables the automatic extraction of safe, compact finite-state models from program source code. Bandera takes as input Java source code and generates a program model in the input language of one of several existing verification tools; Bandera also maps verifier outputs back to the original source code. We discuss the major components of Bandera and give an overview of how it can be used to model check correctness properties of Java programs.
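A toy illustration of the end product of such model extraction: a hand-written finite-state model of two processes sharing a lock, exhaustively explored for a mutual-exclusion violation. Bandera automates the extraction step and hands the model to verifiers such as Spin; the plain reachability check below only sketches the underlying idea.

```python
from collections import deque

# A toy finite-state model of two processes sharing a lock. A state is
# (pc0, pc1, lock); each pc is 'idle', 'trying', or 'critical'.
def successors(state):
    pcs, lock = list(state[:2]), state[2]
    for i in range(2):
        if pcs[i] == "idle":
            yield tuple(pcs[:i] + ["trying"] + pcs[i+1:]) + (lock,)
        elif pcs[i] == "trying" and not lock:
            yield tuple(pcs[:i] + ["critical"] + pcs[i+1:]) + (True,)
        elif pcs[i] == "critical":
            yield tuple(pcs[:i] + ["idle"] + pcs[i+1:]) + (False,)

# Exhaustively search the state space for a mutual-exclusion violation.
initial = ("idle", "idle", False)
seen, queue = {initial}, deque([initial])
while queue:
    state = queue.popleft()
    assert not (state[0] == "critical" and state[1] == "critical"), state
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
print(f"verified {len(seen)} reachable states, mutual exclusion holds")
```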
|
|
|
Quickly detecting relevant program invariants |
| |
Michael D. Ernst,
Adam Czeisler,
William G. Griswold,
David Notkin
|
|
Pages: 449-458 |
|
doi>10.1145/337180.337240 |
|
Full text: PDF
|
|
Explicitly stated program invariants can help programmers by characterizing certain aspects of program execution and identifying program properties that must be preserved when modifying code. Unfortunately, these invariants are usually absent from code. Previous work showed how to dynamically detect invariants from program traces by looking for patterns in and relationships among variable values. A prototype implementation, Daikon, accurately recovered invariants from formally-specified programs, and the invariants it detected in other programs assisted programmers in a software evolution task. However, Daikon suffered from reporting too many invariants, many of which were not useful, and also failed to report some desired invariants. This paper presents, and gives experimental evidence of the efficacy of, four approaches for increasing the relevance of invariants reported by a dynamic invariant detector. One of them — exploiting unused polymorphism — adds desired invariants to the output. The other three — suppressing implied invariants, limiting which variables are compared to one another, and ignoring unchanged values — eliminate undesired invariants from the output and also improve runtime by reducing the work done by the invariant detector.
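A minimal sketch of the underlying detection loop, with an invented program point and a tiny candidate set (Daikon's actual grammar of invariants is far richer): candidates are checked against each trace sample, and a single counterexample suffices to discard one.

```python
# Dynamic invariant detection in miniature: run the program, record variable
# values at a program point, keep only candidates no sample falsifies.
def program_point_samples():
    for n in range(1, 20):
        i, total = 0, 0
        while i < n:
            total += i
            i += 1
        yield {"i": i, "n": n, "total": total}  # values at loop exit

candidates = {
    "i == n":     lambda v: v["i"] == v["n"],
    "i <= n":     lambda v: v["i"] <= v["n"],
    "total >= 0": lambda v: v["total"] >= 0,
    "total == n": lambda v: v["total"] == v["n"],  # should be falsified
}

surviving = dict(candidates)
for values in program_point_samples():
    for name, holds in list(surviving.items()):
        if not holds(values):
            del surviving[name]  # one counterexample kills a candidate

print(sorted(surviving))  # ['i <= n', 'i == n', 'total >= 0']
```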
|
|
|
Characterizing implicit information during peer review meetings |
| |
Patrick d'Astous,
Pierre N. Robillard
|
|
Pages: 460-466 |
|
doi>10.1145/337180.343189 |
|
Full text: PDF
|
|
|
|
|
Object-oriented inspection in the face of delocalisation |
| |
Alastair Dunsmore,
Marc Roper,
Murray Wood
|
|
Pages: 467-476 |
|
doi>10.1145/337180.337343 |
|
Full text: PDF
|
|
Software inspection is now widely accepted as an effective technique for defect detection. This acceptance is largely based on studies using procedural program code. This paper presents empirical evidence that raises significant questions about the application of inspection to object-oriented code. A detailed analysis of the 'hard to find' defects during an inspection experiment shows that many of them can be characterised as 'delocalised' — the information needed to recognise the defect is distributed throughout the software. The paper shows that key features of object-oriented technology are likely to exaggerate delocalisation. As a result, it is argued that new methods of inspection for object-oriented code are required. These must address: partitioning code for inspection (“what to read”), reading strategies (“how to read”), and support for understanding what isn't read — “localising the delocalisation”.
|
|
|
An inheritance-based technique for building simulation proofs incrementally |
| |
Idit Keidar,
Roger Khazan,
Nancy Lynch,
Alex Shvartsman
|
|
Pages: 478-487 |
|
doi>10.1145/337180.337358 |
|
Full text: PDF
|
|
This paper presents a technique for incrementally constructing safety specifications, abstract algorithm descriptions, and simulation proofs showing that algorithms meet their specifications. The technique for building specifications (and algorithms) allows a child specification (or algorithm) to inherit from its parent by two forms of incremental modification: (a) interface extension, where new forms of interaction are added to the parent's interface, and (b) specialization (subtyping), where new data, restrictions, and effects are added to the parent's behavior description. The combination of interface extension and specialization constitutes a powerful and expressive incremental modification mechanism for describing changes that do not override the behavior of the parent, although they may introduce new behavior. Consider the case when incremental modification is applied to both a parent specification S and a parent algorithm A. A proof that the child algorithm A′ implements the child specification S′ can be built incrementally upon the simulation proof that algorithm A implements specification S. The new work required involves reasoning about the modifications, but does not require repetition of the reasoning in the original simulation proof. The paper presents the technique mathematically, in terms of automata. The technique has already been used to model and validate a full-fledged group communication system (see [26]); the methodology and results of that experiment are summarized in this paper.
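A small sketch of what checking a forward simulation amounts to, using two made-up automata given as transition tables and a candidate relation R: every step of the algorithm from a related pair must be matched by a specification step on the same action that lands back in R.

```python
# Invented automata: A counts 0,1,2 and resets; S tracks parity.
A_steps = {("a0", "inc"): "a1", ("a1", "inc"): "a2", ("a2", "reset"): "a0"}
S_steps = {("s_even", "inc"): "s_odd", ("s_odd", "inc"): "s_even",
           ("s_even", "reset"): "s_even"}
R = {("a0", "s_even"), ("a1", "s_odd"), ("a2", "s_even")}

def is_forward_simulation(A, S, relation):
    for (a, s) in relation:
        for (a_src, action), a_dst in A.items():
            if a_src != a:
                continue
            s_dst = S.get((s, action))  # S must be able to match the action
            if s_dst is None or (a_dst, s_dst) not in relation:
                return False
    return True

print(is_forward_simulation(A_steps, S_steps, R))  # True
```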
|
|
|
Verification of time partitioning in the DEOS scheduler kernel |
| |
John Penix,
Willem Visser,
Eric Engstrom,
Aaron Larson,
Nicholas Weininger
|
|
Pages: 488-497 |
|
doi>10.1145/337180.337364 |
|
Full text: PDF
|
|
This paper describes an experiment to use the Spin model checking system to support automated verification of time partitioning in the Honeywell DEOS real-time scheduling kernel. The goal of the experiment was to investigate whether model checking could be used to find a subtle implementation error that was originally discovered and fixed during the standard formal review process. To conduct the experiment, a core slice of the DEOS scheduling kernel was first translated without abstraction from C++ into Promela (the input language for Spin). We constructed an abstract “test-driver” environment and carefully introduced several abstractions into the system to support verification. Several experiments were run to attempt to verify that the system implementation adhered to the critical time partitioning requirements. During these experiments, the known error was rediscovered in the time partitioning implementation. We believe this case study provides several insights into how to develop cost-effective methods and tools to support the software design and implementation review process.
|
|
|
Graphical animation of behavior models |
| |
Jeff Magee,
Nat Pryce,
Dimitra Giannakopoulou,
Jeff Kramer
|
|
Pages: 499-508 |
|
doi>10.1145/337180.337368 |
|
Full text: PDF
|
|
Graphical animation is a way of visualizing the behavior of design models. This visualization is of use in validating a design model against informally specified requirements and in interpreting the meaning and significance of analysis results in relation to the problem domain. In this paper we describe how behavior models specified by Labeled Transition Systems (LTS) can drive graphical animations. The semantic framework for the approach is based on Timed Automata. Animations are described by an XML document that is used to generate a set of JavaBeans. The elaborated JavaBeans perform the animation actions as directed by the LTS model.
|
|
|
Towards the principled design of software engineering diagrams |
| |
Corin Gurr,
Konstantinos Tourlas
|
|
Pages: 509-518 |
|
doi>10.1145/337180.337371 |
|
Full text: PDF
|
|
Diagrammatic specification, modelling and programming languages are increasingly prevalent in software engineering and, it is often claimed, provide natural representations which permit of intuitive reasoning. A desirable goal of software engineering is the rigorous justification of such reasoning, yet many formal accounts of diagrammatic languages confuse or destroy any natural reading of the diagrams. Hence they cannot be said to be intuitive. The answer, we feel, is to examine seriously the meaning and accuracy of the terms “natural” and “intuitive” in this context. This paper highlights, and illustrates by means of examples taken from industrial practice, an ongoing research theme of the authors. We take a deeper and more cognitively informed consideration of diagrams which leads us to a more natural formal underpinning that permits (i) the formal justification of informal intuitive arguments, without placing the onus of formality upon the engineer constructing the argument; and (ii) a principled approach to the identification of intuitive (and counter-intuitive) features of diagrammatic languages.
|
|
|
From MCC and CMM: technology transfers bright and dim |
| |
Bill Curtis
|
|
Pages: 521-530 |
|
doi>10.1145/337180.337375 |
|
Full text: PDF
|
|
This paper describes lessons learned during the author's five lives in technology transfer. The author's first life came in General Electric's Space Division where he performed research on software metrics and structured programming, and transferred technology to the pages of technical journals. His second life came at ITT's Programming Technology Center where he was responsible for transferring software measurement practices into common use across ITT's worldwide software operations. Some measurement initiatives survived, but most were short-lived. His third life came in MCC's Human Interface Laboratory and Software Technology Program. MCC's member companies were only occasionally able to transfer the advanced technology they challenged MCC to produce. His fourth life came in directing the Software Process Program at the Software Engineering Institute where he led the team that produced the Capability Maturity Model. Although the CMM's transfer was occasionally too rapid to control, the CMM suggested that you should transfer no technology before its time. The author's fifth and current life involves co-founding TeraQuest and helping companies to improve their software development capability. The paper includes twenty-five lessons in technology transfer and one model.
|
|
|
Fraunhofer: the German model for applied research and technology transfer |
| |
Dieter Rombach
|
|
Pages: 531-537 |
|
doi>10.1145/337180.337443 |
|
Full text: PDF
|
|
The Fraunhofer Gesellschaft e.V. in Germany is Europe's largest and most successful organization for applied research and technology transfer. Its 48 institutes cover all areas of technology and engineering ranging from materials and production technology to information & communication technology and solar energy. The Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern, Germany, focuses on software engineering methods, software product and process management, and learning organization concepts for software. It applies an experiment- or feedback-based transfer model, which has led to many successful and sustained improvements in the industrial practice of software development. In this presentation, the underlying transfer model, key business areas, and core competencies of Fraunhofer IESE, as well as examples of industrial transfer projects, will be illustrated. The presentation will conclude with arguments why this transfer approach is well suited for software development and why it is a prerequisite for the professionalization of software development.
|
|
|
Software development engineer in Microsoft: a subjective view of soft skills required |
| |
Martin Orsted
|
|
Pages: 539-540 |
|
doi>10.1145/337180.337445 |
|
Full text: PDF
|
|
This paper is a position statement. There are important requirements on software development engineers that go beyond the normal academic qualifications and technical skills, and which quite often receive a lower priority in education and training.
|
|
|
Software needs engineering: a position paper |
| |
Jane B. Grimson,
Hans-Jürgen Kugler
|
|
Pages: 541-544 |
|
doi>10.1145/337180.337446 |
|
Full text: PDF
|
|
When the general press refers to 'software' in its headlines, then this is often not to relate a success story, but to expand on yet another 'software-risk-turned-problem-story.' For many people the term 'software' evokes the image of an application package running either on a PC or in some similar stand-alone usage. Over 70% of all software, however, is not developed in the traditional software houses as part of the creation of such packages. Much of this software comes in the form of products and services that end users would not readily associate with software. These can be complex systems with crucial connections made through software, such as telecommunications or banking systems, or the logistics systems of airports. Or they can be end-user products with embedded software, ranging from battery management systems in electric shavers, through mobile phones, to engine management and safety systems in cars. e-Commerce systems fall into this category, too. Yes, there is software that works reliably and as expected, and there are professional approaches to create such products — one can engineer software, in the right environment, with the right people.
|
|
|
Is software education narrow-minded?: a position paper |
| |
Peter Morrogh
|
|
Pages: 545-546 |
|
doi>10.1145/337180.337451 |
|
Full text: PDF
|
|
The content of computer science and software engineering courses needs to be examined so that students are better prepared to cope with the challenges of a rapidly changing software industry.
|
|
|
An approach to architectural analysis of product lines |
| |
Gerald C. Gannod,
Robyn R. Lutz
|
|
Pages: 548-557 |
|
doi>10.1145/337180.337455 |
|
Full text: PDF
|
|
This paper addresses the issue of how to perform architectural analysis on an existing product line architecture. The contribution of the paper is to identify and demonstrate a repeatable product line architecture analysis process. The approach defines a “good” product line architecture in terms of those quality attributes required by the particular product line under development. It then analyzes the architecture against these criteria by both manual and tool-supported methods. The phased approach described in this paper provides a structured analysis of an existing product line architecture using (1) formal specification of the high-level architecture, (2) manual analysis of scenarios to exercise the architecture's support for required variabilities, and (3) model checking of critical behaviors at the architectural level that are required for all systems in the product line. Results of an application to a software product line of spaceborne telescopes are used to explain and evaluate the approach.
|
|
|
Introducing a software modeling concept in a medium-sized company |
| |
Klaus Schmid,
Ulrike Becker-Kornstaedt,
Peter Knauber,
Florian Bernauer
|
|
Pages: 558-567 |
|
doi>10.1145/337180.337461 |
|
Full text: PDF
|
|
In this paper, we describe, using the Quality Improvement Paradigm (QIP), how an improvement project aimed at improving the modeling and documentation approach of a medium-sized company (MSuD) was conducted. We discuss the new modeling approach, which may serve other companies as a template for deriving their own adapted approach. Further, we present insights from this project that can help in future technology transfer projects. A major characteristic of this project was that it was embedded in a long-term consulting relationship.
|
|
|
From research to reward: challenges in technology transfer |
| |
Adrian M. Colyer
|
|
Pages: 569-576 |
|
doi>10.1145/337180.337467 |
|
Full text: PDF
|
|
Over a five-year period the Applied Science & Technology group of IBM's Hursley Laboratory in England turned itself from a fully-funded research organisation into an entirely self-funded technology transfer group. Much practical experience and insight was gained into the questions of: What are the obstacles to overcome in successful technology transfer? How to find a match between technology and customer? How best to manage risk and expectation? To be successful, a technology transfer group needs to be correctly positioned within its sponsoring organisation, use management processes that provide flexibility and control, and develop a sophisticated engagement model for working with its customers.
|
|
|
Technology transfer macro-process: a practical guide for the effective introduction of technology |
| |
Tetsuto Nishiyama,
Kunihiko Ikeda,
Toru Niwa
|
|
Pages: 577-586 |
|
doi>10.1145/337180.337470 |
|
Full text: PDF
|
|
In our efforts to increase software development productivity, we have worked to introduce numerous software development techniques and technologies into various target organizations. Through these efforts, we have come to understand the difficulties involved in technology transfer. Some of the major hurdles that these organizations face during technology transfers are tight schedules and budgets. We have made efforts to lighten this load by using various customization techniques, and have defined an overall process called the Technology Transfer Macro-Process that we can use to introduce a wide variety of software development techniques and technologies into a target organization. This paper introduces this simple and practical process, along with important methods and concepts such as the Process Plug-in Method and the Process Warehouse, for the introduction of new tools, technologies, and processes within an organization. The issue of initial productivity loss will also be discussed, and a suggestion on how to avoid it will be made. These methods have been successfully used to introduce object-oriented technology (OOT) into actual development projects and have helped to increase overall productivity within the target development organizations.
|
|
|
When the project absolutely must get done: marrying the organization chart with the precedence diagram |
| |
Stan Rifkin
|
|
Pages: 588-596 |
|
doi>10.1145/337180.337475 |
|
Full text: PDF
|
|
Very little is new in project planning, but this is! We present a technique to marry the organization chart with a project's task precedence diagram. This permits us to simulate the project at a micro, project-specific level never before achieved. We can perform “what-if” scenarios related to organization structures, the deployment of specific individuals and skills, and the structure of information flow and exception-handling in a project. The tool used, ViteProject, was developed over the last ten years in a Stanford University laboratory, where substantial results have been achieved in applying it to design activities other than software. We present our real-world experience with several software projects, where it has improved project visibility and allowed us to rationally optimize projects in a way hitherto impossible.
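ViteProject's simulation additionally models information flow and rework; as a much simpler sketch of marrying the organization chart (who does a task) with the precedence diagram (what must finish first), the code below computes earliest finish times over an invented task graph. Rerunning with edited data is the "what-if" loop in miniature.

```python
from graphlib import TopologicalSorter

# Hypothetical project: task -> (owner from the org chart, duration in days,
# predecessor tasks from the precedence diagram).
tasks = {
    "spec":   ("analyst", 3, []),
    "design": ("architect", 5, ["spec"]),
    "code":   ("developer", 10, ["design"]),
    "test":   ("tester", 4, ["code"]),
    "docs":   ("writer", 6, ["design"]),
}

# Earliest finish per task, walking the precedence diagram in topological order.
finish = {}
order = TopologicalSorter({t: deps for t, (_, _, deps) in tasks.items()})
for t in order.static_order():
    owner, days, deps = tasks[t]
    start = max((finish[d] for d in deps), default=0)
    finish[t] = start + days
    print(f"{t:6s} {owner:10s} day {start:2d} -> day {finish[t]:2d}")

print("project duration:", max(finish.values()), "days")
```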
|
|
|
An evaluation of the paired comparisons method for software sizing |
| |
Eduardo Miranda
|
|
Pages: 597-604 |
|
doi>10.1145/337180.337477 |
|
Full text: PDF
|
|
This paper evaluates the accuracy, precision and robustness of the paired comparisons method for software sizing and concludes that the results produced by it are superior to the so-called “expert” approaches.
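A minimal sketch of the paired comparisons method under common simplifying assumptions: judges supply ratio judgments r[i][j] ≈ size(i)/size(j), relative sizes are recovered as geometric means of the rows, and one module of known size calibrates the scale. The matrix below is invented and perfectly consistent; real judgment matrices are not, which is where the method's robustness matters.

```python
import math

# Judgment matrix: r[i][j] is the judged size ratio of module i to module j.
modules = ["parser", "ui", "db"]
r = [
    [1.0, 2.0, 0.5],   # parser vs parser, ui, db
    [0.5, 1.0, 0.25],  # ui ...
    [2.0, 4.0, 1.0],   # db ...
]

# Relative sizes: geometric mean of each row.
n = len(modules)
weights = [math.prod(row) ** (1.0 / n) for row in r]

# Calibrate with a reference: suppose "parser" is known to be 4 KLOC.
scale = 4.0 / weights[0]
for name, w in zip(modules, weights):
    print(f"{name:6s} ~ {w * scale:.1f} KLOC")
```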
|
|
|
Grow fast, grow global: how the Irish software industry evolved to this business model |
| |
Barry Murphy
|
|
Pages: 606-607 |
|
doi>10.1145/337180.337480 |
|
Full text: PDF
|
|
|
|
|
The making of Orbix and the iPortal suite |
| |
Sean Baker
|
|
Pages: 609-616 |
|
doi>10.1145/337180.337484 |
|
Full text: PDF
|
|
IONA released the first full implementation of the CORBA standard in August 1992, and our first product, Orbix, has become the most successful object request broker, capturing almost 70 percent of this market. It has spawned many follow-on products from IONA and from partner companies. This development followed nearly ten years of research in the area of distributed object systems within Trinity College Dublin, centered on language support for developers of distributed systems. This paper captures some of the lessons we've learned in the transition from academia to business. We've had to learn many Software Engineering skills. We've had to find the right mix between engineering, marketing and sales expertise. We've had to learn how to release new products while staying committed to current ones. And most recently, we've had to learn how to become an Internet company in order to deliver the iPortal Suite. For IONAians, it has been a fascinating journey.
|
|
|
Improvement of a configuration management system |
| |
Frank Titze
|
|
Pages: 618-625 |
|
doi>10.1145/337180.337488 |
|
Full text: PDF
|
|
The company CAD-UL AG develops software tools for embedded systems. Individual tools such as compilers, linkers, and debuggers are offered, as well as complete development tool chains for the software development process. In contrast to application software for personal computers, embedded systems require very specialized software with highly optimized and exhaustively tested code. Since the previously existing configuration management was not efficient in comparison to the state of the art in software engineering, an improvement was implemented by the introduction of a modern Configuration Management (CM) system [1]. In this paper, CAD-UL presents the results and experiences of the European Systems & Software Initiative Process Improvement Experiment (ESSI-PIE) ICMS with the new configuration management system.
|
|
|
Applying and adjusting a software process improvement model in practice: the use of the IDEAL model in a small software enterprise |
| |
Karlheinz Kautz,
Henrik Westergaard Hansen,
Kim Thaysen
|
|
Pages: 626-633 |
|
doi>10.1145/337180.337492 |
|
Full text: PDF
|
|
Software process improvement is a demanding and complex undertaking. To support the constitution and implementation of software process improvement schemes, the Software Engineering Institute (SEI) proposes a framework, the so-called IDEAL model. This model is based on experiences from large organizations. The aim of the research described here was to investigate the suitability of the model for small software enterprises. It has therefore been deployed and adjusted for successful use in a small Danish software company. The course of the project and the application of the model are presented, and the case is reflected against the background of current knowledge about managing software process improvement as organizational change.
|
|
|
European experiences with software process improvement |
| |
Fran O'Hara
|
|
Pages: 635-640 |
|
doi>10.1145/337180.337495 |
|
Full text: PDF
|
|
Assessment models used include SPICE (ISO/IEC TR 15504) [1] and the Software Engineering Institute's CMM [2] (one organisation also achieved ISO9001 certification).
|
|
|
Software process improvement by object technology (ESSI PIE 27785 — SPOT) |
| |
Antonio Caliò,
Massimo Autiero,
Giuseppe Bux
|
|
Pages: 641-647 |
|
doi>10.1145/337180.337497 |
|
Full text: PDF
|
|
This paper describes the ongoing experience of Caliò Informatica Srl in a Process Improvement Experiment (PIE) sponsored by the Community's ESSI (European Systems and Software Initiative) program. The experiment concerns the improvement of two primary Software Life Cycle (SLC) processes, namely Analysis and Design, by adopting Object Oriented technology, and in particular the UML method. Rational Rose is the technology supporting the improvement. The PIE is being carried out on top of a strategic baseline project deployed by Caliò Informatica in the business domain of enterprise management applications. The main benefits achieved concern: increased professional skills and technical capability of Caliò's personnel; achievement of higher customer satisfaction; better resource allocation in software projects; improvement of product quality and robustness, owing to better modularization and structuring; and the set-up of a reusable software component library.
|
|
|
Daily build and feature development in large distributed projects |
| |
Even-André Karlsson,
Lars-Göran Andersson,
Per Leion
|
|
Pages: 649-658 |
|
doi>10.1145/337180.337498 |
|
Full text: PDF
|
|
Daily build is a software development paradigm that originated in the PC industry to get control of the development process, while still allowing the focus on end user requirements and code. The PC industry used daily build to avoid chaos in increasingly large applications in an environment without a strong development process. Ericsson Radio Systems has chosen to implement daily build to increase the focus on end user requirements and code, but from a different starting point, with a traditionally strong development process. In this article we discuss our experiences with daily build and feature-oriented development in this context. We also relate our experience to the concept of extreme programming, arguing that our ideas can help extend the applicability of extreme programming beyond small co-located projects.
|
|
|
Why don't we get more (self?) respect: the positive impact of software engineering research upon practice |
| |
Barry W. Boehm,
Mike Evangelist,
Volker Gruhn,
Jeff Kramer,
Edward F. Miller, Jr. /
Leon Osterweil
|
|
Page: 660 |
|
doi>10.1145/337180.343191 |
|
Full text: PDF
|
|
Software vendors rarely acknowledge their debt to research, indeed often are unaware of it, and rarely even appreciate the importance of such acknowledgement. The long lead times, and tortuous adoption paths, for software engineering research contributions also cloud perception of the actual source of popularly adopted software engineering technologies. Whatever the reasons, this panel proposes to address the problem of lack of appreciation of software engineering research by presenting evidence of the ways in which the work of the community has had tangible, and often substantial, impact. The case studies we will trace include: the growth of software design and architecture from the early work of Parnas, Jackson, et al.; the growth of software testing and analysis technology from the early work of Howden, Miller, et al.; and the growth of software measurement from the work of Boehm and others. It is hoped that this panel will prove to be the springboard for a larger community effort to document in a scholarly and articulate way the successes and impacts of our community. It is hoped that this documentation will lead to improved self-image, greater respect from other communities, and a more favorable attitude from funding sources. Part of the panel discussion will focus on how to achieve these goals.
|
|
|
Component-based software engineering and the issue of trust |
| |
Bill Councill,
Janet S. Flynt,
Alok Mehta,
John R. Speed,
Mary Shaw /
George T. Heineman
|
|
Pages: 661-664 |
|
doi>10.1145/337180.337501 |
|
Full text: PDF
|
|
Software component consumers are entitled to trusted components. This panel addresses the criteria for trusted components and presents generally accepted definitions for all terms used to describe both software components and the methods and processes required to verify trusted software components.
|
|
|
Shortages of qualified software engineering faculty and practitioners (panel session): challenges in breaking the cycle |
| |
Günther Ruhe,
Donald J. Bagert,
Helen Edwards,
Michael Ryan /
Nancy R. Mead,
Hossein Saiedian
|
|
Pages: 665-668 |
|
doi>10.1145/337180.337504 |
|
Full text: PDF
|
|
One of the most serious issues facing the software engineering education community is the lack of qualified tenure-track (full-time) faculty to teach software engineering courses, particularly at the undergraduate level. Similarly, one of the most serious issues facing the software industry is the lack of qualified junior and senior software engineers. This shortage cycle has existed for some time, and if not addressed properly it will only worsen, thereby affecting the software engineering field in a more general way than it has already. The objective of this panel is to put a number of suggestions for improvement into discussion and debate in order to evaluate their potential and viability.
|
|
|
Who needs doctors? (panel session) (abstract only) |
| |
Jeff Magee,
Mauro Pezzè
|
|
Page: 669 |
|
doi>10.1145/337180.343193 |
|
Full text: PDF
|
|
|
|
|
Lessons learned from teaching reflective software engineering using the Leap toolkit |
| |
Carlton A. Moore
|
|
Pages: 672-675 |
|
doi>10.1145/337180.337508 |
|
Full text: PDF
|
|
|
|
|
Can quality graduate software engineering courses really be delivered asynchronously on-line? |
| |
Stephen Edwards
|
|
Pages: 676-679 |
|
doi>10.1145/337180.337512 |
|
Full text: PDF
|
|
This article briefly presents a case study in on-line asynchronous course delivery. It sketches the design of a graduate computer science course entitled “Software Design and Quality,” illustrating an effective approach to distance learning that accommodates learning by doing, team collaboration, and critical thinking. It also shows that there are effective alternatives to “canned” streaming media presentations that achieve quality on-line education.
|
|
|
Multibook's test environment |
| |
Nathalie Poerwantoro,
Abdulmotaleb El Saddik,
Bernd Krämer,
Ralf Steinmetz
|
|
Pages: 680-683 |
|
doi>10.1145/337180.337514 |
|
Full text: PDF
|
|
Well-engineered Web-based courseware and exercises provide flexibility and added value to students, going beyond traditional textbook or CD-ROM-based courses. The Multibook project explores the boundaries of customized learning materials by composing learning trails dynamically, based on the profile learners set when accessing a course. In this paper we first give an overview of the core project ideas and illustrate them using our Software Engineering course. Then we present a novel extension to the project's exercise environment with a graph editing component that particularly fits the needs of structure-related assignments.
|
|
|
E-Slate: a software architectural style for end-user programming |
| |
George Birbilis,
Manolis Koutlis,
Kriton Kyrimis,
George Tsironis,
George Vasiliou
|
|
Pages: 684-687 |
|
doi>10.1145/337180.337521 |
|
Full text: PDF
|
|
|
|
|
An interactive multimedia software house simulation for postgraduate software engineers |
| |
Helen Sharp,
Pat Hall
|
|
Pages: 688-691 |
|
doi>10.1145/337180.337528 |
|
Full text: PDF
|
|
The Open University's M880 Software Engineering is a postgraduate distance education course aimed at software professionals. The case study element of the course (approximately 100 hours of study) is presented through an innovative interactive multimedia simulation of a software house, Open Software Solutions (OSS). The student 'joins' OSS as an employee and performs various tasks as a member of the company's project teams. The course is now in its sixth presentation and has been studied by over 1500 students. In this paper, we present the background to the development, and a description of the environment and student tasks.
|
|
|
LIGHTVIEWS — visual interactive Internet environment for learning OO software testing |
| |
Sita Ramakrishnan
|
|
Pages: 692-695 |
|
doi>10.1145/337180.337532 |
|
Full text: PDF
|
|
The Internet has been recognised not only as a tool for communication in the 21st century but also as an environment for enabling changes in the paradigm of teaching and learning. This paper describes our development effort, sponsored by the Committee of University Teaching Development (CUTSD98) Grant, in designing educational material on Object-Oriented (O-O) testing in an Internet environment. The aim of this work is to enhance the state of the art in learning O-O testing through visualization of the testing process and interactive courseware in virtual communities. We have endeavoured to create an effective Internet-based courseware known as LIGHTVIEWS, which contains O-O testing case studies described by visual images, animation, and interactive lessons, to encourage active participation by learners, resulting in better understanding and knowledge retention. Our approach employs appropriate UML diagrams, makes the diagrams test-ready by including details of constraints as part of state/event transitions, and provides interactive lessons for learning O-O software testing. We have used four case studies to explore the various test selection techniques. We have included black-box testing at the unit level in case study 1, and at the system level in case study 3. Case study 2 has been used to illustrate event-based testing by visually representing the dynamics of Java applets at work, and using interactivity to learn how to test Java applets, threads, and applet communication. Case study 4 explores the various aspects of distributed components testing.
|
|
|
The ICSE2000 doctoral workshop |
| |
Jeff Magee,
Mauro Pezzè
|
|
Page: 697 |
|
doi>10.1145/337180.337538 |
|
Full text: PDF
|
|
Doctoral research in software engineering is a major source of new ideas and of key importance in training scientists for the information technology community. The rapid evolution of information technology is challenging the relevance of doctoral programs in software engineering, which are facing the risk of losing their leading role in training scientists and engineers. Many universities are threatened by a decreasing number of applications and an increasing number of dropouts. A major goal of the ICSE doctoral workshop, at the turn of the millennium, is to promote doctoral study and provide help and encouragement to those engaged in it. Consequently, the ICSE2000 Doctoral Workshop not only provides a forum for graduate students to present and discuss their dissertation research, it also provides (together with a panel session in the main conference) an opportunity to discuss the role of doctoral research in the new information society. An opening talk by Lee Osterweil is intended to give participants a clear view of both the goal of doctoral research and the methodology with which it is carried out. The presentation of doctoral plans and the open discussion between the committee and the invited students is a unique opportunity to compare PhD programs in different institutions and in different countries. The summary panel scheduled as part of the conference program is designed to open the discussion between the academic and the industrial communities on the role that doctoral research plays in the development and evolution of software engineering.
|
|
|
A logical framework for design composition |
| |
Jing Dong
|
|
Pages: 698-700 |
|
doi>10.1145/337180.337542 |
|
Full text: PDF
|
|
The design of a large component-based software system typically involves the composition of different components. The lack of rigorous reasoning about the correctness of composition is an important barrier towards the promise of “plug and play”. In this paper, we describe a rigorous logic framework to reason about component compositions. We focus our analysis on design components, such as design patterns, which have been used by a large number of applications. We also propose methods to verify structural and behavioral composition correctness.
|
|
|
Algorithmic cost estimation for software evolution |
| |
Juan F. Ramil
|
|
Pages: 701-703 |
|
doi>10.1145/337180.337587 |
|
Full text: PDF
|
|
This study addresses the problem of cost estimation in the context of software evolution by building a set of quantitative models and assessing their predictive power. The models aim at capturing the relationship between effort, productivity and a suite of metrics of software evolution extracted from empirical data sets.
|
|
|
Estimating software fault-proneness for tuning testing activities |
| |
Giovanni Denaro
|
|
Pages: 704-706 |
|
doi>10.1145/337180.337592 |
|
Full text: PDF
|
|
|
|
|
Formal verification applied to Java concurrent software |
| |
Radu Iosif
|
|
Pages: 707-709 |
|
doi>10.1145/337180.337594 |
|
Full text: PDF
|
|
Applying existing finite-state verification tools to software systems is not yet easy for a variety of reasons. This research activity aims to integrate formal verification with programming languages currently used in software development. In particular, it focuses on elaborating a formal method for the specification and validation of temporal logic properties concerning the behavior of Java concurrent programs.
|
|
|
Supporting dynamic distributed work processes with a component and event based approach |
| |
Peter J. Kammer
|
|
Pages: 710-712 |
|
doi>10.1145/337180.337596 |
|
Full text: PDF
|
|
|
|
|
Platform-independent and tool-neutral test descriptions for automated software testing |
| |
Chang Liu
|
|
Pages: 713-715 |
|
doi>10.1145/337180.337598 |
|
Full text: PDF
|
|
Current automatic test execution techniques are sensitive to changes in program implementation. Moreover, different test descriptions are required by different testing tools. As a result, it is difficult to maintain or port test descriptions. To address this problem, we developed TestTalk, a comprehensive testing language. TestTalk test descriptions are platform-independent and tool-neutral. The same software test in TestTalk can be automatically executed by different testing tools on different platforms. The goal of TestTalk is to make software test descriptions, which represent a significant portion of a software project, last as long as the software project.
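A sketch of the tool-neutral idea, assuming an invented JSON test description (TestTalk's actual notation differs): the description names abstract operations and expected results, and each runner supplies its own binding from operation names to the implementation under test.

```python
import json

# A tool-neutral, data-driven test description; the format here is invented
# for illustration. The same description could be interpreted by different
# runners on different platforms.
TEST = json.loads("""
{
  "name": "addition works",
  "operation": "add",
  "cases": [
    {"inputs": [1, 2], "expected": 3},
    {"inputs": [-1, 1], "expected": 0}
  ]
}
""")

# One possible runner binding: map abstract operation names onto a concrete
# implementation under test.
BINDINGS = {"add": lambda a, b: a + b}

op = BINDINGS[TEST["operation"]]
for case in TEST["cases"]:
    actual = op(*case["inputs"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    print(f"{TEST['name']}: {case['inputs']} -> {actual} [{status}]")
```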
|
|
|
Contribution to simplifying the mobile agent programming |
| |
Marek Paralič
|
|
Pages: 716-718 |
|
doi>10.1145/337180.337600 |
|
Full text: PDF
|
|
This paper introduces an experimental framework for mobile agents. It utilizes the expressiveness and formal foundation of concurrent constraint programming to solve the problem of system support for dynamic rebinding of non-transferable resources and inter-agent collaboration based on logic variables. The proposed solutions help make agent-based programming easier and more straightforward, and at the same time offer a basis for more sophisticated multi-agent systems.
|
|
|
Spontaneous software: a Web-based, object computing paradigm |
| |
Glêdson Elias da Silveira
|
|
Pages: 719-721 |
|
doi>10.1145/337180.337603 |
|
Full text: PDF
|
|
|
|
|
Automated refactoring to introduce design patterns |
| |
Mel Ó Cinnéide
|
|
Pages: 722-724 |
|
doi>10.1145/337180.337612 |
|
Full text: PDF
|
|
Software systems have to be flexible in order to cope with evolving requirements. However, since it is impossible to predict with certainty what future requirements will emerge, it is also impossible to know exactly what flexibility to build into a system. Design patterns are often used to provide this flexibility, so this question frequently reduces to whether or not to apply a given design pattern. We address this problem by developing a methodology for the construction of automated transformations that introduce design patterns. This enables a programmer to safely postpone the application of a design pattern until the flexibility it provides becomes necessary. Our approach deals with the issues of reuse of existing transformations, preservation of program behaviour and the application of the transformations to existing program code.
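A minimal sketch of an automated, behaviour-preserving transformation in this spirit, using Python's ast module on invented code: direct constructor calls are routed through a newly introduced factory function, a first step toward a Factory Method pattern. Real refactoring tools perform far more analysis before and after such a rewrite.

```python
import ast

source = """
class Button:
    def __init__(self, label):
        self.label = label

def build_ui():
    ok = Button("OK")
    cancel = Button("Cancel")
    return [ok, cancel]
"""

class RouteThroughFactory(ast.NodeTransformer):
    """Rewrite direct calls to a class into calls to a factory function."""
    def __init__(self, cls, factory):
        self.cls, self.factory = cls, factory

    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == self.cls:
            node.func = ast.Name(id=self.factory, ctx=ast.Load())
        return node

tree = RouteThroughFactory("Button", "create_button").visit(ast.parse(source))
# Append the factory itself, delegating to the original constructor.
factory_def = ast.parse("def create_button(label):\n    return Button(label)")
tree.body.extend(factory_def.body)
print(ast.unparse(ast.fix_missing_locations(tree)))
```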
|
|
|
High-integrity code generation for state-based formalisms |
| |
Michael W. Whalen
|
|
Pages: 725-727 |
|
doi>10.1145/337180.337615 |
|
Full text: PDF
|
|
We are attempting to create a translator for a formal state-based specification language (RSML-e) that is suitable for use in safety-critical systems. For such a translator, there are two main concerns: the generated code must be shown to be semantically equivalent to the specification, and it must be fast enough to be used in the intended target environment. We address the first concern by providing a formal proof of the translation, and by keeping the implementation of the tool as simple as possible. The second concern is addressed through a variety of methods: (1) decomposing a specification into parallel subtasks, (2) providing provably-correct optimizations, and (3) making worst-case performance guarantees on the generated code.
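A toy illustration of code generation from a state-based specification, using an invented transition table: the "generated" artifact is a table-driven step function that rejects undefined state/event pairs rather than silently ignoring them, in the defensive style a safety-critical target would demand. RSML-e's semantics and the paper's proof obligations are far richer than this.

```python
# The transition table plays the role of the specification.
SPEC = {
    ("off", "power"): "standby",
    ("standby", "power"): "off",
    ("standby", "start"): "running",
    ("running", "stop"): "standby",
}

def generate_step(table):
    """'Generate' an executable step function from the transition table."""
    def step(state, event):
        if (state, event) not in table:
            raise ValueError(f"no transition for {(state, event)}")
        return table[(state, event)]
    return step

step = generate_step(SPEC)
state = "off"
for event in ["power", "start", "stop"]:
    state = step(state, event)
    print(event, "->", state)  # standby, running, standby
```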
|
|
|
Alcoa: the alloy constraint analyzer |
| |
Daniel Jackson,
Ian Schechter,
Ilya Shlyakhter
|
|
Pages: 730-733 |
|
doi>10.1145/337180.337616 |
|
Full text: PDF
|
|
Alcoa is a tool for analyzing object models. It has a range of uses. At one end, it can act as a support tool for object model diagrams, checking for consistency of multiplicities and generating sample snapshots. At the other end, it embodies a lightweight formal method in which subtle properties of behaviour can be investigated. Alcoa's input language, Alloy, is a new notation based on Z. Its development was motivated by the need for a notation that is more closely tailored to object models (in the style of UML), and more amenable to automatic analysis. Like Z, Alloy supports the description of systems whose state involves complex relational structure. State and behavioural properties are described declaratively, by conjoining constraints. This makes it possible to develop and analyze a model incrementally, with Alcoa investigating the consequences of whatever constraints are given. Alcoa works by translating constraints to boolean formulas, and then applying state-of-the-art SAT solvers. It can analyze billions of states in seconds.
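Alcoa itself works symbolically through SAT, but the effect of a "find snapshots satisfying the constraints" query can be mimicked by brute force on a toy model (the Person/Car relation and its at-most-one-owner multiplicity below are invented for illustration):

from itertools import product

# Toy object model: a binary relation 'owns' between 2 people and 2 cars,
# with the multiplicity constraint that each car has at most one owner.
people = ["p1", "p2"]
cars = ["c1", "c2"]
pairs = [(p, c) for p in people for c in cars]

def satisfies(rel):
    # multiplicity: every car has at most one owner
    return all(sum((p, c) in rel for p in people) <= 1 for c in cars)

# Enumerate all 2^4 candidate snapshots of the relation.
snapshots = []
for bits in product([False, True], repeat=len(pairs)):
    rel = {pair for pair, used in zip(pairs, bits) if used}
    if satisfies(rel):
        snapshots.append(rel)

print(len(snapshots), "consistent snapshots; sample:", snapshots[1])

A SAT encoding performs the same search symbolically, which is how Alcoa scales far beyond what enumeration could reach.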
|
|
|
Hyper/J: multi-dimensional separation of concerns for Java |
| |
Harold Ossher,
Peri Tarr
|
|
Pages: 734-737 |
|
doi>10.1145/337180.337618 |
|
Full text: PDF
|
|
Hyper/J™ supports flexible, multi-dimensional separation of concerns for Java™ software. This demonstration shows how to use Hyper/J in some important development and evolution scenarios, emphasizing the software engineering benefits it provides.
|
|
|
A software engineering approach and tool set for developing Internet applications |
| |
David A. Marca,
Beth A. Perdue
|
|
Pages: 738-741 |
|
doi>10.1145/337180.337619 |
|
Full text: PDF
|
|
If a business built a plant to produce products without first designing a process to manufacture them, the risk would be lack of capacity without significant plant redesign. Similarly, lacking a software engineering approach and tools for designing e-business connections before creating them can risk: 1) designing the business partnership incorrectly, 2) not implementing the connection quickly enough, or 3) having operations that cannot adapt to changes in business direction. This paper presents a software engineering tool for developing process-oriented Internet applications that implement e-business connections. It gives an approach for using this tool in conjunction with standard commercial IDEF0 tools to create adaptable connections. It is organized to match a formal demonstration that shows the step-by-step usage of these tools, and cites software engineering principles that, when applied, ensure adaptability.
|
|
|
The FUJABA environment |
| |
Ulrich Nickel,
Jörg Niere,
Albert Zündorf
|
|
Pages: 742-745 |
|
doi>10.1145/337180.337620 |
|
Full text: PDF
|
|
However, a single collaboration diagram is usually not expressive enough to model complex operations performing several modifications at different parts of the overall object structure. Such series of modifications need several collaboration diagrams to be modeled. In addition, there may be different situations where certain collaboration diagrams should be executed and others not. Thus, we need additional control structures to control the execution of collaboration diagrams. In our approach we combine collaboration diagrams with statecharts and activity diagrams for this purpose. This means that, instead of just pseudo code, any state or activity may contain a collaboration diagram modeling the do-action of this step.
Figure 1 illustrates the main concepts of Fujaba. Fujaba uses a combination of statecharts and collaboration diagrams to model the behavior of active classes. A combination of activity diagrams and collaboration diagrams models the bodies of complex methods. This integration of class diagrams and UML behavior diagrams enables Fujaba to perform a lot of static analysis work, facilitating the creation of a consistent overall specification. In addition, it turns these UML diagrams into a powerful visual programming language and supports the generation of complete application code. During testing and maintenance the code of an application may be changed on the fly, e.g. to fix small problems. Some application parts, like the graphical user interface or complex mathematical computations, may be developed with other tools. In cooperative (distributed) software development projects some developers may want to use Fujaba, others may not. Code of different developers may be merged by a version management tool. There might already exist a large application and one may want to use Fujaba only for new parts. One may want to do a global search-and-replace to change some text phrases. One may temporarily violate syntactic code structures while he or she restructures some code. For all these reasons, Fujaba aims to provide not just code generation but also the recovery of UML diagrams from Java code. One may analyse (parts of) the application code, recover the corresponding UML diagram (parts), modify these diagram (parts), and generate new code (into the remaining application code). So far, this works reasonably well for class diagrams and to some extent for the combination of activity and collaboration diagrams. For statecharts this is under development.
The next sections outline the (forward engineering) capabilities of Fujaba with the help of an example session.
|
|
|
Managing software artifacts on the Web with Labyrinth |
| |
Fabiano Cattaneo,
Elisabetta Di Nitto,
Alfonso Fuggetta,
Luigi Lavazza,
Giuseppe Valetto
|
|
Pages: 746-749 |
|
doi>10.1145/337180.337621 |
|
Full text: PDF
|
|
Software developers are increasingly exploiting the Web as a document management system. However, the Web has some limitations, since it is not aware of the structure and semantics associated with pieces of information (e.g., the fact that a document is a requirement specification) or of the semantics of relationships between pieces of information (e.g., the fact that a requirement specification document may be associated with some design specification document). In the Labyrinth project we enhance the capabilities of the Web as a document management system by means of a semantic model (called a schema, in analogy with database schemas), which is associated with Web documents. This model is itself a Web document and can be accessed and navigated through a simple Web browser.
|
|
|
Galileo: a tool built from mass-market applications |
| |
David Coppit,
Kevin J. Sullivan
|
|
Pages: 750-753 |
|
doi>10.1145/337180.337622 |
|
Full text: PDF
|
|
We present Galileo, an innovative engineering modeling and analysis tool built using an approach we call package-oriented programming (POP). Galileo represents an ongoing evaluation of the POP approach, where multiple large, architecturally coherent components are tightly integrated in an overall software system. Galileo utilizes Microsoft Word, Internet Explorer, and Visio to provide a low cost, richly functional fault tree modeling superstructure. Based on the success of previous prototypes of the tool, we are now building a version for industrial use under an agreement with NASA Langley Research Center.
|
|
|
Little-JIL/Juliette: a process definition language and interpreter |
| |
Aaron G. Cass,
Barbara Staudt Lerner,
Stanley M. Sutton, Jr.,
Eric K. McCall,
Alexander Wise,
Leon J. Osterweil
|
|
Pages: 754-757 |
|
doi>10.1145/337180.337623 |
|
Full text: PDF
|
|
Little-JIL, a language for programming coordination in processes, is an executable, high-level language with a formal (yet graphical) syntax and rigorously defined operational semantics. The central abstraction in Little-JIL is the “step,” which is the focal point for coordination, providing a scoping mechanism for control, data, and exception flow and for agent and resource assignment. Steps are organized into a static hierarchy, but can have a highly dynamic execution structure, including the possibility of recursion and concurrency.
Little-JIL is based on two main hypotheses. The first is that coordination structure is separable from other process language issues. Little-JIL provides rich control structures while relying on separate systems for resource, artifact, and agenda management. The second hypothesis is that processes are executed by agents that know how to perform their tasks but benefit from coordination support. Accordingly, each Little-JIL step has an execution agent (human or automated) that is responsible for performing the work of the step.
This approach has proven effective in supporting the clear and concise expression of agent coordination for a wide variety of software, workflow, and other processes.
|
|
|
Analyzing software architectures with Argus-I |
| |
Marlon E. R. Vieira,
Marcio S. Dias,
Debra J. Richardson
|
|
Pages: 758-761 |
|
doi>10.1145/337180.337624 |
|
Full text: PDF
|
|
This formal research demonstration presents an approach to developing and assessing architecture- and component-based systems, based on specifying software architecture augmented by statecharts representing component behavioral specifications [1]. The approach is applied to the C2 style [2] and its associated ADL, and is supported within a quality-focussed environment, called Argus-I, which assists specification-based analysis and testing at both the component and architecture levels.
|
|
|
Bandera: a source-level interface for model checking Java programs |
| |
James C. Corbett,
Matthew B. Dwyer,
John Hatcliff,
Robby
|
|
Pages: 762-765 |
|
doi>10.1145/337180.337625 |
|
Full text: PDF
|
|
Despite emerging tool support for assertion-checking and testing of object-oriented programs, providing convincing evidence of program correctness remains a difficult challenge. This is especially true for multi-threaded programs. Techniques for reasoning about finite-state systems have been developing rapidly over the past decade and have the potential to form the basis of powerful software validation technologies.
We have developed the Bandera toolset [1] to harness the power of existing model checking tools and apply them to reason about correctness requirements of Java programs. Bandera provides tool support for defining and managing collections of requirements for a program, for extracting compact finite-state models of the program to enable tractable analysis, and for displaying analysis results to the user through a debugger-like interface. This paper describes and illustrates the use of Bandera's source-level user interface for model checking Java programs.
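The core loop of the finite-state checkers Bandera targets can be suggested with a toy explicit-state safety check (the two-thread lock model and the mutual-exclusion property below are invented; real model checkers add abstraction, reduction techniques, and counterexample traces):

from collections import deque

# Bare-bones explicit-state safety check (breadth-first search) over a
# hand-written model of two threads sharing one lock.

def replace(pcs, i, value):
    new = list(pcs)
    new[i] = value
    return tuple(new)

def successors(state):
    pcs, lock = state
    for i in range(2):
        pc = pcs[i]
        if pc == "idle":
            yield (replace(pcs, i, "want"), lock)
        elif pc == "want" and lock == "free":
            yield (replace(pcs, i, "crit"), "held")
        elif pc == "crit":
            yield (replace(pcs, i, "idle"), "free")

def check(initial, safe):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return state  # a reachable unsafe state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

bad = check((("idle", "idle"), "free"),
            safe=lambda s: s[0] != ("crit", "crit"))
print("property holds" if bad is None else f"violated in {bad}")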
|
|
|
Developing mobile computing applications with LIME |
| |
Gian Pietro Picco,
Amy L. Murphy,
Gruia-Catalin Roman
|
|
Pages: 766-769 |
|
doi>10.1145/337180.337626 |
|
Full text: PDF
|
|
Mobile computing defines a very dynamic and challenging scenario for which software engineering practices are still largely in their initial developments. LIME is a middleware designed to enable the rapid development of dependable applications in the mobile environment. The model underlying LIME allows for coordination of physical and logical mobile units by exploiting a reactive, transiently shared tuple space whose content changes according to connectivity. In this demonstration, we report on initial experiences in developing applications for physical mobility using LIME.
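The flavor of tuple-space coordination can be shown with a bare Linda-style core (a sketch only; this is not LIME's API, which layers transient sharing across connected units and reactions on top of such primitives):

class TupleSpace:
    """Bare Linda-style coordination core; invented for illustration."""
    def __init__(self):
        self.tuples = []
    def out(self, tup):        # publish a tuple
        self.tuples.append(tup)
    def rd(self, pattern):     # nondestructive matching read
        return next((t for t in self.tuples if self._match(t, pattern)), None)
    def take(self, pattern):   # destructive read (Linda's "in")
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t
    @staticmethod
    def _match(tup, pattern):
        # None in the pattern acts as a wildcard field
        return len(tup) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, tup))

ts = TupleSpace()
ts.out(("position", "unit7", 12.5, 40.2))
print(ts.rd(("position", "unit7", None, None)))
print(ts.take(("position", None, None, None)))
print(ts.rd(("position", None, None, None)))  # None: tuple was removed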
|
|
|
Component composition (poster session) |
| |
Bart Michiels,
Bart Wydaeghe
|
|
Page: 771 |
|
doi>10.1145/337180.337627 |
|
Full text: PDF
|
|
This poster depicts a novel approach to documenting components in a uniform and abstract way. Every use of a component is expressed with a specific kind of message sequence chart (MSC), using a limited set of standard primitives with predefined semantics. These primitives are mapped onto the actual API of the component(s). This documentation is used to find compatible components and to detect conflicts when composing components. Because of the standard set of primitives, components from different sources can be matched and developers do not have to rely on the concrete API. The behavioural flavour of MSCs is suited to documenting, as a set of usage scenarios, how a component expects to interact with other components when configured in an application. This complements existing documentation techniques.
|
|
|
Third eye — specification-based analysis of software execution traces (poster session) |
| |
Raimondas Lencevicius,
Alexander Ran,
Rahav Yairi
|
|
Page: 772 |
|
doi>10.1145/337180.337628 |
|
Full text: PDF
|
|
Another concept of Third Eye is the tracing state. A tracing state is a set of event types generated in that state; other event types are filtered out and not reported. The system is always in a specific tracing state. Tracing states correspond to specifications. A program specification describes a set of constraints on events. The event types used in a specification have to be monitored to validate a trace against this specification. All event types contained in a specification and monitored for this specification form a tracing state. Tracing states also control the overhead of tracing on the executing system.
The Third Eye framework includes modules for event type definition, event generation and reporting, tracing state definition and management, trace logging, and query and browsing interfaces. The modules for event type definition, event reporting and tracing state control are integrated with the software of the system under trace (SUT). The rest of the modules are independent of the SUT and can be deployed on a different execution platform to minimize the influence on system performance. Trace delivery for logging and analysis uses alternative interfaces to accommodate devices with different data storage and connectivity capabilities. We have implemented a Third Eye framework prototype currently used by the Third Eye project team in collaboration with product development teams in Nokia's business units. We used Third Eye to test a number of software systems: the memory subsystem of one of Nokia's handsets, the Apache Web Server, and a WAP (Wireless Application Protocol) client. WAP is an industrial standard for applications and services that operate over wireless communication networks. We validated message sequences in this protocol by adding events in the functions that correspond to the protocol primitives and then checking whether the event sequence corresponds to the protocol message sequence. Events are mapped to Prolog facts and constraints are expressed as Prolog rules. Third Eye can be used for debugging, monitoring, specification validation, and performance measurements. These scenarios use typed events—a concept simple and yet expressive enough to be shared by product designers and developers. Third Eye has an open architecture allowing easy replacement of third-party tools, including databases, analysis and validation tools. Third Eye is a practical framework for specification-based analysis and adaptive execution tracing of software systems.
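A toy analogue of the sequence-validation scenario (Third Eye expresses the constraints as Prolog rules over event facts; here a hand-written transition table stands in, and the connection-style event names are invented):

# Check that a logged event sequence respects a protocol's allowed
# message ordering. The protocol below is invented for illustration.

ALLOWED = {
    "start":       {"connect_req"},
    "connect_req": {"connect_ack", "abort"},
    "connect_ack": {"data", "disconnect"},
    "data":        {"data", "disconnect"},
    "abort":       set(),
    "disconnect":  set(),
}

def validate(trace):
    state = "start"
    for i, event in enumerate(trace):
        if event not in ALLOWED[state]:
            return f"violation at event {i}: {event!r} cannot follow {state!r}"
        state = event
    return "trace conforms"

print(validate(["connect_req", "connect_ack", "data", "disconnect"]))
print(validate(["connect_req", "data"]))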
|
|
|
Empirical investigation of a novel approach to check the integrity of software engineering measuring processes (poster session) |
| |
Skylar Lei,
Michael Smith,
Giancarlo Succi
|
|
Page: 773 |
|
doi>10.1145/337180.337629 |
|
Full text: PDF
|
|
This distribution is counter-intuitive for at least two reasons. First, it would seem “obvious” that the numbers drawn from a list generated by widely different arbitrary processes would have roughly equal probabilities for the digits 1 and 9 to be first digits. This is not normally the case. If the list of numbers does not have artificial limits, or include invented numbers such as postal codes, then approximately 30% of the numbers will have 1 as their first digit, but only 5% will have 9 as their first digit. Deviations from the expected Benford distribution indicate the presence of some special characteristic of the data. The second, more theoretically challenging, problem is: what is the underlying property, shared by so many widely different processes, that generates lists of numbers following Benford's Law?
We have conducted an empirical investigation to determine under what circumstances various software metrics follow Benford's Law, and whether any special characteristics, or irregularities, in the data can be uncovered if the data are found not to follow the law. The trickier problem of understanding why lists of metrics might follow Benford's Law is left to another study. Lists were formed from three software metrics extracted from 100 public domain industrial Java projects. These metrics were Lines of Code (LOC), Fan-Out (FO) and McCabe Cyclomatic Complexity (MCC). Given that a Benford's Law analysis requires a list of considerable length, the data were divided into two groups. The first group was from projects containing more than 100 files. This was intended as the “control group” and was expected to follow Benford's Law if that law is applicable to the analysis of software engineering metrics. To study the sensitivity of the digital analysis technique to project size, projects with a smaller number of files were compared to the control group.
The empirical results indicate that the first digits of numbers in lists of LOC metrics extracted from the projects followed the probabilities predicted by Benford's Law more closely than the “equal probability of occurrence” suggested by intuitive reasoning. This was shown using both qualitative and quantitative measures. The FO and MCC metrics did not follow the standard Benford's Law as well as the LOC metrics did. This is because the FO and MCC lists contain a significant number of numbers less than 10 and follow a different first-digit distribution. Further investigation of the digital analysis technique is necessary to evaluate the applicability of Benford's Law in the total context of software metrics.
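Benford's Law gives P(first digit = d) = log10(1 + 1/d), hence roughly 30.1% for digit 1 and 4.6% for digit 9. A minimal sketch of the kind of comparison the study performs (the stand-in "LOC" values are invented; the study used metrics from real projects):

from collections import Counter
from math import log10

# Expected Benford probabilities for first digits 1..9.
expected = {d: log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(n):
    return int(str(abs(n))[0])

# Invented stand-in for a list of LOC values (geometric growth tends
# to produce Benford-like first digits).
loc_values = [int(1.7 ** k) for k in range(2, 60)]

counts = Counter(first_digit(v) for v in loc_values)
total = sum(counts.values())
for d in range(1, 10):
    print(f"digit {d}: observed {counts[d] / total:.3f}  "
          f"expected {expected[d]:.3f}")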
|
|
|
The implication of different learning styles on the modeling of object-oriented systems (poster session) |
| |
Lynda A. Thomas
|
|
Page: 774 |
|
doi>10.1145/337180.337630 |
|
Full text: PDF
|
|
This poster reports on work in progress on the implication of thinking and learning styles on the modelling of Object-Oriented Systems. In particular, analyses of learning modalities are presented and then considered in light of using the Unified Modelling Language (UML) as a tool for system modelling. The results of testing UML CASE tool learners will be available before the conference.
|
|
|
A culture-centered multilevel software process cycle model (poster session) |
| |
Silvia Teresita Acuña,
Graciela Elisa Barchini,
Mabel Sosa
|
|
Page: 775 |
|
doi>10.1145/337180.337631 |
|
Full text: PDF
|
|
In this paper a culture-centered multilevel software process cycle model (MPCM) is presented. This model interrelates the socio-cultural, scientific/technological and paradigmatological environments. The proposed model is composed of three environments made up of the ecological universe and the engineering, management, development and evaluation levels, which represent the process cycle, and the lines of “what”, “who” and “how”. MPCM considers cultural, social, organizational and personnel-related issues, normally absent from traditional software process models. A culturally comprehensive procedure for the realization of the engineering level is developed, and the different stages of the Cultural Procedure of the MPCM, together with their products, are represented. This procedure is proposed in order to model the organization, to reveal its culture and to determine the abilities that a person (user or developer) must have to be reallocated within the software process.
|
|
|
Using application states in software testing (poster session) |
| |
Chang Liu,
Debra J. Richardson
|
|
Page: 776 |
|
doi>10.1145/337180.337632 |
|
Full text: PDF
|
|
|
|
|
Effort estimation from change records of evolving software (poster session) |
| |
Juan F. Ramil,
Meir M. Lehman
|
|
Page: 777 |
|
doi>10.1145/337180.337633 |
|
Full text: PDF
|
|
Algorithmic cost estimation in the context of software evolution is being addressed as part of the FEAST/2 project with encouraging results from an industrial case study.
|
|
|
Modeling deployment and configuration of CORBA systems with UML (poster session) |
| |
Alan D. Sloane
|
|
Page: 778 |
|
doi>10.1145/337180.337634 |
|
Full text: PDF
|
|
An area of CORBA-based distributed systems which has been difficult to design and document is that of deployment of server components and configuration information. This poster shows, by way of an example taken from a health-care system, how UML Deployment Diagrams can be used to model configuration in a system based on Iona Technologies' Orbix. Using the UML models we compare several different centralized and distributed approaches. We conclude by examining how extensions made to UML in recent revisions enhance the utility of our approach.
|
|
|
As strong as possible mobility (poster session) |
| |
Tim Walsh,
Paddy Nixon,
Simon Dobson
|
|
Page: 779 |
|
doi>10.1145/337180.337635 |
|
Full text: PDF
|
|
An executing thread, in an object-oriented programming language, is spawned, directly or indirectly, by a main process, which in turn gets its instructions from a primary class. In Java there is no close coupling between a thread and the objects from which it was created. The use of a container abstraction allows us to group threads and their respective objects into a single structure. A container that holds threads whose variables are all housed within the container is a perfect candidate for strong migration. To achieve this we propose a combination of three techniques that allows the containers to migrate in a manner that approaches strong mobility yet does not resort to retaining bindings to resources across distant and unreliable networks.
|
|
|
Hybrid domain representation archive (HyDRA) for requirements model synthesis across viewpoints (poster session) |
| |
K. Suzanne Barber,
Stephen R. Jernigan
|
|
Page: 780 |
|
doi>10.1145/337180.337636 |
|
Full text: PDF
|
|
|
|
|
The use of task analysis methods in support of the development of interactive systems |
| |
Yousef H. Daabaj
|
|
Page: 781 |
|
doi>10.1145/337180.337637 |
|
Full text: PDF
|
|
One of the major and continuing problems for the information technology community is the tendency to create technically excellent and advanced products which do not meet the needs of the real users. Capturing and analyzing user requirements and tasks are concepts that have frequently been suggested in recent years to address central problems within system development. In this research a variety of Task Analysis (TA) methods has been used to assess the adequacy of a proposed design for a World Wide Web (WWW) system within an Interactive MultiMedia (IMM) context, domain and environment, intended to help research students conduct their doctoral program as carried out at Salford University, Manchester, UK. The results of the application of TA methods, and their input into the design activities, have been analyzed and compared both to each other and to a framework of desirable criteria. The findings show that TA methods have a number of weaknesses in the contributions that they make, and the question of how the methods can be improved to increase their capability is therefore considered.
|
|
|
DeBOT — an approach for constructing high performance, scalable distributed object systems (panel session) |
| |
Anna Liu
|
|
Page: 782 |
|
doi>10.1145/337180.337638 |
|
Full text: PDF
|
|
|
|
|
Exploring O-O framework usage (poster session) |
| |
Garry Froehlich,
Amr Kamel,
Paul Sorenson
|
|
Page: 783 |
|
doi>10.1145/337180.337639 |
|
Full text: PDF
|
|
|
|
|
Tracking, predicting and assessing software reuse costs: an automated tool |
| |
A. Mili,
S. Fowler Chmiel,
R. Gottumukkala,
L. Zhang
|
|
Page: 785 |
|
doi>10.1145/337180.337640 |
|
Full text: PDF
|
|
|
|
|
Holmes: a system to support software product lines |
| |
Giancarlo Succi,
Jason Yip,
Eric Liu,
Witold Pedrycz
|
|
Page: 786 |
|
doi>10.1145/337180.337641 |
|
Full text: PDF
|
|
|
|
|
Supporting dynamic composition of components |
| |
Giancarlo Succi,
Raymond Wong,
Eric Liu,
Michael Smith
|
|
Page: 787 |
|
doi>10.1145/337180.337642 |
|
Full text: PDF
|
|
The Internet creates new opportunities for component distribution. Infrastructure for dynamic, Web-based composition of software components appears to be a pressing need. This demonstration focuses on a Web-based system that supports dynamic component composition.
|
|
|
Prompter — a project planning assistant |
| |
Rory O'Connor,
Robert Cochran,
Tony Moynihan
|
|
Page: 788 |
|
doi>10.1145/337180.337643 |
|
Full text: PDF
|
|
The aim of the Prompter project was to develop the Prompter tool, a “decision-support tool to assist in the planning and managing of a software project”. Prompter helps software project planners assimilate best practice and 'know-how' in the field of software project planning, and incorporates expert critiquing that assists project planners in solving the complex problems associated with the planning of a software project.
|
|
|
Visualizing software release histories with 3DSoftVis |
| |
Claudio Riva
|
|
Page: 789 |
|
doi>10.1145/337180.337644 |
|
Full text: PDF
|
|
This paper briefly introduces a 3-D visualization tool (3DSoftVis) that has been developed for the analysis of the evolution of an industrial software system.
|
|
|
Legacy systems migration in CelLEST |
| |
Eleni Stroulia,
Mohammad El-Ramly,
Paul Sorenson,
Roland Penner
|
|
Page: 790 |
|
doi>10.1145/337180.337645 |
|
Full text: PDF
|
|
|
|
|
Process engineering with Spearmint/EPG |
| |
Ulrike Becker-Kornstaedt,
Louise Scott,
Jörg Zettel
|
|
Page: 791 |
|
doi>10.1145/337180.337646 |
|
Full text: PDF
|
|
This paper presents the Spearmint process modeling tool and the Electronic Process Guide (EPG) generator. Together they enable process engineers to elicit, model, analyze and document software processes and then to automatically generate web-based guidebooks based on the documented processes.
|
|
|
An overview of the ICSE 2000 workshop program |
| |
Antonia Bertolino,
Gail C. Murphy
|
|
Page: 793 |
|
doi>10.1145/337180.337816 |
|
Full text: PDF
|
|
Past ICSE attendees will recognize—with pleasure, we hope—workshops that have been successful in previous years. Indeed, we have tried to balance the program between workshops based on novel and promising ideas and those strongly continuing the work started in previous ICSEs. In two cases, the program also includes workshops that already have some tradition, but are associated with ICSE for the first time: the ISAW workshop (4th edition) and the DSV-IS workshop (7th edition).
Advance summaries of many of the workshops follow this overview. For those of you unable to attend a workshop, we hope that this provides a flavor of the interesting discussions that occurred. For those of you who were able to attend, we hope it serves as a reminder of those discussions.
We would like to thank Pascale Le Gall (University of Evry, France) and Premkumar (Prem) Devanbu (University of California at Davis, USA) for helping us in reviewing submissions and making the program. Their contributions have been invaluable!
|
|
|
Second ICSE Workshop on Web Engineering (workshop session) |
| |
San Murugesan,
Yogesh Deshpande
|
|
Pages: 794-795 |
|
doi>10.1145/337180.337818 |
|
Full text: PDF
|
|
WEB ENGINEERING is a rapidly emerging new discipline focusing on various aspects of the successful development and deployment of large, complex Web sites and Web-based systems. Since 1997, interest in Web Engineering has been growing [1-9] and will continue to grow as Web-based systems become critical elements in a large number of applications.
This second ICSE Workshop, in a continuing series over the last three years, responds to the increasing need to systematise the current, mainly ad hoc, approaches to developing and maintaining Web-based applications. It builds upon the First ICSE Workshop on Web Engineering held in Los Angeles in 1999 [7] and three other workshops on this theme held at the World Wide Web Conferences in 1998-2000 [4-6].
The Workshop attracts what have been traditionally divergent groups of researchers and practitioners to address the problems of building scalable, maintainable, and reliable large complex Web-based systems.
|
|
|
The First International Workshop on Automated Program Analysis, Testing and Verification (workshop session) |
| |
Nigel Tracey,
John Penix,
Willem C. Visser
|
|
Page: 796 |
|
doi>10.1145/337180.337819 |
|
Full text: PDF
|
|
Program analysis, testing and verification are key techniques for building confidence in and increasing the quality of software systems. Such activities typically cost upwards of 50% of total development costs. Automation aims to allow both reduced costs and more thorough analysis, testing and verification, and is vital for keeping pace with increasing software complexity.
|
|
|
COTS Workshop: continuing collaborations for successful COTS development |
| |
John Dean,
Tricia Oberndorf,
Mark Vigder
|
|
Pages: 797-798 |
|
doi>10.1145/337180.337821 |
|
Full text: PDF
|
|
|
|
|
Beg, borrow, or steal (workshop session): using multidisciplinary approaches in empirical software engineering research |
| |
Janice Singer,
Margaret-Anne Storey,
Susan Elliott Sim
|
|
Pages: 799-800 |
|
doi>10.1145/337180.337823 |
|
Full text: PDF
|
|
The goal of this workshop is to provide an interactive forum for software engineers and empirical researchers to investigate the feasibility of applying proven methods from other research disciplines to software engineering research. Participants submitted position papers describing problems that might benefit from a multidisciplinary approach. Expert guest speakers from software engineering and other disciplines will address the issues highlighted in the papers with the goal of encouraging more multidisciplinary research.
|
|
|
The Second International Symposium on Constructing Software Engineering Tools (CoSET2000) (workshop session) |
| |
Jonathan Gray,
Louise Scott,
Ian Ferguson
|
|
Pages: 801-802 |
|
doi>10.1145/337180.337822 |
|
Full text: PDF
|
|
|
|
|
Design, specification, and verification of interactive systems (workshop session) |
| |
Philippe Palanque,
Fabio Paternò
|
|
Pages: 803-804 |
|
doi>10.1145/337180.337824 |
|
Full text: PDF
|
|
|
|
|
Workshop on standard exchange format (WoSEF) (workshop session) |
| |
Susan Elliott Sim,
Ric Holt,
Rainer Koschke
|
|
Pages: 805-806 |
|
doi>10.1145/337180.337825 |
|
Full text: PDF
|
|
|
|
|
3rd workshop on software engineering over the Internet (workshop session) |
| |
Frank Maurer
|
|
Pages: 807-808 |
|
doi>10.1145/337180.337826 |
|
Full text: PDF
|
|
|
|
|
Workshop on multi-dimensional separation of concerns in software engineering (workshop session) |
| |
Peri Tarr,
William Harrison,
Harold Ossher,
Anthony Finkelstein,
Bashar Nuseibeh,
Dewayne Perry
|
|
Pages: 809-810 |
|
doi>10.1145/337180.337827 |
|
Full text: PDF
|
|
Separation of concerns has been central to software engineering for decades, yet its many advantages are still not fully realized. A key reason is that traditional modularization mechanisms do not allow simultaneous decomposition according to multiple kinds of (overlapping and interacting) concerns. This workshop was intended to bring together researchers working on more advanced modularization mechanisms, and practitioners who have experienced the need for them, as a step towards a common understanding of the issues, problems and research challenges.
|
|
|
The 2nd International Workshop on Economics-Driven Software Engineering Research (workshop session) |
| |
Kevin J. Sullivan
|
|
Page: 811 |
|
doi>10.1145/337180.337829 |
|
Full text: PDF
|
|
The need for research in this area is indicated by the serious shortfalls in our understanding of how best to design software for value creation. There are at least two basic dimensions to this shortfall. First, the core competency of software engineers is making technical software product and process design decisions. However, today there is a disconnect between the technical criteria taught to software engineers and the strategic value creation objectives of the organizations for which software is designed.
This disconnect is reflected in the culture and the literature of software design. For example, of sixteen books on software architecture and object-oriented design surveyed, the word cost appeared in the index of only two. Part of the problem is that the links between technical concepts and value creation are not understood well, even in theory. We have an inadequate understanding of, and lack models for, the connections between technical decision criteria and value. For example, we lack models for how information hiding modularity adds value to a system, and how much. Today this lack of understanding is intolerable. Software design and use decisions are coupled with fundamental business, public service, and other decisions in almost every field. It is becoming critical to develop a better understanding of how software design decisions relate to value creation.
The second dimension of the problem is that existing knowledge in software economics is inadequate. To simplify, most present knowledge focuses on cost and risk reduction in traditional government or large industry projects; but today organizations are often driven more by competition and time-to-market than by direct cost. New life-cycle models and software technologies are also being used that tend to invalidate the empirical bases of older models.
The EDSER workshops seek to raise the visibility of the economic dimension of software design and use and to foster the emergence and evaluation of economics-oriented concepts, models and tools to improve software production. The EDSER-2 Workshop was made possible in part by the National Science Foundation under grant CCR-9804078.
|
|
|
WISE3: the Third International Workshop on Intelligent Software Engineering (workshop session) |
| |
Tim Menzies
|
|
Pages: 812-813 |
|
doi>10.1145/337180.337831 |
|
Full text: PDF
|
|
There is a growing realization that the design of effective software engineering tools must be smarter. Real-world software specs can be very intricate. Manual browsing by a software engineer cannot reveal their subtleties. Automatic tools are required to reflect over business knowledge to identify what is missing or could be effectively changed. At the same time, many AI researchers now realize that software engineering provides the best testbed for AI tools and techniques. While these AI tools are all potentially useful, the core question remains: which of these tools, if any, are truly cost-effective? A sample of these AI tools is listed below. For a further list of techniques, see the proceedings of WISE1 and WISE2.
During analysis:
- Knowledge acquisition methods for requirements elicitation.
- Knowledge representation methods for the business knowledge.
- Non-classical logics for requirements engineering.
During design and implementation:
- Knowledge-based program synthesis.
- Knowledge-based techniques.
- Knowledge-based validation techniques to detect bad semantics.
- Theorem proving and formal reasoning for managing changing specs.
During maintenance:
- AI tools to maintain declarative and procedural knowledge.
- AI tools for program comprehension and reverse engineering.
The purpose of the WISE series is to assess the utility of the above techniques. WISE3 will collect challenge problems in software engineering, i.e.:
- A description of some task.
- A web site where researchers can download further materials on that task.
- An initial solution, including baselines for the effort and effectiveness of that solution.
- War stories of the application of novel technologies to the challenge problems.
- A careful and critical evaluation of the relative merits of different technologies for the challenge problems.
WISE4 and beyond would then encourage different solutions to these challenge problems. Our long-term goal (e.g. WISE6) is to foster the development of some yet-to-be-funded rigorous evaluation experiment. To succeed in attracting that funding, we must first build a strong community, clearly define the open issues, and develop good evaluation methods. To encourage all the above, the materials and challenge problems for WISE3 are all collected together at http://www.tim.menzies.com/wise3/. So, even if you miss WISE3, you can still download challenge problems and explore them for WISE4 and beyond.
|
|
|
Software product lines (workshop session): economics, architectures, and applications |
| |
Peter Knauber,
Giancarlo Succi
|
|
Pages: 814-815 |
|
doi>10.1145/337180.337832 |
|
Full text: PDF
|
|
|
|
|
Agent-oriented software engineering (workshop session) |
| |
Paolo Ciancarini,
Michael Wooldridge
|
|
Pages: 816-817 |
|
doi>10.1145/337180.337833 |
|
Full text: PDF
|
|
|
|
|
Specifying and measuring quality in use (tutorial session) |
| |
Nigel Bevan
|
|
Page: 819 |
|
doi>10.1145/337180.337834 |
|
Full text: PDF
|
|
- select a product from among alternative products
|
|
|
Designing and analyzing software architectures using ABASs (tutorial session) |
| |
Rick Kazman,
Mark Klein
|
|
Page: 820 |
|
doi>10.1145/337180.337836 |
|
Full text: PDF
|
|
This tutorial will discuss, exemplify, and involve the students in the use of Attribute-Based Architectural Styles (ABASs)—architectural styles accompanied by explicit analysis reasoning frameworks—in both the design and analysis of software and system architectures. The tutorial has several objectives: to introduce the students to a catalog of ABASs covering performance, availability, testability, modifiability, and usability; to convince students that ABASs provide a basis for insightful reasoning about a software architecture's ability to meet its quality attribute goals; and to demonstrate the utility of ABASs by showing examples of how ABASs are used to design and analyze real-world system architectures. We will present some large excerpts from our growing ABAS handbook and show that ABASs help us in designing architectures efficiently and predictably and in quickly finding architectural risks and tradeoffs when doing analysis.
|
|
|
Building modular object-oriented systems with reusable collaborations (tutorial session) |
| |
Karl J. Lieberherr,
David H. Lorenz,
Mira Mezini
|
|
Page: 821 |
|
doi>10.1145/337180.337838 |
|
Full text: PDF
|
|
New approaches propose to deal with the tangling of logical units by extending the object-oriented language to support module (de)composition along more than one dimension of concern. The tutorial will briefly survey Aspect-Oriented Programming (the AspectJ tool), Adaptive Programming (the Demeter tool), and Multi-Dimensional Separation of Concerns (the Hyper/J tool). The primary focus of the tutorial, however, will be on Adaptive Plug-and-Play Components (AP&PC) [2, 1].
The AP&PC model enables the programmer to define reusable collaborations, in the sense of the Unified Modeling Language (UML), in separate modules. In the AP&PC approach, an application is built out of a set of base classes that lay down the static structure of the application and several modules of reusable collaborations that are non-intrusively adapted to the needs of the base classes by means of explicit connectors, or adapter constructs. Each module itself may be a composition of simpler collaboration modules. The adaptation includes the embedding of UML class diagrams into more elaborate UML class diagrams. We will show how such embeddings may be conveniently expressed using the traversal language of Adaptive Programming (which is also used in the XML traversal language called XPath). We will discuss how the model supports the building of better modular object-oriented systems. In addition, the advantages and disadvantages of static versus dynamic adaptation of the reusable collaborations will be considered.
In summary, the tutorial presents AP&PC and adapters as useful constructs to encapsulate logical units of design that cut across several classes. We compare AP&PC with other approaches to the tangling problem in software design and implementation.
|
|
|
Introduction to CORBA (tutorial session) |
| |
Steve Vinoski
|
|
Page: 822 |
|
doi>10.1145/337180.337839 |
|
Full text: PDF
|
|
This tutorial provides the basics that developers need to begin understanding the Common Object Request Broker Architecture (CORBA) and using it to write industrial-strength distributed systems. You will learn about the basics of the Object Management Group's (OMG) Object Management Architecture (OMA), with a focus on its CORBA component. By the end of the tutorial, you will understand how to write object interface specifications using the OMG Interface Definition Language (IDL), how to write simple distributed applications in C++, how to use the Portable Object Adapter (POA), the Dynamic Invocation Interface (DII) and the Dynamic Skeleton Interface (DSI), and the Interface Repository (IFR). You will also know the basics of several CORBA services such as Naming, Trading, and Events.
|
|
|
Moving from ISO9000 to higher levels of the CMM (tutorial session) |
| |
Pankaj Jalote
|
|
Page: 823 |
|
doi>10.1145/337180.337840 |
|
Full text: PDF
|
|
- Practices in a “general” ISO organization
- Brief introduction to CMM
- Leveraging ISO structures for CMM
- Gaps in an ISO organization with respect to different levels of the CMM.
- Target maturity level for an ISO organization
- Managing the transition
- Co-existence of ISO and CMM
|
|
|
Planning realistic schedules using software architecture (tutorial session) |
| |
Robert L. Nord,
Daniel J. Paulish,
Dilip Soni
|
|
Page: 824 |
|
doi>10.1145/337180.337843 |
|
Full text: PDF
|
|
|
|
|
Improving design and source code modularity using AspectJ (tutorial session) |
| |
Cristina Videira Lopes,
Gregor Kiczales
|
|
Page: 825 |
|
doi>10.1145/337180.337848 |
|
Full text: PDF
|
|
Using only traditional techniques the implementation of concerns like exception handling, multi-object protocols, synchronization constraints, and security policies tends to be spread out in the code. The lack of modularity for these concerns makes them more difficult to develop and maintain. This tutorial shows how to use Aspect-oriented programming (AOP) [2, 3] to implement concerns like these in a concise modular way. We discuss the effect aspects have on software design and on code modularity. The concrete examples in the tutorial use AspectJ [1], a freely available aspect-oriented extension to the Java™ programming language.
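As a language-neutral analogy only (Python, not AspectJ; AspectJ modularizes such concerns with pointcuts and advice), the following sketch factors an exception-handling policy out of the business logic with a decorator; the function names and the recovery policy are invented:

import functools
import logging

logging.basicConfig(level=logging.INFO)

def with_error_policy(func):
    """Cross-cutting concern (exception handling and logging) factored
    out of the business logic, in rough analogy to advice applied at a
    pointcut. A Python analogy for illustration, not AspectJ code."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ZeroDivisionError:
            logging.warning("recovering from divide-by-zero in %s",
                            func.__name__)
            return 0.0
    return wrapper

@with_error_policy
def average(total, count):
    # Pure business logic; the error policy is applied declaratively.
    return total / count

print(average(10, 4))   # 2.5
print(average(10, 0))   # policy recovers, returns 0.0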
|
|
|
Scalability issues in CORBA-based systems (tutorial session) |
| |
Steve Vinoski
|
|
Page: 826 |
|
doi>10.1145/337180.337849 |
|
Full text: PDF
|
|
This tutorial addresses how both the Object Management Group (OMG) specifications and the implementation choices made by middleware providers and application developers affect Common Object Request Broker Architecture (CORBA) application scalability. We will cover a range of scalability issues, starting with Object Request Broker (ORB) internals and working outward to full-scale applications, addressing issues such as connection management, Portable Object Adapter (POA) scalability features, multithreading, object lifecycle issues, object location, system configuration, maintenance, and management, and common application architectures. This tutorial is not language-centric and is useful to developers using Java, C++, or any other language to develop CORBA-based applications.
|
|
|
Intellectual property protection for software in the United States and Europe (tutorial session): the changing roles of patents and copyrights |
| |
Gregory J. Kirsch,
Yannis Skulikaris
|
|
Page: 827 |
|
doi>10.1145/337180.337852 |
|
Full text: PDF
|
|
|
|
|
Software process improvement (tutorial session): best practices and lessons learned |
| |
Bill Curtis
|
|
Page: 828 |
|
doi>10.1145/337180.337853 |
|
Full text: PDF
|
|
|
|
|
Designing real-time and distributed applications with the UML (tutorial session) |
| |
Hassan Gomaa
|
|
Page: 829 |
|
doi>10.1145/337180.337855 |
|
Full text: PDF
|
|
Object-oriented concepts are crucial in software design because they address fundamental issues of adaptation and evolution. With the proliferation of object-oriented notations and methods, the Unified Modeling Language (UML) has emerged to provide a standardized notation for describing object-oriented models. However, for the UML notation to be effectively applied, it needs to be used with an object-oriented analysis and design method. This tutorial describes the COMET method for designing real-time and distributed applications, which integrates OO and concurrency concepts and uses the UML notation.
|
|
|
System development using application services over the Net (tutorial session) |
| |
Kenji Takahashi,
Wolfgang Emmerich,
Anthony Finkelstein,
Sofia Guerra
|
|
Page: 830 |
|
doi>10.1145/337180.337856 |
|
Full text: PDF
|
|
|
|
|
Software reliability (tutorial session): basic concepts and assessment methods |
| |
Bev Littlewood,
Lorenzo Strigini
|
|
Page: 831 |
|
doi>10.1145/337180.337858 |
|
Full text: PDF
|
|
|
|
|
Product-line architectures, aspects, and reuse (tutorial session) |
| |
Don Batory
|
|
Page: 832 |
|
doi>10.1145/337180.337860 |
|
Full text: PDF
|
|
GenVoca PLA designs have been created for diverse domains: 2-way radios, extensible compilers, communication protocols, command-and-control fire support, avionics, and matrix computation libraries [7]. GenVoca designs are used in industry; the approach's central ...
GenVoca PLA designs have been created for diverse domains: 2-way radios, extensible compilers, communication protocols, command-and-control fire support, avionics, and matrix computation libraries [7]. GenVoca designs are used in industry; the approach's central concepts relate to a wide variety of contemporary and classical research topics, including aspect-oriented programming, parameterized programming, OO frameworks, Perry's lite semantics [8], generative programming [7], design maintenance [9], and layered software.
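A reader new to GenVoca may find a code analogue helpful: products are defined by "type equations" that stack layers, each refining the layer below. The Java sketch below is our own loose approximation using hand-written decorators; real GenVoca layers are generated components, and the names here are invented.

    import java.util.ArrayList;
    import java.util.List;

    interface Container { void add(int x); List<Integer> items(); }

    // Base layer: plain storage.
    class BaseContainer implements Container {
        private final List<Integer> data = new ArrayList<>();
        public void add(int x) { data.add(x); }
        public List<Integer> items() { return data; }
    }

    // Layer: keeps elements sorted, refining whatever layer it wraps.
    class SortedLayer implements Container {
        private final Container lower;
        SortedLayer(Container lower) { this.lower = lower; }
        public void add(int x) { lower.add(x); lower.items().sort(null); }
        public List<Integer> items() { return lower.items(); }
    }

    // Layer: logs each insertion.
    class LoggingLayer implements Container {
        private final Container lower;
        LoggingLayer(Container lower) { this.lower = lower; }
        public void add(int x) { System.out.println("add(" + x + ")"); lower.add(x); }
        public List<Integer> items() { return lower.items(); }
    }

    public class GenVocaSketch {
        public static void main(String[] args) {
            // A "type equation": Logging(Sorted(Base)) selects one product.
            Container c = new LoggingLayer(new SortedLayer(new BaseContainer()));
            c.add(3); c.add(1); c.add(2);
            System.out.println(c.items()); // [1, 2, 3]
        }
    }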
|
|
|
Advanced visual modeling (tutorial session): beyond UML |
| |
Joseph Gil,
John Howse,
Stuart Kent
|
|
Page: 833 |
|
doi>10.1145/337180.337861 |
|
Full text: PDF
|
|
The tutorial is example-driven and illustrates how the new notations are combined with those of UML, including OCL. Some of the examples are drawn from industrial contexts, in particular the telecommunications sector. Highlights include: a crash critique ...
The tutorial is example-driven and illustrates how the new notations are combined with those of UML, including OCL. Some of the examples are drawn from industrial contexts, in particular the telecommunications sector. Highlights include:
- A crash critique of UML, stressing its weaknesses and strengths.
- A rich visual constraint language and an insight into subtle issues that arise when defining a visual language.
- Lots of examples, some taken from an industrial context.
- A demonstration of a graphical editor (available free from the web and on disk at the tutorial) for the constraint-diagrams language.
- A series of 3D notations for providing rich visualizations of dynamic behavior.
- A vision for visual modeling tools of the future.
For more information, see http://www.cs.ukc.ac.uk/people/staff/sjhk/cds.html
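To indicate the kind of property a visual constraint language captures: an invariant such as "every order refers to a registered customer" can be drawn as a constraint diagram or written in OCL. The hypothetical Java check below expresses the same semantic content; it is our own example, unrelated to the tutorial's notation or tools.

    import java.util.List;
    import java.util.Set;

    public class InvariantCheck {
        record Order(String id, String customer) {}

        // The invariant a constraint diagram would draw: every order's
        // customer lies inside the set of registered customers.
        static boolean holds(List<Order> orders, Set<String> registeredCustomers) {
            return orders.stream().allMatch(o -> registeredCustomers.contains(o.customer()));
        }

        public static void main(String[] args) {
            List<Order> orders = List.of(new Order("o1", "alice"), new Order("o2", "bob"));
            System.out.println(holds(orders, Set.of("alice", "bob", "carol"))); // true
        }
    }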
|
|
|
Understanding code mobility (tutorial session) |
| |
Gian Pietro Picco
|
|
Page: 834 |
|
doi>10.1145/337180.337863 |
|
Full text: PDF
|
|
The tutorial provides a conceptual framework for code mobility by illustrating a taxonomy of related technologies, architectural paradigms, and applications. As a final case study, the concepts developed in the taxonomy are then applied to a quantitative ...
The tutorial provides a conceptual framework for code mobility by illustrating a taxonomy of related technologies, architectural paradigms, and applications. As a final case study, the concepts developed in the taxonomy are then applied to a quantitative assessment of the benefits of mobile code technologies and architectures in the network management application domain.
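For orientation, one paradigm in such taxonomies, code on demand, is directly supported by Java's dynamic class loading: a host fetches behavior from a remote site and binds it to local data. A minimal sketch follows; the server URL and class name are placeholders of our own, not artifacts from the tutorial.

    import java.net.URL;
    import java.net.URLClassLoader;

    public class CodeOnDemand {
        public static void main(String[] args) throws Exception {
            // Placeholder location of remotely hosted classes.
            URL remoteCode = new URL("http://code-server.example.com/components/");
            try (URLClassLoader loader = new URLClassLoader(new URL[] { remoteCode })) {
                // Placeholder class name; the downloaded class is instantiated
                // and executed locally, against local resources.
                Class<?> cls = loader.loadClass("com.example.FilterComponent");
                Runnable component = (Runnable) cls.getDeclaredConstructor().newInstance();
                component.run();
            }
        }
    }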
|
|
|
Fault tolerance via diversity against design faults (tutorial session): design principles and reliability assessment |
| |
Bev Littlewood,
Lorenzo Strigini
|
|
Page: 835 |
|
doi>10.1145/337180.337864 |
|
Full text: PDF
|
|
Research results indicate that (as usual in software engineering) these questions can only be answered with reference to each specific application context and that diversity is no “silver bullet”. But diversity is an attractive option, made ...
Research results indicate that (as usual in software engineering) these questions can only be answered with reference to each specific application context, and that diversity is no “silver bullet”. But diversity is an attractive option, made more interesting by current trends such as the preference for COTS items, and it is important for practitioners to go beyond the summary opinions and misunderstandings that surround it.

This tutorial is designed for people involved in system design, acceptance, or certification, especially in companies with high dependability requirements or with plans to improve on current levels to move into more demanding markets. It is also appropriate for researchers in software engineering wishing to obtain an up-to-date view of knowledge in this area.

This tutorial describes:
- the motivations behind the use of software fault tolerance, and thus the circumstances in which it should be considered as a possible choice;
- what design schemes one may adopt, and which issues a designer needs to be aware of, for effective application. We present both examples of industrial use and explanations of the important design choices and trade-offs. In this part, we cover the widely published solutions of N-version programming and recovery blocks, but also describe the various options available to a designer, interesting specific solutions adopted in the railway and aviation industries, and schemes for application to safety systems. We discuss the factors that may determine which scheme to adopt and the design of adjudication between conflicting results (see the voting sketch following this list);
- “what one should really believe” about the effectiveness of software fault tolerance in improving reliability, beyond the controversy and the misunderstandings surrounding it. We give a picture, assembled from more than 10 years of research, of what evidence has really been produced for and against software diversity. We explain the weaknesses of the extreme opinions voiced for and against software fault tolerance, and discuss the criteria that should affect practical decisions about using it, about how to improve its effectiveness through appropriate decisions in developing alternate versions of software components, and about its value for system acceptance.
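As referenced in the design-schemes item above, adjudication between conflicting results is a central design choice in N-version programming. Below is a minimal majority-voting sketch in Java, our own illustration: in practice the versions would be independently developed implementations of one specification, not the trivial stand-ins shown here.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Function;

    public class NVersionVoter {
        // Runs every version on the same input and returns a result only if a
        // strict majority of versions agree on it.
        static <I, O> Optional<O> majority(List<Function<I, O>> versions, I input) {
            Map<O, Integer> tally = new HashMap<>();
            for (Function<I, O> v : versions) {
                tally.merge(v.apply(input), 1, Integer::sum);
            }
            return tally.entrySet().stream()
                    .filter(e -> e.getValue() > versions.size() / 2)
                    .map(Map.Entry::getKey)
                    .findFirst();
        }

        public static void main(String[] args) {
            // Three stand-in "diverse versions" of a rounded square root.
            List<Function<Double, Long>> versions = List.of(
                    x -> Math.round(Math.sqrt(x)),                // version A
                    x -> (long) Math.floor(Math.sqrt(x) + 0.5),   // version B
                    x -> x < 0 ? -1L : Math.round(Math.sqrt(x))   // version C
            );
            System.out.println(majority(versions, 16.0)); // Optional[4]
        }
    }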
|
|
|
Improving software inspections by using reading techniques (tutorial session) |
| |
Victor Basili,
Oliver Laitenberger,
Forrest Shull,
Ioana Rus
|
|
Page: 836 |
|
doi>10.1145/337180.337865 |
|
Full text: PDF
|
|
|