E-book, English, 420 pages
Mistrik / Bahsoon / Eeles: Relating System Quality and Software Architecture
1st edition, 2014
ISBN: 978-0-12-417168-8
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark
Relating System Quality and Software Architecture collects state-of-the-art knowledge on how to intertwine software quality requirements with software architecture and on how quality attributes are exhibited by the architecture of the system. Contributions from leading researchers and industry evangelists detail the techniques required to achieve quality management in software architecting and the best ways to apply these techniques effectively in various application domains, especially in cloud, mobile, and ultra-large-scale/Internet-scale architecture. Taken together, these approaches show how to assess the value of total quality management in a software development process, with an emphasis on architecture. The book explains how to improve system quality with a focus on attributes such as usability, maintainability, flexibility, reliability, reusability, agility, interoperability, performance, and more. It discusses the importance of clear requirements, describes patterns and tradeoffs that can influence quality, and presents metrics for quality assessment and overall system analysis. The last section of the book draws on practical experience and evidence to look ahead at the challenges faced by organizations in capturing and realizing quality requirements, and explores the basis of future work in this area.
- Explains how design decisions and method selection influence overall system quality, along with lessons learned from theories and frameworks on architectural quality
- Shows how to align enterprise, system, and software architecture for total quality
- Includes case studies, experiments, empirical validation, and systematic comparisons with other approaches already in practice
Authors/Editors
Further Information & Material
1;Front Cover;1
2;Relating System Quality and Software Architecture;4
3;Copyright;5
4;Contents;6
5;Acknowledgements;16
6;About the Editors;18
7;List of Contributors;22
8;Foreword by Bill Curtis: Managing Systems Qualities through Architecture;26
8.1;About the Author;27
9;Foreword by Richard Mark Soley: Software Quality Is Still a Problem;28
9.1;Quality Testing in Software;28
9.2;Enter Automated Quality Testing;29
9.3;Whither Automatic Software Quality Evaluation?;29
9.4;Architecture Intertwined with Quality;30
9.5;About the Author;30
10;Preface;32
10.1;Part 1: Human-centric Evaluation for System Qualities and Software Architecture;34
10.2;Part 2: Analysis, Monitoring, and Control of Software Architecture for System Qualities;36
10.3;Part 3: Domain-specific Software Architecture and Software Qualities;38
11;Chapter 1: Relating System Quality and Software Architecture: Foundations and Approaches;42
11.1;Introduction;42
11.1.1;Quality;42
11.1.2;Architecture;43
11.1.3;System;43
11.1.4;Architectural scope;43
11.1.5;System quality and software quality;43
11.2;1.1. Quality Attributes;44
11.3;1.2. State of the Practice;46
11.3.1;1.2.1. Lifecycle approaches;46
11.3.1.1;1.2.1.1. Waterfall;46
11.3.1.2;1.2.1.2. Incremental;47
11.3.1.3;1.2.1.3. Iterative;47
11.3.1.4;1.2.1.4. Agile;49
11.3.2;1.2.2. Defining requirements;50
11.3.3;1.2.3. Defining the architecture;50
11.3.3.1;1.2.3.1. Documenting an architecture;51
11.3.4;1.2.4. Assessing an architecture;52
11.3.4.1;1.2.4.1. Quantitative versus qualitative approaches;52
11.3.4.2;1.2.4.2. Scenario-based evaluation;52
11.3.4.3;1.2.4.3. Experience-based evaluation;53
11.4;1.3. State of the Art;53
11.4.1;1.3.1. Loose coupling;53
11.4.2;1.3.2. Designing for reuse;54
11.4.3;1.3.3. Quality-centric design;54
11.4.4;1.3.4. Lifecycle approaches;54
11.4.5;1.3.5. Architecture representation;56
11.4.6;1.3.6. Qualities at runtime through self-adaptation;57
11.4.7;1.3.7. A value-driven perspective to architecting quality;58
11.5;References;60
12;Part I: Human-Centric Evaluation for Systems Qualities and Software Architecture;62
12.1;Chapter 2: Exploring How the Attribute Driven Design Method Is Perceived;64
12.1.1;Introduction;64
12.1.2;2.1. Background;65
12.1.2.1;2.1.1. ADD method;65
12.1.2.2;2.1.2. Technology acceptance model;67
12.1.3;2.2. The Empirical Study;68
12.1.3.1;2.2.1. Research questions;68
12.1.3.2;2.2.2. Experiment design and study variables;68
12.1.3.3;2.2.3. Participants and training;69
12.1.3.4;2.2.4. The architecting project;70
12.1.3.5;2.2.5. Data collection;70
12.1.4;2.3. Results;71
12.1.4.1;2.3.1. Questionnaire reliability;71
12.1.4.2;2.3.2. Descriptive statistics;71
12.1.4.2.1;2.3.2.1. Usefulness of ADD method;71
12.1.4.2.2;2.3.2.2. Ease of use of ADD method;72
12.1.4.2.3;2.3.2.3. Willingness of use;72
12.1.4.3;2.3.3. Hypotheses tests;72
12.1.5;2.4. Discussion;73
12.1.5.1;2.4.1. ADD issues faced by subjects;73
12.1.5.1.1;2.4.1.1. Team workload division and assignment;74
12.1.5.1.2;2.4.1.2. No consensus in terminology;74
12.1.5.1.3;2.4.1.3. ADD first iteration;74
12.1.5.1.4;2.4.1.4. Mapping quality attributes to tactics, and tactics to patterns;75
12.1.5.2;2.4.2. Analysis of the results;76
12.1.5.3;2.4.3. Lessons learned;77
12.1.5.4;2.4.4. Threats to validity;78
12.1.6;2.5. Conclusions and Further Work;78
12.1.7;References;79
12.2;Chapter 3: Harmonizing the Quality View of Stakeholders;82
12.2.1;Introduction;82
12.2.2;3.1. Adopted Concepts of the UFO;83
12.2.2.1;3.1.1. Selection of the Foundational Ontology;86
12.2.3;3.2. Assessment and Related Concepts;86
12.2.3.1;3.2.1. Specification-level concepts;87
12.2.3.2;3.2.2. Execution-level concepts;90
12.2.3.3;3.2.3. State of the Art: Addressing basic quality-related concepts;91
12.2.4;3.3. The Harmonization Process;92
12.2.4.1;3.3.1. Quality Subjects' positions in the harmonization process;92
12.2.4.2;3.3.2. Process definition and harmonization levels;93
12.2.4.3;3.3.3. Running example;94
12.2.4.4;3.3.4. View harmonization process;94
12.2.4.4.1;3.3.4.1. Stage 1: Harmonizing artifacts;94
12.2.4.4.1.1;3.3.4.1.1. Artifact harmonization example;95
12.2.4.4.2;3.3.4.2. Stage 2: Harmonizing property types;95
12.2.4.4.2.1;3.3.4.2.1. Example of harmonizing property types;95
12.2.4.4.3;3.3.4.3. Stage 3: Aligning quality views;96
12.2.4.4.3.1;3.3.4.3.1. Quality view alignment example;96
12.2.4.5;3.3.5. Quality harmonization process;97
12.2.4.5.1;3.3.5.1. Substitution artifacts;97
12.2.4.5.2;3.3.5.2. Rank-oriented and property-oriented harmonization. Expected property state;98
12.2.4.5.3;3.3.5.3. Stage 1: Producing an initial example property state;98
12.2.4.5.3.1;3.3.5.3.1. Example for producing the initial example property state;99
12.2.4.5.4;3.3.5.4. Stage 2: Executing initial assessment and deciding on a negotiation;99
12.2.4.5.4.1;3.3.5.4.1. Example of a negotiation decision;100
12.2.4.5.5;3.3.5.5. Stage 3: Performing negotiations;100
12.2.4.5.5.1;3.3.5.5.1. Example of negotiations;100
12.2.4.6;3.3.6. State of the art: Addressing harmonization process activities;101
12.2.4.6.1;3.3.6.1. Addressing organization sides;101
12.2.4.6.2;3.3.6.2. Addressing quality subjects;101
12.2.4.6.3;3.3.6.3. Generic process-based techniques;102
12.2.4.6.4;3.3.6.4. Addressing harmonization activities;102
12.2.5;3.4. Practical Relevance;103
12.2.5.1;3.4.1. Empirical studies;103
12.2.5.1.1;3.4.1.1. Conducting interviews;104
12.2.5.1.2;3.4.1.2. Postmortem analysis;104
12.2.5.1.3;3.4.1.3. Data processing;104
12.2.5.1.4;3.4.1.4. Soundness factors;105
12.2.5.2;3.4.2. Practical application;105
12.2.5.2.1;3.4.2.1. The QuASE process;105
12.2.5.2.2;3.4.2.2. QuOntology and QuIRepository;106
12.2.5.2.3;3.4.2.3. Elicitation stage;106
12.2.6;3.5. Conclusions and Future Research Directions;107
12.2.6.1;3.5.1. Basic conclusions;107
12.2.6.2;3.5.2. Future research and implementation directions;108
12.2.7;Acknowledgment;108
12.2.8;References;109
12.3;Chapter 4: Optimizing Functional and Quality Requirements According to Stakeholders' Goals;116
12.3.1;Introduction;116
12.3.2;4.1. Smart Grid;118
12.3.2.1;4.1.1. Description of smart grids;118
12.3.2.2;4.1.2. Functional requirements;120
12.3.2.3;4.1.3. Security and privacy requirements;121
12.3.2.4;4.1.4. Performance requirements;121
12.3.3;4.2. Background, Concepts, and Notations;123
12.3.3.1;4.2.1. The i* framework;123
12.3.3.2;4.2.2. Problem-oriented requirements engineering;123
12.3.3.3;4.2.3. Valuation of requirements;125
12.3.3.4;4.2.4. Optimization;127
12.3.4;4.3. Preparatory Phases for QuaRO;128
12.3.4.1;4.3.1. Understanding the purpose of the system;128
12.3.4.2;4.3.2. Understanding the problem;130
12.3.5;4.4. Method for Detecting Candidates for Requirements Interactions;130
12.3.5.1;4.4.1. Initialization phase: Initial setup;133
12.3.5.1.1;4.4.1.1. Set up initial tables;133
12.3.5.1.2;4.4.1.2. Set up life cycle;135
12.3.5.2;4.4.2. Phase 1: Treating case 1;135
12.3.5.3;4.4.3. Phase 2: Treating case 2;136
12.3.5.4;4.4.4. Phase 3: Treating case 3;137
12.3.5.5;4.4.5. Phase 4: Treating case 4;138
12.3.6;4.5. Method for Generation of Alternatives;139
12.3.6.1;4.5.1. Relaxation template for security;139
12.3.6.2;4.5.2. Relaxation template for performance;143
12.3.7;4.6. Valuation of Requirements;145
12.3.8;4.7. Optimization of Requirements;152
12.3.9;4.8. Related Work;156
12.3.10;4.9. Conclusions and Perspectives;157
12.3.11;Acknowledgment;158
12.3.12;References;159
13;Part II: Analysis, Monitoring, and Control of Software Architecture for System Qualities;162
13.1;Chapter 5: HASARD: A Model-Based Method for Quality Analysis of Software Architecture;164
13.1.1;Introduction;164
13.1.1.1;Motivation;164
13.1.1.2;Related works and open problems;165
13.1.1.2.1;Software quality models;165
13.1.1.2.2;Quality analysis of software architecture;166
13.1.1.2.3;Hazard analysis methods and techniques;167
13.1.1.3;Overview of the proposed approach;168
13.1.1.4;Organization of the chapter;169
13.1.2;5.1. Hazard Analysis of Software Architectural Designs;170
13.1.2.1;5.1.1. Identification of design hazards;170
13.1.2.2;5.1.2. Cause-consequence analysis;171
13.1.3;5.2. Graphical Modeling of Software Quality;174
13.1.3.1;5.2.1. Graphic notation of quality models;174
13.1.3.2;5.2.2. Construction of a quality model;176
13.1.4;5.3. Reasoning About Software Quality;177
13.1.4.1;5.3.1. Contribution factors of a quality attribute;177
13.1.4.2;5.3.2. Impacts of design decisions;179
13.1.4.3;5.3.3. Quality risks;180
13.1.4.4;5.3.4. Relationships between quality issues;181
13.1.4.5;5.3.5. Trade-off points;183
13.1.5;5.4. Support Tool SQUARE;184
13.1.6;5.5. Case Study;186
13.1.6.1;5.5.1. Research questions;186
13.1.6.2;5.5.2. The object system;186
13.1.6.3;5.5.3. Process of the case study;187
13.1.6.4;5.5.4. Main results of quality analysis;188
13.1.6.5;5.5.5. Conclusions of the case study;190
13.1.7;5.6. Conclusion;192
13.1.7.1;5.6.1. Comparison with related work;192
13.1.7.1.1;5.6.1.1. Software quality models;192
13.1.7.1.2;5.6.1.2. Hazard analysis;193
13.1.7.1.3;5.6.1.3. Evaluation and assessment of software architecture;194
13.1.7.2;5.6.2. Limitations and future work;194
13.1.8;Acknowledgments;195
13.1.9;References;195
13.2;Chapter 6: Lightweight Evaluation of Software Architecture Decisions;198
13.2.1;Introduction;198
13.2.2;6.1. Architecture Evaluation Methods;199
13.2.3;6.2. Architecture Decisions;201
13.2.3.1;6.2.1. Decision forces;204
13.2.4;6.3. Decision-Centric Architecture Review;206
13.2.4.1;6.3.1. Participants;206
13.2.4.2;6.3.2. Preparation;207
13.2.4.3;6.3.3. DCAR method presentation;208
13.2.4.4;6.3.4. Business drivers and domain overview presentation;208
13.2.4.5;6.3.5. Architecture presentation;208
13.2.4.6;6.3.6. Decisions and forces completion;209
13.2.4.7;6.3.7. Decision prioritization;209
13.2.4.8;6.3.8. Decision documentation;210
13.2.4.9;6.3.9. Decision evaluation;211
13.2.4.10;6.3.10. Retrospective;211
13.2.4.11;6.3.11. Reporting the results;212
13.2.4.12;6.3.12. Schedule;212
13.2.5;6.4. Industrial Experiences;213
13.2.5.1;6.4.1. Industrial case studies;213
13.2.5.2;6.4.2. Additional observations made in our own projects;215
13.2.6;6.5. Integrating DCAR with Scrum;216
13.2.6.1;6.5.1. Up-front architecture approach;216
13.2.6.2;6.5.2. In sprints approach;217
13.2.7;6.6. Conclusions;218
13.2.8;Acknowledgments;218
13.2.9;References;219
13.3;Chapter 7: A Rule-Based Approach to Architecture Conformance Checking as a Quality Management Measure;222
13.3.1;Introduction;222
13.3.2;7.1. Challenges in Architectural Conformance Checking;223
13.3.3;7.2. Related Work;224
13.3.4;7.3. A Formal Framework for Architectural Conformance Checking;226
13.3.4.1;7.3.1. Formal representation of component-based systems;227
13.3.4.2;7.3.2. Formal representation of models;230
13.3.4.2.1;7.3.2.1. Classification of models;230
13.3.4.2.2;7.3.2.2. Transformation of models;231
13.3.4.3;7.3.3. Conformance of models;231
13.3.4.4;7.3.4. Prototypical implementation;232
13.3.5;7.4. Application of the Proposed Approach;235
13.3.5.1;7.4.1. The common component modeling example;235
13.3.5.1.1;7.4.1.1. Architectural aspects of CoCoME;235
13.3.5.1.2;7.4.1.2. The architectural rules of CoCoME;239
13.3.5.1.3;7.4.1.3. Results of checking the architectural rules of CoCoME;240
13.3.5.2;7.4.2. Further case studies;241
13.3.5.2.1;7.4.2.1. Checking layers;241
13.3.5.2.2;7.4.2.2. Checking domain-specific reference architectures;242
13.3.5.3;7.4.3. Results;242
13.3.6;7.5. Conclusion;243
13.3.6.1;7.5.1. Contribution and limitations;243
13.3.6.2;7.5.2. Future work;245
13.3.6.3;7.5.3. Summary;246
13.3.7;References;246
13.4;Chapter 8: Dashboards for Continuous Monitoring of Quality for Software Product under Development;250
13.4.1;Introduction;250
13.4.2;8.1. Developing Large Software Products Using Agile and Lean Principles;252
13.4.3;8.2. Elements of Successful Dashboards;253
13.4.3.1;8.2.1. Standardization;253
13.4.3.2;8.2.2. Focus on early warning;255
13.4.3.3;8.2.3. Focus on triggering decisions and monitoring their implementation;256
13.4.3.4;8.2.4. Succinct visualization;256
13.4.3.5;8.2.5. Assuring information quality;257
13.4.4;8.3. Industrial Dashboards;258
13.4.4.1;8.3.1. Companies;258
13.4.4.1.1;8.3.1.1. Ericsson;259
13.4.4.1.2;8.3.1.2. Volvo Car Corporation;259
13.4.4.1.3;8.3.1.3. Saab electronic defense systems;259
13.4.4.2;8.3.2. Dashboard at Ericsson;260
13.4.4.3;8.3.3. Dashboard at VCC;262
13.4.4.4;8.3.4. Dashboard at Saab electronic defense systems;264
13.4.5;8.4. Recommendations for Other Companies;266
13.4.5.1;8.4.1. Recommendations for constructing the dashboards;266
13.4.5.2;8.4.2. Recommendations for choosing indicators and measures;267
13.4.6;8.5. Further Reading;267
13.4.7;8.6. Conclusions;268
13.4.8;References;269
14;Part III: Domain-Specific Software Architecture and Software Qualities;272
14.1;Chapter 9: Achieving Quality in Customer-Configurable Products;274
14.1.1;Introduction;274
14.1.1.1;Outline of the chapter;275
14.1.2;9.1. The Flight Management System Example;276
14.1.3;9.2. Theoretical Framework;277
14.1.3.1;9.2.1. Configurable models;277
14.1.3.1.1;9.2.1.1. System views;277
14.1.3.1.2;9.2.1.2. Variability within the views;278
14.1.3.1.3;9.2.1.3. Variability view;278
14.1.3.1.4;9.2.1.4. Feature configuration, products, and resolution of model variance points;279
14.1.3.2;9.2.2. Quality assurance of configurable systems;280
14.1.3.2.1;9.2.2.1. Product-centered approaches;281
14.1.3.2.2;9.2.2.2. Product-line-centered approaches;281
14.1.4;9.3. Model-Based Product Line Testing;282
14.1.4.1;9.3.1. What is MBT?;282
14.1.4.2;9.3.2. Test model for the flight management system;283
14.1.4.3;9.3.3. Applying MBT;283
14.1.4.4;9.3.4. Product-centered MBT;285
14.1.4.5;9.3.5. Product-line-centered MBT;286
14.1.4.6;9.3.6. Comparison;287
14.1.5;9.4. Model-Based deployment;288
14.1.5.1;9.4.1. What is deployment?;288
14.1.5.2;9.4.2. Spatial and temporal deployment;289
14.1.5.3;9.4.3. Application and resource models for the flight management system;290
14.1.5.4;9.4.4. Product-centered software deployment;292
14.1.5.4.1;9.4.4.1. Step 1: Deriving the product models;292
14.1.5.4.2;9.4.4.2. Step 2: Evaluating the deployment candidates;292
14.1.5.4.3;9.4.4.3. Step 3: Aggregation of results;294
14.1.5.5;9.4.5. Product-line-centered software deployment;294
14.1.5.5.1;9.4.5.1. Reuse of previously computed allocations;295
14.1.5.5.2;9.4.5.2. Maximum approach;296
14.1.6;9.5. Related Work;296
14.1.6.1;9.5.1. General product line approaches;296
14.1.6.2;9.5.2. Product line testing;297
14.1.6.3;9.5.3. Deployment;298
14.1.7;9.6. Conclusion;298
14.1.7.1;9.6.1. Model-based product line testing;298
14.1.7.2;9.6.2. Model-based deployment for product lines;299
14.1.8;References;299
14.2;Chapter 10: Archample-Architectural Analysis Approach for Multiple Product Line Engineering;304
14.2.1;Introduction;304
14.2.2;10.1. Background;305
14.2.2.1;10.1.1. Multiple product line engineering;305
14.2.2.2;10.1.2. Software architecture analysis methods;306
14.2.3;10.2. Case Description;307
14.2.4;10.3. MPL Architecture Viewpoints;308
14.2.5;10.4. Archample Method;311
14.2.5.1;10.4.1. Preparation phase;313
14.2.5.2;10.4.2. Selection of feasible MPL decomposition;313
14.2.5.3;10.4.3. Evaluation of selected MPL design alternative;314
14.2.5.4;10.4.4. Reporting and workshop;315
14.2.6;10.5. Applying Archample Within an Industrial Context;316
14.2.6.1;10.5.1. Preparation phase;316
14.2.6.2;10.5.2. Selection of feasible MPL decomposition;316
14.2.6.3;10.5.3. Evaluation of the selected MPL design alternative;322
14.2.6.4;10.5.4. Reporting and workshop;323
14.2.7;10.6. Related Work;323
14.2.8;10.7. Conclusion;325
14.2.9;Acknowledgments;325
14.2.10;References;325
14.3;Chapter 11: Quality Attributes in Medical Planning and Simulation Systems;328
14.3.1;Introduction;328
14.3.2;11.1. Chapter Contributions;330
14.3.3;11.2. Background and Related Work;331
14.3.3.1;11.2.1. MPS systems;331
14.3.3.2;11.2.2. Software development for MPS systems;331
14.3.3.3;11.2.3. Quality attributes of MPS;332
14.3.4;11.3. Challenges Related to Achieving Quality Attributes in MPS Systems;332
14.3.5;11.4. Quality Attributes in MPS Systems;334
14.3.5.1;11.4.1. Performance;334
14.3.5.2;11.4.2. Usability;335
14.3.5.3;11.4.3. Model correctness;335
14.3.6;11.5. Handling Quality Attributes at the Architecture Stage of MPS Systems;336
14.3.6.1;11.5.1. Architectural stakeholders;336
14.3.6.2;11.5.2. MPS architecture documentation;337
14.3.6.3;11.5.3. Architecture process;338
14.3.7;11.6. Conclusions;340
14.3.8;References;340
14.4;Chapter 12: Addressing Usability Requirements in Mobile Software Development;344
14.4.1;Introduction;344
14.4.2;12.1. Related Work;345
14.4.3;12.2. Usability Mechanisms for Mobile Applications;346
14.4.4;12.3. System Status Feedback;348
14.4.4.1;12.3.1. SSF generic component responsibilities;348
14.4.4.2;12.3.2. SSF architectural component responsibilities;351
14.4.5;12.4. User Preferences;351
14.4.5.1;12.4.1. User Preferences generic component responsibilities;353
14.4.5.2;12.4.2. User Preferences architectural component responsibilities;355
14.4.6;12.5. A Mobile Traffic Complaint System;356
14.4.6.1;12.5.1. Usability requirements;358
14.4.6.2;12.5.2. Impact on the software architecture;358
14.4.6.3;12.5.3. Usability and interactions between entities;361
14.4.7;12.6. Discussion;363
14.4.8;12.7. Conclusions;363
14.4.9;References;364
14.5;Chapter 13: Understanding Quality Requirements Engineering in Contract-Based Projects from the Perspective of Software Arc ...;366
14.5.1;Introduction;366
14.5.2;13.1. Motivation;367
14.5.3;13.2. Background on the Context of Contract-Based Systems Delivery;368
14.5.4;13.3. Empirical Studies on the Software Architecture Perspective on QRs;370
14.5.5;13.4. Research Process;370
14.5.5.1;13.4.1. Research objective and research plan;370
14.5.5.2;13.4.2. The case study participants;371
14.5.5.3;13.4.3. The research instrument for data collection;373
14.5.5.4;13.4.4. Data analysis strategy;374
14.5.6;13.5. Results;375
14.5.6.1;13.5.1. RQ1: How do the software architects understand their role with respect to engineering QRs?;375
14.5.6.2;13.5.2. RQ2: Do SAs and RE staff use different terminology for QRs?;378
14.5.6.3;13.5.3. RQ3: How do QRs get elicited?;379
14.5.6.4;13.5.4. RQ4: How do QRs get documented?;381
14.5.6.5;13.5.5. RQ5: How do QRs get prioritized?;381
14.5.6.6;13.5.6. RQ6: How do QRs get quantified, if at all?;383
14.5.6.7;13.5.7. RQ7: How do QRs get validated?;385
14.5.6.8;13.5.8. RQ8: How do QRs get negotiated?;386
14.5.6.9;13.5.9. RQ9: What role does the contract play in the way SAs cope with QRs?;387
14.5.7;13.6. Discussion;389
14.5.7.1;13.6.1. Comparing and contrasting the results with prior research;389
14.5.7.1.1;13.6.1.1. Role of SAs in engineering of QR;389
14.5.7.1.2;13.6.1.2. QRs vocabulary of SAs and RE staff;390
14.5.7.1.3;13.6.1.3. QR elicitation;390
14.5.7.1.4;13.6.1.4. QRs documentation;390
14.5.7.1.5;13.6.1.5. QRs prioritization;390
14.5.7.1.6;13.6.1.6. QRs quantification;390
14.5.7.1.7;13.6.1.7. QRs validation;391
14.5.7.1.8;13.6.1.8. QRs negotiation;391
14.5.7.1.9;13.6.1.9. Contract's role in SAs' coping strategies;392
14.5.7.2;13.6.2. Implications for practice;392
14.5.7.3;13.6.3. Implications for research;393
14.5.8;13.7. Limitations of the Study;394
14.5.9;13.8. Conclusions;395
14.5.10;Acknowledgments;396
14.5.11;References;396
15;Glossary;400
16;Author Index;402
17;Subject Index;414
Foreword by Richard Mark Soley: Software Quality Is Still a Problem
Richard Mark Soley, Ph.D., Chairman and Chief Executive Officer, Object Management Group, Lexington, Massachusetts, U.S.A.
Since the dawn of the computing age, software quality has been an issue for developers and end users alike. I have never met a software user—whether mainframe, minicomputer, personal computer, or personal device—who is happy with the level of quality of that device. From requirements definition, to user interface, to likely use case, to errors and failures, software infuriates people every minute of every day.
Worse, software failures have had life-changing effects on people. The well-documented Therac-25 user interface failure literally caused deaths. The initial Ariane-5 rocket launch failure was in software. The Mars Climate Orbiter crash landing was caused by a disagreement between two development teams on measurement units. Banking, trading, and other financial services failures caused by software failures surround us; no one is surprised when systems fail, and the (unfortunately generally correct) assumption is that software was the culprit.
From the point of view of the standardizer and the methodologist, the most difficult thing to accept is the fact that methodologies for software quality improvement are well known. From academic perches as disparate as Carnegie Mellon University and Queen's University (Prof. David Parnas) to the Eidgenössische Technische Hochschule Zürich (Prof. Bertrand Meyer), detailed research and well-written papers have appeared for decades, detailing how to write better-quality software. The Software Engineering Institute, founded some 30 years ago by the United States Department of Defense, has focused precisely on the problem of developing, delivering, and maintaining better software, through the development, implementation, and assessment of software development methodologies (most importantly the Capability Maturity Model and later updates).
Still, trades go awry, distribution networks falter, companies fail, and energy goes undelivered because of software quality issues. Worse, correctable problems such as common security weaknesses (most infamously the buffer overflow weakness) are written every day into security-sensitive software.
Perhaps methodology isn't the only answer. It's interesting to note that, in manufacturing fields outside of the software realm, there is the concept of acceptance of parts. When Boeing and Airbus build aircraft, they do it with parts built not only by their own internal supply chains, but in great (and increasing) part, by including parts built by others, gathered across international boundaries and composed into large, complex systems. That explains the old saw that aircraft are a million parts, flying in close formation! The reality is that close formation is what keeps us warm and dry, miles above ground; and that close formation comes from parts that fit together well, that work together well, that can be maintained and overhauled together well. And that requires aircraft manufacturers to test the parts when they arrive in the factory and before they are integrated into the airframe. Sure, there's a methodology for building better parts—those methodologies even have well-accepted names, like “total quality management,” “lean manufacturing,” and “Six Sigma.” But those methodologies do not obviate the need to test parts (at least statistically) when they enter the factory.
Quality Testing in Software
Unfortunately, that ethos never made it into the software development field. Although you will find regression testing and unit testing, and specialized unit testing tools like JUnit in the Java world, there has never been a widely accepted practice of software part testing based solely on the (automated) examination of software itself. My own background in the software business included a (non-automated) examination phase: the Multics Review Board quality-testing requirement for the inclusion of new code into the Honeywell Multics operating system 35 years ago measurably and visibly increased the overall quality of the Multics code base, and it showed that examination, even human examination, was of value to both development organizations and systems users. The cost, however, was rather high and has only been considered acceptable for systems with very high failure impacts (for example, in the medical and defense fields).
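To make the contrast with human examination concrete, the following is a minimal, purely illustrative sketch of automated testing of a single software "part" with JUnit 5, the tool named above. The FareCalculator class, its API, and the expected values are invented for illustration and are not taken from the book.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Hypothetical "software part" under test.
class FareCalculator {
    private final double baseFare;
    private final double perKm;

    FareCalculator(double baseFare, double perKm) {
        this.baseFare = baseFare;
        this.perKm = perKm;
    }

    double fareFor(double km) {
        if (km < 0) throw new IllegalArgumentException("distance must be non-negative");
        return baseFare + perKm * km;
    }
}

// Automated acceptance checks that run identically on every build, with no human judgment involved.
class FareCalculatorTest {
    @Test
    void baseFareIsChargedWhenDistanceIsZero() {
        assertEquals(2.50, new FareCalculator(2.50, 0.40).fareFor(0.0), 1e-9);
    }

    @Test
    void fareGrowsLinearlyWithDistance() {
        assertEquals(4.50, new FareCalculator(2.50, 0.40).fareFor(5.0), 1e-9); // 2.50 + 5 * 0.40
    }

    @Test
    void negativeDistanceIsRejected() {
        assertThrows(IllegalArgumentException.class, () -> new FareCalculator(2.50, 0.40).fareFor(-1.0));
    }
}
```

Such tests are cheap to rerun, which is precisely the property the foreword argues human review lacks.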
When Boeing and Airbus test parts, they certainly do some hand inspection, but there is far more automated inspection. After all, one can't see inside the parts without X-ray and NMR machines, and one can't test metal parts to destruction (to determine tensile strength, for example) without automation. That same automation should and must be applied in testing software—increasing the objectivity of acceptance tests, increasing the likelihood that those tests will be applied (due to lower cost), and eventually increasing the quality of the software itself.
Enter Automated Quality Testing
In late 2009, the Object Management Group (OMG) and the Software Engineering Institute (SEI) came together to create the Consortium for IT Software Quality (CISQ). The two groups realized the need to find another approach to increase software quality, since
• Methodologies to increase software process quality (such as CMMI) had had insufficient impact on their own in increasing software quality.
• Software inspection methodologies based on human examination of code tend to be error-prone, subjective, inconsistent, and generally too expensive to be widely deployed.
• Existing automated code evaluation systems had no consistent (standardized) set of metrics, resulting in inconsistent results and very limited adoption in the marketplace.
The need for the software development industry to develop and widely adopt automated quality tests was absolutely obvious, and the Consortium immediately set upon a process (based on OMG's broad and deep experience in standardization and SEI's broad and deep experience in assessment) to define automatable software quality standard metrics.
Whither Automatic Software Quality Evaluation?
The first standard that CISQ was able to bring through the OMG process, arriving at the end of 2012, featured a standard, consistent, reliable, and accurate complexity metric for code, in essence an update to the Function Point concept. Function points were first defined in 1979, and by 2012 there were five ISO standards for counting them, none of which was absolutely reliable and repeatable; that is, individual (human) function-point counters could come up with different results when counting the same piece of software twice! CISQ's Automated Function Point (AFP) standard features a fully automatable standard that produces absolutely consistent results from one run to the next.
That doesn't sound like much of an accomplishment, until one realizes that one can't compute a defect, error, or other size-dependent metric without an agreed sizing strategy. AFP provides that strategy, and in a consistent, standardized fashion that can be fully automated, making it inexpensive and repeatable.
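As a back-of-the-envelope illustration of why an agreed sizing strategy matters, the sketch below normalizes a defect count by an AFP-style size measure. The defect counts and sizes are invented, and this is only a hedged example of the arithmetic, not the CISQ measurement procedure itself.

```java
// Minimal sketch: a size-dependent quality metric only becomes comparable
// across systems (or across runs) once the size denominator is measured consistently.
public class DefectDensity {

    // Defects per 1,000 automated function points (kAFP); figures are illustrative only.
    static double defectsPerKafp(int defects, int automatedFunctionPoints) {
        if (automatedFunctionPoints <= 0) {
            throw new IllegalArgumentException("size must be positive");
        }
        return 1000.0 * defects / automatedFunctionPoints;
    }

    public static void main(String[] args) {
        // Same raw defect count, very different sizes: only the normalized figures compare fairly.
        System.out.printf("System A: %.1f defects/kAFP%n", defectsPerKafp(120, 8_000)); // 15.0
        System.out.printf("System B: %.1f defects/kAFP%n", defectsPerKafp(120, 2_000)); // 60.0
    }
}
```

If the two systems were sized by inconsistent (human) counts, the comparison above would be meaningless; a repeatable, automated size measure is what makes the denominator trustworthy.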
In particular, how can one measure the quality of a software architecture without a baseline, without a complexity metric? AFP provides that baseline, and further quality metrics, under development by CISQ and expected to be standardized this year, provide the yardstick against which to measure software, again in a fully automatable fashion.
Is it simply lines of code that are being measured, or in fact entire software designs? Quality is in fact inextricably connected to architecture in several places: not only can poor software coding or modeling quality lead to poor usability and poor fitness for purpose, but poor software architecture can lead to a deep mismatch with the requirements that led to the development of the system in the first place.
Architecture Intertwined with Quality
Clearly software quality—in fact, system quality in general—is a fractal concept. Requirements can poorly quantify the needs of a software system; architectures and other artifacts can poorly outline the analysis and design against those requirements; implementation via coding or modeling can poorly execute the design artifacts; testing can poorly exercise an implementation; and even quotidian use can incorrectly take advantage of a well-implemented, well-tested design. Clearly, quality testing must take into account design artifacts as well as those of implementation.
Fortunately, architectural quality methodologies (and indeed quality metrics across the landscape of software development) are active areas of research, with promising approaches. Given my own predilections and the technical focus of OMG over the past 16 years, clearly modeling (of requirements, of design, of analysis, of implementation, and certainly of architecture) must be at the fore, and model- and rule-based approaches to measuring architectures are featured here. But the tome you are holding also includes a wealth of current research and understanding from measuring requirements design against customer needs to usability testing of completed systems. If the software industry—and that's every industry these days—is going to increase not only the underlying but also the perceived level of...