E-book, English, Volume 95, 342 pages
Series: Advances in Computers
Memon: Advances in Computers
1st edition, 2014
ISBN: 978-0-12-800324-4
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub Watermark
Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field.
- In-depth surveys and tutorials on new computer technology
- Well-known authors and researchers in the field
- Extensive bibliographies with most chapters
- Many of the volumes are devoted to single themes or subfields of computer science
Further Information & Material
1;Front Cover;1
2;Advances in Computers;4
3;Copyright;5
4;Contents;6
5;Preface;10
6;Chapter One: Automated Test Oracles: A Survey;14
6.1;1. Introduction;15
6.2;2. Test Oracles;17
6.3;3. Scope of the Survey and Review Protocol;20
6.4;4. The Test Oracle Process;22
6.5;5. Information Sources and Translations of Test Oracles;25
6.5.1;5.1. State-Based Specifications;25
6.5.2;5.2. Transition-Based Specifications;31
6.5.3;5.3. History-Based Specifications;32
6.5.4;5.4. Algebraic Specifications;35
6.5.5;5.5. Values from Other Versions;38
6.5.6;5.6. Results of Program Analysis;39
6.5.7;5.7. Machine Learning Models;40
6.5.8;5.8. Metamorphic Relations and Intrinsic Redundancy;42
6.6;6. Checkable Forms of Test Oracles;44
6.6.1;6.1. Program Code;45
6.6.1.1;6.1.1. Complete Test Cases;45
6.6.1.2;6.1.2. Test Templates;46
6.6.1.3;6.1.3. Sequences of Statements;46
6.6.1.4;6.1.4. Assertions and Runtime Monitors;47
6.6.1.5;6.1.5. General Test Code Generation;48
6.6.2;6.2. Expected Values;49
6.6.3;6.3. Executable Specifications;51
6.6.4;6.4. Machine Learning Models;53
6.7;7. Summary and Future Directions;54
6.8;References;55
7;Chapter Two: Automated Extraction of GUI Models for Testing;62
7.1;1. Introduction;63
7.2;2. Background;68
7.2.1;2.1. Testing and Modeling of GUIs;68
7.2.1.1;2.1.1. Graphical User Interface;68
7.2.1.2;2.1.2. GUI Testing;69
7.2.1.3;2.1.3. GUI Modeling;70
7.2.2;2.2. Software Test Automation;72
7.2.2.1;2.2.1. Model-Based Testing;73
7.3;3. Automated GUI Testing;74
7.3.1;3.1. Unit Testing Tools-Automating the Execution of Concrete GUI Test Cases;74
7.3.2;3.2. CR Tools-Automating the Creation of Concrete GUI Test Cases;75
7.3.3;3.3. Keywords and Action Words-Abstracting the Concrete GUI Test Cases;76
7.3.4;3.4. Model-Based GUI Testing-Automating the Creation of Abstract GUI Test Cases;76
7.3.4.1;3.4.1. Models for MBGT;77
7.3.4.2;3.4.2. Challenges in MBGT;78
7.3.4.3;3.4.3. Approaches for MBGT;79
7.3.5;3.5. Coverage and Effectiveness of GUI Testing;80
7.3.6;3.6. Test Suite Reduction and Prioritization for GUI Testing;82
7.4;4. Reverse Engineering and Specification Mining;83
7.4.1;4.1. Static Reverse Engineering;84
7.4.2;4.2. Dynamic Reverse Engineering;85
7.4.3;4.3. Combining Static and Dynamic Reverse Engineering;86
7.4.4;4.4. Reverse Engineering Models for Testing;86
7.4.4.1;4.4.1. Static Approaches for Testing;87
7.4.4.2;4.4.2. Dynamic Approaches for Testing;87
7.4.4.3;4.4.3. Hybrid Approaches for Testing;88
7.4.4.4;4.4.4. Challenges in Using Reverse Engineered Models for Testing;89
7.4.5;4.5. Reverse Engineering GUI Models;90
7.4.5.1;4.5.1. Static Approaches for Reverse Engineering of GUI Models;90
7.4.5.2;4.5.2. Dynamic Approaches for Reverse Engineering of GUI Models;91
7.4.5.3;4.5.3. Hybrid Approaches for Reverse Engineering of GUI Models;93
7.5;5. Using Extracted Models to Automate GUI Testing;93
7.5.1;5.1. Challenges in Using Extracted Models to Automate GUI Testing;94
7.5.1.1;5.1.1. Automated Test Oracles for GUI Testing;94
7.5.2;5.2. Approaches for Using Extracted Models to Automate GUI Testing;95
7.5.2.1;5.2.1. Memon et al.: GUI Ripping and Using Event-Based Graph Models for Automated GUI Testing;95
7.5.2.2;5.2.2. Campos et al.: Static Reverse Engineering of GUI Models for Usability Analysis and Testing;99
7.5.2.3;5.2.3. Paiva et al.: ReGUI Tool, Dynamic Reverse Engineering of GUI Models for Testing;100
7.5.2.4;5.2.4. Amalfitano et al.: Dynamic Reverse Engineering of RIA and Mobile Applications;102
7.5.2.5;5.2.5. Miao and Yang: Dynamic Reverse Engineering of GUI Test Automation Models;103
7.5.2.6;5.2.6. Aho et al.: Dynamic Reverse Engineering of State-Based GUI Models for Testing;105
7.5.2.7;5.2.7. Other Approaches Using Extracted Models for Automating GUI Testing;107
7.6;6. Conclusion and Discussion;109
7.6.1;6.1. Trends;110
7.6.2;6.2. Future;111
7.7;References;111
8;Chapter Three: Automated Test Oracles: State of the Art, Taxonomies, and Trends;126
8.1;1. Introduction;127
8.2;2. Background;129
8.2.1;2.1. Software Testing Concepts;130
8.2.2;2.2. Automated Software Testing;132
8.2.3;2.3. Test Oracles;134
8.2.3.1;2.3.1. The Oracle Problem;139
8.2.3.2;2.3.2. Trade-off on Test Oracles;140
8.3;3. Oracles Taxonomies;141
8.3.1;3.1. Generic Classifications;142
8.3.1.1;3.1.1. Pseudo-Oracles and Partial Oracles [81,82];142
8.3.1.2;3.1.2. Passive and Active Oracles;143
8.3.2;3.2. Specific Taxonomies;144
8.3.2.1;3.2.1. Source of Oracles [80];144
8.3.2.2;3.2.2. Classification by Automation Process [12,13,36];145
8.3.2.3;3.2.3. Oracle Categories According to Harman et al. [11];145
8.3.3;3.3. A Taxonomy by Oracle Information Characteristics;147
8.3.3.1;3.3.1. Specification-Based Oracles [1];148
8.3.3.1.1;3.3.1.1. Specification Location;148
8.3.3.1.2;3.3.1.2. Specification Paradigm;151
8.3.3.1.3;3.3.1.3. Temporal Specifications;153
8.3.3.2;3.3.2. MR-Based Oracles;154
8.3.3.3;3.3.3. ML-Based Oracles;156
8.3.3.4;3.3.4. Version-Based Oracles;157
8.4;4. A Quantitative Analysis and a Mapping of Studies;158
8.4.1;4.1. A Literature Review on Test Oracles;158
8.4.1.1;4.1.1. Study Selection;159
8.4.1.2;4.1.2. Study Classification;159
8.4.2;4.2. A Quantitative Analysis on Studies;160
8.4.2.1;4.2.1. Study Analysis per Area;161
8.4.2.2;4.2.2. SUT Analysis;163
8.4.2.3;4.2.3. Projects Analysis;166
8.4.2.4;4.2.4. Demographical Analysis;168
8.4.2.5;4.2.5. Publication Strategies;168
8.4.2.6;4.2.6. Prolific Researchers;171
8.4.3;4.3. Author's Collaboration;173
8.4.4;4.4. Surveys and Position Studies;181
8.4.5;4.5. Supporting Tools;183
8.4.6;4.6. Academic Efforts on Test Oracles (Ph.D. and Masters);183
8.5;5. Discussions;183
8.5.1;5.1. High Level of Tools and Specification Languages;186
8.5.2;5.2. Complexity of SUTs Outputs;191
8.5.3;5.3. Generalization of Test Oracle Strategies and Properties;193
8.5.4;5.4. Trends on Test Oracle;194
8.6;6. Final and Concluding Remarks;195
8.7;Acknowledgments;197
8.8;References;197
9;Chapter Four: Anti-Pattern Detection: Methods, Challenges, and Open Issues;214
9.1;1. Anti-Pattern: Definitions and Motivations;215
9.2;2. Methods for the Detection of Anti-Patterns;216
9.2.1;2.1. Blob;217
9.2.1.1;2.1.1. Definition;217
9.2.1.2;2.1.2. Detection Strategies;218
9.2.1.3;2.1.3. Analysis of the Detection Accuracy;220
9.2.2;2.2. Feature Envy;221
9.2.2.1;2.2.1. Definition;221
9.2.2.2;2.2.2. Detection Strategies;221
9.2.2.3;2.2.3. Analysis of the Detection Accuracy;223
9.2.3;2.3. Duplicate Code;223
9.2.3.1;2.3.1. Definition;223
9.2.3.2;2.3.2. Detection Strategies;224
9.2.3.3;2.3.3. Analysis of the Detection Accuracy;224
9.2.4;2.4. Refused Bequest;225
9.2.4.1;2.4.1. Definition;225
9.2.4.2;2.4.2. Detection Strategies;225
9.2.4.3;2.4.3. Analysis of the Detection Accuracy;226
9.2.5;2.5. Divergent Change;226
9.2.5.1;2.5.1. Definition;226
9.2.5.2;2.5.2. Detection Strategies;226
9.2.5.3;2.5.3. Analysis of the Detection Accuracy;227
9.2.6;2.6. Shotgun Surgery;227
9.2.6.1;2.6.1. Definition;227
9.2.6.2;2.6.2. Detection Strategies;227
9.2.6.3;2.6.3. Analysis of the Detection Accuracy;228
9.2.7;2.7. Parallel Inheritance Hierarchies;228
9.2.7.1;2.7.1. Definition;228
9.2.7.2;2.7.2. Detection Strategies;228
9.2.7.3;2.7.3. Analysis of the Detection Accuracy;229
9.2.8;2.8. Functional Decomposition;229
9.2.8.1;2.8.1. Definition;229
9.2.8.2;2.8.2. Detection Strategies;229
9.2.8.3;2.8.3. Analysis of the Detection Accuracy;230
9.2.9;2.9. Spaghetti Code;230
9.2.9.1;2.9.1. Definition;230
9.2.9.2;2.9.2. Detection Strategies;230
9.2.9.3;2.9.3. Analysis of the Detection Accuracy;231
9.2.10;2.10. Swiss Army Knife;231
9.2.10.1;2.10.1. Definition;231
9.2.10.2;2.10.2. Detection Strategies;232
9.2.10.3;2.10.3. Analysis of the Detection Accuracy;232
9.2.11;2.11. Type Checking;232
9.2.11.1;2.11.1. Definition;232
9.2.11.2;2.11.2. Detection Strategies;233
9.2.11.3;2.11.3. Analysis of the Detection Accuracy;233
9.3;3. A New Frontier of Anti-Patterns: Linguistic Anti-Patterns;233
9.3.1;3.1. Does More Than it Says;234
9.3.2;3.2. Says More Than it Does;235
9.3.3;3.3. Does the Opposite;236
9.3.4;3.4. Contains More Than it Says;236
9.3.5;3.5. Says More Than it Contains;237
9.3.6;3.6. Contains the Opposite;237
9.4;4. Key Ingredients for Building an Anti-Pattern Detection Tool;238
9.4.1;4.1. Identifying and Extracting the Characteristics of Anti-Patterns;238
9.4.1.1;4.1.1. Extraction of Structural Properties Using Code Analysis Techniques;238
9.4.1.1.1;4.1.1.1. Method Calls;239
9.4.1.1.2;4.1.1.2. Shared Instance Variables;239
9.4.1.1.3;4.1.1.3. Inheritance Relationships;240
9.4.1.2;4.1.2. Extraction of Lexical Properties Through Natural Language Processing;240
9.4.1.3;4.1.3. Extraction of the History Properties Through Mining of Software Repositories;241
9.4.1.3.1;4.1.3.1. Co-Changes at File Level;241
9.4.1.3.2;4.1.3.2. Co-Changes at Method Level;241
9.4.2;4.2. Defining the Detection Algorithm;242
9.4.3;4.3. Evaluating the Accuracy of a Detection Tool;243
9.4.3.1;4.3.1. Evaluation Based on an Automatic Oracle;244
9.4.3.2;4.3.2. Evaluation Based on Developer's Judgment;244
9.5;5. Conclusion and Open Issues;245
9.6;References;247
10;Chapter Five: Classifying Problems into Complexity Classes;252
10.1;1. Introduction;253
10.2;2. Time and Space Classes;254
10.3;3. Relations Between Classes;257
10.4;4. DSPACE(1) = Regular Languages;257
10.5;5. L = DSPACE(log n);262
10.6;6. NL = NSPACE(log n);262
10.7;7. P = DTIME(n^O(1));263
10.8;8. Randomized Polynomial Time: R;265
10.9;9. NP = NTIME(n^O(1));266
10.9.1;9.1 Reasons to Think P ≠ NP and Some Intelligent Objections;268
10.9.2;9.2 NP Intermediary Problems;272
10.9.3;9.3 Have We Made Any Progress on P Versus NP?;276
10.9.4;9.4 Seriously, Can You Give a More Enlightening Answer to Have We Made Any Progress on P Versus NP?;276
10.9.5;9.5 So You Think You’ve Settled P versus NP;277
10.9.6;9.6 Eight Signs a Claimed P ≠ NP Proof is Wrong;278
10.9.7;9.7 How to Deal with Proofs that P = NP;280
10.9.8;9.8 A Third Category;280
10.10;10. PH: The Polynomial Hierarchy;281
10.11;11. #P;283
10.12;12. PSPACE;284
10.13;13. EXPTIME;284
10.14;14. EXPSPACE = NEXPSPACE;285
10.15;15. DTIME(TOWi(n));286
10.16;16. DSPACE(TOWi(nO(1)));287
10.17;17. Elementary;288
10.18;18. Primitive Recursive;288
10.19;19. Ackermann’s Function;291
10.20;20. The Goodstein Function;292
10.21;21. Decidable, Undecidable and Beyond;293
10.22;22. Summary of Relations Between Classes;297
10.23;23. Other Complexity Measures;298
10.24;24. Summary;299
10.25;25. What is Natural?;300
10.26;Acknowledgement;301
10.27;References;301
11;Author Index;306
12;Subject Index;320
13;Contents of Volumes in This Series;330
Automated Extraction of GUI Models for Testing
Pekka Aho*,†; Teemu Kanstrén*,‡; Tomi Räty*,§; Juha Röning¶
* VTT Technical Research Centre of Finland, Oulu, Finland
† Department of Computer Science, University of Maryland, College Park, Maryland, USA
‡ Department of Computer Science, University of Toronto, Toronto, Canada
§ Department of Electrical Engineering and Computer Science, University of California, Berkeley, California, USA
¶ Department of Computer Science and Engineering, University of Oulu, Oulu, Finland
Abstract
A significant challenge in applying model-based testing to software systems is that manually designing the test models requires a considerable amount of effort and deep expertise in formal modeling. When an existing system is being modeled and tested, there are various techniques to automate the process of producing the models based on the implementation. Some approaches aim at fully automated creation of the models, while others automate only the first steps, producing an initial model to serve as a basis for the manual modeling process. Graphical user interface (GUI) applications in particular, including mobile and Web applications, have been a good domain for model extraction, reverse engineering, and specification mining approaches. In this chapter, we survey various automated modeling techniques, with a special focus on GUI models and their usefulness in analyzing and testing the modeled GUI applications.
Keywords
Graphical user interfaces
Model-based GUI testing
MBGT
Test automation
Model extraction
Reverse engineering
Specification mining
1 Introduction
The increasingly ubiquitous use of software systems makes our daily lives ever more dependent on software functioning without errors. In the worst case, a slight error in an airline coordination system could cause a fatal accident, and a minor leak in an Internet banking system could lead to loss of customers' money and major financial problems. As software systems are simplified models of their real-world counterparts, no one can claim that they are perfect and free of defects [1]. Errors or weaknesses in critical infrastructure, especially, are risks that have to be minimized, and that requires software testing. Software testing is a dynamic technique for increasing confidence in the correctness of the software, i.e., that the software behaves reliably as expected [2].
Graphical user interfaces (GUIs) constitute a large part of the software being developed today [3]. Most software today is interactive, and the code related to the user interface (UI) can account for more than half of all code [4]. The UI of a computer program is the part that handles the output to the display and the input from the person using the program [5]. A GUI is a graphical front-end to a software system that accepts user- and system-generated events as input and produces deterministic graphical output [6]. A GUI is intended to make software easier to use by providing the user with visual controls [7]. GUI software is often large and complex and requires developers to deal with elaborate graphics, multiple ways of giving commands, multiple asynchronous input devices, and rapid semantic feedback [5]. Complexity is further increased when the goal is to create simple but flexible GUIs, e.g., providing default settings for quick and simple usage while allowing more skillful users to modify the settings. The correctness of the GUI's behavior is essential to the correct execution of the overall software [7].
For the most part, the challenges in automating GUI testing have remained the same for a decade [8]. It is still challenging to define and implement coverage criteria that measure not only the code coverage of the test suite but also the coverage of all possible interactions between the GUI and the underlying application. As the number of possible interactions tends to be impractically large for nontrivial GUI applications, the challenge is to select a test suite that achieves good coverage with a smaller number of test cases. While automatically crafting the models for model-based GUI testing (MBGT) has solved some issues, it remains challenging to verify whether the GUI executes correctly as specified or expected. A GUI test case requires interleaving the oracle invocation with the test case execution, because an incorrect GUI state can lead to an unexpected screen, making further test case execution useless [8]. Also, pinpointing an error's actual cause may otherwise be difficult, especially when the final output is correct but the intermediate outputs are incorrect [8]. Automated creation of test oracles is easier for regression testing, because the behavior of an existing, presumably correct version of the GUI software can be used for extracting the expected behavior, which is then stored as oracle information [9].
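As a rough illustration of this interleaving (not taken from the chapter), the following Java sketch checks the GUI state against the oracle after every single event, so a wrong intermediate screen stops the test at the step where the error actually occurs. The GuiDriver and Oracle interfaces and the property-map representation of a GUI state are hypothetical placeholders for whatever a concrete tool provides.

import java.util.List;
import java.util.Map;

// Hypothetical driver: executes one user action and observes widget properties.
interface GuiDriver {
    void execute(String event);
    Map<String, String> captureState();
}

// Hypothetical oracle: expected properties of the state reached after step i.
interface Oracle {
    Map<String, String> expectedAfter(int step);
}

final class InterleavedGuiTest {
    static void run(GuiDriver driver, Oracle oracle, List<String> events) {
        for (int i = 0; i < events.size(); i++) {
            driver.execute(events.get(i));
            Map<String, String> actual = driver.captureState();
            Map<String, String> expected = oracle.expectedAfter(i);
            // Stop immediately: continuing from an unexpected screen would make
            // the remaining steps meaningless and hide the real cause.
            if (!actual.entrySet().containsAll(expected.entrySet())) {
                throw new AssertionError("Unexpected GUI state after event '"
                        + events.get(i) + "' (step " + i + "): expected "
                        + expected + ", got " + actual);
            }
        }
    }
}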
Most GUI applications are built using UI software tools [10] and an iterative process of redesigning the UI after user testing [5]. Most designers consider the behavior more difficult to prototype than the appearance, and communicating the behavior to the developers more difficult than communicating the appearance [11]. Due to these iterative "rapid prototyping" development processes, the requirements, design, and implementation of GUI software change often, making GUI testing a challenging field for test automation. The maintenance effort required to update the test suites after each change in the GUI decreases the benefits gained from test automation. Another challenge in GUI test automation is that automated testing through a GUI is more difficult than testing the application through its API (Application Programming Interface), because it requires additional programming effort to simulate user actions, to observe the produced outputs, and to check their correctness, even when using auxiliary libraries [12] such as Microsoft UI Automation [13] or Jemmy [14]. Furthermore, it is even more difficult to automate the evaluation of the user experience, as it also depends on the needs of the targeted users, and usually end users evaluate concepts and prototypes already during development [4].
Manual testing and the use of capture/replay (CR) tools have been the most popular GUI testing approaches in industry [15]. While CR tools are an easy and straightforward first step toward more effective use of the testing resources, a large number of test cases have to be manually updated after changes in the GUI [16]. The next step, also used by the latest CR tools, is to drive the GUI at a higher level of abstraction, describing test cases with action words or keywords and using auxiliary libraries to translate and execute the user actions and observe the produced outputs. Usually, this protects the test cases from breaking due to minor changes in the GUI, and sometimes the same test cases can be used for testing the software on different platforms and test environments simply by changing the auxiliary library.
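The following Java sketch shows, under assumed names, what such a keyword-level abstraction can look like: the test case is written against an abstract KeywordLibrary interface, and only the library implementation is platform specific. The interface, keywords, and widget names are illustrative and not taken from any particular tool.

// Auxiliary library: translates abstract keywords into concrete user actions
// on one platform and observes the produced outputs.
interface KeywordLibrary {
    void perform(String keyword, String... args);
    String read(String widgetName);
}

final class LoginKeywordTest {
    // The same abstract test can be reused on another platform or test
    // environment by passing a different KeywordLibrary implementation.
    static void run(KeywordLibrary gui) {
        gui.perform("type", "usernameField", "alice");
        gui.perform("type", "passwordField", "secret");
        gui.perform("click", "loginButton");
        if (!"Welcome, alice".equals(gui.read("statusLabel"))) {
            throw new AssertionError("Login flow did not reach the expected state");
        }
    }
}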
A number of research results have shown that model-based testing (MBT) reduces maintenance costs compared to CR tools [17]. Traditionally in MBT, an abstract test model is created manually based on the requirements of the system under test (SUT), and then an MBT tool is used to generate tests from the model and execute them to verify whether the SUT implements the requirements correctly [18]. A significant challenge in applying MBT to software systems is that manually designing the test models requires a considerable amount of effort and deep expertise in formal modeling, even more so during iterative development processes or when the documentation of the system is not accurate or up to date [12]. The modeling effort has been one of the most significant reasons why MBT has not been adopted by industry on a large scale [19]. MBT also requires consolidating the differences between the specification that is modeled and the implementation that is tested, and providing a mapping between the model and the implementation so that the derived test cases can be executed on the concrete system [12]. The manual effort in designing and maintaining the models negates a significant part of the increased efficiency gained from automated testing [16].
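As a minimal sketch of this traditional MBT flow (not of any specific MBT tool), the following Java example encodes a tiny abstract behavioral model as a state machine and generates an abstract test case as a random walk over it. In a real setting the generated event sequence would be executed against the SUT through a mapping layer that turns abstract events into concrete GUI actions; all state and event names here are illustrative.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

final class MbtSketch {
    // Abstract model: state -> (event -> next state)
    static final Map<String, Map<String, String>> MODEL = Map.of(
        "LoggedOut", Map.of("login", "LoggedIn"),
        "LoggedIn",  Map.of("openSettings", "Settings", "logout", "LoggedOut"),
        "Settings",  Map.of("close", "LoggedIn"));

    // Generate one abstract test case as a random walk over the model.
    static List<String> generate(String start, int length, Random rnd) {
        List<String> test = new ArrayList<>();
        String state = start;
        for (int i = 0; i < length; i++) {
            List<String> events = new ArrayList<>(MODEL.get(state).keySet());
            String event = events.get(rnd.nextInt(events.size()));
            test.add(event);
            state = MODEL.get(state).get(event);
        }
        return test;
    }

    public static void main(String[] args) {
        // An MBT tool would execute this sequence on the SUT; here we just print it.
        System.out.println(generate("LoggedOut", 6, new Random()));
    }
}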
When an existing system is being modeled, there are various model extraction techniques to automate the process of producing the models based on the implementation. The process of automatically creating models or other specifications from existing design and implementation artifacts is also referred to as model extraction, reverse engineering, specification mining, or specification inference. In this chapter, these terms are used interchangeably unless otherwise specified. GUI applications in particular, including mobile and Web applications, have been a good domain for reverse engineering and specification mining approaches. Usually, the goal of reverse engineering is to analyze the software and represent it at another level of abstraction, and to use the abstract form for software maintenance, migration, testing, reengineering, reuse, or documentation purposes, or simply to understand how the software works...
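The following Java sketch gives a simplified picture of dynamic GUI model extraction in the spirit of GUI ripping: the tool repeatedly observes the current GUI state, fires the available events, and records the observed transitions into a model that can later drive test generation. The RipperDriver interface and the crawling strategy are simplified assumptions, not the algorithm of any particular tool discussed in this chapter.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical driver over the running application.
interface RipperDriver {
    String currentState();            // abstraction of the currently visible GUI
    List<String> availableEvents();   // enabled widgets/events in this state
    void fire(String event);          // execute one event on the running GUI
    void reset();                     // restart the application
}

final class GuiRipper {
    // Crawl the running GUI and record transitions: state -> (event -> next state).
    static Map<String, Map<String, String>> rip(RipperDriver gui, int maxEvents) {
        Map<String, Map<String, String>> model = new HashMap<>();
        Deque<List<String>> paths = new ArrayDeque<>();
        paths.add(List.of());                       // start from the initial state
        int fired = 0;
        while (!paths.isEmpty() && fired < maxEvents) {
            List<String> path = paths.poll();
            gui.reset();
            for (String e : path) gui.fire(e);      // replay the path to this state
            String state = gui.currentState();
            for (String event : gui.availableEvents()) {
                gui.fire(event);
                fired++;
                String next = gui.currentState();
                model.computeIfAbsent(state, s -> new HashMap<>()).put(event, next);
                if (!model.containsKey(next)) {     // schedule unexplored states
                    List<String> longer = new ArrayList<>(path);
                    longer.add(event);
                    paths.add(longer);
                }
                gui.reset();                        // return to the source state
                for (String e : path) gui.fire(e);
            }
        }
        return model;
    }
}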




