E-book, English, Volume 16, 586 pages, Web PDF
Gelsema / Kanal: Pattern Recognition in Practice IV: Multiple Paradigms, Comparative Studies and Hybrid Systems
1st edition, 2014
ISBN: 978-1-4832-9784-2
Publisher: Elsevier Science & Techn.
Format: PDF
Copy protection: PDF watermark
Series: Machine Intelligence and Pattern Recognition
The era of detailed comparisons of the merits of pattern recognition and artificial intelligence techniques, and of the integration of such techniques into flexible and powerful systems, has begun. So confirm the editors of this fourth volume of Pattern Recognition in Practice in their preface to the book. The 42 quality papers are sourced from a broad range of international specialists who develop pattern recognition methodologies or apply pattern recognition techniques in their professional work. The publication is divided into six sections: Pattern Recognition, Signal and Image Processing, Probabilistic Reasoning, Neural Networks, Comparative Studies, and Hybrid Systems, giving prospective users a feeling for the applicability of the various methods in their particular field of specialization.
Authors/Editors
Further information & material
1;Front Cover;1
2;Pattern Recognition in Practice IV: Multiple Paradigms, Comparative Studies and Hybrid Systems;4
3;Copyright Page;5
4;Table of Contents;12
5;PREFACE;6
6;ACKNOWLEDGEMENTS;10
7;PART I: PATTERN RECOGNITION;18
7.1;Chapter 1. Patterns in the role of knowledge representation;20
7.1.1;1. INTRODUCTION;20
7.1.2;2. PATTERN REPRESENTATION;21
7.1.3;3. PATTERN RELATIONS;22
7.1.4;4. METRIC;23
7.1.5;5. HUMAN INTERACTION;24
7.1.6;6. PROJECTS;25
7.1.7;7. CONCLUSION;26
7.1.8;APPENDIX;26
7.1.9;REFERENCES;27
7.2;Chapter 2. Application of evidence theory to k-NN pattern classification;30
7.2.1;1. D-S THEORY;30
7.2.2;2. THE METHOD;32
7.2.3;3. SIMULATION RESULTS;36
7.2.4;4. CONCLUSION;41
7.2.5;REFERENCES;41
7.3;Chapter 3. Decision trees and domain knowledge in pattern recognition;42
7.3.1;1. INTRODUCTION;42
7.3.2;2. OVERVIEW OF DECISION TREE METHODOLOGIES;43
7.3.3;3. A FRAMEWORK FOR DECISION TREE CONSTRUCTION;44
7.3.4;4. EXPERIMENTAL EVALUATION;48
7.3.5;5. CONCLUSIONS;51
7.3.6;REFERENCES;51
7.4;Chapter 4. Object recognition using hidden Markov models;54
7.4.1;1. INTRODUCTION;54
7.4.2;2. STATISTICAL OBJECT RECOGNITION;55
7.4.3;3. HIDDEN MARKOV MODELS;55
7.4.4;4. OBJECT ORIENTED IMPLEMENTATION OF HMMS;57
7.4.5;5. AFFINE INVARIANT FEATURES;57
7.4.6;6. EXPERIMENTAL RESULTS;58
7.4.7;7. SUMMARY AND CONCLUSIONS;60
7.4.8;ACKNOWLEDGEMENT;60
7.4.9;REFERENCES;61
7.5;Chapter 5. Inference of syntax for point sets;62
7.5.1;1. INTRODUCTION;62
7.5.2;2. CHUNKING;65
7.5.3;3. LOW ORDER MOMENT DESCRIPTORS;67
7.5.4;4. INFERENCE;68
7.5.5;5. GENERALISATION;70
7.5.6;6. INVARIANCE;70
7.5.7;7. NOISE;71
7.5.8;8. BINDING AND OCCLUSION;72
7.5.9;9. NEURAL MODELS;72
7.5.10;10. SUMMARY AND CONCLUSIONS;75
7.5.11;REFERENCES;75
7.6;Chapter 6. Recognising cubes in images;76
7.6.1;1. INTRODUCTION;76
7.6.2;2. FINDING LINE SEGMENTS;78
7.6.3;3. FINDING SQUARES;80
7.6.4;4. FINDING CUBES;81
7.6.5;5. NOISE;82
7.6.6;6. INVARIANCE AND MANIFOLDS;83
7.6.7;7. OCCLUSION;87
7.6.8;8. CONCLUSION;90
7.6.9;REFERENCES;90
7.7;Chapter 7. Syntactic pattern classification of moving objects in a domestic environment;92
7.7.1;1. INTRODUCTION;92
7.7.2;2. METHOD;93
7.7.3;3. RESULTS;100
7.7.4;4. ROBUSTNESS;103
7.7.5;5. CONCLUSION AND FURTHER WORK;106
7.7.6;REFERENCES;106
7.8;Chapter 8. Initializing the EM algorithm for use in Gaussian mixture modelling;108
7.8.1;1. INTRODUCTION;108
7.8.2;2. THE EM ALGORITHM;110
7.8.3;3. CONVERGENCE AND INITIAL CONDITIONS;111
7.8.4;4. CLUSTERING TECHNIQUES;112
7.8.5;5. THE DOG RABBIT STRATEGY;114
7.8.6;6. RESULTS;118
7.8.7;7. CONCLUSION AND SUMMARY;120
7.8.8;REFERENCES;122
7.9;Chapter 9. Predicting REM in sleep EEG using a structural approach;124
7.9.1;1. INTRODUCTION;124
7.9.2;2. MODELING FRAMEWORK;127
7.9.3;3. METHODOLOGY;128
7.9.4;4. EXAMPLE;131
7.9.5;5. CONCLUSIONS;133
7.9.6;REFERENCES;133
7.10;Discussions Part I: Paper Vamos;136
8;PART II: SIGNAL AND IMAGE PROCESSING;144
8.1;Chapter 10. On the problem of restoring original structure of signals (images) corrupted by noise;146
8.1.1;1. INTRODUCTION;146
8.1.2;2. PIECE-WISE-LINEAR REGRESSION;147
8.1.3;3. MODEL SELECTION PROBLEM;149
8.1.4;4. APPLICATION TO IMAGE ANALYSIS;152
8.1.5;5. ROBUST REGRESSION AND HOUGH TRANSFORM;152
8.1.6;6. SUMMARY;156
8.1.7;REFERENCES;156
8.2;Chapter 11. Reflectance ratios: An extension of Land's retinex theory;158
8.2.1;1. INTRODUCTION;158
8.2.2;2. REFLECTANCE RATIOS;160
8.2.3;3. RECOGNITION USING REFLECTANCE RATIOS;162
8.2.4;4. EXPERIMENTS;165
8.2.5;5. DISCUSSION;167
8.2.6;REFERENCES;169
8.3;Chapter 12. A segmentation algorithm based on AI techniques;170
8.3.1;1. INTRODUCTION;170
8.3.2;2. THE PROPOSED TECHNIQUE;171
8.3.3;3. EXPERIMENTAL RESULTS;178
8.3.4;4. CONCLUDING REMARKS;181
8.3.5;REFERENCES;181
8.4;Chapter 13. Graph matching by discrete relaxation;182
8.4.1;1. INTRODUCTION;182
8.4.2;2. RELATIONAL GRAPHS;184
8.4.3;3. MATCHING PROBABILITIES;185
8.4.4;4. EXPERIMENTS;188
8.4.5;5. CONCLUSIONS;191
8.4.6;REFERENCES;192
8.5;Chapter 14. Inexact matching using neural networks;194
8.5.1;1. INTRODUCTION;194
8.5.2;2. INEXACT MATCHING USING HOPFIELD NETWORKS;196
8.5.3;3. EXPERIMENTATIONS;197
8.5.4;4. DISCUSSION AND CONCLUSIONS;199
8.5.5;REFERENCES;201
8.6;Chapter 15. Matching of Curvilinear Structures: Application to the Identification of Cortical Sulci on 3D magnetic resonance brain images;202
8.6.1;1. INTRODUCTION;202
8.6.2;2. INITIAL DATA AND PREPROCESSING;203
8.6.3;3. METHODS;204
8.6.4;4. PROPOSED SYSTEM;205
8.6.5;5. EXPERIMENTS AND RESULTS;209
8.6.6;6. CONCLUSION;210
8.6.7;REFERENCES;211
8.7;Chapter 16. Knowledge Based Image Analysis of Agricultural Fields in Remotely Sensed Images;214
8.7.1;1. INTRODUCTION;214
8.7.2;2. PROBLEM SOLVING;215
8.7.3;3. MODEL INVERSION, HYPOTHESIS AND PARAMETER ESTIMATION;219
8.7.4;4. RADIOMETRIC MODELLING AND STATE DEFINITION;221
8.7.5;5. PARAMETER ESTIMATION;222
8.7.6;6. THE EXPERIMENT;224
8.7.7;7. CONCLUSIONS;226
8.7.8;8. REFERENCES;228
8.8;Chapter 17. A texture classification experiment for SAR radar images;230
8.8.1;1. INTRODUCTION;230
8.8.2;2. SPECKLE NOISE ELIMINATION;231
8.8.3;3. TEXTURE FEATURES;234
8.8.4;4. IMAGE CLASSIFICATION AND SEGMENTATION;236
8.8.5;5. RESULT AND DISCUSSION;237
8.8.6;6. CONCLUSIONS AND FUTURE WORK;240
8.8.7;ACKNOWLEDGEMENT;241
8.8.8;REFERENCES;241
8.9;Discussions Part II: Paper Brailovsky and Kempner;242
9;PART III: PROBABILISTIC REASONING;248
9.1;Chapter 18. Spatio/temporal causal models;250
9.1.1;1. INTRODUCTION;250
9.1.2;2. FORMAL MODEL;250
9.1.3;3. TRACTABILITY;252
9.1.4;4. FUTURE WORK;256
9.1.5;REFERENCES;256
9.2;Chapter 19. Potentials of Bayesian decision networks for planning under uncertainty;258
9.2.1;1. INTRODUCTION;258
9.2.2;2. PRELIMINARIES;258
9.2.3;3. RELATED APPROACHES;259
9.2.4;4. EVALUATING DECISION NETWORKS;260
9.2.5;5. GENERATED STRATEGIES AND DECISION TREES;263
9.2.6;6. DECISION NETWORK POTENTIALS;268
9.2.7;7. CONCLUSION;269
9.2.8;REFERENCES;270
9.3;Chapter 20. Qualitative recognition using Bayesian reasoning;272
9.3.1;1. INTRODUCTION;272
9.3.2;2. GEON BASED RECOGNITION;273
9.3.3;3. BAYESIAN NETWORKS;274
9.3.4;4. BAYESIAN NETWORK FOR RECOGNITION;276
9.3.5;5. A CONTROL STRUCTURE FOR RECOGNITION;278
9.3.6;6. EXPERIMENTS;279
9.3.7;7. CONCLUSION;282
9.3.8;8. ACKNOWLEDGEMENT;283
9.3.9;REFERENCES;283
9.4;Chapter 21. Learning characteristic rules in a target language;284
9.4.1;1. INTRODUCTION;284
9.4.2;2. AN EXAMPLE;285
9.4.3;3. TARGET LANGUAGE;285
9.4.4;4. DISCRIMINATION RULES IN TARGET LANGUAGE;287
9.4.5;5. CHARACTERISTIC RULES IN TARGET LANGUAGE;290
9.4.6;6. A GLOBAL PERSPECTIVE;293
9.4.7;7. SOME EXTENSIONS;294
9.4.8;8. CONCLUSION;294
9.4.9;ACKNOWLEDGMENT;295
9.4.10;References;295
9.5;Discussions Part III;296
10;PART IV: NEURAL NETWORKS;302
10.1;Chapter 22. Why do multilayer perceptrons have favorable small sample properties?;304
10.1.1;1. INTRODUCTION;304
10.1.2;2. PARAMETERS (WEIGHTS) COMMON FOR ALL CLASSES;305
10.1.3;3. INFLUENCE OF A LOSS FUNCTION IN ANN TRAINING;308
10.1.4;4. INTRINSIC DIMENSIONALITY AND A GOOD SEPARABILITY OF THE CLASSES;311
10.1.5;5. DISCUSSION;313
10.1.6;ACKNOWLEDGMENT;314
10.1.7;REFERENCES;314
10.2;Chapter 23. Using Boltzmann Machines for probability estimation: A general framework for neural network learning;316
10.2.1;1. INTRODUCTION;316
10.2.2;2. BOLTZMANN MACHINES WITH RESTRICTED STATE SPACE;318
10.2.3;3. THE BOLTZMANN PERCEPTRON;320
10.2.4;4. JOINT PROBABILITY ESTIMATION;322
10.2.5;5. DISCUSSION;328
10.2.6;ACKNOWLEDGEMENTS;329
10.2.7;REFERENCES;329
10.3;Chapter 24. Symbolic approximation of feedforward neural networks;330
10.3.1;1. INTRODUCTION;330
10.3.2;2. SYMBOLIC MAPPING OF A PERCEPTRON;331
10.3.3;3. SYMBOLIC ANALYSIS OF FEEDFORWARD NETWORKS;336
10.3.4;4. SUMMARY AND CONCLUSIONS;339
10.3.5;REFERENCES;340
10.4;Chapter 25. Analytical approaches to the neural net architecture design;342
10.4.1;1. INTRODUCTION;342
10.4.2;2. PROBLEM FORMULATION;344
10.4.3;3. PROBABILISTIC RELAXATION;347
10.4.4;4. NEURAL NET IMPLEMENTATION;349
10.4.5;5. CONCLUSIONS;351
10.4.6;REFERENCES;351
10.5;Chapter 26. An Alternative Feedforward Approach to Neural Classification Problems;354
10.5.1;1. INTRODUCTION;354
10.5.2;2. THE SENSITISED PATH TRAINING SCHEME;355
10.5.3;3. RESULTS;358
10.5.4;4. DISCUSSION;361
10.5.5;REFERENCES;362
10.6;Chapter 27. Contribution analysis of multi-layer perceptrons: Estimation of the input sources' importance for the classification;364
10.6.1;1. INTRODUCTION;364
10.6.2;2. ATTRIBUTE SELECTION;366
10.6.3;3. CONTRIBUTION ANALYSIS;366
10.6.4;4. APPLICATIONS;370
10.6.5;5. DISCUSSION;373
10.6.6;6. CONCLUSION;373
10.6.7;REFERENCES;374
10.7;Chapter 28. Neural networks - advantages and applications;376
10.7.1;1. INTRODUCTION;376
10.7.2;2. SOM IN COMPUTER VISION, CLUSTERING, AND VISUALIZATION;378
10.7.3;3. MLP IN COMPLEX SYSTEM MODELLING;380
10.7.4;4. SUMMARY;381
10.7.5;REFERENCES;381
10.8;Chapter 29. Relative effectiveness of neural networks for image noise suppression;384
10.8.1;1. INTRODUCTION;384
10.8.2;2. THE BASIC APPROACH;385
10.8.3;3. THE MAIN SET OF EXPERIMENTS;386
10.8.4;4. EXPERIMENTS ON EDGE-SHIFTING;392
10.8.5;5. CONCLUDING REMARKS;394
10.8.6;REFERENCES;395
10.9;Discussions Part IV;396
11;PART V: COMPARATIVE STUDIES;406
11.1;Chapter 30. An experimental comparison of neural classifiers with 'traditional' classifiers;408
11.1.1;1. INTRODUCTION;408
11.1.2;2. THE NETTALK EXPERIMENTS;408
11.1.3;3. A COMPARISON OF THREE TRADITIONAL DATA SETS;414
11.1.4;4. A COMPARISON OF THREE OTHER DATA SETS;416
11.1.5;5. CONCLUSIONS AND DISCUSSION;418
11.1.6;REFERENCES;419
11.2;Chapter 31. Comparative study of techniques for large-scale feature selection;420
11.2.1;1. INTRODUCTION;420
11.2.2;2. FEATURE SUBSET SEARCH ALGORITHMS;422
11.2.3;3. IMPLEMENTATION DETAILS;424
11.2.4;4. EXPERIMENTS;425
11.2.5;5. DISCUSSION;428
11.2.6;6. CONCLUDING REMARKS;429
11.2.7;REFERENCES;430
11.3;Chapter 32. Neural nets and classification trees: A comparison in the domain of ECG analysis;432
11.3.1;1. INTRODUCTION;432
11.3.2;2. BACKGROUND ON ALGORITHMS;433
11.3.3;3. EXPERIMENTAL SET-UP;434
11.3.4;4. RESULTS;434
11.3.5;5. DISCUSSION AND CONCLUSION;439
11.3.6;REFERENCES;440
11.4;Chapter 33. An empirical study of the performance of heuristic methods for clustering;442
11.4.1;1. INTRODUCTION;442
11.4.2;2. HEURISTIC METHODS FOR CLUSTERING;443
11.4.3;3. DESIGN CHOICES AND PARAMETER SELECTION;445
11.4.4;4. EXPERIMENTAL SETUP;446
11.4.5;5. RESULTS AND DISCUSSION;447
11.4.6;6. CONCLUSIONS;452
11.4.7;References;453
11.5;Chapter 34. A comparative study of different classifiers for handprinted character recognition;454
11.5.1;1. INTRODUCTION;454
11.5.2;2. CHARACTER DATABASE AND FEATURES;455
11.5.3;3. CLASSIFIERS;458
11.5.4;4. EXPERIMENTAL RESULTS;462
11.5.5;5. CONCLUSIONS;464
11.5.6;REFERENCES;464
11.6;Chapter 35. A Comparison of the Randomised Hough Transform and a Genetic Algorithm for Ellipse Extraction;466
11.6.1;1. INTRODUCTION;466
11.6.2;2. METHOD;467
11.6.3;3. EXPERIMENTAL RESULTS;471
11.6.4;4. CONCLUSIONS;476
11.6.5;REFERENCES;477
11.7;Discussions Part V;478
12;PART VI: HYBRID SYSTEMS;488
12.1;Chapter 36. Relative feature importance: A classifier-independent approach to feature selection;490
12.1.1;1. INTRODUCTION;490
12.1.2;2. TOWARDS CLASSIFIER-INDEPENDENT FEATURE SELECTION;491
12.1.3;3. RELATIVE FEATURE IMPORTANCE;492
12.1.4;4. GENETIC NEURAL FEATURE IMPORTANCE ESTIMATOR (GENFIE);500
12.1.5;5. CONCLUSIONS;503
12.1.6;REFERENCES;503
12.2;Chapter 37. An intelligent planner for multisensory robot vision;506
12.2.1;1. INTRODUCTION;506
12.2.2;2. MULTISENSORY VISION SYSTEM;507
12.2.3;3. VISION PLANNER;510
12.2.4;4. IMPLEMENTATION AND EXPERIMENTAL RESULTS;514
12.2.5;5. DISCUSSION AND CONCLUSIONS;516
12.2.6;REFERENCES;517
12.3;Chapter 38. Hybrid knowledge bases for real-time robotic reasoning;518
12.3.1;1. INTRODUCTION;518
12.3.2;2. MOTIVATING EXAMPLE;519
12.3.3;3. THE HYBRID KNOWLEDGE BASE FRAMEWORK;520
12.3.4;4. MULTILEVEL INTEGRATION ARCHITECTURE;525
12.3.5;REFERENCES;529
12.4;Chapter 39. Hybrid systems for constraint-based spatial reasoning;530
12.4.1;1. INTRODUCTION;530
12.4.2;2. TRANSFORMATION PROCEDURE;531
12.4.3;3. HYBRID GENETIC ALGORITHM/NEURAL NETWORK PROCEDURE;536
12.4.4;4. EXPERIMENTAL RESULTS;538
12.4.5;5. CONCLUSIONS;540
12.4.6;REFERENCES;540
12.5;Chapter 40. Detecting novel fault conditions with hidden Markov models and neural networks;542
12.5.1;1. INTRODUCTION;542
12.5.2;2. DISCRIMINATION AND DETECTION OF FAULT CONDITIONS;542
12.5.3;3. HIDDEN MARKOV MODELS FOR ONLINE MONITORING;544
12.5.4;4. APPLICATION TO ANTENNA POINTING SYSTEM MONITORING;547
12.5.5;5. EXPERIMENTAL MODELS AND RESULTS;548
12.5.6;6. DISCUSSION OF RELATED WORK;551
12.5.7;7. CONCLUSION;552
12.5.8;REFERENCES;552
12.6;Chapter 41. A handwriting recognition system based on multiple AI techniques;554
12.6.1;1. INTRODUCTION;554
12.6.2;2. THE MODEL;555
12.6.3;3. THE FRAMEWORK;556
12.6.4;4. NUSCRIPT;556
12.6.5;5. AI AND PATTERN RECOGNITION TECHNIQUES USED;558
12.6.6;6. THE KNOWLEDGE SOURCES;564
12.6.7;7. RESULTS;564
12.6.8;8. CONCLUSIONS AND FURTHER WORK;566
12.6.9;REFERENCES;566
12.7;Chapter 42. A hybrid system to detect hand orientation in stereo images;568
12.7.1;1. INTRODUCTION;568
12.7.2;2. OUTLINE OF THE APPROACH;569
12.7.3;3. FORMALISMS FOR NEURAL AND SEMANTIC NETWORKS;570
12.7.4;4. APPLICATION;572
12.7.5;5. RESULTS;575
12.7.6;6. CONCLUSION;578
12.7.7;REFERENCES;579
12.8;Discussions Part VI;580
13;LIST OF AUTHORS;590
14;LIST OF KEYWORDS;592