E-book, English, 640 pages
Series: Emerging Trends in Computer Science and Applied Computing
Deligiannidis / Arabnia, Emerging Trends in Image Processing, Computer Vision, and Pattern Recognition
1st edition, 2014
ISBN: 978-0-12-802092-0
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark
Emerging Trends in Image Processing, Computer Vision, and Pattern Recognition discusses the latest trends in imaging science, which at its core consists of three intertwined computer science fields: image processing, computer vision, and pattern recognition. There is significant renewed interest in each of these three fields, fueled by Big Data and data analytics initiatives and by applications as diverse as computational biology, biometrics, biomedical imaging, robotics, security, and knowledge engineering. The three core topics discussed here provide a solid introduction to image processing and low-level processing techniques, computer vision fundamentals with examples of practical applications, and pattern recognition algorithms and methodologies of value to the image processing and computer vision research communities. Drawing on the knowledge of recognized experts with years of practical experience, and discussing new and novel applications, editors Leonidas Deligiannidis and Hamid Arabnia cover:
- Many perspectives on image processing, spanning fundamental mathematical theory and sampling, image representation and reconstruction, filtering in the spatial and frequency domains, geometric transformations, and image restoration and segmentation
- Key application techniques in computer vision, from fundamentals to mid- and high-level processing, including camera networks and vision, image feature extraction, face and gesture recognition, and biometric authentication
- Pattern recognition algorithms and methodologies, including supervised and unsupervised classification algorithms, ensemble learning algorithms, and parsing algorithms
- How to use image processing and visualization to analyze big data
- Novel applications that can benefit from image processing, computer vision, and pattern recognition, such as computational biology, biometrics, biomedical imaging, robotics, security, and knowledge engineering
Authors/Editors
Further information & material
1;Front Cover;1
2;Emerging Trends in Image Processing, Computer Vision, and Pattern Recognition;4
3;Copyright;5
4;Contents;6
5;Contributors;22
6;Acknowledgments;30
7;Preface;32
8;Introduction;36
9;Part 1: Image and signal processing;38
9.1;Chapter 1: Denoising camera data: Shape-adaptive noise reduction for color filter array image data;40
9.1.1;1. Introduction;40
9.1.2;2. Camera noise;41
9.1.3;3. Adaptive raw data denoising;43
9.1.3.1;3.1. Luminance Transformation of Bayer Data;43
9.1.3.2;3.2. LPA-ICI for Neighborhood Estimation;44
9.1.3.3;3.3. Shape-adaptive DCT and Denoising via Hard Thresholding;44
9.1.4;4. Experiments: Image quality vs system performance;45
9.1.4.1;4.1. Visual Quality of Denoising Results;46
9.1.4.2;4.2. Processing Real Camera Data;47
9.1.5;5. Video Sequences;51
9.1.5.1;5.1. Implementation Aspects;52
9.1.6;6. Conclusion;52
9.1.7;References;53
9.2;Chapter 2: An approach to classifying four-part music in multidimensional space;56
9.2.1;1. Introduction;56
9.2.1.1;1.1. Related Work;56
9.2.1.2;1.2. Explanation of Musical Terms;56
9.2.2;2. Collecting the pieces-training and test pieces;57
9.2.2.1;2.1. Downloading and Converting Files;58
9.2.2.2;2.2. Formatting the MusicXML;58
9.2.3;3. Parsing musicXML-training and test pieces;60
9.2.3.1;3.1. Reading in Key and Divisions;61
9.2.3.2;3.2. Reading in Notes;61
9.2.3.3;3.3. Handling Note Values;62
9.2.3.4;3.4. Results;63
9.2.4;4. Collecting Piece Statistics;63
9.2.4.1;4.1. Metrics;63
9.2.5;5. Collecting Classifier Statistics-Training Pieces Only;65
9.2.5.1;5.1. Approach;66
9.2.6;6. Classifying Test Pieces;66
9.2.6.1;6.1. Classification Techniques;67
9.2.6.2;6.2. User Interface;68
9.2.6.3;6.3. Classification Steps;68
9.2.6.4;6.4. Testing the Classification Techniques;69
9.2.6.5;6.5. Classifying from Among Two Composers;69
9.2.6.6;6.6. Classifying from Among Three Composers;70
9.2.6.7;6.7. Selecting the Best Metrics;70
9.2.7;7. Additional Composer and Metrics;71
9.2.7.1;7.1. Lowell Mason;71
9.2.7.2;7.2. Additional Metrics;73
9.2.8;8. Conclusions;74
9.2.9;References;74
9.2.10;Further reading;74
9.3;Chapter 3: Measuring rainbow trout by using simple statistics;76
9.3.1;1. Introduction;76
9.3.2;2. Experimental prototype;77
9.3.2.1;2.1. Canalization System;78
9.3.2.2;2.2. Illumination System;78
9.3.2.3;2.3. Vision System;79
9.3.3;3. Statistical Measuring Approach;79
9.3.4;4. Experimental framework;80
9.3.4.1;4.1. Testing Procedure;81
9.3.5;5. Performance evaluation;85
9.3.6;6. Conclusions;89
9.3.7;Acknowledgments;89
9.3.7;References;90
9.4;Chapter 4: Fringe noise removal of retinal fundus images using trimming regions;92
9.4.1;1. Introduction;92
9.4.1.1;1.1. Image Processing;93
9.4.1.2;1.2. Retinal Image Processing;94
9.4.1.2.1;1.2.1. Ophthalmological Data;94
9.4.2;2. Methodology;95
9.4.2.1;2.1. Implementation;97
9.4.3;3. Results and Discussion;99
9.4.4;4. Conclusion;99
9.4.5;References;100
9.5;Chapter 5: pSQ: Image quantizer based on contrast band-pass filtering;104
9.5.1;1. Introduction;104
9.5.2;2. Related Work: JPEG 2000 Global Visual Frequency Weighting;105
9.5.3;3. Perceptual quantization;105
9.5.3.1;3.1. Contrast Band-Pass Filtering;105
9.5.3.2;3.2. Forward Inverse Quantization;106
9.5.3.3;3.3. Perceptual Inverse Quantization;110
9.5.4;4. Experimental results;111
9.5.4.1;4.1. Based on Histogram;111
9.5.4.2;4.2. Correlation Analysis;111
9.5.5;5. Conclusions;115
9.5.6;Acknowledgments;122
9.5.7;References;122
9.6;Chapter 6: Rebuilding IVUS images from raw data of the RF signal exported by IVUS equipment;124
9.6.1;1. Introduction;124
9.6.2;2. Method for IVUS image reconstruction;125
9.6.2.1;2.1. RF Dataset;126
9.6.2.2;2.2. Band-Pass Filter;127
9.6.2.3;2.3. Time Gain Compensation;127
9.6.2.4;2.4. Signal Envelope;129
9.6.2.5;2.5. Log-Compression;130
9.6.2.6;2.6. Digital Development Process;130
9.6.2.7;2.7. Postprocessing;130
9.6.3;3. Experimental results;131
9.6.4;4. Discussion, conclusion, and future work;132
9.6.5;Acknowledgments;133
9.6.6;References;133
9.7;Chapter 7: XSET: Image coder based on contrast band-pass filtering;136
9.7.1;1. Introduction;136
9.7.2;2. Related Work: JPEG2000 Global Visual Frequency Weighting;137
9.7.3;3. Image entropy encoding: XSET algorithm;138
9.7.3.1;3.1. Perceptual Quantization;138
9.7.3.2;3.2. Startup Considerations;139
9.7.3.3;3.3. Coding Algorithm;142
9.7.4;4. Experiments and results;144
9.7.5;5. Conclusions;150
9.7.6;Acknowledgment;151
9.7.7;References;151
9.8;Chapter 8: Security surveillance applications utilizing parallel video-processing techniques in the spatial domain;154
9.8.1;1. Introduction;154
9.8.2;2. Graphical Processing Unit and Compute Unified Device Architecture;154
9.8.3;3. Parallel algorithms for image processing;156
9.8.4;4. Applications for surveillance using parallel video processing;158
9.8.4.1;4.1. Motion Detector;159
9.8.4.2;4.2. Over a Line Motion Detector;160
9.8.4.3;4.3. Line Crossing Detector;161
9.8.4.4;4.4. Area Motion Detector;164
9.8.4.5;4.5. Fire Detection;164
9.8.5;5. Conclusion;165
9.8.6;Acknowledgments;165
9.8.7;References;165
9.9;Chapter 9: Highlight image filter significantly improves optical character recognition on text images;168
9.9.1;1. Introduction;168
9.9.1.1;1.1. Properties of Highlight Image Filter;169
9.9.2;2. Description of smart contrast image filter;169
9.9.2.1;2.1. Contrast Image Filter;170
9.9.2.1.1;2.1.1. Description of the optimized implementation of contrast image filter using "color matrix" technique;171
9.9.2.2;2.2. New Image Filter: Smart Contrast;172
9.9.2.3;2.3. Visual Result of Applying Smart Contrast on Images;173
9.9.3;3. Description of highlight image filter;174
9.9.3.1;3.1. Description of the Image Filters Visual Effects That Are Included in Highlights Visual Effect;174
9.9.3.2;3.2. New Image Filter: Highlight;175
9.9.3.3;3.3. Visual Results of Applying Highlight Filter on Images;179
9.9.3.4;3.4. Highlight Image Filter Program Code and Visual Representation;180
9.9.4;4. Description of the optimized implementation of smart contrast and highlight using "byte buffer" techniques;180
9.9.5;5. Conclusions;183
9.9.6;Nomenclature;184
9.9.7;References;184
9.10;Chapter 10: A study on the relationship between depth map quality and stereoscopic image quality using upsampled depth maps;186
9.10.1;1. Introduction;186
9.10.2;2. Objective quality assessment tools;188
9.10.2.1;2.1. FR IQA Tools;189
9.10.2.1.1;2.1.1. Peak signal-to-noise ratio;189
9.10.2.1.2;2.1.2. Structural similarity index measure;189
9.10.2.1.3;2.1.3. Visual information fidelity;189
9.10.2.2;2.2. NR IQA Tools;190
9.10.2.2.1;2.2.1. Sharpness degree;190
9.10.2.2.2;2.2.2. Blur metric;190
9.10.2.2.3;2.2.3. Blind image quality index;190
9.10.2.2.4;2.2.4. Natural image quality evaluator;191
9.10.3;3. 3D Subjective Quality Assessment;191
9.10.4;4. Experimental results;191
9.10.5;5. Conclusion;196
9.10.6;References;196
9.11;Chapter 11: ρGBbBShift: Method for introducing perceptual criteria to region of interest coding;198
9.11.1;1. Introduction;198
9.11.2;2. Related work;200
9.11.2.1;2.1. BbB Shift;200
9.11.2.2;2.2. GBbBShift;202
9.11.3;3. Perceptual GBbBShift;203
9.11.3.1;3.1. Quantization;203
9.11.3.2;3.2. ρGBbBShift Algorithm;204
9.11.4;4. Experimental results;205
9.11.4.1;4.1. Application in Well-Known Test Images;205
9.11.4.2;4.2. Application in Other Image Compression Fields;216
9.11.5;5. Conclusions;217
9.11.6;Acknowledgments;218
9.11.7;References;218
9.12;Chapter 12: DT-Binarize: A decision tree based binarization for protein crystal images;220
9.12.1;1. Introduction;220
9.12.2;2. Background;222
9.12.2.1;2.1. Image Binarization Methods;222
9.12.2.1.1;2.1.1. Otsu threshold;222
9.12.2.1.2;2.1.2. 90th Percentile green intensity threshold (g90);222
9.12.2.1.3;2.1.3. Maximum green intensity threshold (g100);224
9.12.3;3. DT-Binarize: Selection of best binarization method using decision tree;225
9.12.3.1;3.1. Overview;225
9.12.3.2;3.2. Stages of the Algorithm;225
9.12.3.2.1;3.2.1. Median filter;225
9.12.3.2.2;3.2.2. Contrast stretching;226
9.12.3.2.3;3.2.3. Decision tree;226
9.12.3.3;3.3. Application of Dt-Binarize on Protein Crystal Images;227
9.12.4;4. Experiments and results;228
9.12.4.1;4.1. Dataset;228
9.12.4.1.1;4.1.1. 2D Plates;228
9.12.4.1.2;4.1.2. Small 3D crystals;228
9.12.4.1.3;4.1.3. Large 3D crystals;230
9.12.4.2;4.2. Correctness Measurement;230
9.12.4.3;4.3. Results;232
9.12.5;5. Conclusion;235
9.12.6;Acknowledgment;236
9.12.7;References;236
9.13;Chapter 13: Automatic mass segmentation method in mammograms based on improved VFC snake model;238
9.13.1;1. Introduction;238
9.13.2;2. Methodology;239
9.13.2.1;2.1. Mammogram Database;239
9.13.2.2;2.2. Mammogram Preprocessing;240
9.13.2.2.1;2.2.1. Label removal;240
9.13.2.2.2;2.2.2. Image enhancement;240
9.13.2.2.3;2.2.3. Morphological filter;241
9.13.2.3;2.3. ROI Extraction and Location;241
9.13.2.3.1;2.3.1. Edge extraction;241
9.13.2.3.2;2.3.2. Hough transform detection;241
9.13.2.3.3;2.3.3. Mass location parameter;244
9.13.2.4;2.4. Mass Segmentation;244
9.13.2.4.1;2.4.1. Typical VFC Snake model;244
9.13.2.4.2;2.4.2. Improved VFC Snake model;245
9.13.3;3. Experiment results and discussion;246
9.13.3.1;3.1. Experiments Results;249
9.13.3.2;3.2. Algorithm Performance Analysis;249
9.13.3.2.1;3.2.1. Detection rate;249
9.13.3.2.2;3.2.2. Segmentation accuracy;250
9.13.3.2.3;3.2.3. Segmentation similarity;251
9.13.4;4. Conclusions;252
9.13.5;Acknowledgments;253
9.13.6;References;253
9.14;Chapter 14: Correction of intensity nonuniformity in breast MR images;256
9.14.1;1. Introduction;256
9.14.2;2. Preprocessing steps;257
9.14.2.1;2.1. Noise Reduction;257
9.14.2.2;2.2. Bias Field Reduction;258
9.14.2.3;2.3. Locally Normalization Step;259
9.14.2.4;2.4. Hybrid Method for Bias Field Correction;259
9.14.2.4.1;2.4.1. Bias field model;260
9.14.2.4.2;2.4.2. Correction step;260
9.14.2.4.3;2.4.3. Field estimation;261
9.14.3;3. Experimental Results;263
9.14.4;4. Conclusion;264
9.14.5;Acknowledgments;265
9.14.6;References;265
9.15;Chapter 15: Traffic control by digital imaging cameras;268
9.15.1;1. Introduction;268
9.15.2;2. Paper Overview;269
9.15.3;3. Implementation;269
9.15.4;4. Traffic detectors;270
9.15.4.1;4.1. Induction Loops;270
9.15.4.2;4.2. Microwave Radar;271
9.15.4.3;4.3. Infrared Sensors;271
9.15.4.4;4.4. Video Detection;272
9.15.5;5. Image processing;273
9.15.5.1;5.1. Basic Types of Images;274
9.15.5.1.1;5.1.1. Binary image;274
9.15.5.1.2;5.1.2. Grayscale image;274
9.15.5.1.3;5.1.3. True color or RGB image;274
9.15.5.1.4;5.1.4. Indexed images;275
9.15.6;6. Project design;275
9.15.6.1;6.1. Red-Light Violation;277
9.15.6.2;6.2. Speed Violation;278
9.15.6.3;6.3. Plate Numbers Recognition;279
9.15.7;7. Performance analysis;280
9.15.7.1;7.1. Speed Violation;280
9.15.7.2;7.2. Red Violation;281
9.15.7.3;7.3. Plate Position Determination;281
9.15.8;8. General Conclusion;283
9.15.8.1;8.1. Problems;283
9.15.8.2;8.2. Future Work;283
9.15.9;References;283
9.16;Chapter 16: Night color image enhancement via statistical law and retinex;286
9.16.1;1. Introduction;286
9.16.2;2. Overview of Retinex Theory;287
9.16.2.1;2.1. The Basic Idea of Retinex Theory;287
9.16.2.2;2.2. The "halo effect";287
9.16.3;3. Analyzing the transformation law and enhancing the nighttime image;287
9.16.4;4. Comparison and results;291
9.16.5;5. Application;296
9.16.6;6. The conclusion;296
9.16.7;References;297
10;Part 2: Computer vision and recognition systems;300
10.1;Chapter 17: Trajectory evaluation and behavioral scoring using JAABA in a noisy system;302
10.1.1;1. Introduction;302
10.1.2;2. Methods;303
10.1.2.1;2.1. ML in JAABA and Trajectory Scoring;305
10.1.3;3. Results;306
10.1.4;4. Discussion;310
10.1.5;Acknowledgments;312
10.1.6;References;312
10.2;Chapter 18: An algorithm for mobile vision-based localization of skewed nutrition labels that maximizes specificity;314
10.2.1;1. Introduction;314
10.2.2;2. Previous work;315
10.2.3;3. Skewed NL localization;316
10.2.3.1;3.1. Detection of Edges, Lines, and Corners;316
10.2.3.2;3.2. Corner Detection and Analysis;319
10.2.3.3;3.3. Selection of Boundary Lines;320
10.2.3.4;3.4. Finding Intersections in Cartesian Space;321
10.2.4;4. Experiments;323
10.2.4.1;4.1. Complete and Partial True Positives;323
10.2.4.2;4.2. Results;325
10.2.4.3;4.3. Limitations;326
10.2.5;5. Conclusions;327
10.2.6;References;329
10.3;Chapter 19: A rough fuzzy neural network approach for robust face detection and tracking;332
10.3.1;1. Introduction;332
10.3.2;2. Theoretical background;334
10.3.3;3. Face-detection method;335
10.3.3.1;3.1. The Proposed Multiscale Method;337
10.3.3.2;3.2. Clustering Subnetwork;338
10.3.4;4. Skin Map Segmentation;341
10.3.4.1;4.1. Skin Map Segmentation Results;341
10.3.5;5. Face detection;342
10.3.6;6. Face Tracking;343
10.3.7;7. Experiments;344
10.3.7.1;7.1. Face-Detection Experiments;344
10.3.7.1.1;7.1.1. Experiment 1;344
10.3.7.1.2;7.1.2. Experiment 2;346
10.3.7.2;7.2. Face-Tracking Experiments;347
10.3.7.2.1;7.2.1. Experiment 1;347
10.3.7.2.2;7.2.2. Experiment 2;348
10.3.7.2.3;7.2.3. Experiment 3;348
10.3.8;8. Conclusions and Future Works;349
10.3.9;Acknowledgments;349
10.3.10;References;350
10.4;Chapter 20: A content-based image retrieval approach based on document queries;352
10.4.1;1. Introduction;352
10.4.2;2. Related Work;353
10.4.3;3. Our approach;354
10.4.4;4. Experimental setup;359
10.4.5;5. Future research;364
10.4.6;Acknowledgments;365
10.4.7;References;365
10.5;Chapter 21: Optical flow-based representation for video action detection;368
10.5.1;1. Introduction;368
10.5.2;2. Related work;369
10.5.3;3. Temporal segment representation;371
10.5.4;4. Optical flow;373
10.5.4.1;4.1. Derivation of Optical Flow;374
10.5.4.2;4.2. Algorithms;374
10.5.4.2.1;4.2.1. Differential Techniques;375
10.5.4.2.2;4.2.2. Region-Based Matching;375
10.5.4.2.3;4.2.3. Energy-Based Methods;376
10.5.4.2.4;4.2.4. Phase-Based Techniques;376
10.5.5;5. Optical flow-based segment representation;376
10.5.5.1;5.1. Optical Flow Estimation;376
10.5.5.2;5.2. Proposed Representation;378
10.5.6;6. Cut Detection Inspiration;381
10.5.7;7. Experiments and results;382
10.5.8;8. Conclusion;385
10.5.9;References;386
10.6;Chapter 22: Anecdotes extraction from webpage context as image annotation;390
10.6.1;1. Introduction;390
10.6.2;2. Literature background;391
10.6.2.1;2.1. Automatic Image Annotation;391
10.6.2.2;2.2. Keyword Extraction;391
10.6.2.3;2.3. Lexical Chain;392
10.6.3;3. Research design;393
10.6.3.1;3.1. Research Model Overview;393
10.6.3.2;3.2. Chinese Lexical Chain Processing;394
10.6.3.2.1;3.2.1. Step 1: Build a directed graph;395
10.6.3.2.2;3.2.2. Step 2: Calculate average distribution rate and degree to concatenate vertices;396
10.6.3.2.3;3.2.3. Step 3: Run iteration;397
10.6.3.2.4;3.2.4. Step 4: Execute postprocessing;398
10.6.3.2.5;3.2.5. Term weighting;398
10.6.4;4. Evaluation;399
10.6.4.1;4.1. Evaluation of Primary Annotation;399
10.6.4.2;4.2. Expert Evaluation of Secondary Annotation;399
10.6.4.3;4.3. User Evaluation of Secondary Annotation;400
10.6.4.4;4.4. Results of Image Annotation;400
10.6.4.5;4.5. Performance Testing;401
10.6.5;5. Conclusion;402
10.6.6;Acknowledgments;402
10.6.7;References;402
10.7;Chapter 23: Automatic estimation of a resected liver region using a tumor domination ratio;406
10.7.1;1. Introduction;406
10.7.2;2. Estimating an ideal resected region using the TDR;408
10.7.3;3. Estimating an Optimal Resected Region Under the Practical Conditions in Surgery;411
10.7.4;4. Modifying a Resected Region Considering Hepatic Veins;413
10.7.5;5. Conclusion;414
10.7.6;References;415
10.8;Chapter 24: Gesture recognition in cooking video based on image features and motion features using Bayesian network class...;416
10.8.1;1. Introduction;416
10.8.2;2. Related work;418
10.8.3;3. Our Method;419
10.8.3.1;3.1. Our Recognition System Overview;419
10.8.3.2;3.2. Preprocessing Input Data;420
10.8.3.3;3.3. Image Feature Extraction;421
10.8.3.4;3.4. Motion Feature Extraction;422
10.8.3.5;3.5. BNs Training;422
10.8.4;4. Experiments;424
10.8.4.1;4.1. Dataset;424
10.8.4.2;4.2. Parameter Setting;424
10.8.4.3;4.3. Results;425
10.8.5;5. Conclusions;427
10.8.6;References;428
10.9;Chapter 25: Biometric analysis for finger vein data: Two-dimensional kernel principal component analysis;430
10.9.1;1. Introduction;430
10.9.2;2. Image Acquisition;431
10.9.3;3. Two-dimensional principal component analysis;432
10.9.4;4. Kernel mapping along row and column direction;433
10.9.4.1;4.1. Two-Dimensional KPCA;433
10.9.4.2;4.2. Kernel Mapping in Row and Column Directions and 2DPCA;434
10.9.5;5. Finger Vein Recognition Algorithm;435
10.9.5.1;5.1. ROI Extraction;435
10.9.5.2;5.2. Image Normalization;436
10.9.5.3;5.3. Feature Extraction and Classification Method;436
10.9.6;6. Experimental results on finger vein database;436
10.9.6.1;6.1. Experimental Setup-1;437
10.9.6.2;6.2. Experimental Setup-2;437
10.9.7;7. Conclusion;441
10.9.8;References;441
10.10;Chapter 26: A local feature-based facial expression recognition system from depth video;444
10.10.1;1. Introduction;444
10.10.2;2. Depth Image Preprocessing;445
10.10.3;3. Feature extraction;445
10.10.3.1;3.1. LDP Features;447
10.10.3.2;3.2. PCA on LDP Features;449
10.10.3.3;3.3. LDA on PCA Features;449
10.10.3.4;3.4. HMM for Expression Modeling and Recognition;450
10.10.4;4. Experiments and results;451
10.10.5;5. Concluding Remarks;454
10.10.6;Acknowledgments;454
10.10.7;References;454
10.11;Chapter 27: Automatic classification of protein crystal images;458
10.11.1;1. Introduction;458
10.11.2;2. Image Categories;459
10.11.3;3. System overview;460
10.11.4;4. Image preprocessing and feature extraction;461
10.11.4.1;4.1. Green Percentile Image Binarization;462
10.11.4.2;4.2. Region Features;463
10.11.4.3;4.3. Edge Features;463
10.11.4.4;4.4. Corner Features;465
10.11.4.5;4.5. Hough Line Features;465
10.11.5;5. Experimental results;465
10.11.6;6. Conclusion and Future Work;467
10.11.7;Acknowledgment;468
10.11.8;References;468
10.12;Chapter 28: Semi-automatic teeth segmentation in 3D models of dental casts using a hybrid methodology;470
10.12.1;1. Introduction;470
10.12.2;2. Dental Study Model;471
10.12.2.1;2.1. 3D Model Acquisition;471
10.12.3;3. Point cloud segmentation;472
10.12.3.1;3.1. RANSAC;473
10.12.3.2;3.2. Region Growing Segmentation;473
10.12.3.3;3.3. Min-Cut;473
10.12.3.4;3.4. Feature Sampling Using NARF;474
10.12.3.5;3.5. The Hybrid Technique;475
10.12.4;4. Results of segmentation techniques applied to 3D dental models;477
10.12.4.1;4.1. First, a Test Using RANSAC;477
10.12.4.2;4.2. Gum Extraction Using Region Growing;478
10.12.4.3;4.3. Per-Tooth Separation Using Min-Cut;478
10.12.4.4;4.4. Semi-Automatic Segmentation (Hybrid Technique);479
10.12.5;5. Comments and Discussions;480
10.12.6;6. Conclusion;481
10.12.7;Acknowledgments;481
10.12.8;References;481
10.13;Chapter 29: Effective finger vein-based authentication: Kernel principal component analysis;484
10.13.1;1. Introduction;484
10.13.2;2. Image Acquisition;485
10.13.3;3. Principal component analysis;486
10.13.4;4. Kernel principal component analysis;486
10.13.4.1;4.1. KPCA Algorithm;486
10.13.4.2;4.2. Kernel Feature Space versus PCA Feature Space;487
10.13.5;5. Experimental results;488
10.13.6;6. Conclusion;491
10.13.7;References;491
10.14;Chapter 30: Detecting distorted and benign blood cells using the Hough transform based on neural networks and decision trees;494
10.14.1;1. Introduction;494
10.14.2;2. Related work;496
10.14.3;3. Hough transforms;498
10.14.4;4. Overview of NN;499
10.14.5;5. Overview of the classification and regression tree;500
10.14.6;6. The proposed algorithm;500
10.14.7;7. The experimental results;503
10.14.8;8. Conclusions;508
10.14.9;References;509
11;Part 3: Registration, matching, and pattern recognition;512
11.1;Chapter 31: Improving performance with different length templates using both of correlation and absolute difference on si...;514
11.1.1;1. Introduction;514
11.1.2;2. Structure of the proposed method;515
11.1.3;3. 1D degeneration from videos;516
11.1.3.1;3.1. Motion Extraction from MPEG Videos and Construction of Space-Time Image;517
11.1.3.2;3.2. Motion Compensation Vectors from MPEG Videos;517
11.1.3.3;3.3. Space-Time Image;517
11.1.3.4;3.4. Matching Between Template ST Image and Retrieved ST Image;519
11.1.4;4. Similarity Measure with Correlation and Absolute Difference in Motion Retrieving Method;519
11.1.4.1;4.1. Similarity Measure in Motion Space-Time Image Based on Correlations;519
11.1.4.2;4.2. Similarity Measure in Motion Space-Time Image Based on Absolute Differences;520
11.1.5;5. Experiments on baseball games and evaluations;520
11.1.5.1;5.1. Baseball Game;520
11.1.5.2;5.2. Experimental Objects;521
11.1.5.3;5.3. Experiment Process;521
11.1.5.4;5.4. Correlation-Based Similarity Measure in Pitching Retrieval;521
11.1.5.5;5.5. Absolute Difference Based Similarity Measure in Pitching Retrieval;522
11.1.5.6;5.6. Combination Both of Correlations and Absolute Differences;522
11.1.6;6. Conclusions;524
11.1.7;References;524
11.2;Chapter 32: Surface registration by markers guided nonrigid iterative closest points algorithm;526
11.2.1;1. Introduction;526
11.2.2;2. Materials and methods;527
11.2.3;3. Results;529
11.2.4;4. Discussion and conclusions;529
11.2.5;Acknowledgments;532
11.2.6;References;534
11.3;Chapter 33: An affine shape constraint for geometric active contours;536
11.3.1;1. Introduction;536
11.3.2;2. Shape alignment using fourier descriptors;537
11.3.2.1;2.1. Euclidean Shapes Alignment;537
11.3.2.2;2.2. Affine Shape Alignment;539
11.3.2.2.1;2.2.1. Reparametrization of closed curve;539
11.3.2.2.2;2.2.2. Contours alignment using geometrical affine parameters estimation;540
11.3.2.2.2.1;Estimation of the scale factor a;540
11.3.2.2.2.2;Computation of the shift value l0;540
11.3.2.2.2.3;Computation of the matrix A's parameters;541
11.3.2.3;2.3. Discussion;541
11.3.2.4;2.4. Global Matching Using Affine Invariants Descriptors;542
11.3.3;3. Shape Prior for Geometric Active Contours;543
11.3.4;4. Experimental results;544
11.3.4.1;4.1. Robustness of the Proposed Shape Priors;544
11.3.4.2;4.2. Application to Object Detection;545
11.3.4.2.1;4.2.1. Case of Euclidean transformation;545
11.3.4.2.2;4.2.2. Case of affine transformation;547
11.3.5;5. Conclusions;550
11.3.6;References;551
11.4;Chapter 34: A topological approach for detection of chessboard patterns for camera calibration;554
11.4.1;1. Introduction;554
11.4.2;2. X-corner detector;556
11.4.3;3. Topological filter;557
11.4.4;4. Point Correspondences;559
11.4.5;5. Location refinement;560
11.4.6;6. Experimental Results;561
11.4.7;7. Conclusions;566
11.4.8;References;566
11.5;Chapter 35: Precision distortion correction technique based on FOV model for wide-angle cameras in automotive sector;570
11.5.1;1. Introduction;570
11.5.2;2. Related research;571
11.5.3;3. Distortion center estimation method using FOV model and 2D patterns;573
11.5.3.1;3.1. Distortion Correction Method Considering Distortion Center Estimation;573
11.5.3.2;3.2. FOV Distortion Model;574
11.5.3.3;3.3. Distortion Coefficient Estimation of the FOV Model;575
11.5.3.4;3.4. Distortion Center Estimation Method Using 2D Patterns;576
11.5.4;4. Experiment and evaluation;577
11.5.5;5. Application of algorithm to products improving vehicle convenience;582
11.5.5.1;5.1. Rear View Camera;583
11.5.5.2;5.2. Surround View Monitoring (SVM) System;583
11.5.6;6. Conclusion;584
11.5.7;Acknowledgments;585
11.5.8;References;585
11.6;Chapter 36: Distances and kernels based on cumulative distribution functions;588
11.6.1;1. Introduction;588
11.6.2;2. Distance and Similarity Measures Between Distributions;588
11.6.3;3. Distances on cumulative distribution functions;590
11.6.4;4. Experimental results and discussions;593
11.6.5;5. Generalization;595
11.6.6;6. Conclusions and Future Work;596
11.6.7;References;596
11.7;Chapter 37: Practical issues for binary code pattern unwrapping in fringe projection method;598
11.7.1;1. Introduction;598
11.7.2;2. Prior and related work;599
11.7.3;3. Practical issues for fringe pattern generation;599
11.7.4;4. Binary code generation for phase ambiguity resolution;603
11.7.5;5. Practical issues for projected fringe pattern photography;604
11.7.6;6. Three-dimensional reconstruction;606
11.7.6.1;6.1. How to Compute the Initial (Wrapped) Phase;606
11.7.6.2;6.2. How to Compute the Unwrapped Phase via Two Previous Outcomes;606
11.7.6.3;6.3. Noise Removal from Unwrapped Phase;609
11.7.6.4;6.4. Compute Differential Phase;609
11.7.6.5;6.5. Noise Removal from Differential Phase;610
11.7.6.6;6.6. How to Make RGB Texture Image from Projected Fringe Pattern Images;611
11.7.6.7;6.7. Object Cropping;612
11.7.6.8;6.8. Convert Differential Phase to Depth and 3D Visualization;613
11.7.6.9;6.9. Accuracy Evaluation of 3D Point Cloud;613
11.7.7;7. Summary and conclusions;616
11.7.8;References;617
11.8;Chapter 38: Detection and matching of object using proposed signature;620
11.8.1;1. Introduction;620
11.8.2;2. Overview on SURF method;621
11.8.3;3. Overview on Image Segmentation;623
11.8.4;4. The proposed algorithm;623
11.8.5;5. Experimental results;625
11.8.6;6. Conclusions;631
11.8.7;References;632
12;Index;634
Contributors
A. Abdel-Dayem, Department of Mathematics and Computer Science, Laurentian University, Sudbury, Ontario, Canada
Ryo Aita, Graduate School of Engineering, Utsunomiya University, 7-1-2, Yoto, Utsunomiya, Tochigi, Japan
Samet Akpinar, Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
Ferda Nur Alpaslan, Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
Kyota Aoki, Graduate School of Engineering, Utsunomiya University, 7-1-2, Yoto, Utsunomiya, Tochigi, Japan
Hamid R. Arabnia, University of Georgia, Computer Science, Athens, GA, USA
S. Arboleda-Duque, Department of Electric, Electronic and Computer Engineering, Universidad Nacional de Colombia; Department of Telecommunications Engineering, Universidad Católica de Manizales, Manizales, Caldas, Colombia
R. Ardekani, Molecular and Computational Biology, Department of Biological Sciences, USC, Los Angeles, CA, USA
Ramazan S. Aygün, DataMedia Research Lab, Computer Science Department, University of Alabama Huntsville, Huntsville, AL, USA
Pham The Bao, Faculty of Mathematics and Computer Science, Ho Chi Minh University of Science, Ho Chi Minh City, Viet Nam
Robert Beck, Department of Computing Sciences, Villanova University, Villanova, PA, USA
Christopher Blay, YouTube Corporation, San Bruno, CA, USA
H. Chen, Department of Preventive Medicine, Keck School of Medicine, USC, Los Angeles, CA, USA
Haijung Choi, SANE Co., Ltd, Seoul, Korea
Clarimar José Coelho, Computer Science and Computer Engineering Department (CMP), Pontifical Catholic University of Goiás (PUC-GO), Goiânia, Brazil
Eduardo Tavares Costa, Department of Biomedical Engineering, DEB/FEEC/UNICAMP, Campinas, Brazil
Anderson da Silva Soares, Computer Science Institute (INF), Federal University of Goiás (UFG), Goiânia, Brazil
Sepehr Damavandinejadmonfared, Department of Computing, Advanced Cyber Security Research Centre, Macquarie University, Sydney, New South Wales, Australia
Maria Stela Veludo de Paiva, Engineering School of São Carlos (EESC), Electrical Engineering Department, University of São Paulo (USP), São Paulo, Brazil
Leonidas Deligiannidis, Wentworth Institute of Technology, Department of Computer Science, Boston, MA, USA
Imren Dinç, DataMedia Research Lab, Computer Science Department, University of Alabama Huntsville, Huntsville, AL, USA
Semih Dinç, DataMedia Research Lab, Computer Science Department, University of Alabama Huntsville, Huntsville, AL, USA
Gregory Doerfler, Department of Computing Sciences, Villanova University, Villanova, PA, USA
Min Dong, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
Arezoo Ektesabi, Swinburne University of Technology, Melbourne, Victoria, Australia
Hany A. Elsalamony, Mathematics Department, Faculty of Science, Helwan University, Cairo, Egypt
B. Foley, Molecular and Computational Biology, Department of Biological Sciences, USC, Los Angeles, CA, USA
Faouzi Ghorbel, GRIFT Research Group, CRISTAL Laboratory, Ecole Nationale des Sciences de l’Informatique (ENSI), Campus Universitaire de la Manouba, Manouba, Tunisia
J.B. Gómez-Mendoza, Department of Electric, Electronic and Computer Engineering, Universidad Nacional de Colombia, Manizales, Caldas, Colombia
Marco Aurélio Granero, Department of Biomedical Engineering, DEB/FEEC/UNICAMP, Campinas; Federal Institute of Education, Science and Technology São Paulo (IFSP), Sao Paulo, Brazil
Marco Antônio Gutierrez, Division of Informatics/Heart Institute, HCFMUSP, Sao Paulo, Brazil
M. Hariyama, Graduate School of Information Sciences, Tohoku University, Sendai, Miyagi, Japan
A. Hematian, Department of Computer and Information Sciences, Towson University, Towson, MD, USA
Chuen-Min Huang, Department of Information Management, National Yunlin University of Science & Technology, Yunlin, Taiwan, ROC
Nguyen Tuan Hung, Faculty of Mathematics and Computer Science, Ho Chi Minh University of Science, Ho Chi Minh City, Viet Nam
M. Ilie, “Dunarea de Jos” University of Galati, Faculty of Automatic Control, Computers, Electrical and Electronics Engineering, Galati, Romania
Rowa’a Jamal, Electrical Engineering Department, University of Jordan, Amman, Jordan
J. Johnson, Department of Mathematics and Computer Science, Laurentian University, Sudbury, Ontario, Canada
Eui Sun Kang, Soongsil University, Seoul, Korea
Ajay Kapoor, Swinburne University of Technology, Melbourne, Victoria, Australia
A. Karimian, Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
Loay Khalaf, Electrical Engineering Department, University of Jordan, Amman, Jordan
Jin Young Kim, School of Electrical and Computer Engineering, Chonnam National University, Gwangju, South Korea
Manbae Kim, Department of Computer and Communications Engineering, Kangwon National University, Chunchon, Gangwon, Republic of Korea
Bernd Klässner, Technische Universität München, München, Germany
Vladimir Kulyukin, Department of Computer Science, Utah State University, Logan, UT, USA
Gustavo Teodoro Laureano, Computer Science Institute (INF), Federal University of Goiás (UFG), Goiânia, Brazil
Xiangyu Lu, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
Yide Ma, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
Saeed Mahmoudpour, Department of Computer and Communications Engineering, Kangwon National University, Chunchon, Gangwon, Republic of Korea
Karmel Manaa, Electrical Engineering Department, University of Jordan, Amman, Jordan
P. Marjoram, Department of Preventive Medicine, Keck School of Medicine, USC, Los Angeles, CA, USA
Mohamed Amine Mezghich, GRIFT Research Group, CRISTAL Laboratory, Ecole Nationale des Sciences de l’Informatique (ENSI), Campus Universitaire de la Manouba, Manouba, Tunisia
Slim M’Hiri, GRIFT Research Group, CRISTAL Laboratory, Ecole Nationale des Sciences de l’Informatique (ENSI), Campus Universitaire de la Manouba, Manouba, Tunisia
José Manuel...