Liu / Gu / Howlett
Robot Intelligence: An Advanced Knowledge Processing Approach
1st edition, 2010
E-book, English, 294 pages
Series: Advanced Information and Knowledge Processing
ISBN: 978-1-84996-329-9
Publisher: Springer
Format: PDF
Copy protection: PDF watermark
Robot intelligence has become a major focus of intelligent robotics. Recent innovation in computational intelligence, including fuzzy learning, neural networks, evolutionary computation and classical Artificial Intelligence, provides sufficient theoretical and experimental foundations to enable robots to undertake a variety of tasks with reasonable performance. This book reflects recent advances in the field from an advanced knowledge processing perspective; it presents attempts to address the constraints of the knowledge-based information explosion by integrating computational intelligence into the robotics context.
Dr. Honghai Liu is a Reader and Head of the Intelligent Systems & Robotics Research Group (ISR), School of Creative Technologies, at the University of Portsmouth. He previously held research appointments in the Departments of Computing Science and Engineering at the Universities of London and Aberdeen, and project-leader appointments in the large-scale industrial control and system integration industry. He has published over 150 refereed journal and conference papers, including three Best Paper Awards. He is interested in approximate computation, machine intelligence, pattern recognition and their practical applications, with an emphasis on approaches that could contribute to the intelligent connection of perception to action in a systems context. To this end, he has been developing a framework based on approximate computing, which has been applied to human motion analysis, multi-fingered robot manipulation, data novelty detection and intelligent control for electric vehicle suspensions, with substantial results. He is a Senior Member of the IEEE and a Member of the IET.

Dongbing Gu's current research interests include multi-agent systems, wireless sensor networks, distributed control algorithms, distributed information fusion, cooperative control, reinforcement learning, fuzzy logic and neural network based motion control, model predictive control, wavelet multi-scale image edge detection, and Bayesian multi-scale image segmentation. His work combines fundamental concepts and tools from computer science, networks, and systems and control theory.

Robert Howlett has considerable expertise in the use of intelligent systems for the solution of industrial problems. He has been successful in applying neural networks, expert and fuzzy methods, web intelligence and related technology to: sustainability (renewable energy; measurement, control, simulation and modelling of energy systems); condition monitoring (diagnostic tools and systems; fault location and identification; virtual sensors); and automotive electronics (engine management systems; monitoring and control of small engines). He is the Executive Chair of the UKES International organization, which facilitates knowledge transfer and research in areas including intelligent systems, sustainability, and knowledge transfer. Through the UKES Smart Systems Centre he provides consultancy services on, for example, Knowledge Transfer Partnerships, the EU Interreg Anglo-French funding programme, and technical subjects within his expertise. By setting up and managing over 20 collaborative projects with SMEs and other companies, managing the University of Brighton Knowledge Transfer Partnerships (KTP) Centre for a number of years, and chairing the KTP National Forum, he has become nationally recognised in knowledge and technology transfer, the commercialisation of research, and the third-mission agenda.

Dr Yonghuai Liu has completed BSc and MSc studies and also holds two PhDs. He gained solid knowledge of geography, cartography, mathematics, and economics during his BSc studies, and of pattern recognition, image processing, and mathematics during his MSc studies. His first PhD, completed in China, gave him solid knowledge of artificial intelligence, uncertain reasoning, and mathematics. During this period he researched uncertain reasoning, expert systems, artificial intelligence, pattern recognition, image processing, and multimedia, and taught both undergraduate and postgraduate courses on artificial intelligence, discrete mathematics, combinatorial mathematics, and multimedia. He subsequently received an ORS award and studied for his second PhD at The University of Hull under the supervision of Dr Marcos A Rodrigues. He is currently a lecturer in the Department of Computer Science, The University of Wales, Aberystwyth.
Authors/Editors
Further Information & Material
1;Preface;4
2;Contents;8
3;Contributors;10
4;Programming-by-Demonstration of Robot Motions;13
4.1;Introduction;13
4.2;Learning from Human Demonstration;15
4.2.1;Interpretation of Demonstrations in Hand-State Space;15
4.2.2;Skill Encoding Using Fuzzy Modeling;16
4.3;Generation and Execution of Robotic Trajectories Based on Human Demonstration;18
4.3.1;Mapping Between Human and Robot Hand States;19
4.3.2;Definition of Hand-States for Specific Robot Hands;20
4.3.3;Next-State-Planners for Trajectory Generation;22
4.3.4;Demonstrations of Pick-and-Place Tasks;24
4.3.4.1;Variance from Multiple Demonstrations;24
4.4;Experimental Platform;24
4.5;Experimental Evaluation;26
4.5.1;Experiment 1: Learning from Demonstration;26
4.5.1.1;Importance of the Demonstration;27
4.5.2;Experiment 2: Generalization in Workspace;29
4.5.3;Experiment 3: a Complete Pick-and-Place Task;32
4.6;Conclusions and Future Work;32
4.7;References;34
5;Grasp Recognition by Fuzzy Modeling and Hidden Markov Models;36
5.1;Introduction;36
5.2;An Experimental Platform for PBD;37
5.3;Simulation of Grasp Primitives;39
5.3.1;Geometrical Modeling;39
5.3.2;Modeling of Inverse Kinematics;40
5.4;Modeling of Grasp Primitives;42
5.4.1;Modeling by Time-Clustering;42
5.4.2;Training of Time Cluster Models Using New Data;43
5.5;Recognition of Grasps: Three Methods;44
5.5.1;Recognition of Grasps Using the Distance Between Fuzzy Clusters;44
5.5.2;Recognition Based on Qualitative Fuzzy Recognition Rules;45
5.5.2.1;Distance Norms;45
5.5.2.2;Extrema in the Distance Norms and Segmentation;46
5.5.2.3;Set of Fuzzy Rules;48
5.5.2.4;Similarity Degrees;49
5.5.3;Recognition Based on Time-Cluster Models and HMM;50
5.6;Experiments and Simulations;53
5.6.1;Time Clustering and Modeling;53
5.6.2;Grasp Segmentation and Recognition;55
5.7;Conclusions;57
5.8;References;58
6;Distributed Adaptive Coordinated Control of Multi-Manipulator Systems Using Neural Networks;59
6.1;Introduction;59
6.2;Preliminaries;61
6.2.1;Multi-Manipulator System Description;61
6.2.2;Radial Basis Function Neural Network;63
6.3;Controller Design;65
6.4;Performance Analysis;67
6.5;Simulation Example;71
6.6;Conclusion;75
6.7;References;78
7;A New Framework for View-Invariant Human Action Recognition;80
7.1;Introduction;80
7.2;Overview of the Proposed Approach;85
7.3;Exemplar Selection and Representation;87
7.3.1;Key Pose Extraction;87
7.3.2;2D Silhouette Image Generation;88
7.3.3;Contour Shape Feature;89
7.4;Action Modelling and Recognition;91
7.4.1;Exemplar-based Hidden Markov Model;91
7.4.2;Action Modelling;92
7.4.3;Action Recognition;92
7.4.4;Action Category Revalidation;93
7.5;Experiments;95
7.6;Conclusion;99
7.7;References;100
8;Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions;103
8.1;Introduction;103
8.2;Human Skeletal Representation;104
8.3;The Learning Method;106
8.3.1;Model Description: the Fuzzy Membership Function;106
8.3.2;Model Generation: Fuzzy Gaussian Inference;107
8.3.3;Membership Evaluation;109
8.4;Mathematical Properties;109
8.5;Extracting Fuzzy Rules Using Genetic Programming;112
8.6;Experiment and Results;113
8.6.1;Apparatus;113
8.6.2;Participants;113
8.6.3;Procedure;115
8.6.4;Results;118
8.7;Discussion;121
8.8;Conclusion;122
8.9;References;122
9;Obstacle Detection Using Cross-Ratio and Disparity Velocity;125
9.1;Introduction;125
9.1.1;Background;125
9.1.2;Algorithm Overview;126
9.2;Generation of Mesh Maps;128
9.2.1;Mesh Generation;128
9.3;Estimation of the Ground Floor;130
9.4;Identification of Safe Regions within the Ground Plane;134
9.4.1;Incremental Addition of Feature Points;134
9.4.2;Safe Path Detection;137
9.5;Further Evaluation;141
9.5.1;Estimation of Ground Floor;141
9.5.2;Obstacle Detection;144
9.6;Summary;145
9.7;References;148
10;Learning and Vision-Based Obstacle Avoidance and Navigation;150
10.1;Introduction;150
10.2;Depth Perception;152
10.2.1;Absolute Depth and Binocular Vision;152
10.2.1.1;Absolute Depth;152
10.2.2;Relative Depth and Monocular Vision;155
10.2.2.1;Edge Direction and Perspective;155
10.2.2.2;Clarity of Detail and Texture Gradient;156
10.2.2.3;Size Cues;157
10.2.2.4;Motion Depth Cues;157
10.2.2.5;Occlusion;158
10.3;Why Learning and How to Learn for Monocular Vision;158
10.3.1;The Role of Experience;158
10.3.2;Learning Methods;158
10.3.2.1;MILN Learning;161
10.4;Special Problem: Illumination Changes in Outdoor Scenes;162
10.5;Finding Passable Regions for Obstacle Avoidance from Single Image Using MILN;162
10.5.1;Feature Vector;162
10.5.1.1;Edge;163
10.5.1.2;Clarity of Detail and Texture Gradient;163
10.5.1.3;Color Similarity with Lighting Invariance;163
10.5.1.4;Pixel Position and Region Connection;165
10.5.2;Training Data Generation and Experiment;166
10.5.3;Performance Evaluation;168
10.6;Control Law and Navigation;169
10.6.1;From Obstacle Boundaries to Motor Commands;170
10.7;Discussion;171
10.7.1;Learning Ability;171
10.7.2;Changing Lighting Conditions;171
10.7.3;Learning from Experience;172
10.8;References;172
11;A Fraction Distortion Model for Accurate Camera Calibration and Correction;175
11.1;Introduction;175
11.1.1;Previous Work;176
11.1.2;The Proposed Work;177
11.2;A New Distortion Model;178
11.3;A Novel Calibration Algorithm;179
11.3.1;Pin-Hole Camera Model;183
11.3.2;Optimisation of All Parameters;184
11.3.3;The Correction of the Distorted Image Points;185
11.3.4;Summary of the Novel Camera Calibration and Correction Algorithm;185
11.4;Experimental Results;186
11.4.1;Synthetic Data;186
11.4.1.1;Calibration and Correction;187
11.4.1.2;Collinearity Constraint;190
11.4.1.3;Different Levels of Noise;191
11.4.2;Real Images;192
11.5;Conclusion;193
11.6;References;195
12;A Leader-Follower Flocking System Based on Estimated Flocking Center;197
12.1;Introduction;197
12.2;Flocking System;199
12.3;Flocking Algorithms;201
12.4;Algorithm Stability;202
12.5;Experiments;204
12.6;Simulations;211
12.7;Conclusions;213
12.8;References;213
13;A Behavior Based Control System for Surveillance UAVs;215
13.1;Introduction;215
13.2;Platform and Atomic Actions;218
13.2.1;UAV Platform;218
13.2.2;System Structure;219
13.2.3;Atomic Actions;220
13.3;Software Architecture and Behavior Development;221
13.3.1;Software Architecture;221
13.3.2;Behavior Development;223
13.3.2.1;Ground Behavior;223
13.3.2.2;Takeoff Behavior;223
13.3.2.3;Hovering Behavior;223
13.3.2.4;GPS Landing Behavior;224
13.3.2.5;Vision Landing Behavior;224
13.3.2.6;Emergency Landing Behavior;224
13.3.2.7;GPS Tracking Behavior;224
13.3.2.8;Vision Tracking Behavior;224
13.3.2.9;Obstacle Avoidance Behavior;225
13.3.2.10;Trajectory Tracking Behavior;225
13.4;Vision Module Development;225
13.4.1;SURF Algorithm;225
13.4.2;Coordinate Transformation;226
13.4.3;Kalman Filter;227
13.5;Experiment Results;228
13.5.1;Hovering Behavior;228
13.5.2;Vision Tracking Behavior;229
13.5.3;Trajectory Tracking Behavior;230
13.5.4;Trajectory Tracking Behavior with Obstacle Avoidance Capability;231
13.5.5;GPS Landing Behavior;231
13.5.6;Vision Landing Behavior;231
13.6;Conclusion and Future Work;232
13.7;References;233
14;Hierarchical Composite Anti-Disturbance Control for Robotic Systems Using Robust Disturbance Observer;235
14.1;Introduction;235
14.2;Formulation of the Problem;237
14.3;Hierarchical Composite Anti-Disturbance Control (HCADC);238
14.4;Applications to a Two-Link Robotic System;240
14.5;Conclusions;243
14.6;Proof of Lemma 11.1;245
14.7;Proof of Lemma 11.2;247
14.8;References;248
15;Autonomous Navigation for Mobile Robots with Human-Robot Interaction;250
15.1;Introduction;250
15.2;Human-Robot Interaction;252
15.3;Subject Following with Target Pursuing;254
15.3.1;Correspondence;254
15.3.2;Multi-Cue Integration;256
15.3.3;Robust Tracking;257
15.3.4;Pursuing;258
15.3.5;Mapping;259
15.4;Qualitative Localization;261
15.4.1;Scene Association;262
15.4.2;Scene Recognition;264
15.5;Planning and Navigation;266
15.6;Conclusion;269
15.7;References;271
16;Prediction-Based Perceptual System of a Partner Robot for Natural Communication;274
16.1;Introduction;274
16.2;Prediction-Based Perceptual System for A Partner Robot;276
16.2.1;A Partner Robot: Hubot;276
16.2.2;A Prediction-Based Perceptual System;277
16.2.3;Perceptual Modules;279
16.2.4;Differential Extraction;279
16.2.5;Human Detection;280
16.2.6;Object Detection;281
16.2.7;Hand Motion Recognition;282
16.3;Architecture of Prediction Based Perceptual System;282
16.3.1;Input Layer Based on Spiking Neurons;282
16.3.2;Clustering Layer Based on Unsupervised Learning;283
16.3.3;Prediction Layer and Perceptual Module Selection Layer;284
16.3.4;Update of Learning Rate for Perceptual Module Selection;285
16.3.5;Learning for Prediction and Perceptual Module Selection;286
16.4;Experimental Results;287
16.4.1;Clustering for Prediction;287
16.4.2;Real-Time Learning in Interaction;290
16.4.3;Additional Learning;291
16.5;Conclusions;293
16.6;References;294
17;Index;296