Bușoniu / Tamás | Handling Uncertainty and Networked Structure in Robot Control | E-Book | www2.sack.de

E-book, English, Volume 42, 407 pages

Series: Studies in Systems, Decision and Control

Bușoniu / Tamás: Handling Uncertainty and Networked Structure in Robot Control


1st edition, 2015
ISBN: 978-3-319-26327-4
Publisher: Springer Nature Switzerland
Format: PDF
Copy protection: PDF watermark




This book focuses on two challenges posed in robot control by the increasing adoption of robots in everyday human environments: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex-scenario perception to tackle sensing uncertainty. Part III completes the book with the control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies that demonstrate the applicability of the techniques on real robots or in simulation. The platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers and graduate students in robot control and perception. It also benefits researchers in related areas, such as computer vision, nonlinear and learning control, and multi-agent systems.

Lucian Busoniu received the M.Sc. degree (valedictorian) from the Technical University of Cluj-Napoca, Romania, in 2003 and the Ph.D. degree (cum laude) from the Delft University of Technology, the Netherlands, in 2009. He has held research positions in the Netherlands and France, and is currently an associate professor with the Department of Automation at the Technical University of Cluj-Napoca. His fundamental interests include planning-based methods for nonlinear optimal control, reinforcement learning and dynamic programming with function approximation, and multiagent systems, while his practical focus is applying these techniques to robotics. He has coauthored a book and more than 50 papers and book chapters on these topics. He received the 2009 Andrew P. Sage Award for the best paper in the IEEE Transactions on Systems, Man, and Cybernetics.

Levente Tamás received the M.Sc. (valedictorian) and Ph.D. degrees in electrical engineering from the Technical University of Cluj-Napoca, Romania, in 2005 and 2010, respectively. He took part in several postdoctoral programs on 3D perception and robotics, most recently at the Bern University of Applied Sciences, Switzerland. He is currently with the Department of Automation, Technical University of Cluj-Napoca, Romania. His research focuses on 3D perception and planning for autonomous mobile robots, and has resulted in several well-ranked conference papers, journal articles, and book chapters in this field.


Further Information & Material


Contents
Contributors
Acronyms
Introduction

Part I Learning Control in Unknown Environments

1 Robot Learning for Persistent Autonomy
  1.1 Persistent Autonomy
  1.2 Robot Learning Architecture
  1.3 Learning of Reactive Behavior
    1.3.1 Autonomous Robotic Valve Turning
    1.3.2 Related Work
    1.3.3 Hierarchical Learning Architecture
    1.3.4 Learning Methodology
    1.3.5 Imitation Learning
    1.3.6 Force/Motion Control Strategy
    1.3.7 Learning of Reactive Behavior Using RFDM
    1.3.8 Iterative Learning Control
  1.4 Learning to Recover from Failures
    1.4.1 Methodology
    1.4.2 Fault Detection Module
    1.4.3 Problem Formulation
    1.4.4 Learning Methodology
    1.4.5 Experiments
  1.5 Conclusion
  References

2 The Explore–Exploit Dilemma in Nonstationary Decision Making under Uncertainty
  2.1 Introduction
    2.1.1 Decision Making and Control under Uncertainty in Nonstationary Environments
  2.2 Case Study 1: Model-Based Reinforcement Learning for Nonstationary Environments
    2.2.1 Gaussian Process Regression and Clustering
    2.2.2 GP-NBC-MBRL Solution to Nonstationary MDPs
    2.2.3 Example Experiment
    2.2.4 Summary of Case Study 1
  2.3 Case Study 2: Monitoring Spatiotemporally Evolving Processes Using Unattended Ground Sensors and Data-Ferrying UAS
    2.3.1 Connecting the FoW Functional to Data
    2.3.2 Problem Definition
    2.3.3 Solution Methods
    2.3.4 Simulation Results
    2.3.5 Summary of Case Study 2
  References

3 Learning Complex Behaviors via Sequential Composition and Passivity-Based Control
  3.1 Introduction
  3.2 Sequential Composition
  3.3 Passivity-Based Control
    3.3.1 Interconnection and Damping Assignment Passivity-Based Control
    3.3.2 Algebraic IDA-PBC
  3.4 Estimating the Domain of Attraction
  3.5 Learning via Composition
    3.5.1 Actor–Critic
    3.5.2 Algebraic Interconnection and Damping Assignment Actor–Critic
    3.5.3 Sequential Composition Reinforcement Learning Algorithm
  3.6 An Example Simulation
  3.7 Conclusions
  References

4 Visuospatial Skill Learning
  4.1 Introduction
  4.2 Related Work
  4.3 Introduction to Visuospatial Skill Learning
    4.3.1 Terminology
    4.3.2 Problem Statement
    4.3.3 Methodology
  4.4 Implementation of VSL
    4.4.1 Coordinate Transformation
    4.4.2 Image Processing
    4.4.3 Trajectory Generation
    4.4.4 Grasp Synthesis
  4.5 Experimental Results
    4.5.1 Simulated Experiments
    4.5.2 Real-World Experiments
  4.6 Conclusions
  References

Part II Dealing with Sensing Uncertainty

5 Observer Design for Robotic Systems via Takagi–Sugeno Models and Linear Matrix Inequalities
  5.1 Introduction
  5.2 Preliminaries
    5.2.1 Descriptor Models of Robotic Systems
    5.2.2 Takagi–Sugeno Models
    5.2.3 Linear Matrix Inequalities
  5.3 Observer Design for TS Descriptor Models
  5.4 Simulation Example
  5.5 Summary
  References

6 Homography Estimation Between Omnidirectional Cameras Without Point Correspondences
  6.1 Introduction
  6.2 Planar Homography for Central Omnidirectional Cameras
  6.3 Homography Estimation
    6.3.1 Construction of a System of Equations
    6.3.2 Normalization and Initialization
  6.4 Omnidirectional Camera Models
    6.4.1 The General Catadioptric Camera Model
    6.4.2 Scaramuzza's Omnidirectional Camera Model
  6.5 Experimental Results
  6.6 Relative Pose from Homography
  6.7 Conclusions
  References

7 Dynamic 3D Environment Perception and Reconstruction Using a Mobile Rotating Multi-beam Lidar Scanner
  7.1 Introduction
  7.2 3D People Surveillance
    7.2.1 Foreground-Background Separation
    7.2.2 Pedestrian Detection and Multi-target Tracking
    7.2.3 Evaluation
  7.3 Real-Time Vehicle Detection for Autonomous Cars
    7.3.1 Object Extraction by Point Cloud Segmentation
    7.3.2 Object Level Feature Extraction and Vehicle Recognition
    7.3.3 Evaluation of Real-Time Vehicle Detection
  7.4 Large Scale Urban Scene Analysis and Reconstruction
    7.4.1 Multiframe Point Cloud Processing Framework
    7.4.2 Experiments
  7.5 Conclusion
  References

8 RoboSherlock: Unstructured Information Processing Framework for Robotic Perception
  8.1 Introduction
  8.2 Related Work and Motivation
  8.3 Overview of RoboSherlock
  8.4 Conceptual Framework
    8.4.1 Common Analysis Structure (CAS)
    8.4.2 Analysis Engines in RoboSherlock
    8.4.3 Object Perception Type System
    8.4.4 Integrating Perception Capabilities into RoboSherlock
  8.5 Tracking and Entity Resolution
  8.6 Information Fusion
  8.7 Experiments and Results
    8.7.1 Illustrative Example
    8.7.2 Entity Resolution
    8.7.3 Information Fusion
  8.8 Conclusion and Future Work
  References

9 Navigation Under Uncertainty Based on Active SLAM Concepts
  9.1 Introduction
    9.1.1 SLAM
    9.1.2 Active Mapping
    9.1.3 Active Localization
    9.1.4 Active SLAM
  9.2 High Level View of General Active SLAM Algorithms
  9.3 Uncertainty Criteria
  9.4 Main Paradigms of Active SLAM
    9.4.1 A First Approach: Local Search Using Optimality Criteria
    9.4.2 A Second Look: An Information Gain Approach
    9.4.3 A Third Strategy: Considering Multiple Steps Ahead
  9.5 Navigation Under Uncertainty: An Active SLAM Related Application
    9.5.1 Path Planning in the Belief Space
  9.6 Our Approach: Fast Minimum Uncertainty Search Over a Pose Graph Representation
    9.6.1 Metric Calculation
    9.6.2 Increasing Traversability
    9.6.3 Decision Points
    9.6.4 Decision Graph
    9.6.5 Searching over the Decision Graph
  9.7 Experiments
    9.7.1 Graph Reduction
    9.7.2 H0: Are the Minimum Uncertainty Path and the Shortest Necessarily Equal?
    9.7.3 Timing Comparisons
  9.8 Discussion
  References

10 Interactive Segmentation of Textured and Textureless Objects
  10.1 Introduction and Motivation
  10.2 Overview of Interactive Segmentation Processing Steps
  10.3 Segmentation of Cluttered Tabletop Scene
  10.4 Push Point Selection and Validation
    10.4.1 Contact Points from Concave Corners
    10.4.2 Push Direction and Execution
  10.5 Feature Extraction and Tracking
  10.6 Feature Trajectory Clustering
    10.6.1 Randomized Feature Trajectory Clustering
    10.6.2 Trajectory Clustering Analysis
    10.6.3 Exhaustive Graph-Based Trajectory Clustering
  10.7 Stopping Criteria and Finalizing Object Models
    10.7.1 Verification of Correctness of Segmentation
    10.7.2 Dense Model Reconstruction
  10.8 Results
    10.8.1 Random Versus Corner-Based Pushing
    10.8.2 Trajectory Clustering
    10.8.3 System Integration and Validation
  10.9 Conclusions
  References

Part III Control of Networked and Interconnected Robots

11 Vision-Based Quadcopter Navigation in Structured Environments
  11.1 Introduction
  11.2 Quadcopter Structure and Control
  11.3 Quadcopter Hardware and Software
  11.4 Methodological and Theoretical Background
    11.4.1 Feature Detection
    11.4.2 Feature Tracking
  11.5 Approach
    11.5.1 Software Architecture
    11.5.2 Quadcopter Initialization
    11.5.3 Perspective Vision
    11.5.4 VP Tracking
    11.5.5 Control
  11.6 Experiments and Results
    11.6.1 VP Motion Model Results
    11.6.2 Nonlinear Estimator Results
    11.6.3 Indoor and Outdoor Results
    11.6.4 Control Results
  11.7 Summary and Perspectives
  References

12 Bilateral Teleoperation in the Presence of Jitter: Communication Performance Evaluation and Control
  12.1 Introduction
  12.2 Communication Performance Evaluation for Wireless Teleoperation
    12.2.1 The Wireless Communication Medium
    12.2.2 Implementation of Application Layer Measurements
    12.2.3 Experimental Measurements
  12.3 Control of Bilateral Teleoperation Systems in the Presence of Jitter
    12.3.1 Control Approaches to Assure the Stability in Bilateral Teleoperation Systems
    12.3.2 Bilateral Control Scheme to Deal with Jitter Effects
    12.3.3 Control Experiments
  12.4 Conclusions
  References

13 Decentralized Formation Control in Fleets of Nonholonomic Robots with a Clustered Pattern
  13.1 Introduction
  13.2 Problem Formulation and Preliminaries
    13.2.1 Robot Dynamics and Tracking Error
    13.2.2 Network Topology and Agreement Dynamics
  13.3 Solving the Consensus and Tracking Problems
    13.3.1 Linear Consensus for Networks with a Cluster Pattern
    13.3.2 Tracking for Nonholonomic Systems
  13.4 Overall Controller Design
  13.5 Simulation Results
    13.5.1 Small-Scale Example: Ellipse Formation
    13.5.2 Larger-Scale Example: Three-Leaf Clover Formation
  13.6 Conclusions and Perspectives
  References

14 Hybrid Consensus-Based Formation Control of Nonholonomic Mobile Robots
  14.1 Introduction
  14.2 Background on Hybrid Automata
  14.3 Hybrid Consensus-Based Formation Control of Holonomic Robots
    14.3.1 Regulation Controller Design
    14.3.2 Consensus-Based Formation Controller Design
    14.3.3 Hybrid Consensus-Based Regulation and Formation Controller Design
  14.4 Hybrid Consensus-Based Formation Control of Non-holonomic Robots
    14.4.1 Nonholonomic Mobile Robot Equations of Motion
    14.4.2 Regulation Controller of Mobile Robots
    14.4.3 Consensus-Based Formation Control of Nonholonomic Mobile Robots
    14.4.4 Hybrid Consensus-Based Formation Control
  14.5 Simulation Results
    14.5.1 Omnidirectional Robots
    14.5.2 Nonholonomic Mobile Robots
  14.6 Conclusions and Future Work
  References

15 A Multi Agent System for Precision Agriculture
  15.1 Introduction
  15.2 General Architecture
  15.3 Methodology
    15.3.1 Model Identification
    15.3.2 Low-Level PID Cascade Control
    15.3.3 High-Level Model-Based Predictive Control
  15.4 Experimental Results
    15.4.1 Formation Control of UGVs
    15.4.2 Path Following for the Quadrotor
    15.4.3 Quadrotor as Flying Sensor for Ground Agents
  15.5 Conclusions
  References

Index


