E-book, English, 306 pages
Harada / Yoshida / Yokoi: Motion Planning for Humanoid Robots
1st edition, 2010
ISBN: 978-1-84996-220-9
Publisher: Springer
Format: PDF
Copy protection: PDF watermark
Research on humanoid robots has largely aimed at developing robots that can replace humans in the performance of certain tasks. Motion planning for these robots is difficult because of their complex kinematics, dynamics, and environments; it is consequently one of the key topics in humanoid robotics research, and the last few years have witnessed considerable progress in the field. Motion Planning for Humanoid Robots surveys these recent advances in both the theoretical and the practical aspects of humanoid motion planning.
Several motion planning frameworks are presented, including one for skill coordination and learning and one for manipulation and grasping tasks. The book also addresses the problem of planning sequences of contacts that support acyclic motion in a highly constrained environment, and describes a motion planner that enables a humanoid robot to push an object to a desired location on a cluttered table.
The main areas of interest include:
• whole-body motion planning,
• task planning,
• biped gait planning, and
• sensor feedback for motion planning.
Torque-level control of multi-contact behavior, autonomous manipulation of movable obstacles, and movement control and planning architecture are also covered.
Motion Planning for Humanoid Robots will help readers understand current research on humanoid motion planning. It is written for industrial engineers and for advanced undergraduate and postgraduate students.
Kensuke Harada received M.E. and Ph.D. degrees in Mechanical Engineering from the Graduate School of Engineering, Kyoto University, in 1994 and 1997, respectively. After working as a research associate at Hiroshima University, he joined the National Institute of Advanced Industrial Science and Technology (AIST) in 2002. From November 2005 to November 2006 he was a visiting scholar at Stanford University's Computer Science Department. He received the IEEE ISATP Outstanding Paper Award in 2001 and the IEEE ICRA Best Video Award in 2004.
Eiichi Yoshida received M.E. and Ph.D. degrees in Precision Machinery Engineering from the Graduate School of Engineering, the University of Tokyo, in 1993 and 1996, respectively. In 1996 he joined the former Mechanical Engineering Laboratory, now the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. From 1990 to 1991 he was a visiting research associate at the Swiss Federal Institute of Technology in Lausanne (EPFL). From 2004 to 2008 he served as co-director of the AIST/IS-CNRS/ST2I Joint French-Japanese Robotics Laboratory (JRL) at LAAS-CNRS, Toulouse, France, and since 2009 he has been co-director of CNRS-AIST JRL (Joint Robotics Laboratory), UMI3218/CRT, AIST, Japan. His research interests include robot task and motion planning, modular robotic systems, and humanoid robots.
Kazuhito Yokoi is the leader of the Humanoid Research Group and deputy director of the Intelligent Systems Research Institute at AIST. He received his B.E. degree in Mechanical Engineering from the Nagoya Institute of Technology in 1984, and his M.E. and Ph.D. degrees in Mechanical Engineering Science from the Tokyo Institute of Technology in 1986 and 1994, respectively. In 1986 he joined the Mechanical Engineering Laboratory, Ministry of International Trade and Industry. He is also a member of CNRS-AIST JRL, UMI3218/CRT, and an adjunct professor at the Cooperative Graduate School of the University of Tsukuba. From November 1994 to October 1995 he was a visiting scholar at the Robotics Laboratory of Stanford University's Computer Science Department. His research interests include humanoids and human-centered robotics.
Authors/Editors
Further Information & Material
Preface
Contents
List of Contributors
1 Navigation and Gait Planning
1.1 Introduction
1.1.1 Navigation Planning
1.1.2 Navigation and Legs
1.2 Dimensionality Reductions
1.3 Contact Forces and Hybrid Dynamics
1.4 Stance Connectivity
1.5 Terrain Evaluation
1.6 A Simple Example
1.6.1 Environment Representation
1.6.2 The State Space
1.6.3 The Action Model
1.6.4 The State–Action Evaluation Function
1.6.4.1 Location Metrics
1.6.4.2 Step Cost
1.6.5 Using the Simple Planner
1.7 Estimated Cost Heuristic
1.8 Limited-time and Tiered Planning
1.9 Adaptive Actions
1.9.1 Adaptation Algorithm
1.10 Robot and Environment Dynamics
1.11 Summary
References
2 Compliant Control of Whole-body Multi-contact Behaviors in Humanoid Robots
2.1 Introduction
2.2 Modeling Humanoids Under Multi-contact Constraints
2.2.1 Kinematic and Dynamic Models
2.2.2 Task Kinematics and Dynamics Under Supporting Constraints
2.2.3 Modeling of Contact Centers of Pressure, Internal Forces, and CoM Behavior
2.2.4 Friction Boundaries for Planning CoM and Internal Force Behaviors
2.3 Prioritized Whole-body Torque Control
2.3.1 Representation of Whole-body Skills
2.3.2 Prioritized Torque Control
2.3.3 Real-time Handling of Dynamic Constraints
2.3.4 Task Feasibility
2.3.5 Control of Contact Centers of Pressure and Internal Tensions/Moments
2.4 Simulation Results
2.4.1 Multi-contact Behavior
2.4.2 Real-time Response to Dynamic Constraints
2.4.3 Dual Arm Manipulation
2.5 Conclusion and Discussion
References
3 Whole-body Motion Planning – Building Blocks for Intelligent Systems
3.1 Introduction
3.2 Models for Movement Control and Planning
3.2.1 Control System
3.2.1.1 Task Kinematics
3.2.1.2 Null Space Control
3.2.2 Trajectory Generation
3.2.3 Task Relaxation: Displacement Intervals
3.3 Stance Point Planning
3.4 Prediction and Action Selection
3.4.1 Visual Perception
3.4.2 Behavior System
3.4.3 Experiments
3.5 Trajectory Optimization
3.6 Planning Reaching and Grasping
3.6.1 Acquisition of Task Maps for Grasping
3.6.2 Integration into Optimization Procedure
3.6.3 Experiments
3.7 Conclusion
References
4 Planning Whole-body Humanoid Locomotion, Reaching, and Manipulation
4.1 Introduction
4.1.1 Basic Motion Planning Methods
4.1.2 Hardware and Software Platform
4.2 Collision-free Locomotion: Iterative Two-stage Approach
4.2.1 Two-stage Planning Framework
4.2.2 Second Stage: Smooth Path Reshaping
4.3 Reaching: Generalized Inverse Kinematic Approach
4.3.1 Method Overview
4.3.2 Generalized Inverse Kinematics for Whole-body Motion
4.3.2.1 Inverse Kinematics for Prioritized Tasks
4.3.2.2 Monitoring Task Execution Criteria
4.3.2.3 Support Polygon Reshaping
4.3.3 Results
4.4 Manipulation: Pivoting a Large Object
4.4.1 Pivoting and Small-time Controllability
4.4.2 Collision-free Pivoting Sequence Planning
4.4.3 Whole-body Motion Generation and Experiments
4.4.4 Regrasp Planning
4.5 Motion in Real World: Integrating with Perception
4.5.1 Object Recognition and Localization
4.5.2 Coupling the Motion Planner with Perception
4.5.3 Experiments
4.6 Conclusion
References
5 Efficient Motion and Grasp Planning for Humanoid Robots
5.1 Introduction
5.1.1 RRT-based Planning
5.1.2 The Motion Planning Framework
5.2 Collision Checks and Distance Calculations
5.3 Weighted Sampling
5.4 Planning Grasping Motions
5.4.1 Predefined Grasps
5.4.2 Randomized IK-solver
5.4.2.1 Reachability Space
5.4.2.2 A 10 DoF IK-solver for Armar-III
5.4.3 RRT-based Planning of Grasping Motions with a Set of Grasps
5.4.3.1 J+-RRT
5.4.3.2 A Workspace Metric for the Nearest Neighbor Search
5.4.3.3 IK-RRT
5.5 Dual Arm Motion Planning for Re-grasping
5.5.1 Dual Arm IK-solver
5.5.2 Reachability Space
5.5.3 Gradient Descent in Reachability Space
5.5.4 Dual Arm J+-RRT
5.5.5 Dual Arm IK-RRT
5.5.6 Planning Hand-off Motions for Two Robots
5.5.7 Experiment on ARMAR-III
5.6 Adaptive Planning
5.6.1 Adaptively Changing the Complexity for Planning
5.6.2 A 3D Example
5.6.3 Adaptive Planning for ARMAR-III
5.6.3.1 Kinematic Subsystems
5.6.3.2 The Approach
5.6.4 Extensions to Improve the Planning Performance
5.6.4.1 Randomly Extending Good Ranked Configurations
5.6.4.2 Bi-planning
5.6.4.3 Focusing the Search on the Area of Interest
5.6.5 Experiments
5.6.5.1 Unidirectional Planning
5.6.5.2 Bi-directional Planning
5.7 Conclusion
References
6 Multi-contact Acyclic Motion Planning and Experiments on HRP-2 Humanoid
6.1 Introduction
6.2 Overview of the Planner
6.3 Posture Generator
6.4 Contact Planning
6.4.1 Set of Contacts Generation
6.4.2 Rough Trajectory
6.4.3 Using Global Potential Field as Local Optimization Criterion
6.5 Simulation Scenarios
6.6 Experimentation on HRP-2
6.7 Conclusion
References
7 Motion Planning for a Humanoid Robot Based on a Biped Walking Pattern Generator
7.1 Introduction
7.2 Gait Generation Method
7.2.1 Analytical-solution-based Approach
7.2.2 Online Gait Generation
7.2.3 Experiment
7.3 Whole-body Motion Planning
7.3.1 Definitions
7.3.2 Walking Pattern Generation
7.3.3 Collision-free Motion Planner
7.3.4 Results
7.4 Simultaneous Foot-place/Whole-body Motion Planning
7.4.1 Definitions
7.4.2 Gait Pattern Generation
7.4.3 Overall Algorithm
7.4.4 Experiment
7.5 Whole-body Manipulation
7.5.1 Motion Modification
7.5.2 Force-controlled Pushing Manipulation
7.6 Conclusion
References
8 Autonomous Manipulation of Movable Obstacles
8.1 Introduction
8.1.1 Planning Challenges
8.1.2 Operators
8.1.3 Action Spaces
8.1.4 Complexity of Search
8.2 NAMO Planning
8.2.1 Overview
8.2.2 Configuration Space
8.2.3 Goals for Navigation
8.2.4 Goals for Manipulation
8.2.5 Planning as Graph Search
8.2.5.1 Linear Problems
8.2.5.2 Local Manipulation Search
8.2.5.3 Connecting Free Space
8.2.5.4 Analysis
8.2.5.5 Challenges of CONNECTFS
8.2.6 Planner Prototype
8.2.6.1 Relaxed Constraint Heuristic
8.2.6.2 High-level Planner
8.2.6.3 Examples and Experimental Results
8.2.6.4 Analysis
8.2.7 Summary
8.3 Humanoid Manipulation
8.3.1 Background
8.3.2 Biped Control with External Forces
8.3.2.1 Decoupled Positioning
8.3.2.2 Trajectory Generation
8.3.2.3 Online Feedback
8.3.3 Modeling Object Dynamics
8.3.3.1 Motivation for Learning Models
8.3.3.2 Modeling Method
8.3.4 Experiments and Results
8.3.4.1 Prediction Accuracy
8.3.4.2 System Stability
8.3.5 Summary
8.4 System Integration
8.4.1 From Planning to Execution
8.4.2 Measurement
8.4.2.1 Object Mesh Modeling
8.4.2.2 Recognition and Localization
8.4.3 Planning
8.4.3.1 Configuration Space
8.4.3.2 Contact Selection
8.4.3.3 Action Spaces
8.4.4 Uncertainty
8.4.4.1 Impedance Control
8.4.4.2 Replanning Walking Paths
8.4.4.3 Guarded Grasping
8.4.5 Results
References
9 Multi-modal Motion Planning for Precision Pushing on a Humanoid Robot
9.1 Introduction
9.2 Background
9.2.1 Pushing
9.2.2 Multi-modal Planning
9.2.3 Complexity and Completeness
9.3 Problem Definition
9.3.1 Configuration Space
9.3.2 Modes
9.3.3 Transitions
9.4 Single-mode Motion Planning
9.4.1 Collision Checking
9.4.2 Walk Planning
9.4.3 Reach Planning
9.4.4 Push Planning
9.4.4.1 Stable Push Dynamics
9.4.4.2 Inverse Kinematics
9.5 Multi-modal Planning with Random-MMP
9.5.1 Effects of the Expansion Strategy
9.5.2 Blind Expansion
9.5.3 Utility Computation
9.5.4 Utility-centered Expansion
9.5.5 Experimental Comparison of Expansion Strategies
9.6 Postprocessing and System Integration
9.6.1 Visual Sensing
9.6.2 Execution of Walking Trajectories
9.6.3 Smooth Execution of Reach Trajectories
9.6.3.1 Time-optimal Joint Trajectories
9.6.3.2 Univariate Time-optimal Trajectories
9.6.3.3 Acceleration-optimal Trajectories
9.7 Experiments
9.7.1 Simulation Experiments
9.7.2 Experiments on ASIMO
9.8 Conclusion
References
10 A Motion Planning Framework for Skill Coordination and Learning
10.1 Introduction
10.1.1 Related Work
10.1.1.1 Multi-modal Planning
10.1.1.2 Learning for Motion Planning
10.1.2 Framework Overview
10.2 Motion Skills
10.2.1 Reaching Skill
10.2.2 Stepping Skill
10.2.3 Balance Skill
10.2.4 Other Skills and Extensions
10.3 Multi-skill Planning
10.3.1 Algorithm Details
10.3.2 Results and Discussion
10.4 Learning
10.4.1 A Similarity Metric for Reaching Tasks
10.4.2 Learning Reaching Strategies
10.4.3 Learning Constraints from Imitation
10.4.3.1 Detection of Instantaneous Constraints
10.4.3.2 Merging Transformations
10.4.3.3 Computing the Thresholds
10.4.3.4 Reusing Detected Constraints in New Tasks
10.4.4 Results and Discussion
10.5 Conclusion
References




