Book, English, 208 pages, format (W × H): 155 mm × 235 mm, weight: 359 g
ISBN: 978-981-15-5126-0
Publisher: Springer Nature Singapore
Over the next few decades, millions of people, with varying backgrounds and levels of technical expertise, will have to effectively interact with robotic technologies on a daily basis. This means it will have to be possible to modify robot behavior without explicitly writing code, but instead via a small number of wearable devices or visual demonstrations. At the same time, robots will need to infer and predict humans’ intentions and internal objectives on the basis of past interactions in order to provide assistance before it is explicitly requested; this is the basis of imitation learning for robotics.
This book introduces readers to robotic imitation learning based on human demonstration with wearable devices. It presents an advanced calibration method for wearable sensors and fusion approaches under the Kalman filter framework, as well as a novel wearable device for capturing gestures and other motions. Furthermore, it describes wearable-device-based and vision-based imitation learning methods for robotic manipulation, making it a valuable reference guide for graduate students with a basic knowledge of machine learning, and for researchers interested in wearable computing and robotic learning.
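To give a flavor of the Kalman-filter-based fusion mentioned above, the sketch below fuses gyroscope rates (prediction step) with accelerometer-derived tilt angles (correction step) for a single axis. This is an illustrative toy, not the book's method; the function name and the noise variances `q` and `r` are assumptions.

```python
import numpy as np

def kalman_fuse(gyro_rates, accel_angles, dt, q=1e-4, r=1e-2):
    """Scalar Kalman filter: integrate gyro rates to predict the tilt
    angle, then correct with noisy accelerometer-derived angles.

    q and r are assumed process and measurement variances."""
    angle = float(accel_angles[0])   # initialize from first measurement
    p = 1.0                          # initial state variance
    estimates = []
    for rate, meas in zip(gyro_rates, accel_angles):
        # Predict: integrate the gyro rate over one time step
        angle += rate * dt
        p += q
        # Update: blend in the accelerometer angle via the Kalman gain
        k = p / (p + r)
        angle += k * (meas - angle)
        p *= (1.0 - k)
        estimates.append(angle)
    return np.array(estimates)
```

The gyro keeps the estimate smooth between measurements while the accelerometer keeps it from drifting, which is the essential trade-off any inertial fusion scheme must manage.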
Target audience
Research
Authors/Editors
Subject areas
- Engineering | Technology (General) | Measurement and Automation Technology
- Mathematics | Computer Science | Human-Computer Interaction
- Engineering | Electronics & Communications Engineering | Sensor Technology
- Engineering | Electronics & Communications Engineering | Robotics
- Mathematics | Computer Science | Artificial Intelligence
Further information & material
Chapter 1: Introduction
In this chapter, we present background on robots and discuss robotic manipulation learning algorithms. Imitation learning methods are then described in detail. We review current studies on wearable demonstration, in which the demonstrations used for imitation learning are captured with wearable devices, and we discuss some existing challenges. This chapter is organized as follows.
1.1 Background
1.2 State-of-the-art of robotic manipulation learning
1.3 State-of-the-art of imitation learning
1.4 State-of-the-art of wearable demonstration
1.5 Existing challenges
1.6 Summary
Chapter 2: Wearable inertial device
In this chapter, we present background on wearable inertial devices and describe the inertial sensors in detail. Calibration methods for both the sensors and the wearable device are developed, and wearable computing algorithms are presented. Experimental results demonstrate the performance of the wearable inertial device. This chapter is organized as follows.
2.1 Background
2.2 Inertial sensors
2.3 Sensor calibration
2.4 Wearable calibration
2.5 Wearable computing
2.6 Experimental results
2.7 Summary
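Sensor calibration of the kind Chapter 2 covers typically estimates per-axis scale factors and biases from readings in known orientations. The sketch below shows one classic idea, the six-position test for an accelerometer, under a simple scale-plus-bias error model; the function names and the model itself are illustrative assumptions, not the chapter's actual procedure.

```python
import numpy as np

G = 9.81  # gravity magnitude used as the known reference, m/s^2

def six_position_calibration(raw_up, raw_down):
    """Estimate per-axis scale and bias of an accelerometer from
    averaged static readings with each axis pointed up (+g) and
    down (-g) against gravity.

    Model (assumed): raw = scale * accel + bias, per axis."""
    raw_up = np.asarray(raw_up, float)
    raw_down = np.asarray(raw_down, float)
    bias = (raw_up + raw_down) / 2.0          # offset common to both poses
    scale = (raw_up - raw_down) / (2.0 * G)   # raw units per m/s^2
    return scale, bias

def apply_calibration(raw, scale, bias):
    """Convert raw readings to calibrated acceleration in m/s^2."""
    return (np.asarray(raw, float) - bias) / scale
```

Averaging over both gravity directions cancels the bias out of the scale estimate and vice versa, which is why the six-position test needs no external equipment beyond a level surface.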
Chapter 3: Robotic manipulation learning from indirect demonstration
In this chapter, the background of demonstrations is presented and indirect demonstrations are introduced. Demonstration datasets and experiments on indirect manipulation demonstration using the proposed wearable device are then described. We propose a robotic manipulation learning method that integrates the crucial experience contained in demonstrations. Finally, we verify the developed methods through both simulations and experiments involving the grasping of objects of various shapes. This chapter is organized as follows.
3.1 Background
3.2 Indirect demonstration
3.3 Learning method
3.4 Experimental results
3.5 Summary
Chapter 4: Robotic manipulation learning from direct demonstration
In this chapter, we provide an overview of direct demonstration. Exploiting the intrinsic relation between human and robot, we develop a novel mapping method in which the operator's fingers drive robotic hand teleoperation while the arms and palm drive robotic arm teleoperation. A rotation-invariant dynamical movement primitive method is then presented for learning the operation skills. Finally, the effectiveness of the proposed human experience learning system is evaluated through experiments. This chapter is organized as follows.
4.1 Background
4.2 Direct demonstration
4.3 Learning policy
4.4 Experimental results
4.5 Summary
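Chapter 4's skill-learning step builds on dynamical movement primitives (DMPs). The sketch below shows the standard one-dimensional DMP idea — inverting a demonstrated trajectory to recover a forcing term, then replaying it through the same second-order dynamics; the gains and function names are assumptions, and the book's rotation-invariant variant adds machinery beyond this toy.

```python
import numpy as np

def learn_dmp_forcing(demo, dt, alpha=25.0, beta=6.25):
    """Invert the DMP transformation system
        ydd = alpha * (beta * (g - y) - yd) + f
    to recover the forcing term f from one demonstrated trajectory."""
    y = np.asarray(demo, float)
    yd = np.gradient(y, dt)          # numerical velocity
    ydd = np.gradient(yd, dt)        # numerical acceleration
    g = y[-1]                        # goal = final demonstrated position
    f = ydd - alpha * (beta * (g - y) - yd)
    return f, g

def rollout_dmp(f, y0, g, dt, alpha=25.0, beta=6.25):
    """Replay the learned forcing term with semi-implicit Euler
    integration; full DMPs encode f with basis functions over a
    phase variable, omitted here for brevity."""
    y, yd = float(y0), 0.0
    out = []
    for fi in f:
        ydd = alpha * (beta * (g - y) - yd) + fi
        yd += ydd * dt
        y += yd * dt
        out.append(y)
    return np.array(out)
```

Because the goal `g` appears explicitly in the dynamics, the same learned forcing term generalizes the demonstrated motion to new start and goal positions, which is the property that makes DMPs attractive for skill transfer.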
Chapter 5: Vision-based learning for robotic manipulation
In this chapter, an overview of vision-based robotic manipulation is presented. An end-to-end learning method is then introduced for learning the operation skills. Finally, the effectiveness of the proposed learning system is evaluated through experiments. This chapter is organized as follows.
5.1 Background
5.2 Vision-based learning method
5.3 Experimental results
5.4 Summary
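End-to-end visuomotor learning of the kind Chapter 5 covers maps raw images directly to actions. As a toy stand-in for the deep networks such methods typically use, the sketch below fits a linear policy from flattened pixels to actions by ridge regression; every name and parameter here is illustrative, not the book's method.

```python
import numpy as np

def train_visuomotor_policy(images, actions, reg=1e-3):
    """Fit a linear map from flattened image pixels (plus a bias
    feature) to actions with closed-form ridge regression -- a toy
    proxy for end-to-end behavioral cloning with a deep network."""
    X = np.asarray(images, float).reshape(len(images), -1)
    X = np.hstack([X, np.ones((len(X), 1))])      # bias feature column
    A = np.asarray(actions, float)
    # Ridge solution: (X^T X + reg I)^-1 X^T A
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ A)
    return W

def policy_act(W, image):
    """Predict an action for a single image."""
    x = np.append(np.asarray(image, float).ravel(), 1.0)
    return x @ W
```

The linear model makes the "image in, action out" structure of end-to-end learning concrete; a real system would replace the ridge solve with a convolutional network trained on many demonstrations.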
Chapter 6: Conclusions
6.1 Summary
6.2 Future work