
Book, English, 332 pages, format (W × H): 156 mm × 234 mm, weight: 453 g

Chen / Yu / Luo

Multi-Modal Human Modeling, Analysis and Synthesis


1st edition, 2025
ISBN: 978-1-032-52764-2
Publisher: Taylor & Francis Ltd



In today’s world, where intelligent technologies are deeply transforming human-computer interaction and virtual reality, multi-modal human modeling, analysis and synthesis have become central topics in computer vision. As application scenarios grow increasingly complex, new techniques continue to emerge to address these challenges, and they call for systematic summarization and practical guidance.

To meet this need, Multi-Modal Human Modeling, Analysis and Synthesis adopts a structured perspective, building a comprehensive technical framework for multi-modal human modeling, analysis and synthesis that progresses from local details to holistic views, and from facial features to body dynamics.

This book begins by examining the anatomical structure and characteristics of human faces and bodies, then analyzes how traditional methods and deep learning approaches provide robust optimization solutions for modeling. For example, it explores how to address challenges in face recognition caused by lighting changes, occlusions, facial expressions and aging, as well as methods for body localization, reconstruction, recognition and anomaly detection in multi-modal scenarios. It also explains how multi-modal data can drive realistic face and body synthesis. A standout feature is its focus on Huawei’s MindSpore framework, bridging the gap between algorithms and engineering through practical case studies. From building face detection and recognition pipelines with the MindSpore toolkit to accelerating model training via automatic parallel computing and solving large language model (LLM) training challenges, each step is supported by reproducible code and design logic.
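The MindSpore case studies themselves appear in Chapter 8; purely as a rough illustration of the kind of training pipeline the blurb refers to, the sketch below wires up a small classifier with MindSpore's functional training API. It is a minimal sketch assuming the standard mindspore Python package; the TinyFaceClassifier network, data shapes, and hyperparameters are hypothetical placeholders, not the book's code.

```python
# Minimal MindSpore training sketch (illustrative only, not reproduced from the book).
import numpy as np
import mindspore as ms
from mindspore import nn

ms.set_context(mode=ms.GRAPH_MODE)  # compile the network as a static graph
# For multi-device runs, MindSpore's automatic parallelism can be enabled, e.g.
# ms.set_auto_parallel_context(parallel_mode="auto_parallel")
# (requires mindspore.communication.init() in a distributed launch; omitted here).

class TinyFaceClassifier(nn.Cell):
    """Hypothetical stand-in for a face-recognition backbone."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.flatten = nn.Flatten()
        self.dense1 = nn.Dense(64 * 64, 256)
        self.relu = nn.ReLU()
        self.dense2 = nn.Dense(256, num_classes)

    def construct(self, x):
        x = self.flatten(x)
        x = self.relu(self.dense1(x))
        return self.dense2(x)

net = TinyFaceClassifier()
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
optimizer = nn.Adam(net.trainable_params(), learning_rate=1e-3)

def forward_fn(data, label):
    logits = net(data)
    return loss_fn(logits, label), logits

# Differentiate the forward function with respect to the trainable weights.
grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)  # apply the gradients
    return loss

# One training step on random stand-in data (batch of 64x64 grayscale crops).
data = ms.Tensor(np.random.rand(8, 64 * 64).astype(np.float32))
label = ms.Tensor(np.random.randint(0, 10, (8,)).astype(np.int32))
print(train_step(data, label))
```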

Designed for researchers and engineers in computer vision and AI, this book balances theoretical foundations with industry-ready technical details. Whether you aim to enhance the reliability of biometric recognition, explore creative possibilities in virtual-real interactions or optimize the deployment of deep learning frameworks, this guide serves as an essential link between academic advancements and real-world applications.


Target audience


Academic, Postgraduate, Professional Reference, and Undergraduate Advanced

Further information & material


Chapter 1 Introduction

Jun Yu, Changwei Luo, Chang Wen Chen, Fengxin Chen and Wei Xu

Chapter 2 Human Face Modeling

Jun Yu, Changwei Luo and Fengxin Chen

Chapter 3 Human Face Analysis

Jun Yu, Changwei Luo and Fengxin Chen

Chapter 4 Human Face Synthesis

Jun Yu and Fengxin Chen

Chapter 5 Human Body Modeling

Fengxin Chen and Jun Yu

Chapter 6 Human Body Analysis

Fengxin Chen and Jun Yu

Chapter 7 Human Body Synthesis

Fengxin Chen and Jun Yu

Chapter 8 MindSpore: An All-Scenario Deep Learning Computing Framework

Peng He, Jun Yu, Xuefeng Jin, Fan Yu, Cong Wang, Wei Zheng and Yuanyuan Tuo


