
Leonardis / Ricci / Varol

Computer Vision - ECCV 2024

18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XLVII

Book, English, Volume 15105, 499 pages, format (W × H): 155 mm × 235 mm, weight: 879 g

Series: Lecture Notes in Computer Science

ISBN: 978-3-031-72969-0
Publisher: Springer Nature Switzerland


The multi-volume set of LNCS books with volume numbers 15059 to 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29–October 4, 2024.

The 2387 papers presented in these proceedings were carefully reviewed and selected from a total of 8585 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; motion estimation.


Target audience


Research

Further information & material


CS-Prompt: Learning Prompt to Rearrange Class Space for Prompt-based Continual Learning
Text-Anchored Score Composition: Tackling Condition Misalignment in Text-to-Image Diffusion Models
Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
Make Your ViT-based Multi-view 3D Detectors Faster via Token Compression
OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation
CatchBackdoor: Backdoor Detection via Critical Trojan Neural Path Fuzzing
UCIP: A Universal Framework for Compressed Image Super-Resolution using Dynamic Prompt
LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents
ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference
Two-Stage Active Learning for Efficient Temporal Action Segmentation
TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation
MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views
Domain-Adaptive 2D Human Pose Estimation via Dual Teachers in Extremely Low-Light Conditions
Towards More Practical Group Activity Detection: A New Benchmark and Model
Depicting Beyond Scores: Advancing Image Quality Assessment through Multi-modal Language Models
Zero-Shot Image Feature Consensus with Deep Functional Maps
WindPoly: Polygonal Mesh Reconstruction via Winding Numbers
MinD-3D: Reconstruct High-quality 3D objects in Human Brain
Tokenize Anything via Prompting
Geospecific View Generation - Geometry-Context Aware High-resolution Ground View Inference from Satellite Views
Scissorhands: Scrub Data Influence via Connection Sensitivity in Networks
City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
GRAPE: Generalizable and Robust Multi-view Facial Capture
Training-Free Model Merging for Multi-target Domain Adaptation
Multi-RoI Human Mesh Recovery with Camera Consistency and Contrastive Losses
Co-Student: Collaborating Strong and Weak Students for Sparsely Annotated Object Detection
Open-Vocabulary Camouflaged Object Segmentation

