
E-Book, English, Volume 15086, 487 pages

Series: Lecture Notes in Computer Science

Leonardis / Ricci / Roth Computer Vision – ECCV 2024

18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXVIII


ISBN: 978-3-031-73390-1
Publisher: Springer International Publishing
Format: PDF
Copy protection: PDF watermark



The multi-volume set of LNCS volumes 15059 through 15147 constitutes the refereed proceedings of the 18th European Conference on Computer Vision, ECCV 2024, held in Milan, Italy, during September 29–October 4, 2024. The 2387 papers presented in these proceedings were carefully reviewed and selected from 8585 submissions. They cover topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.

Target audience


Research

Further information & material


CLIP-Guided Generative Networks for Transferable Targeted Adversarial Attacks
Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering
Progressive Classifier and Feature Extractor Adaptation for Unsupervised Domain Adaptation on Point Clouds
A New Dataset and Framework for Real-World Blurred Images Super-Resolution
AddressCLIP: Empowering Vision-Language Models for City-wide Image Address Localization
RISurConv: Rotation Invariant Surface Attention-Augmented Convolutions for 3D Point Cloud Classification and Segmentation
StyleTokenizer: Defining Image Style by a Single Instance for Controlling Diffusion Models
Bidirectional Uncertainty-Based Active Learning for Open-Set Annotation
Preventing Catastrophic Overfitting in Fast Adversarial Training: A Bi-level Optimization Perspective
Projecting Points to Axes: Oriented Object Detection via Point-Axis Representation
SeiT++: Masked Token Modeling Improves Storage-efficient Training
Rectify the Regression Bias in Long-Tailed Object Detection
MagicEraser: Erasing Any Objects via Semantics-Aware Control
Reliable Spatial-Temporal Voxels For Multi-Modal Test-Time Adaptation
Stable Preference: Redefining training paradigm of human preference model for Text-to-Image Synthesis
SparseSSP: 3D Subcellular Structure Prediction from Sparse-View Transmitted Light Images
NL2Contact: Natural Language Guided 3D Hand-Object Contact Modeling with Diffusion Model
Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities
Diff-Tracker: Text-to-Image Diffusion Models are Unsupervised Trackers
Rethinking Tree-Ring Watermarking for Enhanced Multi-Key Identification
3D Small Object Detection with Dynamic Spatial Pruning
STSP: Spatial-Temporal Subspace Projection for Video Class-incremental Learning
Transferable 3D Adversarial Shape Completion using Diffusion Models
OmniSat: Self-Supervised Modality Fusion for Earth Observation
Distilling Diffusion Models into Conditional GANs
Semantically Guided Representation Learning For Action Anticipation
MemBN: Robust Test-Time Adaptation via Batch Norm with Statistics Memory

