Mahmud / Doborjeh / Wong | Neural Information Processing | E-Book

E-book, English, 416 pages

Series: Lecture Notes in Computer Science

Mahmud / Doborjeh / Wong Neural Information Processing

31st International Conference, ICONIP 2024, Auckland, New Zealand, December 2–6, 2024, Proceedings, Part III
Publication year: 2025
ISBN: 978-981-966582-2
Publisher: Springer Singapore
Format: PDF
Copy protection: PDF watermark




The eleven-volume set LNCS 15286-15296 constitutes the refereed proceedings of the 31st International Conference on Neural Information Processing, ICONIP 2024, held in Auckland, New Zealand, in December 2024.
The 318 regular papers presented in the proceedings set were carefully reviewed and selected from 1301 submissions. They focus on four main areas: theory and algorithms; cognitive neurosciences; human-centered computing; and applications.


Target audience


Research

Further information & material


FreeFlow: A Unified Viewpoint on Diffusion Probabilistic Models via Optimal Transport and Fluid Mechanics
Optimizing CNNs with Gram Schmidt Non-Iterative Learning for Image Recognition
Improving Multilingual Speech Recognition with Tucker-compressed Mixture of LoRAs
MetaFix: Semi-Supervised Model Agnostic Meta-Learning using Consistency Regularization
Towards Private and Fair Machine Learning: Group-Specific Differentially Private Stochastic Gradient Descent with Threshold Optimization
LogMoE: Optimizing Mixture of Experts for Log Anomaly Detection via Knowledge Distillation
Cross-Domain Few-Shot Learning with Equiangular Embedding and Dynamic Adversarial Augmentation
8-Net: An Unsupervised Model for Online Graph Time-Series Denoising
On Learnable Parameters of Optimal and Suboptimal Deep Learning Models
Aero-engine Condition-Based Maintenance Planning Using Reinforcement Learning
Multi-Timescale Processing with Heterogeneous Assembly Echo State Networks
ADERec: Adaptive Data Augmentation Sequence Recommendation Based on Dual Network Architecture
Pruning Neural Network Parameters Using Recurrent Neural Networks
MA-Mamba: Multi-Agent Reinforcement Learning with State Space Model
Decentralized Extension for Centralized Multi-Agent Reinforcement Learning via Online Distillation
Advancing RVFL Networks: Robust Classification with the HawkEye Loss Function
An Enhanced MILP-based Verifier for Adversary Robustness of Neural Networks
Hide-and-Seek GANs for Generation with Limited Data
Unsupervised Robust Hypergraph Correlation Hashing for Multimedia Retrieval
Emotional Atmosphere Soft Label for Emotion Recognition in Conversations
CCATS: Moving Forward with Class-Conditional Time Series Generation
M3ixTS: Mixing of Multi-patch and Multi-view for Time Series Forecasting
CSTFormer: Cross Spatial-Temporal Learning Transformer with Dynamic Sign Language Recognition through an Augmented Reality Environment
MmFormer: A Novel Multi-Scale and Multi-Period Transformer Model for Irregular Periodic Network Traffic Prediction
Time Series Anomaly Detection via Temporal Dependencies and Multivariate Correlations Integrating
Transformer-Based Long Time Series Forecasting with Decoupled Information Extraction and Information Complementarity


