Guidotti / Schmid / Longo | Explainable Artificial Intelligence | Book | 978-3-032-08323-4 | sack.de

Book, English, 438 pages, format (W × H): 155 mm × 235 mm, weight: 692 g

Series: Communications in Computer and Information Science

Guidotti / Schmid / Longo

Explainable Artificial Intelligence

Third World Conference, xAI 2025, Istanbul, Turkey, July 9-11, 2025, Proceedings, Part II
Year of publication: 2025
ISBN: 978-3-032-08323-4
Publisher: Springer



This open access five-volume set constitutes the refereed proceedings of the Third World Conference on Explainable Artificial Intelligence, xAI 2025, held in Istanbul, Turkey, during July 9-11, 2025.


The 96 revised full papers presented in these proceedings were carefully reviewed and selected from 224 submissions. The papers are organized in the following topical sections:

Volume I:

Concept-based explainable AI; human-centered explainability; explainability, privacy, and fairness in trustworthy AI; and XAI in healthcare.

Volume II:

Rule-based XAI systems & actionable explainable AI; features importance-based XAI; novel post-hoc & ante-hoc XAI approaches; and XAI for scientific discovery.

Volume III:

Generative AI meets explainable AI; intrinsically interpretable explainable AI; benchmarking and XAI evaluation measures; and XAI for representational alignment.

Volume IV:

XAI in computer vision; counterfactuals in XAI; explainable sequential decision making; and explainable AI in finance & legal frameworks for XAI technologies.

Volume V:

Applications of XAI; human-centered XAI & argumentation; explainable and interactive hybrid decision making; and uncertainty in explainable AI.


Target audience


Research

Further information & material


Rule-based XAI Systems & Actionable Explainable AI:
- CFIRE: A General Method for Combining Local Explanations
- Which LIME should I trust? Concepts, Challenges, and Solutions
- Explainable Bayesian Optimization
- Bridging the Interpretability Gap in Process Mining: A Comprehensive Approach Combining Explainable Clustering and Generative AI
- Balancing Fairness and Interpretability in Clustering with FairParTree

Features Importance-based XAI:
- Antithetic Sampling for Top-k Shapley Identification
- Detecting Concept Drift with SHapley Additive exPlanations for Intelligent Model Retraining in Energy Generation Forecasting
- Counterfactual Shapley Values for Explaining Reinforcement Learning
- Improving the Weighting Strategy in KernelSHAP
- POMELO: Black-Box Feature Attribution with Full-Input, In-Distribution Perturbations

Novel Post-hoc & Ante-hoc XAI Approaches:
- Explain to Gain: Introspective Reinforcement Learning for Enhanced Performance
- Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest
- Mathematical Foundation of Interpretable Equivariant Surrogate Models
- Interpretable Link Prediction via Neural-Symbolic Reasoning
- CausalAIME: Leveraging Peter-Clark Algorithms and Inverse Modeling for Unified Global Feature Explanation in Healthcare

XAI for Scientific Discovery:
- Interpreting the Structure of Multi-object Representations in Vision Encoders
- Leveraging Influence Functions for Resampling in PINNs
- Safe and Efficient Social Navigation through Explainable Safety Regions Based on Topological Features
- A Biologically Inspired Filter Significance Assessment Method for Model Explanation


