
Book, English, 450 pages, format (W × H): 155 mm × 235 mm, weight: 709 g

Series: Communications in Computer and Information Science

Guidotti / Schmid / Longo

Explainable Artificial Intelligence

Third World Conference, xAI 2025, Istanbul, Turkey, July 9-11, 2025, Proceedings, Part I
Year of publication: 2025
ISBN: 978-3-032-08316-6
Publisher: Springer



This open access five-volume set constitutes the refereed proceedings of the Third World Conference on Explainable Artificial Intelligence, xAI 2025, held in Istanbul, Turkey, during July 9-11, 2025.

The 96 revised full papers presented in these proceedings were carefully reviewed and selected from 224 submissions. The papers are organized in the following topical sections:

Volume I:

Concept-based Explainable AI; human-centered explainability; explainability, privacy, and fairness in trustworthy AI; and XAI in healthcare.

Volume II:

Rule-based XAI systems & actionable explainable AI; feature importance-based XAI; novel post-hoc & ante-hoc XAI approaches; and XAI for scientific discovery.

Volume III:

Generative AI meets explainable AI; intrinsically interpretable explainable AI; benchmarking and XAI evaluation measures; and XAI for representational alignment.

Volume IV:

XAI in computer vision; counterfactuals in XAI; explainable sequential decision making; and explainable AI in finance & legal frameworks for XAI technologies.

Volume V:

Applications of XAI; human-centered XAI & argumentation; explainable and interactive hybrid decision making; and uncertainty in explainable AI.


Target audience


Research

Further information & material


Concept-based Explainable AI:
- Global Properties from Local Explanations with Concept Explanation Clusters
- From Colors to Classes: Emergence of Concepts in Vision Transformers
- V-CEM: Bridging Performance and Intervenability in Concept-based Models
- Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
- Concept Extraction for Time Series with ECLAD-ts

Human-Centered Explainability:
- A Nexus of Explainability and Anthropomorphism in AI-Chatbots
- Comparative Explanations: Explanation Guided Decision Making for Human-in-the-Loop Preference Selection
- Generating Rationales Based on Human Explanations for Constrained Optimization
- Algorithmic Knowability: A Unified Approach to Explanations in the AI Act
- Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities

Explainability, Privacy, and Fairness in Trustworthy AI:
- Too Sure for Trust: The Paradoxical Effect of Calibrated Confidence in Case of Uncalibrated Trust in Hybrid Decision Making
- The Impact of Concept Explanations and Interventions on Human-Machine Collaboration
- Leaking LoRA: An Evaluation of Password Leaks and Knowledge Storage in Large Language Models
- Exploring Explainability in Federated Learning: A Comparative Study on Brain Age Prediction
- The Dynamics of Trust in XAI: Assessing Perceived and Demonstrated Trust Across Interaction Modes and Risk Treatments

XAI in Healthcare:
- Systematic Benchmarking of Local and Global Explainable AI Methods for Tabular Healthcare Data
- A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-Order Statistical Radiomic Features
- FAIR-MED: Bias Detection and Fairness Evaluation in Healthcare-Focused XAI
- Weakly Supervised Pixel-Level Annotation with Visual Interpretability
- Assessing the Value of Explainable Artificial Intelligence for Magnetic Resonance Imaging


