
Book, English, Volume 2153, 494 pages, format (W × H): 155 mm × 235 mm, weight: 768 g

Series: Communications in Computer and Information Science

Longo / Seifert / Lapuschkin

Explainable Artificial Intelligence

Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I
2024
ISBN: 978-3-031-63786-5
Publisher: Springer Nature Switzerland

This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. 

The 95 full papers presented were carefully reviewed and selected from 204 submissions. The papers are organized in the following topical sections:

Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.

Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.

Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.

Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.


Target audience


Research

Further information & material


Intrinsically interpretable XAI and concept-based global explainability
- Seeking Interpretability and Explainability in Binary Activated Neural Networks
- Prototype-based Interpretable Breast Cancer Prediction Models: Analysis and Challenges
- Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model
- Revisiting FunnyBirds evaluation framework for prototypical parts networks
- CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models
- Unveiling the Anatomy of Adversarial Attacks: Concept-based XAI Dissection of CNNs
- AutoCL: AutoML for Concept Learning
- Locally Testing Model Detections for Semantic Global Concepts
- Knowledge graphs for empirical concept retrieval
- Global Concept Explanations for Graphs by Contrastive Learning

Generative explainable AI and verifiability
- Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation
- Generative Inpainting for Shapley-Value-Based Anomaly Explanation
- Challenges and Opportunities in Text Generation Explainability
- NoNE Found: Explaining the Output of Sequence-to-Sequence Models when No Named Entity is Recognized

Notion, metrics, evaluation and benchmarking for XAI
- Benchmarking Trust: A Metric for Trustworthy Machine Learning
- Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
- Conditional Calibrated Explanations: Finding a Path between Bias and Uncertainty
- Meta-evaluating stability measures: MAX-Sensitivity & AVG-Sensitivity
- Xpression: A unifying metric to evaluate Explainability and Compression of AI models
- Evaluating Neighbor Explainability for Graph Neural Networks
- A Fresh Look at Sanity Checks for Saliency Maps
- Explainability, Quantified: Benchmarking XAI techniques
- BEExAI: Benchmark to Evaluate Explainable AI
- Associative Interpretability of Hidden Semantics with Contrastiveness Operators in Face Classification tasks


