Book, English, Volume 2155, 456 pages, format (W × H): 155 mm × 235 mm, weight: 715 g
Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III
Series: Communications in Computer and Information Science
ISBN: 978-3-031-63799-5
Publisher: Springer International Publishing
This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024.
The 95 full papers presented were carefully reviewed and selected from 204 submissions. They are organized in the following topical sections:
Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.
Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.
Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.
Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.
Target audience
Research
Subject areas
- Mathematics | Computer Science > IT | Computer Science > Computer Engineering > Network Hardware
- Mathematics | Computer Science > IT | Computer Science > Computer Science > Artificial Intelligence
- Mathematics | Computer Science > IT | Computer Science > Applied Computer Science
- Mathematics | Computer Science > IT | Computer Science > Computer Science > Natural Languages & Machine Translation
Further information & material
Counterfactual explanations and causality for eXplainable AI:
- Sub-SpaCE: Subsequence-based Sparse Counterfactual Explanations for Time Series Classification Problems
- Human-in-the-loop Personalized Counterfactual Recourse
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence
- CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests
- Causality-Aware Local Interpretable Model-Agnostic Explanations
- Evaluating the Faithfulness of Causality in Saliency-based Explanations of Deep Learning Models for Temporal Colour Constancy
- CAGE: Causality-Aware Shapley Value for Global Explanations

Fairness, trust, privacy, security, accountability and actionability in eXplainable AI:
- Exploring the Reliability of SHAP Values in Reinforcement Learning
- Categorical Foundation of Explainable AI: A Unifying Theory
- Investigating Calibrated Classification Scores through the Lens of Interpretability
- XentricAI: A Gesture Sensing Calibration Approach through Explainable and User-Centric AI
- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework
- Differential Privacy for Anomaly Detection: Analyzing the Trade-off Between Privacy and Explainability
- Blockchain for Ethical & Transparent Generative AI Utilization by Banking & Finance Lawyers
- Multi-modal Machine Learning Model for Interpretable Mobile Malware Classification
- Explainable Fraud Detection with Deep Symbolic Classification
- Better Luck Next Time: About Robust Recourse in Binary Allocation Problems
- Towards Non-Adversarial Algorithmic Recourse
- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
- XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users