Evaluation of Embedded AI Through Model Difference Analysis
Keywords: qualitative model extraction, qualitative reasoning
DOI: https://doi.org/10.3311/MINISY2025-011
Abstract
The growing reliance on embedded AI components in critical systems demands robust mechanisms for explainability and reliability. These systems often integrate highly complex, opaque models whose decision-making processes are difficult to interpret, posing significant challenges to debugging and trustworthiness. This paper introduces an approach for examining regions identified through model comparison, focusing on areas where an interpretable surrogate model and the opaque model it approximates diverge or produce inconsistent outputs. Analyzing these regions yields actionable insights for identifying edge cases and mitigating risks associated with model inaccuracies.
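As a rough illustration of the comparison step described above, the sketch below contrasts an opaque model with an interpretable surrogate over a one-dimensional input grid and collects the intervals where they disagree. Both model functions, the input range, and the grid resolution are illustrative assumptions, not the paper's actual artifacts.

```python
# Minimal sketch of divergence-region detection between an opaque model and an
# interpretable surrogate. The two toy models below are stand-ins: the "opaque"
# model plays the role of a black-box classifier, the surrogate a simpler
# interpretable rule that approximates it.

def opaque_model(x: float) -> int:
    """Stand-in for a black-box classifier (e.g. a neural network)."""
    return 1 if 0.3 < x < 0.8 else 0

def surrogate_model(x: float) -> int:
    """Interpretable surrogate rule approximating the opaque model: x > 0.3 -> 1."""
    return 1 if x > 0.3 else 0

def divergence_regions(lo: float, hi: float, steps: int):
    """Scan a 1-D input grid and collect contiguous intervals of disagreement."""
    regions, start = [], None
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        disagree = opaque_model(x) != surrogate_model(x)
        if disagree and start is None:
            start = x                      # disagreement interval opens
        elif not disagree and start is not None:
            regions.append((start, x))     # disagreement interval closes
            start = None
    if start is not None:
        regions.append((start, hi))        # interval still open at grid end
    return regions

# The surrogate over-approximates the opaque model for x >= 0.8, so that
# interval surfaces as the divergence region to inspect.
regions = divergence_regions(0.0, 1.0, 100)
```

In a realistic setting the grid scan would be replaced by sampling or symbolic analysis over a high-dimensional input space, and the surrogate would be fitted to the opaque model's predictions rather than hand-written; the disagreement intervals are the "regions" the approach proposes to examine.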
The approach leverages qualitative abstraction techniques to translate complex model behavior into comprehensible representations, enabling systematic evaluation of discrepancies. By focusing on the intersection of model behavior and system-level impact, the proposed methodology offers a scalable approach for enhancing both the dependability and interpretability of AI-enabled systems. The findings advance the state of explainable AI and contribute to the development of safer, more transparent applications in critical domains.
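One common form of qualitative abstraction is mapping a model's continuous output onto a small ordered set of qualitative values defined by landmark thresholds, so that two models can be compared symbolically rather than numerically. The sketch below illustrates this idea; the specific landmarks and labels are illustrative assumptions, not the paper's abstraction.

```python
# Sketch of qualitative abstraction: a continuous output is mapped into one of
# a few qualitative intervals delimited by landmark values. Two models are then
# "qualitatively consistent" on an input if their outputs land in the same
# interval, even when the raw numbers differ.

LANDMARKS = [
    (-float("inf"), 0.0, "negative"),
    (0.0, 0.5, "low"),
    (0.5, 1.0, "high"),
]

def qualitative_value(y: float) -> str:
    """Abstract a numeric output into a qualitative label via landmark intervals."""
    for lo, hi, label in LANDMARKS:
        if lo <= y < hi:
            return label
    return "saturated"  # outputs at or beyond the last landmark

def qualitatively_consistent(y_opaque: float, y_surrogate: float) -> bool:
    """Outputs are consistent if they fall in the same qualitative interval."""
    return qualitative_value(y_opaque) == qualitative_value(y_surrogate)
```

Under such an abstraction, small numeric deviations between the opaque model and its surrogate are tolerated, while crossings of a landmark (for example from "low" to "high") are flagged as discrepancies worth examining at the system level.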