At EVS, we actively support and incentivize academic research to continuously push the boundaries of artificial intelligence, especially in computer vision. We recently supported our employee Francesco Dibitonto in his work on HOLMES (HOLonym-MEronym based Semantic Inspection), a new explainable AI technique presented at the xAI World Conference in Lisbon.

What is Explainable AI?

While deep learning has enabled incredible breakthroughs in computer vision, the knowledge acquired by models during training is fully sub-symbolic and difficult to interpret. Explainable AI (XAI) aims to shed light on these black box models by providing explanations for their predictions. By understanding why models make certain decisions, we can identify potential biases, debug errors, and increase user trust in AI systems. Explainability is especially crucial for safety-critical applications like self-driving cars.

XAI Taxonomy

Explainable AI can be categorized along several dimensions:
  • Transparency – The degree to which an AI system’s internal workings are observable and understandable. A transparent system allows users to inspect its algorithms, data, and models.
  • Interpretability – The extent to which an AI system’s logic and predictions can be understood by humans. An interpretable system provides insights into why certain outputs were produced.
  • Explainability – The ability of an AI system to provide explanations for its functioning in human-understandable terms. Explainable systems can clarify their reasoning and justify their decisions.

These concepts are interconnected: transparency enables interpretability, which in turn facilitates explainability. However, tradeoffs often exist between predictive performance and explainability, so finding the right balance is key.

Previous XAI Approaches

Numerous techniques have emerged for explaining AI model predictions. A widely used one is Grad-CAM, which visualizes the importance of different input regions for a model’s output using gradient information. For image classification, it generates heatmap overlays showing which areas most informed the predicted class label. This provides local explanations for individual samples.
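To make the Grad-CAM computation concrete, here is a minimal NumPy sketch of its core step. It assumes the activations of the last convolutional layer and the gradients of the target class score with respect to them have already been extracted from the network (e.g., via framework hooks); the shapes and random values below are toy placeholders, not real model outputs.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    activations, gradients: arrays of shape (K, H, W), where K is the
    number of feature maps in the chosen conv layer.
    Returns a heatmap of shape (H, W) normalized to [0, 1].
    """
    # One weight per feature map: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted sum of activation maps, ReLU'd to keep only positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for visualization as a heatmap overlay.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example; a real pipeline would supply these from the network.
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))   # activations of the last conv layer
G = rng.random((8, 7, 7))   # gradients of the class score w.r.t. A
heatmap = grad_cam(A, G)
print(heatmap.shape)  # (7, 7)
```

In practice the low-resolution heatmap is upsampled to the input image size and blended over the image, producing the familiar red-hot overlays.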

Other popular methods include producing counterfactual examples showing how inputs can be modified to change outputs, training inherently interpretable models like decision trees, and learning proxy models to mimic complex black boxes. Each approach has pros and cons regarding model fidelity, generalizability, and human understandability.
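The counterfactual idea above can be sketched with a toy, gradient-free search against a hypothetical black box. Everything here is an illustrative assumption (the linear "black box", its weights, the step size, and the greedy strategy), not any specific published method:

```python
import numpy as np

W = np.array([1.0, -2.0, 0.5])  # hypothetical black-box weights

def predict(x):
    """Toy black-box classifier: class 1 if the linear score is positive."""
    return int(x @ W > 0)

def find_counterfactual(x, step=0.05, max_steps=500):
    """Greedy search: repeatedly apply the single-feature nudge that
    moves the decision score furthest toward the opposite class."""
    orig = predict(x)
    sign = -1 if orig == 1 else 1   # direction that flips the prediction
    cf = x.astype(float).copy()
    for _ in range(max_steps):
        if predict(cf) != orig:
            return cf               # minimal-ish change that flips the output
        candidates = []
        for i in range(len(cf)):
            for d in (step, -step):
                cand = cf.copy()
                cand[i] += d
                candidates.append((sign * (cand @ W), cand))
        cf = max(candidates, key=lambda t: t[0])[1]
    return None

x = np.array([1.0, 0.2, 0.0])       # classified as 1 (score = 0.6)
cf = find_counterfactual(x)
print(predict(x), predict(cf))      # 1 0
```

The returned counterfactual shows *which* feature had to change (here, the one with the largest-magnitude weight) and by how much, which is exactly the kind of actionable explanation counterfactual methods aim for.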

Generating Explanations with HOLMES

HOLMES leverages meronym detectors to pinpoint fine-grained salient regions and quantify their individual influence on the holonym model's prediction. (In linguistic terms, a holonym names a whole, such as "jaguar", and its meronyms name its parts, such as "head" or "fur".) For example, to explain a jaguar classification, HOLMES would detect the head, legs, tail, fur, etc., highlight each part region in a heatmap, and show that occluding, say, the fur significantly lowers the model's jaguar confidence score. The full paper dives deeper into these techniques for prying open the AI black box.
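The occlusion test at the heart of that example can be sketched as follows. The stand-in model, the part masks, and the zero-valued occlusion baseline are all assumptions made for illustration; HOLMES itself uses trained meronym detectors over a real holonym classifier.

```python
import numpy as np

def part_importance(image, part_mask, model, class_idx):
    """Score a part's influence as the drop in class confidence
    when the part's pixels are occluded (zeroed out here)."""
    occluded = image.copy()
    occluded[part_mask] = 0.0
    return model(image)[class_idx] - model(occluded)[class_idx]

def toy_model(img):
    """Stand-in classifier: 'jaguar' confidence grows with brightness
    in the top half of the image (pretend that's the fur texture)."""
    fur_score = img[: img.shape[0] // 2].mean()
    return np.array([1 - fur_score, fur_score])   # [other, jaguar]

rng = np.random.default_rng(1)
img = rng.random((8, 8))

fur_mask = np.zeros((8, 8), dtype=bool)
fur_mask[:4] = True            # "fur" region found by a meronym detector
tail_mask = np.zeros((8, 8), dtype=bool)
tail_mask[7, :2] = True        # a small region irrelevant to this model

drop_fur = part_importance(img, fur_mask, toy_model, class_idx=1)
drop_tail = part_importance(img, tail_mask, toy_model, class_idx=1)
print(drop_fur > drop_tail)    # occluding fur hurts the jaguar score more
```

Ranking parts by this confidence drop yields a per-part importance score, which is what lets HOLMES say not just *where* the model looked, but *which part* mattered.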

The Way Forward

HOLMES demonstrates the feasibility of generating multi-level explanations without relying on full-part annotations. As AI proliferates, ensuring model transparency and fairness is crucial. At EVS, we are committed to advancing explainable computer vision through pioneering research. HOLMES represents another step toward interpretable systems that users can trust.

We welcome collaborations as we continue demystifying deep learning.

Check out the full HOLMES paper here to learn more!