A trustworthy framework for punch recognition in Medieval panel paintings

October 4, 2023   
Keywords: deep learning, machine learning, trustworthy AI, explainable AI, uncertainty quantification
Prerequisites: deep learning, uncertainty quantification in ML (optional)
Difficulty: Hard (B.Sc.), Medium (M.Sc.)
Group work (only for B.Sc.): possible

Abstract

Late-Medieval panel paintings from the Florence area were often decorated with the help of metal tools, called punches, which left marks on the wooden panels. These marks were often repeated many times (even hundreds) to create decorative patterns.

Art historian Erling S. Skaug showed that an accurate, scientific study of these punches can be a powerful indicator for the attribution of authorship of a painting. This work was traditionally carried out by careful, yet time-consuming, hand measuring of punches. Nowadays, this repetitive task can be sped up considerably with the help of Machine Learning.

We previously proposed the use of Convolutional Neural Networks on images of punches for (a) image classification (Zullich et al. 2023) and (b) object detection (Bruegger 2023; thesis currently not public). Although these systems achieve seemingly high performance, doubts remain about their actual generalization capabilities: building datasets of punches is prohibitively difficult, due to time and technical constraints and to the geographical dispersion of the artworks, which are nowadays scattered across institutions throughout the world. It is therefore expected that the models may produce unreliable outputs, especially on unseen works of art. A way to mitigate this is to couple the predictions of the existing models with tools from Trustworthy AI, namely:

  1. Uncertainty Quantification: predictions from image classification models (and, often, object detection models) already come with uncertainty estimates; the issue is that these estimates are frequently unreliable, since models tend to be overconfident (i.e., inaccurate predictions are made with high confidence). The idea is to study the presence of over-/underconfidence and possibly train alternative stochastic models (e.g., Bayesian Neural Networks, BNNs) to obtain better estimates of predictive uncertainty.
  2. Prediction Explanation: Explainable AI (XAI) tools can give model users and designers clues about the inner workings of the models themselves. Techniques such as input attribution (whereby the parts of the input most relevant to the prediction are determined) and the retrieval of prototypes or representative data points (e.g., images from the training set which correlate well with the prediction) can help users understand why a model produces a given prediction on unseen data.
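As a concrete illustration of point 1, miscalibration can be quantified with the Expected Calibration Error (ECE), which bins predictions by confidence and compares the average confidence with the accuracy inside each bin. A minimal sketch in plain Python; the toy predictions below are made up for illustration:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average of |accuracy - mean confidence| over confidence bins."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [(c, ok) for c, ok in zip(confidences, correct) if lo < c <= hi]
        if in_bin:
            avg_conf = sum(c for c, _ in in_bin) / len(in_bin)
            accuracy = sum(ok for _, ok in in_bin) / len(in_bin)
            ece += (len(in_bin) / n) * abs(accuracy - avg_conf)
    return ece

# Toy example of an overconfident classifier: very high confidence, 50% accuracy
conf = [0.95, 0.90, 0.99, 0.85, 0.92, 0.97]
hit = [1, 0, 1, 0, 0, 1]
print(round(expected_calibration_error(conf, hit), 4))  # a large ECE signals overconfidence
```

A perfectly calibrated model would have an ECE near zero; for the overconfident toy model above, the gap between confidence and accuracy is large in every populated bin.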

The final goal is to produce an ensemble of tools that can guide the users (who may be art historians unskilled in Machine Learning) in understanding whether a prediction on new data is correct or not. The list above is merely indicative; I am open to receiving further proposals on the matter.
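To make the input-attribution idea concrete, one model-agnostic approach is occlusion: mask one region of the input at a time and record how much the prediction score drops. The "model" below is a hypothetical stand-in for a trained classifier (it only looks at the top-left corner of the image); only the occlusion loop reflects the actual technique:

```python
def occlusion_attribution(model, image, patch=2, baseline=0.0):
    """Score each patch by the drop in the model's output when the patch is masked."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    heatmap = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]  # copy, then mask one patch
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    occluded[di][dj] = baseline
            drop = base_score - model(occluded)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heatmap[di][dj] = drop
    return heatmap

# Hypothetical "classifier": its score depends only on the top-left 2x2 corner.
toy_model = lambda img: sum(img[i][j] for i in range(2) for j in range(2))

img = [[1.0] * 4 for _ in range(4)]
heat = occlusion_attribution(toy_model, img)
# The heatmap highlights the top-left patch; the rest of the image scores zero.
```

For a real punch classifier one would use a library implementation (e.g., gradient-based or occlusion attributions) on the CNN itself, but the principle is the same: regions whose removal changes the prediction are the ones the model relies on.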

Required work

  • identify the uncertainty quantification and XAI techniques to apply
  • study the calibration of the existing model(s)
  • train BNNs and compare the resulting uncertainty estimates
  • run the XAI tools on the selected model(s) and, possibly, on the BNNs
    • bonus point: assess whether the explanations themselves are reliable
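For the BNN comparison, a common way to summarize the uncertainty of any stochastic model (a BNN, MC dropout, or a deep ensemble) is the predictive entropy of the class distribution averaged over stochastic forward passes. A minimal sketch; the member outputs below are made up for illustration:

```python
import math

def predictive_entropy(member_probs):
    """Entropy of the mean class distribution across stochastic forward passes."""
    n_classes = len(member_probs[0])
    mean = [sum(p[c] for p in member_probs) / len(member_probs) for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

# Made-up softmax outputs of three stochastic passes on a single punch image
confident = [[0.98, 0.01, 0.01], [0.97, 0.02, 0.01], [0.99, 0.005, 0.005]]
disagreeing = [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]]

# Disagreement among passes yields a higher predictive entropy,
# so such predictions can be flagged for review by the art historian.
assert predictive_entropy(disagreeing) > predictive_entropy(confident)
```

In the project, these entropy values could be compared between the deterministic baseline and the BNNs to check whether the stochastic models assign higher uncertainty to the inputs on which they are actually wrong.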

Relevant literature