Unsupervised debiasing with real-life applications
September 26, 2023
| | |
| --- | --- |
| Keywords | machine learning, fairness, trustworthiness, debiasing |
| Prerequisites | Uncertainty Quantification in Machine Learning, Deep Learning, Python, basic statistics |
| Difficulty | Hard (B.Sc.), Medium (M.Sc.; see notes below) |
Abstract
Bias in a Machine Learning (ML) model arises when the model's performance differs significantly across subgroups of the data, identified by one or more protected attributes. This can lead to serious ethical and societal issues if the model is employed to aid decision-making about humans: performance differences across ethnicity, age, gender, or other demographic variables can undermine the trustworthiness of the model itself and amplify biases already present in society. Debiasing refers to the act of removing or reducing bias in a model, either by acting on the dataset, or by tweaking the predictive mechanism of the model. Specifically, unsupervised debiasing (UD) performs debiasing without the operators explicitly labelling the protected attributes by hand. The current issues with the literature on UD are (a) a general scarcity of open implementations, and (b) validation on tasks that do not reflect real-life issues. This project aims at implementing one or more UD methods and applying them to real-life scenarios, such as face detection or other applications where humans are at the center of the prediction phase.
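To make the definition above concrete, here is a minimal sketch (with toy data and hypothetical function names, not part of any specific UD method) of quantifying bias as the worst-case accuracy gap across subgroups defined by a protected attribute:

```python
# Hypothetical sketch: bias measured as the largest accuracy difference
# between subgroups identified by a protected attribute.
import numpy as np

def subgroup_accuracies(y_true, y_pred, group):
    """Prediction accuracy within each subgroup of the protected attribute."""
    return {g: float(np.mean(y_pred[group == g] == y_true[group == g]))
            for g in np.unique(group)}

def accuracy_gap(y_true, y_pred, group):
    """Bias score: max minus min subgroup accuracy (0 = equal performance)."""
    accs = subgroup_accuracies(y_true, y_pred, group)
    return max(accs.values()) - min(accs.values())

# Toy example with a binary protected attribute (groups 0 and 1)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(accuracy_gap(y_true, y_pred, group))  # 0.0: both groups are 75% accurate
```

Other fairness definitions (demographic parity, equalized odds, etc.) swap the per-group statistic, but the structure of the comparison stays the same.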
Required work
- Literature review on the concept of fairness in machine learning, with a focus on applications such as face detection and recognition
- Literature review on the concept of UD
- Implement one or more methods for UD (preferably in Python, but other languages can be used as well)
- Compare bias and accuracy of the models with and without debiasing applied
- (Possible extensions) Uncertainty quantification analysis on models, e.g., calibration, reliability diagrams, etc.
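For the uncertainty-quantification extension, one common calibration metric is the Expected Calibration Error (ECE). The sketch below is one possible implementation under an assumed equal-width binning scheme, not the definitive formulation:

```python
# Hedged sketch of Expected Calibration Error (ECE): the weighted average
# of |accuracy - confidence| over equal-width confidence bins (an assumption;
# other binning schemes exist).
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over predicted confidences and 0/1 correctness indicators."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in the bin
            conf = confidences[mask].mean()  # average confidence in the bin
            ece += mask.mean() * abs(acc - conf)
    return float(ece)

# Well-calibrated toy case: 80% confidence, 4 of 5 predictions correct
conf = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
hit  = np.array([1, 1, 1, 1, 0])
print(expected_calibration_error(conf, hit))  # near zero: confidence matches accuracy
```

The same per-bin (confidence, accuracy) pairs can be plotted directly to obtain a reliability diagram.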
Relevant literature
- Survey on fairness: Mehrabi et al. A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys. 2021.
- Ways to measure fairness in face recognition: Howard et al. Evaluating Proposed Fairness Models for Face Recognition Algorithms. 2022.
- Recent work on fairness assessment: Zullich & Santacatterina Assessing Fairness in Open-Source Face Mask Detection Algorithms. HHAI 2023 Workshops. 2023.
- Recent work on UD: Ragonesi et al. Learning unbiased classifiers from biased data with meta-learning. CVPR workshops. 2023.