Analyzing the affinity score of pruned layers
October 20, 2023
| Keywords: | machine learning, generalization, regularization, pruning, model compression |
| Prerequisites: | Deep Learning, Statistics |
| Difficulty: | Medium/Hard (M.Sc.). Not suitable for B.Sc. |
Abstract
The affinity score is a recently-introduced metric for quantifying the non-linearity of a transformation between two random variables X and Y. It can be applied to measure the non-linearity of a neural network layer, seen as a transformation from its input to its output. The authors of the paper introducing it show that there appears to be a strong connection between the affinity score and the loss attained by a fully-trained neural network.
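To give a feel for the idea, the sketch below measures how well a layer's output is explained by the best affine map of its input, using a forward hook to capture the layer's input/output. Note that this is only a crude least-squares stand-in, not the optimal-transport-based affinity score defined in the paper; the helper `affine_fit_residual` and the toy model are illustrative assumptions.

```python
import torch

def affine_fit_residual(x: torch.Tensor, y: torch.Tensor) -> float:
    """Crude non-linearity proxy: relative error of the best affine map x -> y.

    0 means y is (numerically) an affine function of x; larger values mean the
    layer acts more non-linearly on this batch. This is NOT the
    optimal-transport-based affinity score of Bouniot et al., only a simple
    least-squares stand-in for illustration.
    """
    x = x.flatten(1)  # (batch, d_in)
    y = y.flatten(1)  # (batch, d_out)
    ones = torch.ones(x.shape[0], 1, dtype=x.dtype, device=x.device)
    x_aug = torch.cat([x, ones], dim=1)  # absorb the bias term
    W = torch.linalg.lstsq(x_aug, y).solution  # least squares: x_aug @ W ~ y
    return (torch.linalg.norm(y - x_aug @ W) / torch.linalg.norm(y)).item()

# Capture a layer's input/output with a forward hook.
captured = {}

def hook(module, inputs, output):
    captured["x"], captured["y"] = inputs[0].detach(), output.detach()

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU())
handle = model[1].register_forward_hook(hook)  # hook the ReLU
model(torch.randn(128, 32))
handle.remove()

print(affine_fit_residual(captured["x"], captured["y"]))
```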
Neural network pruning is a well-known technique for model compression, i.e., for reducing the memory footprint of a machine learning model. Specifically, pruning removes parameters (i.e., connections) from a neural network according to a given criterion, such as weight magnitude.
The image above illustrates a toy example in which pruning is applied to a small neural network, leaving a sparser connectivity pattern. Image is own work.
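To make the pruning operation concrete, here is a minimal sketch of unstructured magnitude pruning using PyTorch's built-in `torch.nn.utils.prune` utilities; the toy model and the 30% rate are placeholders.

```python
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)

# Zero out the 30% of weights with the smallest absolute value in each layer.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Pruning is stored as a binary mask over the weights; make it permanent:
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.remove(module, "weight")

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Layer 0 sparsity: {sparsity:.0%}")
```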
An appealing, yet largely unexplained, property of pruning is that, when applied at a low rate and followed by a few epochs of re-training, it may yield better generalization than the original, dense model. Pruning can thus be seen not only as a model compression technique but also as a regularizer. This project proposal aims at studying the regularization effect of pruning at different rates by identifying possible trends in the non-linearity of the pruned layers.
Required work
- Literature review on pruning and methods for comparing hidden representations of neural networks
- Pick multiple datasets, possibly one simple (not MNIST), one medium (e.g., CIFAR-10), and one hard (e.g., Tiny ImageNet or CIFAR-100).
- Apply pruning at different rates and re-train the pruned models (see the sketch after this list)
- Analyze the data (affinity score vs. accuracy at each pruning rate)
- (extra) Extend the work to other non-vision datasets
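Putting the pieces together, the core experiment could look roughly like the sketch below. Here `fine_tune`, `evaluate`, and `measure_nonlinearity` are hypothetical helpers standing in for the re-training loop, the test-accuracy computation, and the affinity-score computation, and the pruning rates are arbitrary example values.

```python
import copy

import torch
import torch.nn.utils.prune as prune

PRUNING_RATES = [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]  # example values

def run_experiment(dense_model, train_loader, test_loader):
    """Prune at several rates, re-train, and log accuracy vs. non-linearity."""
    results = []
    for rate in PRUNING_RATES:
        model = copy.deepcopy(dense_model)  # always start from the dense model
        if rate > 0:
            for m in model.modules():
                if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d)):
                    prune.l1_unstructured(m, name="weight", amount=rate)
        fine_tune(model, train_loader)  # hypothetical: a few epochs of re-training
        acc = evaluate(model, test_loader)  # hypothetical: test accuracy
        score = measure_nonlinearity(model, test_loader)  # hypothetical: affinity score
        results.append({"rate": rate, "accuracy": acc, "nonlinearity": score})
    return results
```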
Relevant literature
- Introduction of the affinity score as a non-linearity measure: Bouniot et al. Understanding Deep Neural Networks Through the Lens of their Non-linearity. arXiv, 2023.
- Introduction to modern pruning in DNNs: Liu & Wang. Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers. arXiv, 2023.
- A paper studying hidden representations of pruned neural networks: Ansuini et al. Investigating Similarity Metrics for Convolutional Neural Networks in the Case of Unstructured Pruning.