Fine Tuning vs Joint Training vs Feature Extraction

I am reading this paper

It distinguishes between feature extraction and fine-tuning in deep learning, but I am not seeing the difference: feature extraction looks the same as fine-tuning to me.

As per my understanding:

You train a model on one dataset, then use it as the starting point for training on another dataset. That is fine-tuning. But feature extraction seems to be the same thing: there, too, you take the first trained model and train it on a new dataset.

Is there any difference between the two in the ML literature?

Joint training I understand to be a third category: there you train on all the data simultaneously.


As shown in figure 2 of {1}, in the fine-tuning strategy all weights are changed when training on the new task (except the weights of the last layers for the original task), whereas in the feature-extraction strategy only the weights of the newly added last layers change during training.

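The distinction can be made concrete in code. Below is a minimal PyTorch sketch (all layer sizes and names are made up for illustration; the small `backbone` stands in for a real pretrained network): feature extraction freezes the pretrained backbone and trains only the newly added head, while fine-tuning would leave every weight trainable.

```python
import torch
import torch.nn as nn

def build_model():
    """A tiny stand-in for a pretrained network (hypothetical sizes)."""
    backbone = nn.Sequential(
        nn.Linear(10, 32), nn.ReLU(),
        nn.Linear(32, 32), nn.ReLU(),
    )
    head = nn.Linear(32, 5)  # newly added last layer for the new task
    return backbone, head

def trainable_parameters(module):
    return [p for p in module.parameters() if p.requires_grad]

backbone, head = build_model()

# Feature extraction: freeze every backbone weight; only the head trains.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(trainable_parameters(head), lr=1e-2)

# One training step on dummy data:
x, y = torch.randn(4, 10), torch.randint(0, 5, (4,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
# After backward(), the frozen backbone weights received no gradient,
# while the head's weights did.

# Fine-tuning instead: skip the freezing loop and hand *all* parameters
# (backbone + head) to the optimizer, typically with a smaller learning
# rate, so every weight is updated on the new task.
```

A common middle ground is to fine-tune only the top few backbone layers while keeping the earliest layers frozen.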


Source: Link, Question Author: Rafael, Answer Author: Franck Dernoncourt
