
Generative AI is revolutionizing various domains, from natural language processing to image recognition. Two concepts that are fundamental to these advancements are Transfer Learning and Fine Tuning. Although closely related, they are distinct methodologies that serve different purposes when training large language models (LLMs). In this blog, we will explore the differences between Transfer Learning and Fine Tuning, looking at their individual characteristics and how they come into play in real-world scenarios, with the help of examples.
What is Transfer Learning?
Transfer Learning is an AI/ML technique in which a pre-trained model is reused for a new but related task. It involves taking an existing pre-trained model, such as BERT or GPT, that was trained on a particular task (the source task) and adapting it to a different but related task (the target task). Adapting to the target task is often referred to as domain adaptation of the source task. The primary goal is to leverage the knowledge gained from the source task to achieve better performance on the target task, especially when labeled data for the target task is limited. Note that in transfer learning, you don’t pre-train the model from scratch.
The following is how transfer learning works:
- Pre-Trained Model: Start with a model that has already been trained on a large dataset, often from a general domain. For example, a BERT or Llama 2 model trained on English text to capture the nuances of the English language.
- Target Task: Identify the new task you want the model to perform. This task should be related to the source task in some way. For example, classifying contract documents in procurement, or classifying resumes for recruitment teams.
- Domain Adaptation: The pre-trained model is then used as a starting point for the target task. Depending on the problem, some layers of the model might be frozen (i.e., their parameters are not updated).
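As a rough sketch, here is how this setup might look with the Hugging Face transformers library; the model name, number of labels, and the choice to freeze the whole encoder are illustrative assumptions, not a prescription:

```python
# Minimal transfer-learning setup: reuse a pre-trained BERT encoder,
# freeze its weights, and train only a new task-specific head.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical target task: classify contract documents into 3 categories.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Freeze the pre-trained encoder so only the new classification head is updated.
for param in model.bert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters (new head only): {trainable}")
```

Freezing the encoder is the simplest choice; in practice you can leave some or all layers trainable depending on how much target-task data you have.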
Example illustrating how transfer learning works
Task 1: Training a Model for Predicting Next Words
- Problem: Predict the next word based on the previous words. This is also called pre-training.
- Data: A large text corpus drawn from sources such as Wikipedia.
- Model: A large language model such as GPT trained on this data.
- Outcome: The trained model can now predict the next word based on the previous words.
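A minimal sketch of this next-word-prediction objective using a causal language model; GPT-2 is used here purely as a stand-in for the pre-trained model:

```python
# Sketch of the next-word-prediction (causal language modeling) objective.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Transfer learning lets us reuse knowledge from a"
inputs = tokenizer(text, return_tensors="pt")

# Passing labels equal to input_ids makes the model compute the
# next-word prediction (cross-entropy) loss used during pre-training.
outputs = model(**inputs, labels=inputs["input_ids"])
print("LM loss:", outputs.loss.item())

# Greedy prediction of the next token given the prompt.
next_token_id = int(outputs.logits[0, -1].argmax())
print("Predicted next token:", tokenizer.decode([next_token_id]))
```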
Task 2: Adapt the Pre-trained LLM to In-domain Corpus (New Domain)
- Problem: Now, we want to adapt the model to predict the next word in a target corpus, e.g., book reviews. This is also called domain adaptation.
- Solution: Transfer Learning.
- Step 1: Take the pre-trained model from Task 1 (Pre-trained on Wikipedia corpus).
- Step 2: Remove the layer(s) that are specific to predicting the next words.
- Step 3: Add new layer(s) specific to next word prediction in book reviews.
- Step 4: Train the modified model on the smaller book-review corpus.
- Outcome: The model can now predict the next word in the target corpus, e.g., book reviews.
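Here is a minimal sketch of this adaptation step, continuing the language-modeling objective on a small in-domain corpus; the book-review texts, base model, and training settings are placeholders for illustration:

```python
# Domain adaptation sketch: continue next-word-prediction training on book reviews.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")  # Task 1: pre-trained on general text

# Placeholder in-domain corpus (book reviews).
reviews = [
    "A gripping plot with memorable characters.",
    "The pacing dragged, but the ending was worth it.",
]

class ReviewDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        # Labels equal to input_ids give the LM objective; in a real run,
        # padding positions would usually be masked out with -100.
        return {"input_ids": ids,
                "attention_mask": self.enc["attention_mask"][i],
                "labels": ids}

args = TrainingArguments(output_dir="gpt2-book-reviews",
                         num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=ReviewDataset(reviews)).train()
```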
Why It Works
- The early layers of an LLM trained for next-word prediction often learn generic language features.
- These features are common across many next-word prediction tasks.
- By keeping these early layers and only retraining the later, task-specific layers, we can transfer the knowledge the model gained from predicting words in general English to the new task of predicting the next word in the target corpus.
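As a rough illustration of "keeping the early layers", the first few transformer blocks and the embeddings of a pre-trained model can be frozen so that only the later blocks are updated; the layer counts below are arbitrary:

```python
# Freeze the early transformer blocks of GPT-2 so only the later ones are trainable.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # 12 transformer blocks

num_frozen = 8  # arbitrary choice: keep the first 8 blocks fixed
for block in model.transformer.h[:num_frozen]:
    for param in block.parameters():
        param.requires_grad = False

# Also freeze the token and position embeddings learned during pre-training.
for param in model.transformer.wte.parameters():
    param.requires_grad = False
for param in model.transformer.wpe.parameters():
    param.requires_grad = False
```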
Advantages of Transfer Learning
The following are some of the advantages of transfer learning:
- Efficiency: Reduces the need for extensive data for the target task, since a pre-trained model already exists.
- Speed: Reduces training time, as the model has already learned relevant features during pre-training.
- Performance: Often leads to better performance, especially when the target task has limited labeled data.
What is Fine-Tuning?
Fine-tuning is a specific technique within the broader realm of transfer learning. It involves taking a pre-trained model that has been trained on a large dataset (usually for a related task) and adapting it to a specific task by continuing the training process on a smaller, task-specific dataset.
In other words, fine-tuning is the process of taking a pre-trained model and adjusting, or “tuning”, it to perform a specific task. The primary objective is to improve performance on the target task by leveraging the knowledge already contained in the pre-trained model.
Example illustrating how fine tuning works
Extending the example from the transfer learning section, here is what the fine-tuning task looks like:
Task 3: Classify the sentiments of book reviews
In this step, the LLM adapted to book reviews is fine-tuned with a classification layer for the target task (e.g., classifying the sentiment of book reviews). This step is called fine-tuning.
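A minimal sketch of Task 3, assuming a 2-class sentiment label set; plain GPT-2 is used here as a stand-in for the domain-adapted model produced in Task 2:

```python
# Fine-tuning sketch: attach a 2-class sentiment head on top of the adapted model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("Loved every chapter of this book.", return_tensors="pt")
labels = torch.tensor([1])  # 1 = positive sentiment (illustrative label scheme)

# During fine-tuning, this loss is backpropagated through the new head
# (and optionally through some or all of the pre-trained layers).
outputs = model(**inputs, labels=labels)
print("Classification loss:", outputs.loss.item())
```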
Differences between Transfer Learning & Fine Tuning
The following is a detailed comparison between transfer learning and fine-tuning, highlighting the differences between the two concepts along with examples for each point:
| Aspect | Transfer Learning | Fine-Tuning | Examples |
| --- | --- | --- | --- |
| Definition | Leveraging a pre-trained model’s knowledge for a new but related task; also called domain adaptation. | Adjusting a pre-trained model’s parameters to specialize in a specific task. | Transfer Learning: Using a pre-trained ImageNet model for a new animal classification task. Fine-Tuning: Adjusting the last layer of a pre-trained BERT model for sentiment analysis. |
| Objective | To utilize generalized knowledge and reduce training time and data needs. | To tailor the pre-trained model for a particular task or domain. | Transfer Learning: Applying a model trained on general English to understand legal texts. Fine-Tuning: Adapting a pre-trained face recognition model to detect specific emotions. |
| Model Architecture | Utilizes the existing architecture; may freeze some layers. | Modifies certain layers; often unfreezes layers for specialization. | Transfer Learning: Using a pre-trained CNN for medical image analysis without changes. Fine-Tuning: Modifying the last layers of a pre-trained GPT-2 model for a specific writing style. |
| Training Process | May involve training only a new top layer while keeping other layers fixed. | Involves adjusting specific layers and parameters for the new task. | Transfer Learning: Training only the output layer for a new speech recognition task. Fine-Tuning: Training multiple layers of a pre-trained VGG-16 model for a custom object detection task. |
| Data Requirement | Less specific to the task; relies on pre-trained knowledge. | Requires task-specific data for fine-tuning. | Transfer Learning: Using a pre-trained text model for general text classification. Fine-Tuning: Needing specific medical records to fine-tune a model for medical text classification. |
| Computational Complexity | Generally lower, as fewer parameters may be trained. | Potentially higher, depending on the extent of fine-tuning. | Transfer Learning: Quick adaptation of a pre-trained model for weather prediction. Fine-Tuning: Extensive training of a pre-trained model for highly specific scientific simulations. |
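To make the "Training Process" and "Computational Complexity" rows concrete, the sketch below compares trainable parameter counts for a head-only setup (transfer learning style) versus one that also unfreezes the last two encoder layers (fine-tuning style); the model and layer choices are illustrative:

```python
# Compare trainable parameter counts: head-only training vs. deeper fine-tuning.
from transformers import AutoModelForSequenceClassification

def trainable_params(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Transfer-learning style: freeze the whole encoder, train only the new head.
tl_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
for param in tl_model.bert.parameters():
    param.requires_grad = False
print("Transfer learning (head only):", trainable_params(tl_model))

# Fine-tuning style: additionally unfreeze the last two encoder layers.
ft_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
for param in ft_model.bert.parameters():
    param.requires_grad = False
for layer in ft_model.bert.encoder.layer[-2:]:
    for param in layer.parameters():
        param.requires_grad = True
print("Fine-tuning (last two layers + head):", trainable_params(ft_model))
```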
The following picture illustrates the difference between transfer learning and fine tuning: the left side represents transfer learning and the right side represents fine tuning.
Conclusion
- Transfer Learning: Focuses on transferring general knowledge from one domain to another. It often involves using the same objective function and may freeze certain layers to retain general features.
- Fine-Tuning: Goes a step further by specializing the model to a particular task. This may involve modifying the objective function, adjusting specific layers, and unfreezing parts of the model for targeted training.