Differences between Random Forest and AdaBoost

Fig 1. Decision Trees in Random Forest

In this post, you will learn about the key differences between the AdaBoost classifier and the Random Forest algorithm. As data scientists, it is important to have a good understanding of how these two machine learning algorithms differ. Both can be used for regression as well as classification problems.

Random Forest and AdaBoost are two popular ensemble learning algorithms, and both can be used for classification and regression tasks. Both are based on building a forest of trees. Random Forest is created from a collection of decision trees, each of which uses different variables or features, and it relies on the bagging technique to draw the data samples for each tree. AdaBoost, in contrast, is built from a collection of decision stumps. A decision stump is simply a decision tree with one node and two leaves, so the AdaBoost algorithm makes its decisions using a weighted combination of decision stumps. The stumps are built iteratively, with each new stump focusing on the data points the previous stumps predicted incorrectly. As a result, AdaBoost can produce very accurate predictions, but it is also more sensitive to noisy data and overfitting than Random Forest.

Compared to individual decision trees, models trained using either Random Forest or the AdaBoost classifier tend to make predictions that generalize better to the larger population and are less susceptible to overfitting / high variance.
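Here is a minimal, illustrative sketch of both classifiers on a toy dataset using scikit-learn (the dataset, hyperparameters, and variable names are assumptions for illustration, not part of either algorithm's definition). In scikit-learn, AdaBoostClassifier uses depth-1 decision trees, i.e. decision stumps, as its default base learners, while RandomForestClassifier grows full-size trees on bootstrap samples.

```python
# Minimal sketch comparing Random Forest and AdaBoost on a toy dataset.
# The dataset and hyperparameters below are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Random Forest: an ensemble of full-size decision trees,
# each trained on a bootstrap (bagged) sample of the data.
rf = RandomForestClassifier(n_estimators=100, random_state=42)

# AdaBoost: an ensemble of decision stumps (depth-1 trees by default),
# built sequentially so each stump focuses on the mistakes of the previous ones.
ada = AdaBoostClassifier(n_estimators=100, random_state=42)

for name, model in [("Random Forest", rf), ("AdaBoost", ada)]:
    model.fit(X_train, y_train)
    print(f"{name} test accuracy: {model.score(X_test, y_test):.3f}")
```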

Differences between AdaBoost and Random Forest

Here are the key differences between AdaBoost and the Random Forest algorithm:

  • Data sampling: Both Random Forest and AdaBoost involve data sampling, but they differ in how the samples are used. In Random Forest, the training data is sampled using the bagging technique. Bagging, also known as bootstrap aggregating, reduces the variance of the predictions by randomly sampling the original dataset with replacement to produce multiple training sets, one per tree. Because the sampling is done with replacement, some data points are sampled multiple times while others may not be sampled at all. In AdaBoost, the data used to train each subsequent decision stump (a tree with one node and two leaves) is weighted: data points misclassified by the previous stump are assigned higher weights, so those points are emphasized (effectively sampled repeatedly) when the next stump is trained. A rough sketch of both sampling schemes appears after this list.
  • Decision Trees vs Decision Stumps: Random Forest makes use of multiple full-size decision trees, or decision trees of varying depths, and each tree uses multiple variables or features to classify a data point. AdaBoost, on the other hand, makes use of decision stumps: decision trees with one node and two leaves. Each decision stump is built on just one variable or feature, unlike the trees in a Random Forest, which combine multiple variables to make the final classification decision. The diagrams (Fig 1 above and Fig 2 below) show the decision trees used in Random Forest vs the decision stumps used in the AdaBoost algorithm.
Fig 2. Decision Stumps in AdaBoost Algorithm
  • Equal Weights vs Variable Weights: In a Random Forest, the decision made by each tree carries equal weight; in other words, each decision tree has an equal say in the final decision. In AdaBoost, some decision stumps have a higher say (a larger weight) in the final decision than others.
  • Tree order: In a Random Forest, each decision tree is built independently of the others, so the order in which the trees are created does not matter. In the forest of stumps built by AdaBoost, however, the ordering matters: the errors made by the first decision stump influence how the second stump is built, the errors of the second stump influence the third, and so on.
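To make the sampling, weighting, and ordering differences above more concrete, here is a rough NumPy sketch (illustrative only; the labels, predictions, and variable names are made up). It contrasts a single bootstrap draw of the kind bagging uses with an AdaBoost-style weight update that increases the weights of misclassified points and computes the stump's say (its weight in the final vote), so that the next stump in the sequence focuses on the points the previous one got wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                      # ten data points for illustration
y_true = rng.integers(0, 2, size=n)         # made-up true labels
y_pred = rng.integers(0, 2, size=n)         # made-up predictions of the current stump

# Bagging (Random Forest): draw n points uniformly *with replacement*;
# some indices repeat, others are left out entirely.
bootstrap_idx = rng.integers(0, n, size=n)
print("bootstrap sample indices:", bootstrap_idx)

# AdaBoost-style update: start from equal weights, compute the stump's weighted
# error and its "say" (alpha), then up-weight misclassified points and renormalize
# so the next stump emphasizes them.
w = np.full(n, 1.0 / n)
miss = (y_pred != y_true)
err = np.sum(w[miss])                                       # weighted error of the stump
alpha = 0.5 * np.log((1 - err + 1e-10) / (err + 1e-10))     # stump's say in the final vote
w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))           # boost misclassified points
w = w / w.sum()
print("updated weights (misclassified points weigh more):", np.round(w, 3))
```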

