Decision Tree Algorithm – Concepts, Interview Questions


Decision tree is one of the most commonly used machine learning algorithms, and it can be used for solving both classification and regression problems. It is very simple to understand and use. Here is a lighter take on how decision trees and related algorithms (random forest etc.) are agile enough for everyday usage.

Figure 1. Trees and Forests

 

In this post, you will learn about some of the following in relation to the decision tree machine learning algorithm, with a focus on the popular C5.0 algorithm used to build a decision tree for classification. In another post, we shall also look at the CART methodology for building a decision tree model for classification.

  • Key terminologies/definitions
  • Key concepts
  • Sample interview questions/practice tests

The post also presents a set of practice questions to help you test your knowledge of decision tree fundamentals/concepts. It could prove to be very useful if you are planning to take up an interview for a machine learning engineer, intern, fresher, or data scientist position.

Key Definitions – Decision Trees

  • Divide and Conquer: It is a strategy used for splitting the data into two or more data segments based on some decision. It is also termed as recursive partitioning.

    The splitting criterion used in the C5.0 algorithm is entropy/information gain, which is described later in this post.

  • Entropy: In relation to data, entropy refers to a lack of order or predictability. In other words, data with high disorder can be said to be data with high entropy, and homogeneous (or pure) data can be termed data with very low entropy. Entropy can, thus, be defined as a measure of the impurity of data. The higher the entropy, the more impure the data.
  • Pure data segment or children node: As mentioned above, the value of entropy determines the purity of a data segment. A value of entropy close to zero indicates that the data is pure, i.e., that the data belongs to one (or mostly one) class level (class with a label). For a two-class segment, a value closer to 1 represents maximum disorder, which implies a 50-50 (equal) split in the data segment.
    Figure 2. Entropy plot for a data segment

  • How to calculate the entropy value of a data segment: For a data segment with n classes, the entropy is the summation, over all classes, of the term -p * log2(p), where p represents the proportion of values (or the probability of occurrence) belonging to a particular class (a short Python sketch implementing this calculation follows this list):
    Entropy = -p1*log2(p1) - p2*log2(p2) - ... - pn*log2(pn)

    Thus, for a data segment having data belonging to two classes A (say, head) and B (say, tail), where the proportion of values in class A (or probability p(A)) is 0.3 and in class B (p(B)) is 0.7, the entropy can be calculated as follows:

    -(0.3)*log2 (0.3) - (0.7)*log2 (0.7)
    = - (-0.5211) - (-0.3602)
    = 0.8813
    

    For a data segment having a 50-50 split, the value of entropy is as follows (expected value of 1):

    -(0.5)*log2 (0.5) - (0.5)*log2 (0.5)
    = - (0.5)*(-1) - (0.5)*(-1)
    = 0.5 + 0.5
    = 1
    

    For a data segment having a 90-10 split (highly homogeneous/pure data), the value of entropy is as follows (expected value closer to 0):

    -(0.1)*log2 (0.1) - (0.9)*log2 (0.9)
    = - (0.1)*(-3.3219) - (0.9)*(-0.1520)
    = 0.3322 + 0.1368
    = 0.4690
    

    For a completely pure data segment, the value of entropy is as follows (expected value of 0):

    -(1)*log2 (1) - (0)*log2 (0)
    = - (1)*(0) - 0      (by convention, the term 0*log2(0) is taken to be 0)
    = 0
    

    Based on the above calculations, one can see that the entropy varies with the class proportions as per the plot shown in Figure 2 above.

  • Information Gain: Information gain is the difference between the entropy of a data segment before the split and after the split. A large difference represents high information gain: the higher the difference, the lower the combined entropy of the data segments resulting from the split, and hence the better the feature used for the split. Mathematically, the information gain can be represented as follows (a worked Python sketch of this calculation follows this list):
    InfoGain = E(S1) - E(S2)

    • E(S1) represents the entropy of the data belonging to the node before the split
    • E(S2) represents the weighted summation of the entropies of the children nodes; the weights equal the proportion of data instances falling in each child node
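
As a quick illustration of the definitions above, here is a minimal Python sketch of the entropy and information gain calculations. This is only an illustrative sketch (not the C5.0 implementation itself); the function names and the toy label lists are assumptions made for the example.

    from collections import Counter
    from math import log2

    def entropy(labels):
        """Entropy of a data segment: -sum(p * log2(p)) over the class proportions p."""
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(parent_labels, child_segments):
        """Information gain = E(S1) - E(S2), where E(S2) is the weighted
        summation of the entropies of the children segments."""
        n = len(parent_labels)
        weighted_child_entropy = sum(
            (len(child) / n) * entropy(child) for child in child_segments
        )
        return entropy(parent_labels) - weighted_child_entropy

    # Worked examples matching the calculations above
    print(entropy(["A"] * 3 + ["B"] * 7))   # 30-70 split  -> ~0.8813
    print(entropy(["A"] * 5 + ["B"] * 5))   # 50-50 split  -> 1.0
    print(entropy(["A"] * 1 + ["B"] * 9))   # 10-90 split  -> ~0.469
    print(entropy(["A"] * 10))              # pure segment -> 0.0

    # Information gain when the 30-70 parent segment is split into two purer children
    parent = ["A"] * 3 + ["B"] * 7
    children = [["A", "A", "A", "B"], ["B"] * 6]
    print(information_gain(parent, children))  # -> ~0.557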

Key Concepts – Decision Trees

  • Decision tree topology: A decision tree consists of the following types of nodes:
    • Root node: The top-most node of the tree, from which the tree starts
    • Decision nodes: One or more nodes at which the data is split into multiple data segments. The goal is to have children nodes with maximum homogeneity (purity).
    • Leaf nodes: Nodes representing data segments with the highest homogeneity (purity).
  • A decision tree is built in a top-down fashion.
  • Decision Nodes represent Features/Attributes: Decision nodes represent the features/attributes based on which data is split into children nodes.
  • Valid decision node or feature: A decision node or a feature can be considered suitable or valid when the data split results in children nodes having data with higher homogeneity or lower entropy (the weighted entropies of the children nodes are added to determine the overall entropy after the split).

    A decision node or a feature can be considered to be suitable or valid when the data split results in children nodes having data with higher homogeneity or lower entropy

  • The leaf node is reached when no significant information gain happens: A data segment is said to be pure if it contains data instances belonging to just one class and not a mix of classes. For example, a segment of data consisting of only females can be said to be pure. However, a data segment with a 50-50 split can be said to be most impure (entropy of 1). Leaf nodes in the decision tree are reached when further splits do not result in any further information gain. This implies that the resultant entropy of the partitioned data segments remains almost equal to the entropy before the split.

    A data segment is said to be pure if it contains data instances belonging to just one class. The goal while building a decision tree is to reach a state where the leaves (leaf nodes) attain a pure state.

  • Feature selection in decision trees: The goal of feature selection while building a decision tree is to find the features or attributes (decision nodes) which lead to a split into children nodes whose combined (weighted) entropy is lower than the entropy of the data segment before the split. This implies higher information gain.

    The goal of feature selection is to find the features or attributes which lead to a split into children nodes whose combined entropy is lower than the entropy of the data segment before the split.

  • Pre-pruning vs post-pruning a decision tree: Once you have understood the feature selection technique in relation to decision trees, you can end up creating a tree whose leaf nodes have very high homogeneity (properly classified data). However, this may lead to an overfitting scenario. Alternatively, if you do not partition enough, there is a chance of underfitting. Stopping the partitioning of the tree early enough is also called pre-pruning. The challenge is to find the optimal tree which results in appropriate classification with acceptable accuracy. Alternatively, one can let the tree grow fully and then use error rates to prune the tree appropriately (post-pruning). A scikit-learn sketch following this list illustrates both approaches.
  • Algorithm C5.0 is one of the most widely used implementations for building machine learning models based on decision trees.
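
To make the pruning discussion concrete, here is a hedged sketch of pre-pruning vs post-pruning using scikit-learn's CART-based DecisionTreeClassifier (scikit-learn does not ship a C5.0 implementation). The dataset, the parameter values, and the way the pruning strength alpha is picked below are only illustrative assumptions; in practice they would be tuned, e.g. via cross-validation.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Pre-pruning: stop growing the tree early via max_depth / min_samples_leaf
    pre_pruned = DecisionTreeClassifier(
        criterion="entropy", max_depth=3, min_samples_leaf=5, random_state=42
    )
    pre_pruned.fit(X_train, y_train)

    # Post-pruning: let the tree grow fully, then prune via cost-complexity pruning
    full_tree = DecisionTreeClassifier(criterion="entropy", random_state=42)
    path = full_tree.cost_complexity_pruning_path(X_train, y_train)
    alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]  # mid-range alpha, for illustration only
    post_pruned = DecisionTreeClassifier(
        criterion="entropy", ccp_alpha=alpha, random_state=42
    )
    post_pruned.fit(X_train, y_train)

    print("pre-pruned test accuracy :", pre_pruned.score(X_test, y_test))
    print("post-pruned test accuracy:", post_pruned.score(X_test, y_test))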

Sample Interview Questions on Decision Tree

The following are some of the questions which can be asked in interviews. The answers can be found in the text above:

  • What is entropy?
  • What is information gain?
  • How are entropy and information gain related vis-a-vis decision trees?
  • How do you calculate the entropy of children nodes after a split based on a feature?
  • How do you decide the suitability of a feature when working with decision trees?
  • Explain feature selection using the information gain/entropy technique.
  • Which algorithm (packaged) is used for building models based on the decision tree?
  • What are some of the techniques to decide decision tree pruning?

Practice Test on Decision Trees Concept

A decision tree can be used to build models for _______

A decision tree is built in _______ fashion

The goal of the feature selection while building a decision tree is to find features or attributes (decision nodes) which lead to split in children nodes which has ________ entropy

A data segment can be said to be pure if all of the data instances belong to the ______

The goal of feature selection while working with decision tree is to find the feature that results in ___________ information gain

Entropy value of ______ represents that data sample is pure or homogenous

Entropy value of ______ represents that data sample has 50-50 split belonging to two categories

A decision tree created with the aim to achieve very minimal entropy may result in _______

Pre-pruning the decision tree may result in _________

Which of the following is the preferred strategy for building the most optimal decision tree?

Summary

In this post, you learned about some of the following:

  • Terminologies and concepts related to decision tree machine learning algorithm
  • Sample interview questions
  • Practice tests to check your knowledge

Did you find this article useful? Do you have any questions about this article or understanding decision tree algorithm and related concepts and terminologies? Leave a comment and ask your questions and I shall do my best to address your queries.

Ajitesh Kumar

Ajitesh has been recently working in the area of AI and machine learning. Currently, his research area includes Safe & Quality AI. In addition, he is also passionate about various technologies including programming languages such as Java/JEE and Javascript, and technologies such as Blockchain, mobile computing, cloud-native technologies, application security, cloud computing platforms, big data, etc.

He has also authored the book, Building Web Apps with Spring 5 and Angular.
