Machine Learning Models Evaluation Infographics


In this post, you will find a self-explanatory infographic / diagram representing the different aspects and techniques that need to be considered when evaluating a machine learning model. Here is the infographic:

Fig 1. Different aspects of Model Evaluation

In the above diagram, you will notice the following aspects that need to be considered once a model is trained. This evaluation is required in order to select one model out of the many models that get trained.

  • Basic parameters: The following need to be considered when evaluating the model:
    • Bias & variance
    • Overfitting & underfitting
    • Holdout method
    • Confidence intervals
  • Resampling methods: The following techniques need to be adopted for evaluating models:
    • Repeated holdout
    • Empirical confidence intervals
  • Cross-validation: Cross-validation needs to be performed to achieve some of the following:
    • Hyperparameter tuning
    • Model selection
    • Algorithm selection
  • Statistical tests: Statistical tests need to be performed for the following:
    • Model comparison
    • Algorithm comparison
  • Evaluation metrics
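
To make the holdout method and cross-validation items above concrete, here is a minimal sketch assuming scikit-learn is available and using its built-in iris dataset as stand-in data; the model names and split parameters are illustrative choices, not part of the original diagram.

```python
# Sketch (assumes scikit-learn): holdout evaluation plus k-fold
# cross-validation used for model selection, as listed in the diagram.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Holdout method: a single stratified train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
holdout_acc = model.score(X_test, y_test)

# Cross-validation for model selection: compare candidates by mean CV accuracy.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=42),
}
cv_scores = {name: cross_val_score(est, X, y, cv=5).mean()
             for name, est in candidates.items()}
best = max(cv_scores, key=cv_scores.get)
```

The holdout score estimates generalization from one split, while the 5-fold cross-validation averages over five splits, which is why it is the preferred basis for model selection.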
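
The "repeated holdout" and "empirical confidence intervals" items can be illustrated with a bootstrap sketch using only NumPy; the per-example correctness array below is hypothetical data standing in for a real model's test-set results.

```python
# Sketch (NumPy only): empirical 95% confidence interval for test accuracy
# via bootstrap resampling of per-example correctness.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical correctness of a trained model on 200 test examples (~85% acc).
correct = rng.random(200) < 0.85

# Resample the test set with replacement and record the accuracy each time.
boot_accs = [rng.choice(correct, size=correct.size, replace=True).mean()
             for _ in range(1000)]
ci_lo, ci_hi = np.percentile(boot_accs, [2.5, 97.5])
```

The resulting interval [ci_lo, ci_hi] quantifies how much the accuracy estimate would vary across resampled test sets, which a single holdout number cannot convey.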
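
For the "model comparison" item under statistical tests, one commonly used test is McNemar's test on the discordant predictions of two classifiers evaluated on the same test set. The sketch below assumes SciPy and uses hypothetical per-example correctness arrays.

```python
# Sketch (assumes SciPy): McNemar's test for comparing two classifiers
# evaluated on the same test set.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Hypothetical correctness of two models on the same 200 test examples.
model_a = rng.random(200) < 0.85
model_b = rng.random(200) < 0.80

# Discordant pairs: cases where exactly one of the two models is correct.
b = int(np.sum(model_a & ~model_b))  # A right, B wrong
c = int(np.sum(~model_a & model_b))  # A wrong, B right

# McNemar's chi-square statistic with continuity correction (requires b+c > 0).
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)
```

A small p-value suggests the two models' error patterns differ beyond chance; because the test uses only the discordant pairs, it is well suited to paired comparisons on a shared test set.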

The image is adapted from this page.

Ajitesh Kumar


I have recently been working in the area of Data analytics, including Data Science and Machine Learning / Deep Learning. I am also passionate about different technologies, including programming languages such as Java/JEE, Javascript, Python, R, Julia, etc., and technologies such as Blockchain, mobile computing, cloud-native technologies, application security, cloud computing platforms, big data, etc. I would love to connect with you on LinkedIn. Check out my latest book titled First Principles Thinking: Building winning products using first principles thinking.
Posted in AI, Data Science, Machine Learning.