This post highlights some great pages where **Python implementations** of different **machine learning models** can be found. If you are a data scientist who wants a fair idea of what is happening underneath different machine learning algorithms, you may want to check out the ML-From-Scratch page. The top highlights of this repository are Python implementations of the following:

- Supervised learning algorithms (linear regression, logistic regression, decision tree, random forest, XGBoost, Naive Bayes, neural network, etc.)
- Unsupervised learning algorithms (K-means, GAN, Gaussian mixture models, etc.)
- Reinforcement learning algorithms (Deep Q Network)
- Dimensionality reduction techniques such as PCA
- Deep learning
- Examples that make use of the above-mentioned algorithms

Here is an insight into the implementation of different types of regression algorithms. The code found on this regression page implements the following types of regression:

- Linear regression (least-squares loss)
- Lasso regression (L1 norm regularization)
- Ridge regression (L2 norm regularization)
- Elastic net regression (regularized regression method that combines L1 and L2 penalties against model complexity)
- Polynomial regression
- Polynomial ridge regression
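To get a feel for what such from-scratch implementations look like, here is a minimal sketch of the first two items, ordinary least squares and ridge regression, in closed form. This is an illustrative example, not the repository's actual code; function names and the choice of leaving the intercept unpenalized are my own assumptions.

```python
import numpy as np

def linear_regression_fit(X, y):
    # Ordinary least squares: solve min_w ||Xb @ w - y||^2,
    # where Xb is X with a bias (intercept) column prepended.
    Xb = np.c_[np.ones(len(X)), X]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def ridge_regression_fit(X, y, alpha=1.0):
    # L2-regularized least squares, closed form:
    #   w = (Xb^T Xb + alpha * I)^{-1} Xb^T y
    # The intercept term is left unpenalized (a common convention).
    Xb = np.c_[np.ones(len(X)), X]
    I = np.eye(Xb.shape[1])
    I[0, 0] = 0.0  # do not shrink the intercept
    return np.linalg.solve(Xb.T @ Xb + alpha * I, Xb.T @ y)

def predict(w, X):
    # Apply the fitted weights to new inputs.
    return np.c_[np.ones(len(X)), X] @ w
```

Lasso and elastic net have no such closed form because of the non-differentiable L1 penalty; the from-scratch route there is typically coordinate descent or (sub)gradient descent, which is exactly the kind of detail these pages make visible.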

Very helpful indeed! Thanks to Erik Linder-Norén for putting these pages together. Enjoy learning and implementing machine learning.
