This post highlights some great pages where **Python implementations** of different **machine learning models** can be found. If you are a data scientist who wants a fair idea of what is working underneath different machine learning algorithms, you may want to check out the ML-From-Scratch page. The top highlights of this repository are Python implementations of the following:

- Supervised learning algorithms (linear regression, logistic regression, decision tree, random forest, XGBoost, Naive Bayes, neural network, etc.)
- Unsupervised learning algorithms (K-means, GAN, Gaussian mixture models, etc.)
- Reinforcement learning algorithms (Deep Q Network)
- Dimensionality reduction techniques such as PCA
- Deep learning
- Examples that make use of the above-mentioned algorithms
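To give a flavor of what a from-scratch implementation looks like, here is a minimal sketch of linear regression solved with the least-squares normal equation. This is an illustrative example written for this post, not the repository's actual code, and the class name and methods are assumptions chosen to mirror the familiar fit/predict style:

```python
import numpy as np

# A minimal from-scratch linear regression, in the spirit of the
# ML-From-Scratch implementations (illustrative sketch only).
class LinearRegression:
    def fit(self, X, y):
        # Prepend a bias column of ones, then solve the normal
        # equation w = (X^T X)^{-1} X^T y via the pseudo-inverse.
        Xb = np.column_stack([np.ones(len(X)), X])
        self.w = np.linalg.pinv(Xb.T @ Xb) @ Xb.T @ y
        return self

    def predict(self, X):
        Xb = np.column_stack([np.ones(len(X)), X])
        return Xb @ self.w

# Usage: fit the line y = 2x + 1 from three points.
model = LinearRegression().fit(np.array([[0.0], [1.0], [2.0]]),
                               np.array([1.0, 3.0, 5.0]))
```

The pseudo-inverse is used instead of a plain matrix inverse so the solve stays stable even when the design matrix is rank-deficient.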

Here is an insight into the implementation of different types of regression algorithms. The code found on this regression page implements the following types of regression:

- Linear regression (least-squares loss function)
- Lasso regression (L1 norm regularization)
- Ridge regression (L2 norm regularization)
- Elastic net regression (regularized regression method that combines L1 and L2 penalties against model complexity)
- Polynomial regression
- Polynomial ridge regression
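One way to see how these variants relate: they all minimize the same squared-error loss and differ mainly in the penalty term added against model complexity. A small sketch (the function name and parameters here are illustrative assumptions, not the repository's API):

```python
import numpy as np

def loss(w, X, y, l1=0.0, l2=0.0):
    """Squared-error loss plus optional L1/L2 penalties.

    Linear regression: l1 = 0, l2 = 0   (no penalty)
    Lasso:             l1 > 0, l2 = 0   (L1 norm penalty)
    Ridge:             l1 = 0, l2 > 0   (L2 norm penalty)
    Elastic net:       l1 > 0, l2 > 0   (both penalties)
    """
    residual = y - X @ w
    mse = np.mean(residual ** 2)
    return mse + l1 * np.sum(np.abs(w)) + l2 * np.sum(w ** 2)
```

Polynomial regression and polynomial ridge regression reuse the same losses after first expanding the features into polynomial terms (e.g. x, x², x³), so the "polynomial" part is a feature transform rather than a different loss.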

Very helpful indeed! Thanks to Erik Linder-Norén for putting these pages together. Enjoy learning and implementing machine learning.

