
Top Tutorials – Neural Network Back Propagation Algorithm

Here are the top web pages and videos for learning the back propagation algorithm, which is used to compute the gradients in a neural network. I will update this page with more tutorials as I dive deeper into the back propagation algorithm. Whether you are a beginner or an expert-level data scientist / machine learning enthusiast, these tutorials will prove very helpful.

Before going ahead and studying the back propagation algorithm from different pages, let's quickly understand the key components of a neural network algorithm:

  • Feed forward algorithm: The feed forward algorithm describes how input signals travel through the neurons in the different layers, as weighted sums followed by activations, and result in the output / prediction. The key aspect of the feed forward algorithm is the activation function. You may want to check out this post, which illustrates activation functions using animations – Different types of activation functions illustrated using animation.

  • Back propagation algorithm: The back propagation algorithm describes how gradients are calculated at the output of each neuron, working backwards through the network using the chain rule. The goal is to determine the changes that need to be made to the weights so that the neural network's output moves closer to the actual output. Note that the back propagation algorithm is only used to calculate the gradients.

  • Optimization algorithm (Optimizer): Once the gradients are determined, the final step is to use an appropriate optimization algorithm to update the weights using the gradients calculated by the back propagation algorithm. A minimal sketch tying these three steps together follows this list.
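To make these three components concrete, here is a minimal NumPy sketch (not taken from any of the tutorials linked on this page) of a tiny two-layer network. The network shape, the sigmoid activation, the squared-error loss and the learning rate are all illustrative assumptions; the point is only to show the feed forward pass, the chain-rule gradient calculation, and the optimizer's weight update in one place.

```python
import numpy as np

# A tiny 2-layer network: 2 inputs -> 3 hidden units -> 1 output.
# The shapes, the sigmoid activation, the squared-error loss and the
# learning rate are illustrative choices, not from the tutorials above.

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))          # 4 training examples, 2 features
y = rng.normal(size=(4, 1))          # target outputs

W1 = rng.normal(size=(2, 3)) * 0.1   # weights: input -> hidden
W2 = rng.normal(size=(3, 1)) * 0.1   # weights: hidden -> output
lr = 0.1                             # learning rate for the optimizer step

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    # 1. Feed forward: weighted sums followed by activations, layer by layer.
    z1 = X @ W1                       # weighted sum at the hidden layer
    a1 = sigmoid(z1)                  # hidden-layer activation
    y_hat = a1 @ W2                   # weighted sum at the output layer (prediction)

    loss = np.mean((y_hat - y) ** 2)  # mean squared error

    # 2. Back propagation: apply the chain rule from the output backwards
    #    to get the gradient of the loss w.r.t. each weight matrix.
    dy_hat = 2.0 * (y_hat - y) / len(X)   # dLoss / dy_hat
    dW2 = a1.T @ dy_hat                   # dLoss / dW2
    da1 = dy_hat @ W2.T                   # propagate the gradient to the hidden layer
    dz1 = da1 * a1 * (1.0 - a1)           # sigmoid'(z1) = a1 * (1 - a1)
    dW1 = X.T @ dz1                       # dLoss / dW1

    # 3. Optimizer: plain gradient descent uses the gradients to update weights.
    W1 -= lr * dW1
    W2 -= lr * dW2

print("final loss:", loss)
```

In a real project you would typically rely on a framework's automatic differentiation (for example, PyTorch's autograd) rather than writing the chain rule by hand, but the hand-written version makes it clear that back propagation only produces the gradients and that the optimizer's weight update is a separate step.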

To understand back propagation better, check out these top web tutorial pages on the back propagation algorithm.

Here are some great YouTube videos on the back propagation algorithm.

  • Lex Fridman tutorial on back propagation – a video lecture with a great explanation of back propagation
  • Great lecture on back propagation by Andrej Karpathy (Stanford CS231n)