
Difference between Adaline and Logistic Regression

In this post, you will learn the key differences between Adaline (Adaptive Linear Neuron) and Logistic Regression, which come down to the following:

  1. Activation function
  2. Cost function

Difference in Activation Function

The primary difference is the activation function. In Adaline, the activation function is a linear activation function, while in logistic regression it is the sigmoid activation function. The activation function for Adaline, also called the linear activation function, is simply the identity function, which can be represented as follows (see Fig 1):

$$\phi(W^TX) = W^TX$$

Fig 1. Adaline Linear Activation Function Representation

The diagram below represents the activation function for Logistic Regression. The activation function for Logistic Regression, also called the sigmoid activation function, is the logistic (sigmoid) function, which can be represented as follows (see Fig 2):

$$\phi(z) = \frac{1}{1 + e^{-z}}, \quad \text{where } z = W^TX$$

Fig 2: Logistic Sigmoid Activation Function Representation
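As a quick illustration, here is a minimal NumPy sketch (with hypothetical variable names and an explicit bias term b, which the equations above fold into the weight vector) contrasting the two activation functions applied to the same net input $z = W^TX$:

```python
import numpy as np

def net_input(X, w, b):
    """Compute the net input z = w^T x + b for each sample in X."""
    return X.dot(w) + b

def linear_activation(z):
    """Adaline: identity (linear) activation, phi(z) = z."""
    return z

def sigmoid_activation(z):
    """Logistic regression: sigmoid activation, phi(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Toy inputs (illustrative values only)
X = np.array([[1.0, 2.0],
              [0.5, -1.5]])
w = np.array([0.4, -0.2])
b = 0.1

z = net_input(X, w, b)
print(linear_activation(z))   # unbounded real-valued outputs (Adaline)
print(sigmoid_activation(z))  # outputs squashed into (0, 1) (logistic regression)
```

Note how the Adaline output can take any real value, whereas the sigmoid output is always between 0 and 1 and can be interpreted as a class probability.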


Difference in Cost Function

For Adaline, the cost (loss) function looks like the following:

$$J(w) = \sum\limits_{i} \frac{1}{2} (\phi(z^{(i)}) - y^{(i)})^2$$

For Adaline, the goal is to minimize the above sum of squared errors (SSE) cost function.
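A minimal sketch of this SSE cost, together with one batch gradient-descent update (assuming a separate bias term b and a learning rate eta, neither of which appears in the equation above), might look like the following:

```python
import numpy as np

def adaline_cost(X, y, w, b):
    """Sum of squared errors: J(w) = sum_i 0.5 * (phi(z_i) - y_i)^2,
    where phi is the identity activation, so phi(z) = z = X.w + b."""
    output = X.dot(w) + b
    errors = output - y
    return 0.5 * (errors ** 2).sum()

def adaline_gd_step(X, y, w, b, eta=0.01):
    """One batch gradient-descent update of w and b on the SSE cost."""
    output = X.dot(w) + b
    errors = y - output
    w = w + eta * X.T.dot(errors)   # -dJ/dw = X^T (y - phi(z))
    b = b + eta * errors.sum()      # -dJ/db = sum_i (y_i - phi(z_i))
    return w, b
```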

For Logistic Regression, the cost function is derived from the likelihood function, which looks like the following:

$$L(w) = P(y \vert x; w) = \prod\limits_{i=1}^n P(y^{(i)} \vert x^{(i)}; w) = \prod\limits_{i=1}^n (\phi(z^{(i)}))^{y^{(i)}} (1 - \phi(z^{(i)}))^{(1-y^{(i)})}$$

For Logistic Regression, the idea is to maximize the above likelihood function. For ease of calculation and numerical stability, the likelihood is converted into the log-likelihood function, which is then maximized. The log-likelihood function is the following:

$$\log(L(w)) = \sum\limits_{i=1}^n \left[ y^{(i)}\log(\phi(z^{(i)})) + (1 - y^{(i)})\log(1 - \phi(z^{(i)})) \right]$$

The above log-likelihood function can be rewritten as the cost function J(w), which can then be minimized using gradient descent:

$$J(w) = \sum\limits_{i=1}^n \left[ -y^{(i)}\log(\phi(z^{(i)})) - (1 - y^{(i)})\log(1 - \phi(z^{(i)})) \right]$$

Note that the superscript $(i)$ refers to the $i$th training example (row), while a subscript refers to a specific feature within that example.
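Mirroring the cost function J(w) term by term, a minimal sketch might look like the following (the bias term b and the small eps clipping for numerical stability are assumptions not shown in the equations above):

```python
import numpy as np

def logistic_cost(X, y, w, b, eps=1e-12):
    """Cross-entropy cost:
    J(w) = sum_i [ -y_i * log(phi(z_i)) - (1 - y_i) * log(1 - phi(z_i)) ]."""
    z = X.dot(w) + b
    phi = 1.0 / (1.0 + np.exp(-z))       # sigmoid activation
    phi = np.clip(phi, eps, 1.0 - eps)   # guard against log(0)
    return (-y * np.log(phi) - (1.0 - y) * np.log(1.0 - phi)).sum()
```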

