In this post, you will learn about logistic regression terminology, along with quiz / practice questions. For machine learning engineers or data scientists wanting to test their understanding of logistic regression or preparing for interviews, these concepts and the related quiz questions and answers will come in handy. Here is a related post I published earlier: 30 Logistic regression interview practice questions. Here are some of the questions and answers discussed in this post:
The following are some of the different names / terms used:
Logistic regression is an algorithm in which the logarithm of the odds of an event occurring (Class = 1 in the case of binary classification) is expressed as a linear combination of one or more features and their coefficients. In other words, logistic regression models the relationship between the log-odds of an event happening (dependent variable) and one or more independent variables. Mathematically speaking, this is what it looks like:
[latex]
Log(\frac{P}{1-P}) =w_0*1 + w_1*x_1 + w_2*x_2 + w_3*x_3 + … + w_n*x_n
[/latex]
In the above mathematical equation, P denotes the probability that the event will happen. [latex]\frac{P}{1-P}[/latex] denotes the odds of the event occurring. The equation can be read as follows:
For every 1-unit increase in the value of [latex]x_n[/latex], the log-odds of the event happening increase by [latex]w_n[/latex] units; equivalently, since the natural logarithm is used, the odds of the event happening are multiplied by [latex]e^{w_n}[/latex].
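As a quick numeric sketch of this interpretation, the snippet below assumes a hypothetical fitted coefficient of 0.7 for one feature and converts it into an odds multiplier:

```python
import math

# Hypothetical fitted coefficient w_1 for feature x_1 (illustrative value)
w_1 = 0.7

# A 1-unit increase in x_1 raises the log-odds by w_1,
# which multiplies the odds by e^(w_1)
odds_multiplier = math.exp(w_1)
print(round(odds_multiplier, 3))  # ≈ 2.014, i.e., the odds roughly double
```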
It is called logistic regression because the probability of the event occurring (labeled as 1) can be expressed as a logistic function, such as the following:
[latex]
P = \frac{1}{1 + e^{-Z}}
[/latex]
In the above equation, Z is a linear combination of the independent variables and their coefficients: [latex]Z = \beta_0 + \beta_1*x_1 + … + \beta_n*x_n[/latex]
Logistic regression is also called logit regression because the dependent variable can be termed the logit of the probability of the event happening (Class = 1). The logit of a probability is simply the logarithm of the odds of the event happening.
[latex]logit(P(Y=1)) = \log\left(\frac{P(Y=1)}{1-P(Y=1)}\right)[/latex]
The logistic function is the inverse of the logit function, which is why it is sometimes referred to as the inverse-logit.
The logistic function is a sigmoid function that takes any real value as input and outputs a value between 0 and 1. Here is what the equation looks like:
[latex]
\sigma(z) = \frac{1}{1 + exp(-z)}
[/latex]
In the above equation, exp represents the exponential function (e). In the case of logistic regression, z represents the logit of the probability of the event happening (the log-odds), and [latex]\sigma(z)[/latex] represents the probability of the event happening.
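A minimal sketch of the sigmoid function described above, showing that it maps log-odds to probabilities in (0, 1):

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: maps any real z to a value in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Log-odds of 0 correspond to a probability of exactly 0.5
print(sigmoid(0.0))   # 0.5
# Large positive / negative log-odds push the probability toward 1 / 0
print(sigmoid(6.0))   # ~0.9975
print(sigmoid(-6.0))  # ~0.0025
```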
Training a logistic regression model means modeling the dependent random variable Y as 1 or 0 (in the case of binary classification) given the independent variables. In other words, it means approximating a mathematical function that outputs the probability of an event happening as a function of the independent variables. The goal is to find the coefficients of the independent variables of the model.
The objective function used to estimate the coefficients is the likelihood function, which represents the likelihood of the observed data given the parameters. The optimization is performed using techniques such as gradient descent, by maximizing the log-likelihood or, equivalently, minimizing the negative log-likelihood.
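The fitting procedure just described can be sketched in a few lines of NumPy. This is a toy illustration, not a production implementation: the data, learning rate, and iteration count are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature; labels generated from a noisy threshold rule (assumption)
X = rng.normal(size=(200, 1))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)
Xb = np.hstack([np.ones((200, 1)), X])   # prepend an intercept column

w = np.zeros(2)                          # [intercept, slope]
lr = 0.1                                 # learning rate (assumed)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))    # predicted P(Y=1) via the logistic function
    grad = Xb.T @ (p - y) / len(y)       # gradient of the mean negative log-likelihood
    w -= lr * grad                       # gradient descent step

print(w)  # slope should come out clearly positive for this data
```

The gradient `X^T (p - y) / n` is the standard derivative of the averaged negative log-likelihood for logistic regression, which is what makes this loop a maximum-likelihood fit.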
The following are different types of logistic regression models:
The following are different implementations of Logistic regression in Scikit-learn (Sklearn) in Python:
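As a minimal usage sketch of Scikit-learn's `LogisticRegression` estimator (assuming scikit-learn is installed; the dataset here is synthetic, generated purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (assumption for the example)
X, y = make_classification(n_samples=500, n_features=5, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

clf = LogisticRegression()            # L2 penalty with the lbfgs solver by default
clf.fit(X_tr, y_tr)

print(clf.predict_proba(X_te[:3]))    # P(Y=0), P(Y=1) via the logistic function
print(clf.score(X_te, y_te))          # mean accuracy on the held-out split
```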
Regularization in logistic regression is about constraining the values of the coefficients of the different independent variables to achieve objectives such as the following:
The following are the different types of regularization supported in logistic regression:
[latex]
\min_{w, c} \|w\|_1 + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1).
[/latex]
[latex]
\min_{w, c} \frac{1}{2}w^T w + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1) .
[/latex]
[latex]
\min_{w, c} \frac{1 - \rho}{2}w^T w + \rho \|w\|_1 + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1),
[/latex]
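The three penalties in the objectives above (L1, L2, and elastic-net, where [latex]\rho[/latex] corresponds to `l1_ratio`) can be selected in Scikit-learn via the `penalty` and `solver` parameters. A hedged sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data for illustration only
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
l2 = LogisticRegression(penalty="l2", C=0.5).fit(X, y)
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.5, max_iter=5000).fit(X, y)

# L1 tends to drive some coefficients exactly to zero (sparsity),
# whereas L2 only shrinks them toward zero
print((l1.coef_ == 0).sum(), (l2.coef_ == 0).sum())
```

Note that in Scikit-learn, C is the inverse of regularization strength: smaller C means stronger regularization.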
Here are guidelines on when to use each type of regularization: