# Category Archives: Data Science

## Python – How to Plot Learning Curves of Classifier

In this post, you will learn a technique for plotting the learning curve of a machine learning classification model. As a data scientist, you will find the Python code example very handy. This post uses plot_learning_curves from the mlxtend.plotting module of the mlxtend package, which was created by Dr. Sebastian Raschka. Let's train a Perceptron model using the Iris data from sklearn.datasets. The accuracy of the model comes out to 0.956, or 95.6%. Next, we want to see how the learning went. To do that, we will use plot_learning_curves from mlxtend.plotting. Here is a post on how to install mlxtend with Anaconda. The following …
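
The post's exact code is truncated above; as a minimal sketch of the same idea, here is a comparable version using scikit-learn's `learning_curve` instead of mlxtend's `plot_learning_curves` (the function the post itself uses):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)
# Score the model on increasing fractions of the training data
sizes, train_scores, val_scores = learning_curve(
    Perceptron(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, shuffle=True, random_state=1)
print(sizes)                       # absolute training-set sizes used
print(train_scores.mean(axis=1))   # mean training accuracy per size
print(val_scores.mean(axis=1))     # mean cross-validation accuracy per size
```

Plotting the two mean-accuracy arrays against `sizes` gives the same learning-curve picture that `plot_learning_curves` renders for you.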

Posted in Data Science, Machine Learning, Python.

## Infographics for Model & Algorithm Selection & Evaluation

This is a short post created for quick reference on techniques that can be used for model evaluation & selection and for model and algorithm comparison. It will be very helpful both for aspiring data scientists beginning to learn machine learning and for those with advanced data science skills. The image has been taken from the blog post Comparing the performance of machine learning models and algorithms using statistical tests and nested cross-validation, authored by Dr. Sebastian Raschka. The diagram prescribes what needs to be done in each of the following areas for small and large datasets. Very helpful, indeed. Model evaluation Model selection Model and algorithm comparison …

Posted in AI, Data Science, Machine Learning.

## Feature Scaling & Stratification for Model Performance (Python)

In this post, you will learn how to improve machine learning model performance using techniques such as feature scaling and stratification. The following topics are covered, with the concepts explained using Python code samples: What is feature scaling and why is it needed? What is stratification? Training a Perceptron model without feature scaling and stratification Training a Perceptron model with feature scaling Training a Perceptron model with feature scaling and stratification What is Feature Scaling and Why is it needed? Feature scaling is a technique for standardizing the features present in the data to a fixed range. This is done when the data consists of features of varying …
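
A minimal sketch of stratification plus feature scaling on the Iris data (the split ratio and random seed are illustrative, not necessarily the post's):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
# Stratified split keeps the class proportions identical in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)
print(np.bincount(y_train))  # class counts remain balanced

# Standardize using statistics of the training set only, then apply to both sets
sc = StandardScaler().fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
```

Fitting the scaler on the training set alone avoids leaking test-set statistics into training.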

Posted in AI, Data Science, Machine Learning.

## Python – Improve Model Performance using Feature Scaling

In this post, you will learn about a simple technique, namely feature scaling, that you can use to improve machine learning models. The models will be trained using the Perceptron (single-layer neural network) classifier. First and foremost, let's quickly understand what feature scaling is and why one needs it. What is Feature Scaling and Why does one need it? Feature scaling is a technique to standardize the independent features present in the data to a fixed range. It is performed when the dataset contains features that vary highly in magnitudes, units, and ranges. The following are different kinds of scaling: Min-max scaling: The input value is scaled to the range of …
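
The scaling variants mentioned above can be sketched with scikit-learn's built-in scalers (the toy array is illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Two features with very different magnitudes
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Min-max scaling maps each feature to the range [0, 1]
X_mm = MinMaxScaler().fit_transform(X)

# Standardization centers each feature at 0 with unit variance
X_std = StandardScaler().fit_transform(X)
print(X_mm)
print(X_std)
```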

Posted in AI, Data Science, Machine Learning.

## How to use Sklearn Datasets For Machine Learning

In this post, you will learn how to use Sklearn datasets for training machine learning models. Here is a list of the different datasets available as part of sklearn.datasets: Iris (Iris plants – Classification) Boston (Boston house prices – Regression) Wine (Wine recognition – Classification) Breast Cancer (Breast cancer Wisconsin diagnostic – Classification) Digits (Optical recognition of handwritten digits – Classification) Linnerud (Linnerud dataset – Regression) Diabetes (Diabetes – Regression) The following command helps you load any of the datasets: All of the datasets come with the following and are intended for use with supervised learning: Data (to be used for …
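
A short sketch of loading one of these datasets; the attributes shown are the standard fields of the Bunch object each loader returns:

```python
from sklearn import datasets

# Each loader returns a Bunch with data, target, feature names, and a description
iris = datasets.load_iris()
print(iris.data.shape)      # feature matrix: 150 samples, 4 features
print(iris.target.shape)    # class labels for each sample
print(iris.feature_names)   # names of the 4 features
```

Swapping `load_iris` for `load_wine`, `load_digits`, etc. loads the other datasets the same way.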

Posted in Data Science, Machine Learning.

## Python – How to install mlxtend in Anaconda

In this post, you will quickly learn how to install the mlxtend Python package while working with an Anaconda Jupyter Notebook. Mlxtend (machine learning extensions) is a Python library of useful tools for day-to-day data science tasks. The library is created by Dr. Sebastian Raschka, an Assistant Professor of Statistics at the University of Wisconsin-Madison focusing on deep learning and machine learning research. Here are the instructions for installing it within Anaconda: Add a channel named conda-forge by clicking the Channels button and then the Add button. Open a command prompt and execute the following command: conda install mlxtend --channel conda-forge Once installed, launch a Jupyter Notebook and try importing the package. This should work …

Posted in Data Science, Machine Learning, Python.

## Python – Scatter Plot Different Classes

In this post, you will learn how to create scatter plots in Python that represent two or more classes when you are solving a machine learning classification problem. As you work on a classification problem, you want to understand whether the classes are linearly separable or non-linear; in other words, whether the classification problem is linear or non-linear. This, in turn, helps you decide what kind of machine learning classification algorithms you might want to use. In this post, you will learn how to use a scatter plot to identify whether two or more classes are linearly separable. You may want to check the what, when, and how of the scatter plot matrix, which can also be used to determine whether the data …
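
A minimal sketch of a per-class scatter plot on the Iris data; the marker choices and the non-interactive Agg backend are illustrative assumptions, not the post's exact code:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for scripts
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
fig, ax = plt.subplots()
# One scatter call per class so each class gets its own marker and legend entry
for label, marker in zip((0, 1, 2), ("o", "s", "^")):
    ax.scatter(X[y == label, 0], X[y == label, 1],
               marker=marker, label=f"class {label}")
ax.set_xlabel("sepal length (cm)")
ax.set_ylabel("sepal width (cm)")
ax.legend()
fig.savefig("iris_scatter.png")
```

If the classes form visually separable clusters, a linear classifier is a reasonable first choice.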

Posted in AI, Data Science, Machine Learning.

## Python DataFrame – Assign New Labels to Columns

In this post, you will get a code sample showing how to assign new labels to columns in Python when training machine learning models. This is very helpful when working with a classification machine learning problem. Many a time, the labels for the response or dependent variable are in text format, and all one wants is to assign a number such as 0, 1, 2, etc. instead of the text labels. Beginner-level data scientists will find this code very handy. We will look at the code for the dataset as represented in the diagram below: In the above code, you will see that the class labels are named very_low, Low, High, Middle …
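
A minimal sketch of mapping text class labels to integers with pandas; the column name `grade` and the exact integer codes are hypothetical, chosen only to mirror the labels mentioned above:

```python
import pandas as pd

# Hypothetical response column with text class labels, as in the post
df = pd.DataFrame({"grade": ["very_low", "Low", "High", "Middle", "Low"]})

# Map each text label to an integer code
label_map = {"very_low": 0, "Low": 1, "Middle": 2, "High": 3}
df["grade_code"] = df["grade"].map(label_map)
print(df)
```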

Posted in AI, Data Science, Machine Learning, News.

## Java Implementation for Rosenblatt Perceptron

In this post, you will learn about a Java implementation of the Rosenblatt Perceptron. The Rosenblatt Perceptron is the most simplistic implementation of a neural network; it is also called a single-layer neural network. The following diagram represents the Rosenblatt Perceptron. The post describes these key aspects of the implementation: Method for calculating the "net input" Activation function as a unit step function Prediction method Fitting the model Calculating the training & test error Method for calculating "Net Input" The net input is the weighted sum of the input features. The following represents the mathematical formula: $$Z = {w_0}{x_0} + {w_1}{x_1} + {w_2}{x_2} + \ldots + {w_n}{x_n}$$ In the above equation, w0, w1, w2, …
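
The post's implementation is in Java; as a language-neutral illustration of the net-input formula and the unit-step activation (not the post's code), here is a short Python sketch with illustrative weights and inputs:

```python
import numpy as np

# Net input: weighted sum of inputs, z = w0*x0 + w1*x1 + ... + wn*xn
# (x0 = 1 by convention, so w0 acts as the bias term)
def net_input(w, x):
    return np.dot(w, x)

# Unit-step activation: predict 1 if the net input >= 0, else -1
def predict(w, x):
    return np.where(net_input(w, x) >= 0.0, 1, -1)

w = np.array([-0.5, 1.0, 1.0])   # illustrative weights (w0 is the bias)
x = np.array([1.0, 0.2, 0.4])    # x0 = 1 prepended to the two features
print(net_input(w, x))           # -0.5 + 0.2 + 0.4 = 0.1
print(predict(w, x))             # 0.1 >= 0, so the prediction is 1
```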

Posted in Data Science, Machine Learning.

## Difference between Adaline and Logistic Regression

In this post, you will understand the key differences between Adaline (Adaptive Linear Neuron) and Logistic Regression: Activation function Cost function Difference in Activation Function The primary difference is the activation function. In Adaline, the activation function is a linear activation function, while in logistic regression it is the sigmoid activation function. The diagram below represents the activation function for Adaline. The activation function for Adaline, also called the linear activation function, is the identity function, which can be represented as follows: $$\phi(W^TX) = W^TX$$ The diagram below represents the activation function for Logistic Regression. The activation function for Logistic Regression, also called the sigmoid activation function, is …
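
The two activation functions can be sketched side by side; this is a minimal illustration, not the post's code:

```python
import numpy as np

# Adaline's activation is the identity function: phi(z) = z
def linear_activation(z):
    return z

# Logistic regression's activation is the sigmoid: phi(z) = 1 / (1 + e^(-z))
def sigmoid_activation(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 2.0])
print(linear_activation(z))    # unchanged: [-2.  0.  2.]
print(sigmoid_activation(z))   # squashed into (0, 1), with sigmoid(0) = 0.5
```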

Posted in AI, Data Science, Machine Learning.

## Logistic Regression: Sigmoid Function Python Code

In this post, you will learn about the following: How to represent the probability that an event will take place given the associated features (attributes / independent features) Sigmoid function Python code Probability as Sigmoid Function Below is the logit function, representing the association between the probability that an event will occur and the independent features. $$Logit Function = \log(\frac{P}{1-P}) = {w_0} + {w_1}{x_1} + {w_2}{x_2} + \ldots + {w_n}{x_n}$$ $$Logit Function = \log(\frac{P}{1-P}) = W^TX$$ Solving for P gives the sigmoid function: $$P = \frac{1}{1 + e^{-W^TX}}$$ Python Code for Sigmoid Function Executing the above code results in the following plot: Pay attention to some of the …
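
The post's sigmoid code is truncated above; here is a minimal sketch that defines the sigmoid and produces a similar plot (the Agg backend and output file name are illustrative assumptions):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for scripts
import matplotlib.pyplot as plt

# Sigmoid: P = 1 / (1 + e^(-z)), where z = W^T X
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.arange(-7.0, 7.0, 0.1)
plt.plot(z, sigmoid(z))
plt.axvline(0.0, color="k")   # sigmoid(0) = 0.5, the decision midpoint
plt.xlabel("z")
plt.ylabel("$\\phi(z)$")
plt.savefig("sigmoid.png")
```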

Posted in AI, Data Science, Machine Learning.

## Three Key Challenges of Machine Learning Models

In this post, you will learn about the three most important challenges, or guiding principles, to keep in mind while building machine learning models: The conflict between simplicity and accuracy Dimensionality – curse or blessing? The multiplicity of good models The Conflict between Simplicity and Accuracy Before starting to train one or more machine learning models, one needs to decide whether to go for a simple model or to focus on model accuracy. The simplicity of models can be achieved by using algorithms that help …

Posted in AI, Data Science, Machine Learning.

## Difference between True Error & Sample Error

In this post, you will learn about the following topics in relation to evaluating a discrete-valued hypothesis when learning hypotheses (building models) using different machine learning algorithms. A discrete-valued hypothesis can also be understood as a classification model built using machine learning algorithms and used to classify an instance drawn at random. What is the true error, or true risk? What is the sample error, or empirical risk? The difference between true error and sample error How to estimate the true error If you are a data scientist, you will want to understand the concepts behind the true error and sample error. These concepts are key to understand when evaluating …
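
The sample error (empirical risk) is simply the fraction of sampled instances the hypothesis misclassifies; a minimal sketch with illustrative labels:

```python
# Sample error: fraction of the sample S that hypothesis h misclassifies
def sample_error(y_true, y_pred):
    errors = sum(t != p for t, p in zip(y_true, y_pred))
    return errors / len(y_true)

# Illustrative true labels and hypothesis predictions on a sample of 10 instances
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(sample_error(y_true, y_pred))  # 2 of 10 misclassified -> 0.2
```

The true error, by contrast, is the probability of misclassifying an instance drawn at random from the whole distribution, which can only be estimated from quantities like this.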

Posted in AI, Data Science, Machine Learning.

## Python – How to Create Dataframe using Numpy Array

In this post, you will get a code sample for creating a Pandas DataFrame from a NumPy array in Python. Step 1: Load the Python packages Step 2: Create a NumPy array This is how the array looks: Step 3: Create a transpose of the NumPy array This is how the transpose looks: Step 4: Create a Pandas DataFrame Print the data using the head command, such as df.head(). This is how the data frame looks: In case you would like to quickly plot the data and look for relationships, here are the commands using the seaborn package: The above prints the following plot:
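
The post's code is truncated above; a minimal sketch of the four steps (the array values and column names are illustrative):

```python
import numpy as np
import pandas as pd

# Step 2: an array with one row per feature
arr = np.array([[1, 2, 3, 4],
                [10, 20, 30, 40]])

# Step 3: transpose so each feature becomes a column
arr_t = arr.T

# Step 4: build the DataFrame and inspect it
df = pd.DataFrame(arr_t, columns=["x", "y"])
print(df.head())
```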

Posted in AI, Data Science, Machine Learning.

## Hypergeometric Distribution Explained with 10+ Examples

In this post, we will learn about the hypergeometric distribution with 10+ examples. The following topics will be covered: What is the Hypergeometric Distribution? 10+ Examples of the Hypergeometric Distribution If you are an aspiring data scientist looking to understand the hypergeometric distribution better, this post should be very helpful. The binomial distribution can be considered a very good approximation of the hypergeometric distribution as long as the sample consists of 5% or less of the population. One needs a good understanding of the binomial distribution in order to understand the hypergeometric distribution well. I would recommend you take a look at some of my related posts on …
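
A quick numeric sketch with `scipy.stats.hypergeom` (the urn parameters are illustrative, not one of the post's examples):

```python
from scipy.stats import hypergeom

# Population of N=50 items containing K=10 "successes";
# draw n=5 items without replacement and ask for P(exactly 2 successes)
N, K, n = 50, 10, 5
p = hypergeom.pmf(2, N, K, n)
print(round(p, 4))
```

Sampling without replacement is what distinguishes this from the binomial case; when the sample is 5% or less of the population, the two distributions give nearly the same probabilities.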