In this post, you will learn the concepts of Adaline (ADAptive LInear NEuron), a machine learning algorithm, along with a Python example. As with the Perceptron, it is important to understand Adaline because it forms a foundation for learning about neural networks. The concepts behind the Perceptron and Adaline are useful for understanding how gradient descent can be used to learn weights which, when combined with input signals, produce predictions based on a unit step function output.
Adaline, like the Perceptron, mimics a neuron in the human brain. You may want to read one of my related posts on the Perceptron – Perceptron explained using Python example. Adaline is also referred to as a single-layer neural network. Here is the diagram of Adaline:
The following represents the working of the Adaline machine learning algorithm based on the above diagram:

1. The net input is computed as the weighted sum of the input signals plus a bias term.
2. The net input is fed to the activation function; in Adaline this is the identity function, so the activation output equals the net input.
3. The weights are learned with gradient descent by minimizing the error between the actual labels and the activation output.
4. The final prediction is made by applying a unit step function to the activation output.
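To make these steps concrete, here is a minimal sketch of a single pass; the data, weights, and learning rate below are made-up numbers, chosen purely for illustration:

import numpy as np

# Two made-up training examples with two features each; labels are 0/1
X = np.array([[1.0, 2.0],
              [2.0, 1.0]])
y = np.array([1, 0])

w = np.array([0.0, 0.0])   # feature weights, initialized to zero
b = 0.0                    # bias
learning_rate = 0.01

# Step 1: net input is the weighted sum of inputs plus bias
net_input = X.dot(w) + b

# Step 2: identity activation, so output equals net input
output = net_input

# Step 3: batch gradient descent update using errors over all examples
errors = y - output
w = w + learning_rate * X.T.dot(errors)
b = b + learning_rate * errors.sum()

# Step 4: unit step prediction with the updated weights
predictions = np.where(X.dot(w) + b >= 0.0, 1, 0)
print(w, b, predictions)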
The Adaline algorithm explained in the previous section with the help of the diagram is illustrated further below with Python code. Here are the algorithm steps and the related Python implementation:
'''
Net input is the sum of weighted input signals
'''
def net_input(self, X):
    weighted_sum = np.dot(X, self.coef_[1:]) + self.coef_[0]
    return weighted_sum
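A point worth noting: np.dot handles both a whole matrix of examples and a single example, which is why the same net_input method can serve batch training in fit and per-row prediction in score. A quick check with made-up weights:

import numpy as np

coef_ = np.array([0.1, 0.5, -0.2])   # made-up [bias, w1, w2]

X_batch = np.array([[1.0, 2.0],
                    [3.0, 4.0]])     # two examples
x_single = np.array([1.0, 2.0])      # one example

print(np.dot(X_batch, coef_[1:]) + coef_[0])   # net inputs for both examples
print(np.dot(x_single, coef_[1:]) + coef_[0])  # scalar net input for one example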
'''
The activation function is fed the net input. As the activation function is
an identity function, the output from the activation function is the same as
the input to the function.
'''
def activation_function(self, X):
    return X
'''
Prediction is made on the basis of the output of the activation function
'''
def predict(self, X):
    return np.where(self.activation_function(self.net_input(X)) >= 0.0, 1, 0)
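The unit step is implemented with np.where: any activation at or above the 0.0 threshold maps to class 1, everything below to class 0. A quick illustration with made-up activations:

import numpy as np

activations = np.array([-0.3, 0.0, 0.7])
print(np.where(activations >= 0.0, 1, 0))   # prints [0 1 1]

Note that this implementation keeps the classic Adaline threshold of 0.0; when the targets are 0/1, as in the example later in this post, some variants threshold at 0.5 instead.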
'''
Batch Gradient Descent

1. Weights are updated considering all training examples.
2. Learning of weights can continue for multiple iterations.
3. Learning rate needs to be defined.
'''
def fit(self, X, y):
    rgen = np.random.RandomState(self.random_state)
    self.coef_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
    for _ in range(self.n_iterations):
        activation_function_output = self.activation_function(self.net_input(X))
        errors = y - activation_function_output
        self.coef_[1:] = self.coef_[1:] + self.learning_rate * X.T.dot(errors)
        self.coef_[0] = self.coef_[0] + self.learning_rate * errors.sum()
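The X.T.dot(errors) term in fit comes from the gradient of the sum-of-squared-errors cost J(w) = 0.5 * sum((y - output)**2): with an identity activation, the gradient with respect to the weights is -X.T.dot(errors), so stepping against the gradient yields the update above. Here is a minimal sketch that verifies this with a finite-difference check on made-up data:

import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(5, 3)
y = rng.rand(5)
w = rng.rand(3)

# Analytic term used in the Adaline weight update
errors = y - X.dot(w)
analytic = X.T.dot(errors)

# Finite-difference approximation of the negative cost gradient
cost = lambda v: 0.5 * np.sum((y - X.dot(v)) ** 2)
eps = 1e-6
numeric = np.zeros_like(w)
for j in range(len(w)):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[j] += eps
    w_minus[j] -= eps
    numeric[j] = -(cost(w_plus) - cost(w_minus)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))   # prints True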
Here is the entire Python code of the custom Adaline implementation:
import numpy as np

class CustomAdaline(object):

    def __init__(self, n_iterations=100, random_state=1, learning_rate=0.01):
        self.n_iterations = n_iterations
        self.random_state = random_state
        self.learning_rate = learning_rate

    '''
    Batch Gradient Descent

    1. Weights are updated considering all training examples.
    2. Learning of weights can continue for multiple iterations.
    3. Learning rate needs to be defined.
    '''
    def fit(self, X, y):
        rgen = np.random.RandomState(self.random_state)
        self.coef_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
        for _ in range(self.n_iterations):
            activation_function_output = self.activation_function(self.net_input(X))
            errors = y - activation_function_output
            self.coef_[1:] = self.coef_[1:] + self.learning_rate * X.T.dot(errors)
            self.coef_[0] = self.coef_[0] + self.learning_rate * errors.sum()

    '''
    Net input is the sum of weighted input signals
    '''
    def net_input(self, X):
        weighted_sum = np.dot(X, self.coef_[1:]) + self.coef_[0]
        return weighted_sum

    '''
    The activation function is fed the net input. As the activation function
    is an identity function, the output from the activation function is the
    same as the input to the function.
    '''
    def activation_function(self, X):
        return X

    '''
    Prediction is made on the basis of the output of the activation function
    '''
    def predict(self, X):
        return np.where(self.activation_function(self.net_input(X)) >= 0.0, 1, 0)

    '''
    Model score is calculated based on comparison of
    expected value and predicted value
    '''
    def score(self, X, y):
        misclassified_data_count = 0
        for xi, target in zip(X, y):
            output = self.predict(xi)
            if target != output:
                misclassified_data_count += 1
        total_data_count = len(X)
        self.score_ = (total_data_count - misclassified_data_count) / total_data_count
        return self.score_
Here is the Python code which trains a breast cancer classification model using the Adaline implementation explained in the previous section:
from sklearn import datasets
from sklearn.model_selection import train_test_split

#
# Load the data set
#
bc = datasets.load_breast_cancer()
X = bc.data
y = bc.target
#
# Create training and test split
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
#
# Instantiate CustomAdaline
#
adaline = CustomAdaline(n_iterations=10)
#
# Fit the model
#
adaline.fit(X_train, y_train)
#
# Score the model on the test and training data
#
adaline.score(X_test, y_test), adaline.score(X_train, y_train)
The score of the model on the test data set turns out to be around 0.63.
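Gradient descent is sensitive to feature scale, and the breast cancer features span very different ranges, which is one likely reason the score is modest. A common remedy, sketched below as an extension rather than part of the original model, is to standardize the features with sklearn's StandardScaler before fitting; the score you get will depend on the split and hyperparameters:

from sklearn.preprocessing import StandardScaler

# Standardize features so each has zero mean and unit variance
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)

adaline_std = CustomAdaline(n_iterations=10)
adaline_std.fit(X_train_std, y_train)
print(adaline_std.score(X_test_std, y_test))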
Here is the summary of what you learned in this post in relation to the Adaline algorithm and its Python implementation:

1. Adaline, like the Perceptron, mimics a neuron and is also referred to as a single-layer neural network.
2. The net input is the weighted sum of the input signals, and the activation function is the identity function.
3. Weights are learned with batch gradient descent over multiple iterations using a configurable learning rate.
4. Predictions are made by applying a unit step function to the activation output.
5. The custom Adaline implementation was trained and scored on the sklearn breast cancer dataset.