In this post, you will learn about a Java implementation of the Rosenblatt Perceptron.
The Rosenblatt Perceptron is the simplest form of a neural network. It is also called a single-layer neural network. The following diagram represents the Rosenblatt Perceptron:
The implementation described in this post covers the following key aspects: computing the net input, the activation function, the prediction method, and fitting the model.
The net input is the weighted sum of the input features. The following is the mathematical formula:
$$Z = {w_0}{x_0} + {w_1}{x_1} + {w_2}{x_2} + … + {w_n}{x_n}$$
In the above equation, w0, w1, w2, …, wn represent the weights for the features x0, x1, x2, …, xn respectively. Note that x0 is fixed at the value 1 and is used to represent the bias.
The above could be achieved using the following Java code:
public double netInput(double[] input) {
    double netInput = 0;
    netInput += this.weights[0];
    for (int i = 0; i < input.length; i++) {
        netInput += this.weights[i + 1] * input[i];
    }
    return netInput;
}
For the Rosenblatt Perceptron, the activation function is a unit step function, which looks like the following:
In the above, the net input, represented as Z, also includes the bias term, depicted as w0x0 in the following equation:
$$Z = {w_0}{x_0} + {w_1}{x_1} + {w_2}{x_2} + … + {w_n}{x_n}$$
Mathematically, the activation function gets represented as the following unit step function:
$$ \phi(Z) = \begin{cases} 1 & \text{if } Z \geq 0 \\ -1 & \text{if } Z < 0 \end{cases} $$
The following code represents the activation function:
private int activationFunction(double[] input) {
    double netInput = this.netInput(input);
    if (netInput >= 0) {
        return 1;
    }
    return -1;
}
For the Rosenblatt Perceptron, the prediction method is the same as the activation function. If the weighted sum of the input is greater than or equal to 0, the class label is assigned as 1; otherwise, it is assigned as -1.
The following code represents the prediction method:
public int predict(double[] input) {
    return activationFunction(input);
}
Fitting the model requires the following to happen:
- Initialize the weights with small random values.
- In each iteration (epoch), compute the activation function output for every training record and update the weights using the learning rate and the prediction error.
- Record the number of misclassified training records in each epoch.
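The weight update in the fitting loop follows the classic perceptron learning rule. Writing η for the learning rate, y(j) for the true label of the j-th training record, and ŷ(j) for the activation-function output (this notation is introduced here for clarity and is not in the original post), the update applied per record is:

$$\Delta w_k = \eta \, \left( y^{(j)} - \hat{y}^{(j)} \right) x_k^{(j)}, \qquad \Delta w_0 = \eta \, \left( y^{(j)} - \hat{y}^{(j)} \right)$$

When a record is classified correctly, the difference is zero and the weights are left unchanged.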
The following code represents the above algorithm:
public void fit(double[][] x, double[] y) {
    //
    // Initialize the weights
    //
    Random rd = new Random();
    this.weights = new double[x[0].length + 1];
    for (int i = 0; i < this.weights.length; i++) {
        this.weights[i] = rd.nextDouble();
    }
    //
    // Fit the model
    //
    for (int i = 0; i < this.noOfIteration; i++) {
        int errorInEachIteration = 0;
        for (int j = 0; j < x.length; j++) {
            //
            // Calculate the output of activation function for each input
            //
            double activationFunctionOutput = activationFunction(x[j]);
            this.weights[0] += this.learningRate * (y[j] - activationFunctionOutput);
            for (int k = 0; k < x[j].length; k++) {
                //
                // Calculate the delta weight which needs to be updated
                // for each feature
                //
                double deltaWeight = this.learningRate * (y[j] - activationFunctionOutput) * x[j][k];
                this.weights[k + 1] += deltaWeight;
            }
            //
            // Calculate error for each training data
            //
            if (y[j] != this.predict(x[j])) {
                errorInEachIteration++;
            }
        }
        //
        // Update the error in each Epoch
        //
        this.trainingErrors.add(errorInEachIteration);
    }
}
Once the model is fit, one should be able to calculate the training and test errors. The training error is the number of misclassifications recorded during training, tracked for each iteration (epoch). The test error is the number of misclassifications on the test data.
In the fit method above, note how errorInEachIteration is incremented for every training record that gets misclassified, and how it is appended to trainingErrors at the end of each iteration.
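For reference, the fragments above can be assembled into one self-contained, runnable class. Note that the constructor signature, the testError helper, the fixed random seed, and the tiny AND-style dataset in main are illustrative assumptions and not part of the original post; the method bodies follow the code shown above, lightly condensed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Perceptron {

    private double[] weights;
    private final double learningRate;
    private final int noOfIteration;
    private final List<Integer> trainingErrors = new ArrayList<>();

    // Assumed constructor; the post does not show one
    public Perceptron(double learningRate, int noOfIteration) {
        this.learningRate = learningRate;
        this.noOfIteration = noOfIteration;
    }

    // Net input: bias weight plus the weighted sum of the features
    public double netInput(double[] input) {
        double netInput = this.weights[0];
        for (int i = 0; i < input.length; i++) {
            netInput += this.weights[i + 1] * input[i];
        }
        return netInput;
    }

    // Unit step function applied to the net input
    private int activationFunction(double[] input) {
        return this.netInput(input) >= 0 ? 1 : -1;
    }

    public int predict(double[] input) {
        return activationFunction(input);
    }

    public void fit(double[][] x, double[] y) {
        // Fixed seed so repeated runs behave the same (an assumption for reproducibility)
        Random rd = new Random(1);
        this.weights = new double[x[0].length + 1];
        for (int i = 0; i < this.weights.length; i++) {
            this.weights[i] = rd.nextDouble();
        }
        for (int i = 0; i < this.noOfIteration; i++) {
            int errorInEachIteration = 0;
            for (int j = 0; j < x.length; j++) {
                double output = activationFunction(x[j]);
                // Perceptron learning rule: bias first, then one delta per feature
                this.weights[0] += this.learningRate * (y[j] - output);
                for (int k = 0; k < x[j].length; k++) {
                    this.weights[k + 1] += this.learningRate * (y[j] - output) * x[j][k];
                }
                if (y[j] != this.predict(x[j])) {
                    errorInEachIteration++;
                }
            }
            this.trainingErrors.add(errorInEachIteration);
        }
    }

    // Assumed helper: count misclassifications on held-out data
    public int testError(double[][] xTest, double[] yTest) {
        int errors = 0;
        for (int i = 0; i < xTest.length; i++) {
            if (yTest[i] != this.predict(xTest[i])) {
                errors++;
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        // Tiny linearly separable dataset: logical AND with -1/1 labels
        double[][] x = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] y = {-1, -1, -1, 1};
        Perceptron p = new Perceptron(0.1, 500);
        p.fit(x, y);
        // The data is linearly separable, so the perceptron converges
        System.out.println("Misclassifications on training data: " + p.testError(x, y));
    }
}
```

Because the AND dataset is linearly separable, the perceptron convergence theorem guarantees the weight updates stop after a finite number of mistakes, so with enough epochs the error on the training data drops to zero.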
The source code can be found on this GitHub page for the Perceptron.