Artificial Neuron and Perceptron: Fundamentals of Neural Networks

Artificial Neurons

The building blocks of artificial neural networks are artificial neurons, also known as perceptrons.
They mimic the behaviour of biological neurons found in the human brain.

Similarity between an artificial neuron and a biological neuron


Input: x1, x2, ..., xn are the input signals that enter the neuron.
Weight: w1, w2, ..., wn are the weights associated with the inputs. A weight determines how important its input is.
Bias (b): b is an extra value that helps control whether the neuron activates or not.
Summation: The sum of each input multiplied by its weight, plus the bias.
Activation: The summation is transformed into an output value or prediction using a transfer function, such as the step transfer function.
Output: The result of the activation function, which is sent to the next neuron or taken as the final result.
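The steps above can be sketched as a single function; the names and example values here are illustrative, not part of any standard library:

```python
# A minimal sketch of one artificial neuron with a step transfer function.
def neuron_output(inputs, weights, bias):
    # Summation: weighted sum of inputs plus the bias
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Step transfer function: output 1.0 if activation is non-negative
    return 1.0 if activation >= 0.0 else 0.0

# Example: two inputs, equal weights, a small negative bias
print(neuron_output([1.0, 0.0], [0.5, 0.5], -0.4))  # -> 1.0
print(neuron_output([0.0, 0.0], [0.5, 0.5], -0.4))  # -> 0.0
```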

Algorithm of the Perceptron:

The Perceptron is inspired by the information processing of a single neural cell called a neuron.

A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body.

In a similar way, the Perceptron receives input signals from examples of training data that we weight and combine in a linear equation called the activation.

activation = sum(weight_i * x_i) + bias
The activation is then transformed into an output value or prediction using a transfer function, such as the step transfer function.

prediction = 1.0 if activation >= 0.0 else 0.0
In this way, the Perceptron is a classification algorithm for problems with two classes (0 and 1) where a linear equation (like a line or hyperplane) can be used to separate the two classes.
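The two formulas above can be combined into a predict function. In this sketch the bias is stored as the first weight, a common convention but an assumption here, not something fixed by the algorithm:

```python
def predict(row, weights):
    # weights[0] holds the bias; the remaining weights pair with the inputs
    activation = weights[0]
    for i in range(len(row)):
        activation += weights[i + 1] * row[i]
    # Step transfer function turns the activation into a class prediction
    return 1.0 if activation >= 0.0 else 0.0

# Points on either side of the line x1 + x2 = 1 (bias=-1, w1=1, w2=1)
print(predict([2.0, 0.5], [-1.0, 1.0, 1.0]))  # -> 1.0
print(predict([0.2, 0.3], [-1.0, 1.0, 1.0]))  # -> 0.0
```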

It is closely related to linear regression and logistic regression that make predictions in a similar way (e.g. a weighted sum of inputs).

The weights of the Perceptron algorithm must be estimated from your training data using stochastic gradient descent.
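A sketch of estimating the weights with stochastic gradient descent, using the standard Perceptron update rule w = w + learning_rate * (expected - predicted) * x. The dataset and the hyperparameter values are made up for illustration:

```python
def predict(row, weights):
    # weights[0] is the bias; step transfer function
    activation = weights[0] + sum(w * x for w, x in zip(weights[1:], row))
    return 1.0 if activation >= 0.0 else 0.0

def train_weights(dataset, learning_rate=0.1, n_epochs=10):
    # Each row is [x1, ..., xn, class_label]; weights[0] is the bias
    weights = [0.0] * len(dataset[0])
    for _ in range(n_epochs):
        for row in dataset:
            error = row[-1] - predict(row[:-1], weights)
            # Nudge the bias and each weight in the direction that reduces the error
            weights[0] += learning_rate * error
            for i, x in enumerate(row[:-1]):
                weights[i + 1] += learning_rate * error * x
    return weights

# Tiny linearly separable dataset (illustrative only)
data = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [2.0, 2.0, 1.0], [3.0, 1.0, 1.0]]
weights = train_weights(data)
```

Because the updates are applied one training example at a time, this is the stochastic (per-example) form of gradient descent; on linearly separable data the predictions stop changing once every row is classified correctly.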

Together, these neurons form a neural network that can solve difficult problems.
