Explain Back Propagation Algorithm with example.
Back Propagation Algorithm
Backpropagation is a neural network learning algorithm. It learns by adjusting the weights so that the network can predict the correct class label of the input. Backpropagation learns by iteratively processing a set of training samples, comparing the network's prediction for each sample with the actual known class label. For each training sample, the weights are modified so as to minimize the mean squared error between the network's prediction and the actual class.

These modifications are made in the "backward" direction, that is, from the output layer through each hidden layer down to the first hidden layer (hence the name backpropagation). Although convergence is not guaranteed in general, in practice the weights eventually converge and the learning process stops.

The algorithm begins by initializing the weights: the weights in the network are initialized to small random numbers (e.g. ranging from -1.0 to 1.0, or -0.5 to 0.5). Each unit also has a bias associated with it.
Input: D, a training data set with their associated class labels, and l, the learning rate (normally between 0.0 and 1.0).
Output: a trained neural network, i.e., a weight-adjusted neural network.
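As a worked example of one training iteration, the sketch below runs a single forward and backward pass for a tiny network with two inputs, one hidden unit, and one output unit, all with sigmoid activations. The specific weights, bias values, input sample, and learning rate are illustrative assumptions, not values from the text; the update rules (error term = output * (1 - output) * downstream error, weight += l * error * input) follow the standard backpropagation procedure described above.

```python
import math

def sigmoid(x):
    # Logistic activation used by each unit
    return 1.0 / (1.0 + math.exp(-x))

# --- Illustrative setup (assumed values, not from the source) ---
x1, x2 = 1.0, 0.0              # one training sample
target = 1.0                   # its known class label
w1, w2, b_h = 0.2, -0.3, 0.1   # input -> hidden weights and hidden bias
w3, b_o = 0.4, -0.2            # hidden -> output weight and output bias
lr = 0.5                       # learning rate l

# --- Forward pass: propagate the input through the network ---
h_out = sigmoid(w1 * x1 + w2 * x2 + b_h)   # hidden unit output
o_out = sigmoid(w3 * h_out + b_o)          # network prediction

# --- Backward pass: error terms flow from output back to hidden layer ---
# Output unit: derivative of sigmoid times (target - prediction)
err_o = o_out * (1 - o_out) * (target - o_out)
# Hidden unit: its error is the output error weighted by the connecting weight
err_h = h_out * (1 - h_out) * err_o * w3

# --- Weight and bias updates: w += l * error * input_to_that_weight ---
w3 += lr * err_o * h_out
b_o += lr * err_o
w1 += lr * err_h * x1
w2 += lr * err_h * x2
b_h += lr * err_h

print("prediction:", round(o_out, 4))
print("updated w3:", round(w3, 4))
```

Repeating this forward/backward cycle over all training samples, for many epochs, drives the prediction toward the target label; here a single pass already nudges w3 upward because the prediction fell short of the target.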