Backpropagation (backward propagation) is an important mathematical tool for improving the accuracy of predictions in data mining and machine learning. Backpropagation algorithms have practical applications in many areas of artificial intelligence (AI), including optical character recognition (OCR), natural language processing (NLP) and image processing.
Using backpropagation methods, a desired output is compared to the output a neural network actually produces, and the system is then tuned by adjusting connection weights to narrow the difference between the two as much as possible. The algorithm computes the gradient of the error with respect to each weight and applies gradient descent, and it resembles algorithmic work going back to mathematicians such as Gauss, Newton and Leibniz.
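The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: a single sigmoid neuron learns the logical OR function, with the learning rate, initial weights and training data all chosen for the example.

```python
import math

def sigmoid(x):
    # Squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative training data: inputs and desired outputs for logical OR.
inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
targets = [0.0, 1.0, 1.0, 1.0]

w1, w2, b = 0.1, -0.1, 0.0  # connection weights and bias (arbitrary start)
lr = 0.5                    # learning rate (chosen for the example)

def total_error():
    # Sum of squared differences between desired and achieved outputs.
    err = 0.0
    for (x1, x2), t in zip(inputs, targets):
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        err += 0.5 * (t - y) ** 2
    return err

err_before = total_error()

for _ in range(2000):
    for (x1, x2), t in zip(inputs, targets):
        # Forward pass: compute the achieved output.
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Backward pass: gradient of the squared error through the sigmoid.
        delta = (y - t) * y * (1.0 - y)
        # Gradient-descent step: adjust weights to narrow the difference.
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

err_after = total_error()
```

After training, the error between desired and achieved outputs has shrunk, which is exactly the tuning process the paragraph above describes; real networks repeat the same forward/backward pattern across many layers and weights.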
Backpropagation algorithms grew out of research in artificial intelligence in the 1960s and were developed as an outgrowth of feed-forward algorithms in neural networks. The difficulty of understanding exactly how changing weights and biases affected the overall behavior of a network was one factor that held back wider adoption of neural networks, arguably until the early 2000s, when computing power became sufficient to train them at useful scale.
While the popularity of the neural approach to AI ebbed and then found renewed currency, backpropagation has steadily grown in use. Along with classifiers such as Naïve Bayes and decision trees, the backpropagation algorithm has emerged as an important part of machine learning applications that involve predictive analytics.
(Video: Professor Geoffrey Hinton explains backpropagation.)