A network structure inspired by simplified models of biological neurons (brain cells). Neural networks are trained to "learn" by supervised and unsupervised techniques, and can be used to solve optimization and approximation problems, to classify patterns, and for combinations thereof.
Neural networks have many practical applications within the software realm.
An application of neural networks for supervised learning is training a network for optical character recognition or handwriting recognition. The network is trained on exemplars of characters, and given enough data forming a representative sample of the population, it can generalize to cases that were not encountered during training. Training a neural network in a supervised manner involves a learning algorithm that finds weights for the neurons which minimize the network's error at the task. Gradient descent is a common learning algorithm for adjusting the weights of a neural network. It is often accompanied by the backpropagation technique, which measures the contribution of each weight to the error signal and determines the gradients that guide the learning algorithm in adjusting each weight.
For an example of a backpropagation network in action, see the source of GNU Backgammon.
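As a toy illustration of the gradient-descent-with-backpropagation loop described above, here is a minimal sketch (not taken from GNU Backgammon or any particular library) of a two-layer network trained on the XOR function; the layer sizes, learning rate, and iteration count are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a small task that a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Mean squared error before training, for comparison.
loss0 = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

lr = 1.0  # learning-rate (step size) for gradient descent
for _ in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass (backpropagation): propagate the error signal
    # backwards to obtain each weight's contribution to the error.
    err = out - y                       # error signal at the output
    d_out = err * out * (1 - out)       # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)  # gradient at the hidden pre-activation

    # Gradient descent step: move each weight against its gradient.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

loss = float(np.mean((out - y) ** 2))
```

After training, `loss` should be well below `loss0`, showing that gradient descent has adjusted the weights to reduce the network's error at the task.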
A frequently used network topology in unsupervised learning is the Self-Organizing Map, due to Kohonen. These networks can be used for clustering data and, more generally, for providing a lower-dimensional representation of a higher-dimensional space.
See this CodeProject article for an application of the Self-Organizing Map to clustering images in order to find all of the unique faces.
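To make the idea concrete, here is a minimal sketch (assumed for illustration, not the article's code) of a Self-Organizing Map that maps two-dimensional data onto a one-dimensional line of units; the map size, decay schedules, and synthetic two-cluster data are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two separated clusters in the plane.
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(1.0, 0.1, (50, 2))])

n_units = 10
weights = rng.uniform(0, 1, (n_units, 2))  # one prototype vector per map unit
positions = np.arange(n_units)             # unit coordinates on the 1-D map

steps = 2000
for t in range(steps):
    lr = 0.5 * (1 - t / steps)                       # learning rate decays
    radius = max(1.0, (n_units / 2) * (1 - t / steps))  # neighbourhood shrinks

    x = data[rng.integers(len(data))]                # pick a random sample
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit

    # Pull the best-matching unit and its neighbours on the map toward
    # the sample; closeness is measured on the 1-D map, not in data space.
    influence = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
    weights += lr * influence[:, None] * (x - weights)

# Quantization error: average distance from each sample to its nearest unit.
qerr = float(np.mean([np.min(np.linalg.norm(weights - x, axis=1))
                      for x in data]))
```

After training, each data point can be summarized by the index of its best-matching unit, which is exactly the lower-dimensional representation mentioned above: a position on the 1-D map standing in for a point in the 2-D input space.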
Introductory Video
Neural Networks Demystified (Jupyter Notebooks)