An Investigation of Input Encodings for Recursive Auto Associative Neural Networks

01 January 1989

A neural network is a computer architecture loosely based on neuroscience, made up of nodes (neurons), which use an activation function to compute an output, and weighted links (synapses) connecting the nodes. In the human brain, some 10 billion neurons are connected in a massively parallel network. These neurons are slow, operating in milliseconds rather than nanoseconds, yet they perform computations that stagger the largest supercomputers. An artificial neural network attempts to capture some portion of this architecture, and the resulting computing ability, in current hardware and software. The nodes are connected in layers by the links: one set of nodes comprises an input layer, another comprises an output layer, and the rest comprise hidden layers. For a given value presented at the input layer, the network computes a value which appears at the output layer. Each node computes its own output from the outputs of all connected nodes, the weights of the connecting links, and an internal activation function.
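The per-node computation described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the sigmoid activation, the 2-3-1 layer sizes, and all weight values are arbitrary assumptions chosen for the example.

```python
import math

def sigmoid(x):
    # A common choice of activation function, squashing any real input to (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias=0.0):
    # A node's output: the activation function applied to the weighted
    # sum of the outputs of all nodes connecting into it.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

def layer_output(inputs, weight_matrix, biases):
    # One layer: every node computes its output from the previous layer's outputs.
    return [node_output(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny input -> hidden -> output network (sizes 2-3-1) with made-up weights.
x = [0.5, -1.0]
hidden = layer_output(x, [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]], [0.0, 0.1, -0.1])
y = layer_output(hidden, [[0.5, -0.6, 0.9]], [0.0])
```

Here `x` plays the role of the input layer's values, `hidden` is the single hidden layer, and `y` is the value appearing at the output layer.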