# Artificial Neuron Models

Computational neurobiologists have constructed very elaborate computer models of neurons in order to run detailed
simulations of particular circuits in the brain. As computer scientists, we are more interested in the general
properties of neural networks, independent of how they are actually "implemented" in the brain. This
means that we can use much simpler, abstract "neurons", which (hopefully) capture the essence of neural
computation even if they leave out many of the details of how biological neurons work.

People have implemented model neurons in hardware as electronic circuits, often integrated on VLSI chips. Remember,
though, that computers run much faster than brains, so we can run fairly large networks of simple model
neurons as software simulations in reasonable time. This has obvious advantages over having to use special "neural"
computer hardware.

## A Simple Artificial Neuron

Our basic computational element (model neuron) is often called a **node** or **unit**. It receives input
from some other units, or perhaps from an external source. Each input has an associated **weight** *w*,
which can be modified so as to model synaptic learning. The unit computes some function *f* of the weighted
sum of its inputs:

*y*_{i} = *f*(*net*_{i}) = *f*(∑_{j} *w*_{ij} *y*_{j})

Its output, in turn, can serve as input to other units.

- The weighted sum ∑_{j} *w*_{ij} *y*_{j} is called the **net input** to unit *i*, often written *net*_{i}.
- Note that *w*_{ij} refers to the weight from unit *j* to unit *i* (not the other way around).
- The function *f* is the unit's **activation function**. In the simplest case, *f* is the identity
  function, and the unit's output is just its net input. This is called a **linear unit**.
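As a concrete sketch of the computation above (the particular weight values and function names here are illustrative, not from these notes), a single unit forms its net input and then applies an activation function to it:

```python
import math

def net_input(weights, inputs):
    """Weighted sum of the inputs: net_i = sum over j of w_ij * y_j."""
    return sum(w * y for w, y in zip(weights, inputs))

def identity(net):
    """Linear unit: the output is just the net input."""
    return net

def sigmoid(net):
    """A smooth, S-shaped activation that squashes net into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

# A unit with three incoming connections (weights chosen arbitrarily):
weights = [0.5, -1.0, 2.0]
inputs = [1.0, 0.5, 0.25]

net = net_input(weights, inputs)  # 0.5 - 0.5 + 0.5 = 0.5
print(identity(net))   # linear unit output: 0.5
print(sigmoid(net))    # squashed output, roughly 0.62
```

The same `net_input` is shared by all these units; only the choice of activation function *f* differs.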
