
Introduction to Neural Networks. Pt1 - The Neuron

Richard Walker

Neural networks are a popular AI solution. They are at the heart of facial recognition, speech processing, autonomous driving, medical diagnosis, and many other groundbreaking technologies. They have been used to surpass humans at games of skill such as Chess and Go.


But what is a neural network?


This five-part video-blog will give you a primer in this exciting technology. This first post describes the essential components of an artificial neuron. In part two we discuss what activation functions are. Part three asks the question – what is a neural network? That is, how are individual neurons connected together to perform a specific task? In part four we begin to look at how these networks are trained, introducing a term called a ‘Cost Function’: essentially a measure of how good, or how bad, a network is at performing a task. We conclude the session on training in part five with a discussion of ‘Backpropagation’, the term used to describe how a network updates its parameters during training to get better at its assigned task.


An artificial neuron


Let’s start at the most basic level of neural networks: what is an artificial neuron? Artificial neurons are inspired by the biological neurons found in the brain. A biological neuron takes inputs from other neurons in the form of electro-chemical signals. If the combined input signals are above a particular threshold, the neuron fires, sending an electro-chemical signal on to further neurons.


A neuron's output - or 'Activation'


Seen from the outside an artificial neuron is just a number. You will often hear of this number referred to as either the ‘Activation’ or the ‘Output’ of a neuron.


A neuron's activation

We can make some design choices to constrain this activation. We might limit its values to between 0 and 1. We might permit some negative values and limit the range to between -1 and +1. Alternatively, we might allow activations anywhere between 0 and infinity. We can choose the right range for our application.
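To make these three ranges concrete, here is a minimal Python sketch. The function names below are standard choices for each range, not something defined in this post; the post itself only introduces the Sigmoid later on.

```python
import math

def sigmoid(x):
    """Outputs always fall between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Outputs always fall between -1 and +1."""
    return math.tanh(x)

def relu(x):
    """Outputs fall between 0 and infinity."""
    return max(0.0, x)

# Even extreme inputs stay inside each function's range.
for x in (-5.0, 0.0, 5.0):
    print(f"x={x:+}: sigmoid={sigmoid(x):.3f}, tanh={tanh(x):+.3f}, relu={relu(x)}")
```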


Four simple steps to calculate a neuron's activation


This activation isn’t chosen at random. It is based on the inputs to the neuron, and a simple four-step process determines the output. To calculate the output of a neuron:


1. Take each input and multiply it by a weight.

2. Then sum all of these products.

3. Then add a bias term to this sum.

4. Finally send this total through something called an activation function to generate the output.
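The four steps above can be sketched directly in Python. This is a minimal sketch: the sigmoid used in step 4 is just one possible choice of activation function, picked here for illustration.

```python
import math

def neuron_output(inputs, weights, bias):
    # Steps 1 and 2: multiply each input by its weight and sum the products.
    weighted_sum = sum(a * w for a, w in zip(inputs, weights))
    # Step 3: add the bias term to the sum.
    total = weighted_sum + bias
    # Step 4: send the total through an activation function (a sigmoid here).
    return 1.0 / (1.0 + math.exp(-total))
```

With all-zero inputs and a zero bias, the total is 0 and the sigmoid of 0 gives an activation of exactly 0.5.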


We will talk much more about activation functions and different types of activation function in the next blog in this series.


An illustrated step-by-step example


To illustrate this process, let’s look at a single neuron with three inputs. The inputs here are a1, a2 and a3. Each input is multiplied by its own weight – w1, w2 and w3. A bias term ‘b’ is added, and the resulting sum is fed into an activation function to generate the output – or activation – of this neuron.

How to calculate a neuron's activation
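Working that three-input example through with some made-up numbers (the values of a1–a3, w1–w3 and b below are chosen purely for illustration, and a sigmoid is assumed as the activation function):

```python
import math

a1, a2, a3 = 1.0, 2.0, 0.5     # inputs (illustrative values)
w1, w2, w3 = 0.5, -0.5, 1.0    # weights (illustrative values)
b = 0.0                        # bias

# Steps 1-3: weighted sum of the inputs, plus the bias.
z = a1 * w1 + a2 * w2 + a3 * w3 + b
# Step 4: sigmoid activation.
activation = 1.0 / (1.0 + math.exp(-z))
print(f"z = {z}, activation = {activation}")  # z = 0.0, activation = 0.5
```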

There are many types of activation function to choose from, each with its own properties, pros and cons. We’ll explore activation functions in the very next instalment.


An example of an activation function - The 'Sigmoid'

In this video, let’s look at just a single type of activation. The animated graphic above shows an example of what is called a ‘Sigmoid’ activation function. One property of this function is that it produces outputs between 0 and 1. No matter how large the input, the maximum activation is 1; no matter how negative the input, the minimum activation is 0.

The output of the Sigmoid function is shown on the vertical axis and its input on the horizontal axis. Recall that the inputs are multiplied by weights, the products are summed, and a bias is added. This total is then fed into the activation function as its input, represented here by the x axis.


What does the 'bias' do?


An example of a negative bias

This chart above shows the impact of a negative bias. Because the bias is added to the weighted sum, a negative bias pulls the total down, so the inputs need to be larger before the neuron is activated.


An example of a positive bias

This chart shows the effect of a positive bias, which pushes the total up and so activates the neuron at lower levels of input.
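The effect of the bias can be sketched in a few lines of Python. The weighted sum of 2.0 below is an arbitrary illustrative value, and a sigmoid is again assumed as the activation function.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

weighted_sum = 2.0  # the same weighted sum of inputs in every case

# A negative bias pulls the total down, so larger inputs are needed to
# activate the neuron; a positive bias pushes the total up, activating
# the neuron at lower input levels.
for bias in (-4.0, 0.0, 4.0):
    print(f"bias = {bias:+.1f} -> activation = {sigmoid(weighted_sum + bias):.3f}")
```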


This concludes this introductory blog. I hope that you now have a clearer understanding of what an artificial neuron is. If you have any questions please leave them in the comments section. In the next video we’ll look at other examples of activation functions and then discuss how we stack neurons together to make networks.



