Backpropagation is the means by which neural networks learn. In supervised learning the network is presented with a set of inputs and their corresponding correct outputs. To begin with, the network assigns random values to its weights and biases, then takes an input and calculates an output. With randomly initialised parameters we would expect that output to be a long way from the correct result. The difference between the network's output and the correct output is the error, or 'loss'.
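As a minimal sketch of this first step, the snippet below pushes one input through a single randomly initialised layer and measures a squared-error loss. The layer sizes, the sigmoid activation and the squared-error measure are illustrative choices, not prescribed by the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random initialisation: one layer of 3 neurons taking 2 inputs
# (the shapes and activation are illustrative assumptions).
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2])           # an input example
y_true = np.array([1.0, 0.0, 0.0])  # the corresponding correct output

y_pred = sigmoid(W @ x + b)         # forward pass: the network's output

# The 'loss': a squared-error measure of how far off the output is
loss = np.sum((y_pred - y_true) ** 2)
print(loss)
```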
If we differentiate the cost function (the loss expressed as a function of the network's parameters) we can see how sensitive it is to every single parameter, that is, how much the cost would change for a given small change in each weight or bias.
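That sensitivity can be seen directly by nudging one parameter and watching the cost move. The sketch below approximates the partial derivative of the cost with respect to a single weight by a finite difference; the network and cost are the same illustrative ones as above.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)
x = np.array([0.5, -1.2])
y_true = np.array([1.0, 0.0, 0.0])

def cost(W, b):
    y_pred = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return np.sum((y_pred - y_true) ** 2)

# Nudge one weight by a tiny amount and see how much the cost changes.
# The ratio approximates the partial derivative of the cost with respect
# to that weight -- its sensitivity.
eps = 1e-6
W_nudged = W.copy()
W_nudged[0, 0] += eps
sensitivity = (cost(W_nudged, b) - cost(W, b)) / eps
print(sensitivity)
```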
This gives us a strategy for nudging each weight and bias: we know by how much, and in which direction, to change each one in order to minimise the cost function.
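That nudging strategy is plain gradient descent. The sketch below assumes the gradients have already been computed (stand-in values are used here; in practice backpropagation supplies them) and steps each parameter against its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)
learning_rate = 0.1

# Stand-in gradients for illustration; in practice these are the
# derivatives of the cost with respect to W and b, from backpropagation.
grad_W = rng.standard_normal(W.shape)
grad_b = rng.standard_normal(b.shape)

# Nudge each parameter in the direction that reduces the cost, by an
# amount proportional to its sensitivity (the gradient).
W = W - learning_rate * grad_W
b = b - learning_rate * grad_b
```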
Each layer of neurons has inputs, and those inputs are the activations of the previous layer. We cannot alter the previous layer's activations directly, but we do know whether we want each of them to increase or decrease. To get the increase or decrease we want, we alter that previous layer's own weights and biases.
In this way the error cascades backwards, or 'backpropagates', from the output layer to the input layer.
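The sketch below shows that cascade for a small two-layer network. The chain rule carries the error from the output back through each layer, producing the sensitivities (gradients) for every weight and bias along the way. The shapes, sigmoid activations and squared-error cost are again illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-layer network: input -> hidden -> output (shapes are illustrative).
W1, b1 = rng.standard_normal((4, 2)), np.zeros(4)
W2, b2 = rng.standard_normal((3, 4)), np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2])
y_true = np.array([1.0, 0.0, 0.0])

# Forward pass, keeping intermediate activations for the backward pass.
z1 = W1 @ x + b1
a1 = sigmoid(z1)          # hidden layer activations
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)          # output layer activations

cost = np.sum((a2 - y_true) ** 2)

# Backward pass: the chain rule carries the error from the output backwards.
d_a2 = 2.0 * (a2 - y_true)        # how the cost changes with the output
d_z2 = d_a2 * a2 * (1.0 - a2)     # through the output sigmoid
grad_W2 = np.outer(d_z2, a1)      # sensitivity of the cost to W2
grad_b2 = d_z2                    # sensitivity of the cost to b2

d_a1 = W2.T @ d_z2                # how we want the hidden activations to move
d_z1 = d_a1 * a1 * (1.0 - a1)     # through the hidden sigmoid
grad_W1 = np.outer(d_z1, x)       # sensitivity of the cost to W1
grad_b1 = d_z1                    # sensitivity of the cost to b1
```

Each of these gradients can then be used in the gradient descent step sketched earlier, nudging every weight and bias in the direction that reduces the cost.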
While the details of the calculus can be a lot to wrap your head around, all that really matters is to understand that there is a strategy, with sound mathematical foundations, for updating the network's weights and biases to get the best performance on our chosen objective; that is, to minimise the cost function we have chosen.