
Neural Networks Explained

Tags: neural networks, deep learning, ReLU, activation functions, supervised learning

The Biological Inspiration

Artificial neural networks are loosely inspired by biological brains. A biological neuron receives signals from other neurons through dendrites, processes them in the cell body, and fires an output signal through its axon if the cumulative input exceeds a threshold. Artificial neurons replicate this: they compute a weighted sum of inputs, add a bias, and pass the result through an activation function.

What is a Neural Network?

Let's build one from scratch — starting with the simplest problem we can think of: predicting crop yield.

A farm field showing soil quality, rainfall, temperature, and fertiliser as inputs to a crop yield prediction model

Say you have a dataset of 10 plots of land. You want to predict how much yield each plot will produce per hectare. You know that soil quality is the main factor. If you have studied linear equations, your first instinct would be to fit a straight line — a linear function where soil quality is the input x and yield per hectare is the output y. You then multiply the yield per hectare by the size of each plot to get the total expected yield.

There is one constraint: yield can never be negative. So you eliminate the portion of the line that dips below zero. What you are left with is called a ReLU function — Rectified Linear Unit.

[Diagram: f(x) = max(0, x − threshold), plotted with soil quality (input x) on the horizontal axis and yield per hectare (y) on the vertical axis; the threshold marks where the line leaves zero]
A single neuron: soil quality feeds in on the left, the ReLU activation eliminates negative outputs, and yield per hectare comes out on the right.

You can think of that function as the simplest neural network that exists. The input is soil quality, the output is yield per hectare, and the neuron's job is to take x and produce y. In neural network terms, x is always the input and y is always the output. The ReLU — max(0, x) — is one of the most common activation functions used across modern models.

A single ReLU function mapping one input to one output is a valid, complete neural network — just the smallest one possible.
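This single-neuron network can be sketched in a few lines of code. The weight and bias values below are illustrative placeholders, not learned parameters — in a real network they would be fitted to the data:

```python
def relu(x):
    """Rectified Linear Unit: passes positive values through, clamps negatives to zero."""
    return max(0.0, x)

def neuron(soil_quality, weight=1.0, bias=-2.0):
    # Weighted sum of inputs plus a bias, passed through the activation.
    # weight=1.0 and bias=-2.0 are made-up values for illustration only.
    return relu(weight * soil_quality + bias)

print(neuron(5.0))  # 3.0 — above the threshold, yield scales with soil quality
print(neuron(1.0))  # 0.0 — below the threshold, yield is clamped to zero
```

The bias plays the role of the threshold in the diagram: it shifts the point at which the neuron starts producing a non-zero output.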

Scaling Up: Multiple Inputs

A single-input neuron is useful, but crop yield depends on more than just soil quality. Temperature, rainfall, fertilizer type, and fertilizer quantity all matter. You can add all of these as inputs to the same neural network.

Your input X is now a vector of four features. Your output Y is still the predicted crop yield. What changes is the structure between them.

Diagram
INPUT LAYERHIDDEN LAYEROUTPUTx₁Soil Qualityx₂Temperaturex₃Rainfallx₄FertilizeryCrop Yield4 inputs5 neurons (fixed)1 output
Input features:
A four-input neural network: soil quality, temperature, rainfall, and fertilizer feed into a hidden layer of ReLU neurons, which combine to produce a single crop yield prediction.

The hidden layer neurons each receive all four inputs, weight them differently, apply ReLU, and pass their results forward. Each neuron is learning a different combination of the inputs — one might specialise in the effect of rainfall on yield, another in the interaction between soil quality and fertilizer. The network figures out these combinations automatically during training.

Once you have trained this network on your dataset, you feed in the four inputs for any new plot of land and the network gives you its predicted yield. You never had to specify the rules — the network learned them from the data.
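A forward pass through this 4-input, 5-hidden-neuron, 1-output network can be sketched with NumPy. The weights here are random (untrained) stand-ins, so the prediction is meaningless until training adjusts them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, untrained parameters: 4 inputs -> 5 hidden neurons -> 1 output.
W1 = rng.normal(size=(5, 4))   # each hidden neuron weights all four inputs
b1 = np.zeros(5)
W2 = rng.normal(size=(1, 5))   # the output neuron combines the five hidden activations
b2 = np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)

def predict(x):
    hidden = relu(W1 @ x + b1)      # weighted sum + bias, then ReLU, per hidden neuron
    return float(W2 @ hidden + b2)  # linear combination of hidden activations

# One plot of land: [soil quality, temperature, rainfall, fertilizer]
x = np.array([7.0, 21.0, 3.5, 1.2])
print(predict(x))  # an arbitrary number until the weights are trained
```

Note how each row of W1 corresponds to one hidden neuron's own weighting of all four inputs — this is exactly the "different combination of the inputs" each neuron learns.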

Quick Check

In the crop yield network, what is the role of the hidden layer neurons?

Why This is Powerful

This crop yield network is an example of supervised learning. You provide labelled training data — plots of land with known yields — and the network adjusts its weights until its predictions match the labels. Once trained, it generalises to plots it has never seen.

Neural networks are powerful in supervised learning because they can approximate very complex relationships between inputs and outputs without you having to specify those relationships manually. Stacking more layers and more neurons lets the network capture increasingly abstract patterns. For example, a network trained to detect disease in crop images is doing the same thing — learning input-to-output mappings — just with millions of pixels as inputs instead of four features.
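The training loop itself can be sketched for the single-neuron case. This toy example generates synthetic "labelled" plots from a made-up rule, yield = max(0, 2·soil − 1) plus noise, then uses gradient descent on the mean squared error to recover the weight and bias — all values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic labelled data: 100 plots whose yield follows max(0, 2*soil - 1) + noise.
soil = rng.uniform(0.0, 10.0, size=100)
yield_true = np.maximum(0.0, 2.0 * soil - 1.0) + rng.normal(0.0, 0.1, size=100)

w, b = 0.5, 0.0  # start from arbitrary parameters
lr = 0.01        # learning rate

for _ in range(2000):
    z = w * soil + b
    pred = np.maximum(0.0, z)          # forward pass through the ReLU neuron
    err = pred - yield_true            # prediction error against the labels
    active = (z > 0).astype(float)     # ReLU gradient: 1 where the neuron fires, else 0
    w -= lr * np.mean(2 * err * active * soil)  # gradient of MSE w.r.t. w
    b -= lr * np.mean(2 * err * active)         # gradient of MSE w.r.t. b

print(w, b)  # should land close to the true values 2 and -1
```

This is supervised learning in miniature: the labels drive the weight updates, and once w and b converge the neuron generalises to soil qualities it never saw during training.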

Quick Check

What makes the multi-input crop yield network an example of supervised learning?


Test Your Knowledge

Ready to check how much you remember? Take the quiz for Neural Networks Explained and see your score on the leaderboard.

Up next

Types of Machine Learning

In the next module, we explore the two core types of machine learning — supervised and unsupervised — using our crop yield example to make the difference concrete.
