Neural Networks can tackle complex problems with complex calculations.
But today I will make the complex simple for you!
Let's see how Neural Networks work!
First of all, why do we use Neural Networks?
Sometimes you will face datasets that are too complicated for a simple model like Linear Regression.
Neural Networks, however, can approximate almost any relationship between the variables.
Complexity can be tackled by complexity.
If you face a complex dataset, you can add layers and nodes to make your Neural Network more complex and better fit for the data.
Note: More complex Neural Networks do not always mean better results!
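Here is a minimal sketch of what "adding layers and nodes" means in practice: each extra layer is just another weight matrix and bias vector, so a deeper/wider network simply has more parameters to fit the data with. The helper names below are made up for illustration.

```python
import numpy as np

def build_layers(sizes, rng=None):
    """Create one weight matrix and bias vector per layer.

    `sizes` lists the node counts, e.g. [2, 16, 16, 1] is a network
    with two hidden layers of 16 nodes each. (Illustrative helper,
    not a standard API.)
    """
    rng = rng or np.random.default_rng(0)
    return [(rng.standard_normal((n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(sizes, sizes[1:])]

def count_parameters(layers):
    # Every weight and bias is one trainable parameter.
    return sum(W.size + b.size for W, b in layers)

# A small network vs. a deeper, wider one for a harder dataset:
small = build_layers([2, 4, 1])        # one hidden layer, 4 nodes
large = build_layers([2, 16, 16, 1])   # two hidden layers, 16 nodes each
count_parameters(small)  # → 17
count_parameters(large)  # → 337
```

More parameters means more flexibility, but also more risk of overfitting, which is exactly why "more complex" is not automatically "better".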
How do Neural Networks tackle complex tasks?
By using Activation Functions.
Activation functions introduce non-linearity (complexity) into the Neural Network.
Now let's add activation functions to the building blocks we already discussed.
Each node/neuron applies an activation function to its input, transforming it into an output value.
So nodes basically contain the activation functions.
Activation functions take inputs, do calculations, and then provide output.
There are many activation functions like:
- ReLU
- Sigmoid
- Binary step function
Activation Functions are just like any other mathematical function.
Neural Networks basically just combine these existing functions to fit any type of data.
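To make "combining functions" concrete, here is a tiny forward pass: each layer computes a weighted sum of its inputs and then applies an activation function (ReLU here). The weights are made-up illustrative values, not trained ones.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward(x, layers):
    # Pass the input through each layer:
    # weighted sum (x @ W + b), then the activation function.
    for W, b in layers:
        x = relu(x @ W + b)
    return x

# Two tiny layers with hypothetical hand-picked weights:
layers = [
    (np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, 0.0])),
    (np.array([[1.0], [1.0]]), np.array([-1.0])),
]
forward(np.array([1.0, 2.0]), layers)  # → [4.]
```

Stacking enough of these simple sum-then-activate steps is what lets a network bend itself around almost any dataset.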