An introduction to neural networks

  • Neural networks are a type of computing architecture modeled on how the human brain functions.
  • They function through layers, including input, hidden, and output layers, facilitating learning and prediction.
  • Types of neural networks include feed-forward networks, where data moves in a single direction; backpropagation networks, which refine predictions through continuous feedback; and convolutional networks, tailored for image analysis tasks such as AI image recognition.

Neural networks, although seemingly distant from our daily lives, are woven into them in imperceptible ways. They make it possible for us to immerse ourselves in content tailored to our interests, and they empower us to engage smoothly with virtual assistants like Siri.
Understanding them therefore allows us to make better use of their capabilities to enrich our lives.
–Audrey Huang, BTW reporter

This article introduces the definition, operating principles, and types of neural networks.

What are neural networks?

A neural network, or artificial neural network, is a type of computing architecture based on a model of how the human brain functions. Neural networks consist of a collection of processing units called “nodes.” These nodes pass data to one another, much as neurons in a brain pass electrical impulses to one another. The networks are used in machine learning, a class of computer programs that acquire knowledge without explicit instructions.

Also read: Private wireless networks: Ownership, spectrum, and uses

Also read: Ethernet dedicated lines vs. wireless networks

How do they work?

Neural networks consist of numerous nodes distributed across at least three layers: an input layer, a hidden layer, and an output layer. There can also be multiple hidden layers between the input and output layers. Irrespective of its placement within the network, each node performs a specific function on the input it receives from the previous layer. Essentially, every node applies its own mathematical formula, with individual inputs weighted differently. If the result of applying this formula surpasses a designated threshold, the node passes data to the next layer. Conversely, if the result falls below the threshold, no data is forwarded.
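The weighted-formula-plus-threshold behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a real library API; the weights and threshold values are arbitrary choices for the example.

```python
# A single node: compute a weighted sum of the inputs, then forward the
# result only if it exceeds the threshold (a simple step activation).
def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else None  # None = nothing forwarded

# The weighted sum 0.5*0.6 + 0.8*0.9 = 1.02 exceeds the threshold,
# so this node passes its result to the next layer.
fired = node_output([0.5, 0.8], [0.6, 0.9], threshold=0.5)

# Here the weighted sum is only 0.02, so the node forwards nothing.
silent = node_output([0.1, 0.1], [0.1, 0.1], threshold=0.5)
```

Real networks typically use smooth activation functions (such as sigmoid or ReLU) rather than a hard threshold, because smooth functions make the feedback-based learning described later possible.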

What types of neural networks are there?

Neural networks differ in their processing methods and in the number of hidden layers they possess. There are three main types: feed-forward neural networks, backpropagation neural networks, and convolutional neural networks.

1. Feed-forward neural networks

These neural networks represent the fundamental structure of an artificial neural network. They transmit data in a single forward direction, moving from the input layer toward the output layer. While not essential, they may incorporate hidden layers to handle more intricate tasks. Their learning process evolves gradually through feedback mechanisms. Facial recognition is one application of a feed-forward network.
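A feed-forward pass can be sketched as data flowing strictly from inputs, through one hidden layer, to an output. This is an illustrative toy, assuming hand-picked (untrained) weights and a ReLU activation in the hidden layer:

```python
# Multiply a vector by a weight matrix (one column of weights per node).
def dot(vec, weights):
    return [sum(x * w for x, w in zip(vec, col)) for col in weights]

# ReLU activation: negative values are zeroed out.
def relu(vec):
    return [max(0.0, x) for x in vec]

# Data moves in one direction only: input -> hidden -> output.
def feed_forward(inputs, hidden_weights, output_weights):
    hidden = relu(dot(inputs, hidden_weights))
    return dot(hidden, output_weights)

# 2 inputs -> 2 hidden nodes -> 1 output node.
out = feed_forward([1.0, 2.0],
                   hidden_weights=[[0.5, -0.2], [0.3, 0.8]],
                   output_weights=[[1.0, 1.0]])
```

Note that nothing in this sketch flows backward; training such a network requires a separate feedback mechanism, which is where backpropagation comes in.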

2. Backpropagation neural networks

These neural networks learn continuously: each node retains its output value during the forward pass, and the error in the network’s prediction is then propagated backward through the layers so that the weights at each layer can be adjusted. This facilitates ongoing learning and refinement of predictions.

3. Convolutional neural networks

Convolutional neural networks (CNNs) use hidden layers to execute mathematical operations, generating feature maps of image regions that are more amenable to classification. Each hidden layer receives a distinct portion of the image for decomposition, leading to further analysis and eventual prediction of the image content. AI image recognition is a prime example of convolutional neural networks in action.
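The feature-map step described above can be sketched in pure Python: a small kernel slides over an image, and each position produces one value of the feature map. The image and kernel below are made-up examples; the kernel happens to respond to vertical edges:

```python
# Slide a kernel over an image; each output value is the element-wise
# product of the kernel with one image patch, summed up.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        feature_map.append(row)
    return feature_map

# A tiny "image" with a dark-to-bright vertical edge in the middle.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# A 2x2 kernel that responds where brightness increases left to right.
edge_kernel = [[-1, 1],
               [-1, 1]]

fmap = convolve2d(image, edge_kernel)
```

The resulting feature map is largest exactly where the edge sits, which is what makes such maps easier to classify than raw pixels. Real CNNs learn many kernels automatically and stack convolution layers with pooling and fully connected layers.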


Audrey Huang

Audrey Huang is an intern news reporter at Blue Tech Wave. She is interested in AI and startup stories. Send tips to
