Triple Helix

Deep Learning and the Quest to Solve the Brain’s Mystery

Written by Naphat Permpredanun '24

Edited by Jasmine Shum '24

The brain is inarguably one of the most essential organs in our body. It initiates the processes of thinking and perception and modulates multiple bodily systems, including balance, coordination, and breathing. Even minimal damage to the brain can cause severe impairment. For example, dementia, which results in memory loss and cognitive decline, is caused by neurodegeneration. With such significant consequences for our health, neuroscientists are dedicating resources to studying the structure and function of the brain more thoroughly to further our understanding of the neural system as a whole.

Previously, the prominent approach to studying brain function was the electroencephalogram (EEG), a “diagnostic test that uses electrodes placed over the scalp to record the electrical activity of the brain, especially the cerebral cortex,” used to find connections between parts of the brain during a specific task (Mandal, 2019). However, it is difficult to map an entire coordinate system of the brain with EEG measures alone, because it is hard to locate the starting point of the stimulation. Therefore, scientists are searching for alternative approaches to create a more concrete map of the brain. One novel approach under investigation draws on deep learning models.

One of the central mysteries of brain functionality is the reason behind the brain’s specializations for various tasks. Scientists have wondered not just why different parts of the brain do different things, but also why the differences are so specific: why, for example, does the brain have an area for recognizing objects in general but also one for faces in particular? Using EEG, one can only determine the connections between parts of the brain, not explain why some specific parts exist. Therefore, scientists have turned to analogies to help explain why these specialized regions exist. Fortunately, a growing field of computer science called deep learning mimics the connectivity of neurons through a trainable machine learning structure called a neural network. Scientists believe that by training such a model correctly on suitable data, its behavior could come to resemble processing in the human brain. Neuroscientists have therefore begun studying how this structure handles essential kinds of data, such as vision, sound, and cognitive function.
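The building block of such a network can be sketched in a few lines. The artificial “neuron” below weights its inputs, sums them, and fires if the total crosses a threshold, loosely echoing how a biological neuron integrates excitatory and inhibitory signals. The inputs, weights, and bias here are hypothetical toy values, not taken from any study discussed in this article:

```python
# A single artificial "neuron": weight the inputs, sum them, and fire
# (output 1) only if the total exceeds zero. This step activation is the
# classic perceptron unit; modern networks use smoother activations.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Two "presynaptic" signals: one excitatory (+0.8), one inhibitory (-0.3).
print(neuron([1.0, 1.0], [0.8, -0.3], -0.2))  # fires: 0.8 - 0.3 - 0.2 > 0
print(neuron([0.0, 1.0], [0.8, -0.3], -0.2))  # silent: -0.3 - 0.2 < 0
```

Stacking many such units into layers, and training the weights from data, yields the neural networks discussed below.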

Figure 2: (Left) A visualization of the perceptrons; (Right) the ventral visual stream

For the visual component, deep learning models built from units called “perceptrons” — in particular, convolutional neural networks — represent the concept of visual perception in the brain. Such a network consists of three main kinds of layers: the input layer receives the data; the output layer reports the result of the processing within the network; and the hidden layers apply the same filter to every portion of an image. Each such convolution captures a different essential feature of the image, such as an edge. Basic features are captured in the early stages of the network, and more complex features are captured in later stages.
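To make the “same filter applied to every portion of an image” idea concrete, here is a minimal convolution in plain Python. The 5×5 image and the edge filter are hypothetical toy values, not any model from the research described here:

```python
# Slide one small filter over every patch of the image -- the core operation
# of a convolutional layer. No padding: output shrinks by the kernel size.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter: responds where brightness changes left-to-right.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]

# Toy image: dark left half (0), bright right half (9).
image = [[0, 0, 0, 9, 9] for _ in range(5)]

print(convolve2d(image, edge_kernel))  # nonzero only around the edge
```

The filter outputs zero over the flat regions and a large response where dark meets bright, which is why early convolutional layers naturally become edge detectors — just as V1 and V2 do in the ventral stream described next.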

The construction and characteristics of perceptrons are analogous to the part of the brain responsible for the visual system, the ventral visual stream. The retina in the eyes acts as an input layer for the brain, and what we see acts as an output. A sequence of interconnected brain areas replicates the hidden layers of the model. This stream starts at the lateral geniculate nucleus and proceeds to area V1 in the primary visual cortex; the connection then continues downstream through areas V2 and V4 and ends in the inferior temporal cortex (the visualization of this stream is shown on the right side of Figure 2). Moreover, the brain captures the fundamental qualities of visual information in the early stages, such as edge and color processing in V1 and V2, and recognizes faces and objects in the inferior temporal cortex. This analogy is supported by a computational neuroscience team at MIT, who demonstrated strong correlations between the activity of perceptron networks and recordings from the monkey brain.

For the sound component, neuroscientists’ understanding of how the brain processes auditory information is limited compared to vision, since sound comes in more varied types (conversation, music, and so on). Their focus is therefore mainly on how processing is divided between different kinds of sound, especially speech and music. There are three hypotheses about where sound processing separates: at the input, in the hidden layers, or at the output. Researchers at MIT investigated this question using the same kind of deep learning models, specifically deep nets. The team designed an experiment by building a deep net model of the cochlea, the sound-transducing organ in the inner ear. They then branched the deep nets according to each of the three hypotheses and trained the models to recognize music genres and song lyrics. Their goal was to find which model required the fewest resources to train, as that would be the most plausible model of human auditory perception.

Figure 3: (Left) Deep Nets based on the assumption of splitting the result in the input layers,

(Right) Deep Nets based on the assumption of splitting the result in the output layers

Figure 4: Deep Nets based on the assumption of splitting the result in the hidden layers

The training results show that the model requiring the fewest resources while achieving the highest accuracy in detecting speech and music is the one that processes both together through the input and early hidden layers, then separates them in the late hidden layers. This suggests that the human auditory system performs general decoding first and specializes by sound type later on.
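A rough way to see why the branching location matters is to count the weights each hypothesis requires. The layer sizes below are hypothetical (the study’s actual architectures differ), and only fully connected layers are tallied, but the arithmetic shows how sharing the early layers saves parameters compared with training two entirely separate networks:

```python
# Count weights in a chain of fully connected layers: a layer mapping
# n inputs to m outputs has n*m weights (biases ignored for simplicity).

def dense_params(sizes):
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

IN, H, OUT = 256, 128, 10  # assumed sizes: input, hidden width, task outputs

# Hypothesis 1: split at the input -- two entirely separate 4-layer nets.
split_input = 2 * dense_params([IN, H, H, H, OUT])

# Hypothesis 2: split only at the output -- one shared trunk, two heads.
split_output = dense_params([IN, H, H, H]) + 2 * dense_params([H, OUT])

# Hypothesis 3: split in the late hidden layers -- shared early, separate late.
split_late = dense_params([IN, H, H]) + 2 * dense_params([H, H, OUT])

print(split_input, split_late, split_output)  # sharing earlier = fewer weights
```

Parameter count alone favors maximal sharing, which is why the accuracy constraint in the study matters: among models that performed well on both the speech and music tasks, the late-split design was the most efficient trade-off.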

Apart from the specialization of the brain, another exciting mystery is its capacity for cognitive function. Although traditional approaches have revealed the rough connections between parts of the brain, much less is known about the potential capacity of cognition in the human brain. Therefore, building on previous research, neuroscientists have employed deep learning models to characterize that capability. These neuroscientists define the scope of cognitive function as fluid intelligence: the ability to solve problems, think, and reason abstractly. Using a deep learning model called a gCNN, neuroscientists found that fluid intelligence depends heavily on two regions of the brain, the prefrontal cortex and the parietal cortex, which are involved in decision-making and sensory perception. Additionally, they found that structural features of the amygdala, hippocampus, and nucleus accumbens (NAc), along with the temporal, parietal, and cingulate cortices, drove the fluid intelligence prediction. Using this information, scientists can pose more hypotheses about the connectivity of the brain in decision-making processes.
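A gCNN operates on geometric (graph- or mesh-structured) representations of brain anatomy rather than on flat image grids. The study’s actual model is far more elaborate, but its core operation — each node updating its feature by aggregating over its neighbors — can be sketched in plain Python. The tiny graph, the cortical-thickness features, and the mean-aggregation rule below are all hypothetical simplifications:

```python
# One graph-convolution step: each node's new feature is the mean of its own
# feature and its neighbors' features (self-loop included). Repeating this
# step spreads information across the mesh, letting later layers see
# larger-scale anatomical structure.

def graph_conv(features, neighbors):
    out = []
    for node in range(len(features)):
        group = [node] + neighbors[node]
        out.append(sum(features[n] for n in group) / len(group))
    return out

# Toy 4-node path graph: 0 - 1 - 2 - 3 (e.g. points on a cortical surface).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
thickness = [2.0, 4.0, 4.0, 2.0]  # hypothetical cortical thickness per node

print(graph_conv(thickness, neighbors))  # each node smoothed toward neighbors
```

Unlike the image convolution shown earlier, this operation needs no regular grid, which is what makes it suitable for folded, irregular structures like the cortical surface.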

Conclusion and Future Work

With the current progression of deep learning models, neuroscientists can draw analogies between these models and the human brain regarding both brain specialization and cognitive function. Despite these advancements, there is still room for improvement in applying these analogies to predict and evaluate cognitive function. With such enhancements, deep learning models could enable whole brain emulation, a strategy for creating a kind of artificial intelligence by replicating the human brain’s functionality in software. A deeper understanding of the brain could, in turn, help address neurodegenerative diseases such as Alzheimer’s or dementia. However, with insufficient computational power and models, the mystery of the brain will remain…for now.



Ananthaswamy A. Deep neural networks help to explain living brains [Internet]. Quanta Magazine. 2021 [cited 2022 Dec 12]. Available from:

Besson P, Rogalski E, Gill NP, Zhang H, Martersteck A, Bandt SK. Geometric deep learning reveals a structuro-temporal understanding of healthy and pathologic brain aging [Internet]. Frontiers; 2022 [cited 2022 Dec 12]. Available from:


David E. Council post: How the future of deep learning could resemble the human brain [Internet]. Forbes. Forbes Magazine; 2020 [cited 2022Dec12]. Available from:

Mandal DA. Studying the human brain [Internet]. News-Medical. 2019 [cited 2022 Dec 12]. Available from:

Milletari F, Ahmadi S-A, Kroll C, Plate A, Rozanski V, Maiostre J, et al. Hough-CNN: Deep Learning for segmentation of deep brain regions in MRI and ultrasound [Internet]. 2016 [cited 2022Dec12]. Available from:

Neurocognitive Imaging Lab [Internet]. NeuroCognitive Imaging Lab. 2022 [cited 2022Dec12]. Available from:

Pham C. Graph Convolutional Networks (GCN) [Internet]. TOPBOTS. 2021 [cited 2022Dec12]. Available from:

Rohman M. Novel deep learning method may help predict cognitive function [Internet]. Medical Xpress - medical research advances and health news. Medical Xpress; 2022 [cited 2022Dec11]. Available from:

Sun J, Liu Y, Wu H, Jing P, Ji Y. A novel deep learning approach for diagnosing Alzheimer's disease based on eye-tracking data [Internet]. Frontiers in human neuroscience. U.S. National Library of Medicine; 2022 [cited 2022Dec12]. Available from:

Valentin. Using artificial neural networks to understand the human brain [Internet]. Research Features. 2022 [cited 2022Dec12]. Available from:
