Saturday, January 31, 2026

From information to knowledge: The cognitive transformation in the human brain in the context of the digital era


Introduction

With the rise of information systems and the extremely fast development of technology, our perception of the world, our thinking and our learning have changed rapidly. Digital technology has affected our brains: it has altered the whole process of receiving information and processing it into knowledge. Information systems often draw inspiration from the known functions and structure of the human brain. In turn, these systems can help us discover more about the brain itself.

In this essay, I would like to focus on data, information and knowledge. I will cover the processes that occur in the brain while information is received, processed and formed into memories. I will use examples to explain why and how the human brain and information systems are so similar. Lastly, I will discuss the influence of digital technology on our thinking, learning and overall living.

The theoretical basis of data, information and knowledge

The DIKW pyramid

Firstly, I find it vital to explain some of the key terms concerning information. Throughout history, there have been many different definitions of information and knowledge, and different experts have held different views on the relationship between data, information, knowledge and wisdom. That is why I would like to introduce the well-known concept of the DIKW pyramid, also known as the "Knowledge Pyramid," the "DIKW Hierarchy" or the "Information Hierarchy." (Frické, 2019) I believe it is one of the simplest means of explaining the complex relationship between these four terms.

Data

The basis of the DIKW Hierarchy is data. As Frické puts it, data is the source we process into its relevant form. Data itself has no significant value for us. Imagine someone gives you a paper with many diagrams, charts and numbers. This paper is filled with results of an experiment. On its own, it is almost meaningless. As Ackoff (1989) says, “Data are symbols that represent properties of objects, events and their environments. They are products of observation.”

When talking about data and its meaninglessness when it lacks context, the question of storing and analysing data arises. I find it unreasonable to store data in its simplest form, without any metadata or further complementary information; it will either remain untouched or be used and interpreted incorrectly. As Frické describes, we do not want to store data simply hoping it will one day turn into a meaningful piece of information. It is crucial to pair it with metadata or further context.
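To make the idea concrete, here is a minimal Python sketch. The experiment, variable names and values are hypothetical; the point is only that the same list of numbers becomes interpretable information once it is stored together with its metadata and context.

```python
# A minimal sketch (all values hypothetical): the same numbers, first as bare
# data, then paired with the metadata that turns them into information.
raw_data = [21.4, 22.1, 23.8, 25.0]  # nearly meaningless on its own

information = {
    "values": raw_data,
    "variable": "water temperature",     # what was measured
    "unit": "degrees Celsius",           # how to read the numbers
    "experiment": "solar heating test",  # why it was collected
    "recorded": "hourly on 2024-06-01",  # when and how often
}

def describe(record):
    """Interpret the stored values using their metadata."""
    low, high = min(record["values"]), max(record["values"])
    return (f"{record['variable']} during the {record['experiment']} "
            f"ranged from {low} to {high} {record['unit']} "
            f"({record['recorded']}).")

print(describe(information))
```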

Information

The process of turning data into information often happens unconsciously. We do not change the data, we just give it a certain interpretation. Information can be observed or even calculated from the data given. Returning to my previous example, where we received a piece of paper with the results of an experiment, the data turns into information the moment someone tells us more about the experiment: for instance, when, where and why it was conducted, or which variables were followed. To put it simply, we need context to make data a piece of information.

Frické (2019) puts emphasis on the fact that when we process data into information, some data reduction can occur. This is because we need to take relevancy of the data into account.

Based on John Sweller’s explanation, I will further discuss the evolutionarily defined categories of information. (Sweller, 2020) Primary information is processed without any conscious effort, as if it were natural to us. Our brains store it automatically, no matter the volume. The simplest example of primary information is the ability to speak and listen: when we are born, we automatically start listening to the sounds of the world around us, even though we do not consciously think about doing so.

On the other hand, secondary information is far less intuitive and requires conscious effort to process. Usually, we follow instructions to acquire a skill based on secondary knowledge. Most of the topics taught in schools and other institutions are secondary. Listening and speaking are primary; reading and writing are secondary, because we must be able to recognise different letters before we can read and write. Interestingly, we can take in much less secondary information than primary information at once. Miller (1956) states that we can hold at most around seven elements of novel, secondary information in mind at one time.

Knowledge

I have already mentioned the next step, which is knowledge. When getting from information to knowledge, we use existing patterns to connect certain pieces of information with experience, skills and expert opinions. (Chaffey & Wood, 2005) With knowledge, we move from being able to acknowledge certain things to rationally taking them into account. It is vital to mention that when speaking of knowledge, we often mean not only theoretical knowledge, but also practical skills such as how to ride a bicycle or draw a house. (Ackoff, 1989)

Knowledge formation often relies on other humans as a source of information. (Sweller, 2020) This information is then combined with what we already know and put into context in our minds. If we have no information on a topic, we rely fully on what others tell us.

Sweller discusses this in the context of problem solving. When we are trying to solve a problem, four primary scenarios can occur. I will explain this concept with an example: imagine that you want to buy a new car, and we will explore your options for deciding which model you should get.

In the first scenario, we have enough information about the topic to solve the problem on our own. In the past, we have gathered enough information to find the solution with the best possible outcome. This would be equivalent to you being an expert on cars, knowing exactly which model fits your needs and wants best. You walk into a dealership and choose a car based on your own thoughts and knowledge.

In the second scenario, our own information is not enough to solve the problem. We combine it with information acquired from other people to reach the correct answer. You know a little about cars, but not enough to confidently decide which one is best; you might be an expert on engines but know little about the other parts. At the dealership, you discuss your options with the salesperson.

In the third scenario, we have no information about the topic whatsoever. As I already mentioned, we are then completely dependent on other people when making the decision. You know nothing about cars but certainly need a new one. Maybe you search the internet or ask a friend who is a car enthusiast. Based on everything you hear, read and see, you choose a car.

In the last scenario, we still know nothing about the topic; the only difference is that there is no one to help us make the decision. Since nobody contributes their knowledge, we must choose completely at random. In this fictional scenario, you need a car as soon as possible, you have no signal, and the only dealership near you provides no information about the cars available. The owner is away and his young daughter is the only one around to help. You choose a car completely at random.

The benefit of the last two scenarios is that even though we had no knowledge about the topic at the beginning, after making the choice we usually discover whether it was the correct one. In our car example, after using the car for a while, we could probably tell whether we bought a good one or not. This implies that even if we fail, we still learn something in the end.

Wisdom

Wisdom, as the last step of the Knowledge Hierarchy, is far less discussed than the other parts of the pyramid, despite being the destination of the whole process. This might be because many authors concentrate mainly on the systematic process rather than on the end goal, or because experts’ views on what wisdom really is differ. (Rowley, 2007)

As I see it, wisdom is the ability to transform information into knowledge, to find connections between diverse topics and events and, finally, to use them in practice in a relevant way. It is a set of complex skills. Ackoff (1989) says that wisdom requires judgement, meaning it is always tied to a specific person. The value that wisdom adds lies in its subjectivity and uniqueness.

Ackoff also proposed a very important idea: wisdom is the only part of the hierarchy which requires human thinking and cannot be replaced by a computer, mainly because judgement is one of the characteristics that distinguish human beings from machines.

The DIKW model is depicted as a pyramid because, in the real world, there is more data than information, more information than knowledge and certainly more knowledge than wisdom. (Frické, 2019)

Information processing, cognition and memory

In this part of my work, I will focus on the activity happening in the brain during some of the most important processes that occur in the human body – processing of acquired information, learning and some key concepts of the memory theory.

Compared to other organs, the brain is far more complex to understand. For this and other reasons, neuroscientists have not yet uncovered all of its secrets. Neuroscience is a very active field; many revolutionary discoveries come from studies conducted in the last ten years. This is why I find it so exciting to learn about, and why it differs from other academic fields: it is still taking shape.

The brain

When discussing the fundamentals of the brain, I will draw on an inspiring lecture called Introduction to Neuroscience by John H. Byrne, Ph.D., taught at the McGovern Medical School in Houston, Texas. (Byrne, 2017)

Brain structure

The brain is a very specific organ, mainly because of its complexity. A single brain consists of around one hundred billion neurons, which do not work independently; they are all connected. They work as a network, forming neural circuits with specific functions.

Both learning and memory rely on communication between neurons (Schiera, Di Liegro, and Di Liegro, 2020) so we will discuss them further.

Neurons

I would like to briefly introduce the composition of a neuron. Unlike in other cells, we can distinguish its top from its bottom. The soma, also called the cell body, is where most of the cell’s functions take place. Dendrites are tree-like structures that receive connections from other neurons. An axon connects the cell body to the synapses.

The synapse

Synapses are responsible for sending information to other neurons. The synapse of the transmitting neuron is referred to as the presynaptic terminal; the dendrite of the receiving neuron is called the postsynaptic terminal.

Inside the presynaptic terminal are synaptic vesicles containing neurotransmitters, the chemicals that will carry information to the next neuron. (NIGMS, 2024) The whole process is triggered by an electrical signal arriving at the presynaptic terminal.

Next, the vesicles move towards the edge of the synapse and open up, releasing the neurotransmitters. The neurotransmitters then move towards the dendrite of the second neuron, where they are caught by neurotransmitter receptors.

Synaptic plasticity

When talking about the synapse, we must not forget its plasticity: synapses are able to strengthen or weaken over time, depending on how often they are activated. (Boundless, n.d.) This is a key component of learning and memory formation, and the change can be both short-term and long-term.

Short-term synaptic enhancement occurs when the amount of neurotransmitter released is increased; similarly, short-term synaptic depression occurs when it is decreased. Long-term potentiation strengthens synapses, whereas long-term depression weakens the synaptic connection. Strengthening essentially means that the receiving neuron becomes more responsive to the signal.

The sensory systems

Humans receive information through their senses. (Dantzig, 2025) We all know the five fundamental senses – touch, taste, hearing, smell and sight. However, these are not the only sensory systems our bodies use.

The vestibular system detects how our body and head move. The proprioceptive system provides awareness of our own body, meaning our muscles and joints; for instance, it helps us distinguish whether our muscles are relaxed or contracted. The interoceptive system concerns our internal organs and their functioning, such as breathing, feeling pain or hunger.

Through the sense receptors of these systems, information is detected and sent through the sensory circuits towards the brain. (University of Utah Genetic Science Learning Center, n.d.) The thalamus is the first brain structure the signals reach. They then continue to different areas of the cortex depending on the sense: for example, vision belongs to the visual cortex and touch to the somatosensory cortex. (Kandel et al., 2013)

A theory contradicting strictly unisensory systems has been proposed. It suggests that the senses do not really have areas of their own (Kayser & Logothetis, 2007) and that the areas of the cortex are multisensory. (Ghazanfar and Schroeder, 2006) The primary cortex receives the information, the secondary cortex processes it, and it is then passed on to the hippocampus.

The hippocampus

The hippocampus plays a key role in information processing. It is a bridge, the managing centre for information. The hippocampus takes in information, identifies and organizes it. It has three main functions: forming new memories, learning and contributing to emotional processing. (Tyng, Amin, Saad, and Malik, 2017)

When talking about the hippocampus, I find it vital to also mention the medial prefrontal cortex (mPFC), because the two are often mentioned together. However, I will only briefly touch on the subject of the medial prefrontal cortex. Its key role is not in processing information or storing memories; rather, it works with already existing memories. It abstracts and generalizes, updates memory models and activates them when relevant. (Schlichting and Preston, 2015)

We will return to the role of the hippocampus in forming memories when discussing the phases of memory formation, as it is involved in both encoding and memory consolidation. It will also come up when we discuss the process of learning.

Memory

After processing information and turning it into knowledge, we certainly want to store it somewhere so we can use it later. It is evident that newly processed information is not stored independently of other pieces of information (Van Kesteren and Meeter, 2020); that would be very inefficient. Instead, it is put into the context of previously learned facts and stored as part of a schema. (Van Kesteren et al., 2012)

Types of memory

Long-term and short-term memory

When talking about memory types, the most familiar concept is the distinction between long-term and short-term memory. Everyone probably has a rough idea of what the main difference is; however, not all scientists agree on it. One frequently mentioned related concept is working memory, which we will talk about shortly.

After information passes through our sensory store, it is first held in short-term memory. (Taylor and Workman, 2021) Here, only a limited amount of information can be kept. Miller (1956) proposed that around seven items can be comfortably held in short-term memory. If we want to keep this information in mind, we have to rehearse it.

By rehearsing and repeating information for long enough and giving it meaning, we move it to long-term memory. Once it is stored there, it can last a lifetime. Long-term memory also has an essentially unlimited capacity, which makes it extremely powerful. (Atkinson and Shiffrin, 1968)

Working memory was first introduced by Baddeley and Hitch in 1974. It is generally considered part of short-term memory. Working memory allows us to use just-received information to solve complex problems and creates context for them. After hearing a list of words, for example, working memory helps us manipulate them into a meaningful sentence.

Implicit and explicit memory

Another important categorization distinguishes implicit and explicit memory. Implicit memory covers learning that happens spontaneously. (Velez Tuarez et al., 2019) The learner usually did not intend to learn anything and is not even aware that they are learning. This could involve walking or recognizing the meanings of new words from context.

On the other hand, explicit memory occurs with the intention of learning. The key word here is consciousness. Learning and recalling information must be conscious. Examples of explicit learning are learning in schools or other institutions and remembering specific events.

There are many other classifications of memory. Unfortunately, we cannot cover every one of them. Our next focus will be the process of storing information.

Storing information as memories

After we receive a piece of information, the process of storing begins. (Paller and Wagner, 2002) The hippocampus takes the specific details about the thought, event or information which are important for its storage and creates a memory out of them. (Tulving, 1972) These are the details we later remember about the memory.

Phases of memory formation

Finally, I would like to discuss four key phases which form memories. These are known as encoding, consolidation, retrieval and forgetting.

Encoding

Encoding is the first phase required to form a memory. After our brain receives information, encoding is there to prepare the material for the process of storing. (Mujawar et al., 2021) Atkinson and Shiffrin (1968) propose in their multi-store model that it is necessary for encoding to happen if we want to transfer our memories from short-term to long-term memory.

There are several types of encoding, including semantic, visual and acoustic coding. Visual encoding stores the information as an image, acoustic as a sound, and semantic through its meaning. (tutor2u, 2021) Short-term memory usually relies on acoustic coding, whereas long-term memory typically uses semantic coding.

I also find it critical to mention that we only encode the pieces of information that we focus on and that are new to us. (Christensen et al., 2011) If we do not pay much attention to a certain piece of information, our brain may not encode it at all.

According to Guskjolen and Cembrowski (2023), during encoding neurons increase their excitability, which makes them more likely to be recruited into the memory trace. The neurons with the highest excitability then form a coordinated group called a neuronal ensemble. (Carrillo-Reid & Yuste, 2020) They become synchronized and fire together even without any external stimulus, representing the memory.

Consolidation

As I already mentioned, short-term memory does not last forever and can therefore be unreliable. Memory consolidation helps us transform memories from short-term to long-term. (Guskjolen and Cembrowski, 2023) This happens through a process called synaptic consolidation. (Alberini, 2009)

At this point of memory formation, important information is stored in long-term memory and insignificant information may be lost. Information can gain strength, for instance, by being recalled often or during sleep.

During sleep, our hippocampus replays the neural activity patterns it created while we were learning, which helps strengthen the synapses. A memory gains strength, but it can also lose some of its details. (Dudai, Karni and Born, 2015) This is why we do not necessarily remember every single detail of each event in our lives, even if it happened just a couple of days ago. A memory can also change after we store a different memory that is linked to it.

Memories dependent on the hippocampus consolidate within hours, but memories dependent on the medial prefrontal cortex usually take weeks to consolidate. (Kitamura et al., 2017) The mechanisms which support the latter are naturally more complex. One way to describe them is the indexing theory, in which the hippocampus forms an index pointing towards each stored pattern. (Guskjolen and Cembrowski, 2023)

This is tightly linked to the second type of consolidation, called systems consolidation. This is the process of reorganizing memories from being dependent on the hippocampus to being dependent on the mPFC, which helps generalize and form schemas. (Wiltgen and Silva, 2007) The process can take anywhere from days to years. According to the consolidation model, after letting our memories consolidate during sleep or rest, they can be accessed without the use of the hippocampus. (Squire and Bayley, 2007)

Retrieval

Encoding and consolidation help us store memories; retrieval helps us access them. In simple terms, we are recreating the neural patterns which were present during learning.

The encoding specificity principle says that memory retrieval is successful if the context in which the memory was retrieved is similar to the context in which the memory was originally created. (Tulving and Thomson, 1973)

Forgetting

As the last step of memory formation, forgetting does not always occur, but when it does, it can be both passive and active. (Hardt, Nader and Nadel, 2013) Because of synaptic plasticity, some of the stored information can be lost. Unfortunately, this process is inevitable.

As the last part of the chapter on memory, I would like to propose some interesting concepts related to storing memories.

Sequence memory

Sequence memory, generally known as the ability to remember lists of objects or items in a specific order, plays an important role in learning and remembering for both children and adults. (Martinelli, n.d.) If you remember your phone number or a password, you probably used sequence memory to memorise it. What is more, sequence memory helps us with reading, spelling, calculation, writing and speaking; everything with steps to follow relies on sequence memory.

Interestingly, not every human has the same ability to remember sequences. As Martinelli notes, people with ADHD or dyslexia, for instance, have a hard time remembering ordered lists. Moreover, research suggests that humans may be the only species with this kind of sequence memory. (Lind et al., 2023; Zhang et al., 2022)

Sequence memory is vital for pattern recognition and prediction. (Hawkins et al., 2009) The processes which enable us to store memories in hierarchical sequences take place in the neocortex.

Hawkins, George and Niemasik describe the hierarchical temporal memory (HTM) theory as a model of how the neocortex learns sequences. The neocortex is proposed to be a hierarchy of regions where lower levels store fast-changing predictions and higher levels store more stable patterns. When we process some word, lower layers activate possible next words with different probabilities. In biological terms, this means that the neurons for the most likely next event fire before it even happens. More context from previous words increases the chance of predicting the next word correctly.
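HTM itself is far more elaborate, but the basic intuition of predicting likely next elements from stored sequences can be sketched with a much simpler stand-in: a first-order (bigram) transition table. The sentences below are made up, and a plain transition table is of course not how the neocortex, or HTM, actually represents sequences.

```python
from collections import Counter, defaultdict

# A toy stand-in for sequence memory: count which word follows which,
# then use the counts to assign probabilities to possible next words.
transitions = defaultdict(Counter)

def learn(sequence):
    """Record how often each word is followed by each other word."""
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1

def predict_next(word):
    """Return possible next words with their estimated probabilities."""
    counts = transitions[word]
    total = sum(counts.values())
    return {nxt: c / total for nxt, c in counts.items()} if total else {}

for sentence in ["the cat sat on the mat", "the cat ran", "the dog sat"]:
    learn(sentence.split())

print(predict_next("the"))  # e.g. {'cat': 0.5, 'mat': 0.25, 'dog': 0.25}
print(predict_next("cat"))  # {'sat': 0.5, 'ran': 0.5}
```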

Memory errors

As I already mentioned, our brains are not always faultless. Sometimes we remember seeing a giraffe in a picture from the zoo when it is in fact not there at all. Our brains will sometimes even refuse to acknowledge that something very unusual happened, for instance on our commute to work: if we use the same route daily, it is hard to believe that on one specific day we took the underground instead of our usual bus.

Predictive coding

Predictive coding is a theory which suggests that if our brain gathers enough information or context about a certain situation, it tends to predict what will happen next. (Rao and Ballard, 1999) This includes our general model of the world: it would be exhausting and inefficient to gather all information about it anew every day. Instead, we recall already known information and use it to better understand the new.

If the model of the world is still developing, it is easier to detect prediction errors. (Henson and Gagnepain, 2010) However, as the brain gathers more consistent observations about the world, its model strengthens. If it becomes too strong, it starts being resistant to change. (Van Kesteren and Meeter, 2020) This can result in evaluating our current situation incorrectly and reacting inappropriately.
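The core of this idea fits into one line of arithmetic: the model's expectation is nudged by a fraction of the prediction error. In the minimal sketch below, the values and learning rates are purely illustrative, and a very small learning rate stands in for the "too strong," change-resistant model described above.

```python
# Minimal predictive-coding-style update: the expectation moves towards the
# observation by a fraction (the learning rate) of the prediction error.
def update(expectation, observation, learning_rate):
    prediction_error = observation - expectation
    return expectation + learning_rate * prediction_error

prior_expectation = 10.0   # built from many past, consistent observations
surprising_event = 30.0    # an unusual new observation

print(update(prior_expectation, surprising_event, learning_rate=0.5))
# 20.0 -- a still-developing, flexible model shifts a lot
print(update(prior_expectation, surprising_event, learning_rate=0.05))
# 11.0 -- an entrenched model barely moves, i.e. it resists change
```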

False memories

The definition of false memories, like that of almost every neuroscientific term, is still being debated. As I see it, false memories are any memories that our brain created without the remembered events ever happening. They either happened with different details, such as time and location, or they did not happen at all; they could also have happened much as we remember them but with a significant detail missing from our memory. This problem is thought to arise mainly during the reconstruction and recall of a memory. (Schacter, 2012)

A great example of false memories is the Deese-Roediger-McDermott paradigm. (Roediger and McDermott, 1995) Participants in this experiment received a list of words to memorise, such as “snow” and “cold.” When asked to recall the items on the list, they would often mention words which were not included but are highly associated with words from the list, for instance “winter.”

Similarly, when respondents received two lists of words to memorise, after being asked to name words from the first list, they would confuse them with the words written on the second list and vice versa. (Hupbach, Gomez, Hardt, and Nadel, 2007)

Learning

Simply put, learning takes information and memory and turns it into useful knowledge. But when exactly do we learn?

In the first part of this essay, I talked about knowledge formation and covered the four scenarios that might occur when making a decision. In the last two cases, we did not have our own information to help us decide which car to buy. After buying one based on external help, we discovered whether it was the right choice or not. If we were to buy another car, we would already know how to choose.

This is what learning can look like. We are unable to do something. We observe, imitate, try, reflect, discover the right way to do it. Suddenly, we have learned the skill. We connected new information to what we already know and created a network. (Schlichting and Preston, 2015)

When obtaining information, our ability to learn it and be able to use it later fully depends on understanding and storing it correctly. (Organisation for Economic Co‑operation and Development, 2011) There are other aspects influencing learning such as motivation, environment or emotions. Zhu et al. (2022) say that when a certain topic is more emotionally significant than other topics, it is easier for us to remember it.

When talking about learning, we will not go into much detail on different styles of learning and their effectiveness, even though I find the studies covering this topic fascinating and recommend reading some of them.

Synaptic plasticity plays a key role in learning. If some cell A is repeatedly involved in firing cell B, their connection strengthens. (Magee and Grienberger, 2020) This allows us to associate the two cells with each other and explains why we learn better through repetition.
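This co-activation rule is often summarised as "cells that fire together wire together." A minimal Hebbian sketch, with made-up activity patterns and learning rate, shows how repetition alone drives the strengthening:

```python
# Toy Hebbian learning: the weight between cells A and B grows only on the
# trials where both are active, so repeated co-activation strengthens the link.
learning_rate = 0.1
weight_ab = 0.0

trials = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 1)]  # (activity of A, activity of B)

for a, b in trials:
    weight_ab += learning_rate * a * b  # increment only when both fire

print(round(weight_ab, 1))  # 0.3 -- three co-activations, three increments
```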

Parallels between the brain and information systems

The processes neuroscience studies are extremely complex, and uncovering them has been one of the crucial aspects of understanding how humans work. Their complexity and interconnection have inspired many of the schemas used in information systems.

Information systems often need to process and transform information, something we have already discussed in the context of the human brain. Since the aim is to achieve similar results, we take advantage of what we already know about the brain to help us design systems in computer technology. Brain-like models are meant to copy various functions and schemas of the brain in order to accomplish tasks similar to those the brain can perform. (Ou et al., 2022)

With the extremely fast evolution of technology, this can also work the other way around: we can use new discoveries and inventions in information systems to help us understand processes in the brain in more detail.

The idea of brain-like computers was first raised by Turing and Shannon, even before modern computers were built. (Hodges, 1992) This theoretical debate did not receive much attention at the time, mainly because the field was not nearly as developed as it is nowadays.

Further, I would like to discuss some of the areas of information systems where inspiration was taken from how the brain functions.

Human memory as a database

The key similarity between memory and a database is evident: they both store and retrieve information. A database, just like the brain, is organised and hierarchical so it can retrieve information quickly. (Goult, 2021) We could compare memory storage in the cortex to a complex database; using this logic, the hippocampus would be the data management system. Memory recall is similar to retrieval in databases, and recalling a memory in fact involves reconstructing it. (Baker, 2012) The process of encoding happens in both the brain and a database.
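A toy version of the analogy, with invented memories and cues, might look as follows: the "cortex" is a plain store of contents, while a hippocampus-like index maps contextual cues to the memories they point to (loosely echoing the indexing theory mentioned earlier).

```python
# Hypothetical memory "database": content lives in one store, and a separate
# index maps contextual cues to the memories associated with them.
cortex = {}             # memory_id -> content
hippocampal_index = {}  # cue -> set of memory_ids

def encode(memory_id, content, cues):
    """Store the content and register every contextual cue in the index."""
    cortex[memory_id] = content
    for cue in cues:
        hippocampal_index.setdefault(cue, set()).add(memory_id)

def retrieve(cue):
    """Follow the index from a cue back to the stored memories."""
    return [cortex[mid] for mid in hippocampal_index.get(cue, set())]

encode("m1", "first day at university", cues=["autumn", "lecture hall", "nervous"])
encode("m2", "statistics exam", cues=["lecture hall", "nervous"])

print(retrieve("lecture hall"))  # both memories share this contextual cue
```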

As we have already proposed, pieces of information stored in the brain are not isolated; they are connected to each other. Context plays an important role in storing information and can even shift its meaning. Overall, in the human brain, everything depends, even if just slightly, on the individual person. By contrast, databases can exist without any context whatsoever.

Another aspect they have in common is the limit of short-term memory. I have already briefly mentioned Miller’s “seven plus or minus two” rule when discussing memory itself: he says we can hold about seven plus or minus two items in short-term memory. Just like our brain, databases also have capacity limits.

Our memory and a database might seem very similar. However, the aspects of context and subjectivity of interpretation play a huge role. Databases are purely factual, without any generalisation, false interpretation or other errors. As a result, the analogy cannot be used in every context.

The computer memory

I will use the multi-store model to describe the analogy between short-term and long-term memory in the brain and computer memory.

Multi-store model

The multi-store model in the brain describes three areas of memory storage. (McLeod, 2025) Information can move between these through retrieval, rehearsal or other processes.

Short-term memory holds information for a short period of time. In a computer, the analogue is called random access memory (RAM). (Kutsokon, 2021) It stores the data that is actively in use, which allows us to quickly access any information we need, but it does not hold the data permanently.

Long-term memory holds memories that may be months or even years old, and it can also store memories indefinitely. We compare our long-term memory to a computer’s hard drive, which stores information and can hold it for a long period of time. One difference here is that a hard drive does not change or alter this information over time. (Université de Montréal, McGill Centre for Integrative Neuroscience, n.d.) Our long-term memory, on the other hand, is continually reconstructed and altered in light of new discoveries and other changes.

Lastly, sensory memory captures evidence of the environment around us. It helps us create a full picture of the moment, so its traces last only about a second. (McLeod, 2025) The information may or may not be processed further. This is what input buffers do in computers: they hold an input and decide whether or not to pass it further into the system. An input buffer can serve, for instance, our keyboard or microphone. In both sensory memory and input buffers, a large part of the information received is discarded because it is considered irrelevant.
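Putting the three stores together, a toy simulation of the multi-store analogy could look like this. The capacities, items and attention rule are illustrative only; the point is that unattended input is dropped at the sensory stage, short-term memory holds just a handful of items, and rehearsal is what moves an item into long-term storage.

```python
from collections import deque

SHORT_TERM_CAPACITY = 7  # Miller's "seven, plus or minus two"

short_term = deque(maxlen=SHORT_TERM_CAPACITY)  # oldest items fall out
long_term = set()                               # effectively unlimited

def sense(stimulus, attended):
    """Sensory store: only attended stimuli are passed on, the rest is discarded."""
    if attended:
        short_term.append(stimulus)

def rehearse(stimulus):
    """Rehearsal transfers an item from short-term to long-term memory."""
    if stimulus in short_term:
        long_term.add(stimulus)

for item in ["phone number", "ad jingle", "name of a colleague"]:
    sense(item, attended=(item != "ad jingle"))  # the jingle is ignored

rehearse("name of a colleague")
print(list(short_term), long_term)
```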

Processing information and data

In the second part of this essay, I described the whole procedure of receiving and processing information in the brain. A similar process in computers is called the computing cycle. (ITU Online IT Training, 2024) This cycle consists of four key stages, which we will go through and compare to the corresponding processes in the brain.

Input

The stage of input detects raw data through different sensors. It is our source of information. The parts of a computer responsible for input are, for instance, the keyboard, the camera, the microphone or the mouse. Receiving an input happens when we type on the keyboard, record a video or click a link. Without it, we would not be able to process any information.

Similarly, the brain uses different sensory systems to detect data; our body does this, for example, through our eyes, ears or skin. Just like computers, it gathers information from the surrounding environment. They differ in the portion of data they let in: computers record every input they receive, whereas our brains filter information based on importance. Attention and predictive coding also play a role in which information we process further.

Processing

The stage of processing takes the raw data and turns it into meaningful information which can be later used. The central processing unit is responsible for giving sense to this data. It uses a certain program to compute this.

In the brain, processing takes place in many of its parts; neural circuits are associated with transforming and interpreting inputs. As I said, the difference here is that processing in the brain happens in multiple areas, not just one. It is also more complex, since information can be altered over time as more context is provided. Prior knowledge is also used in information processing, and integrating the information into an already existing system is key.

Output

The output of a computer communicates the outcome of the whole process to the user. It can be shown, for instance, on a monitor, heard from a speaker or printed on a printer. The brain’s response is communicated differently: it does not need to display the information to us. Sometimes the output takes the form of a behaviour, a decision or a thought.

Storage

We have already talked about storing data when discussing databases and computer memory. Storing data means saving it for later use. Computers use hard drives, RAM or cloud storage. The parallels in the brain have already been discussed above.
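As a small illustration, the four stages can be strung together in a few lines; the task (counting words in a note) and the names are made up, and real systems are of course far more elaborate.

```python
# Toy computing cycle: input -> processing -> output -> storage.
storage = {}  # stands in for a hard drive or database

def computing_cycle(key, raw_text):
    data = raw_text                   # 1. input: raw, unprocessed data
    result = len(data.split())        # 2. processing: derive something meaningful
    print(f"{key}: {result} words")   # 3. output: communicate the result
    storage[key] = result             # 4. storage: keep it for later use
    return result

computing_cycle("note-1", "data becomes information only with context")
print(storage)  # {'note-1': 6}
```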

Neural networks and artificial neural networks

With the rise of artificial intelligence, many obstacles have appeared. The classical computer architecture is suddenly not powerful and efficient enough to deal with such complex and unstructured data. (Ou et al., 2022) The problem partly arises from the need to constantly move data back and forth between memory and the processor. There is hope that neural network systems will help us overcome these obstacles.

Even though artificial neural networks (ANNs) got their name from neurons in the human brain, there are many differences in structure and computation between ANNs and the brain. (Pham, Matsui, and Chikazoe, 2023) As a result, we cannot directly compare the two quantitatively. We will talk about the key similarities and differences based on a review written by Pham, Matsui, and Chikazoe in 2023.

The first evident similarity is between neurons and nodes. Just as the neuron is the smallest unit of the brain, the node is the smallest unit of an artificial neural network. Encoding in this context means a node predicting a neuron; conversely, decoding is a neuron predicting a node. If one of them can predict the other, they are very likely associated.

Nodes together form a layer. An artificial neural network has hidden layers, and the brain has different regions. We do not know where the exact boundaries of the brain’s regions are, which makes it difficult to compare them to the layers of an ANN, which are clearly delineated.
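To show what "nodes organised into layers" means in practice, here is a minimal feed-forward sketch in plain Python. The weights, biases and input are made up, and nothing here is trained; it only illustrates that each node computes a weighted sum followed by a nonlinearity, and that a layer is simply a group of such nodes sharing the same inputs.

```python
import math

def node(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs passed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_matrix, biases):
    """A layer is just a list of nodes that all see the same inputs."""
    return [node(inputs, w, b) for w, b in zip(weight_matrix, biases)]

x = [0.5, -1.0]                                            # input "stimulus"
hidden = layer(x, [[0.8, 0.2], [-0.4, 0.9]], [0.0, 0.1])   # hidden layer, 2 nodes
output = layer(hidden, [[1.0, -1.0]], [0.0])               # output layer, 1 node

print(output)
```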

Lastly, we could compare the behavioural level. That means finding similarities and differences between human performance and artificial network outputs. Some of the measures are error patterns or reaction times. (Spoerer, Kietzmann, Mehrer, Charest, and Kriegeskorte, 2020; Mnih et al., 2013)

The human brain and computers are partly similar because developers were inspired by the brain’s complex schemas and processes when creating information systems. This is a huge advantage: we can discover new properties of the brain through what we know about computer systems and vice versa. However, we must not forget the differences between them, which are often crucial.

The effects of digital technology on the human brain

Lastly, I would like to discuss the influence of digital technology on information processing and learning. We live in an era in which the use of digital media is expanding enormously. It can be used to our advantage; unfortunately, its impact on our health can also be unpleasant.

Attention and multitasking

Attention is not limitless. We can only focus on one task for a specific amount of time, which depends on many factors, for instance motivation. (Oberauer, 2019) This becomes problematic when we overuse digital technology. Being distracted has been normalised: we get so used to being distracted that we no longer know how to stay focused on one task for a longer period of time. Firth et al. (2019) even suggested that we are transitioning from the information age to the age of interruption.

Distractions lead to performance decline. (Farkaš, 2024) Even if we are not using our phones, just their presence worsens our performance. (Thornton et al., 2014)

Multitasking is tightly associated with attention issues. When we try to focus on many things at once, we often end up not properly focusing on any of them. With the invention of digital technology, especially laptops and smartphones, multitasking is now easier than ever; it feels almost natural to us.

However, working on two tasks simultaneously leads to worse performance and slower response times. (Drody, Pereira and Smilek, 2025) With increased difficulty of tasks, performance decreases. The similarity of the tasks also influences how slow our transition between them is.

Multitasking is often associated with poor decision making (Müller et al., 2021), bad time management and high impulsivity. (Yang and Zhu, 2016) What is more, studies have shown that multitasking can result in greater depression, anxiety and decreased self-esteem. (Becker et al., 2013)

Memory

As I already mentioned, the use of digital technology can lead to performance decline. In the modern world, we feel we do not have to remember anything, since everything can be found on the internet. The need to store information is low and the depth of our learning is weakened. (Farkaš, 2024) As a result, our cognitive abilities are declining, retrieval practice is reduced, and the hippocampus is being used less.

Information overload

Information overload describes a situation in which the amount of data that needs to be processed in a limited time exceeds our capacity. Similarly to other problems with digital technology, it is associated with increased stress and a decreased capability for efficient decision making. (Shi et al., 2020)

Being surrounded by so many facts without having enough time to process them can also be tiring mentally and physically. It affects our cognitive capacity, which results in fragmented focus. (Wang, Zhao and Yu, 2025)

The overwhelming amount of information available can also cause problems in decision making: we see so many choices that we do not know which one to pick. Eppler and Mengis (2004) would call this decision paralysis.

As we can see, the effects associated with digital technology are intertwined. Most of the problems caused by the use of digital technology affect children more than adults. This is mainly because the developing brain has higher plasticity (Hensch and Bilimoria, 2012), which means children’s brains can easily adapt to the world of digital technologies, where they do not need to remember as much information or pay attention to one task for very long.

New forms of learning

E-learning, also known as learning through digital technology such as our phones or computers, brings new aspects to learning. (Clark and Mayer, 2016) The most evident advantage is that we can learn anywhere and anytime. We can also combine different types of media, such as videos, written text or visualizations, which enhances remembering and understanding. With e-learning, flexibility and individuality in learning increase. If an e-learning system is designed appropriately, meaning it takes into account how the brain processes information, it can be very effective.

Conclusion

In this essay, I have explained the differences between data, information and knowledge. I covered the essential processes that happen in the brain during information processing, discussing the role of the hippocampus, memory formation and learning. Then, I connected these observations with information systems, finding parallels with the functioning of the human brain. Lastly, we dived deeper into the effects the modern world and its digital technology have on humans, especially on our ability to process information, which is something everyone should be aware of, especially when using digital technology daily.

The human brain is an extremely complex organ and neuroscientists have come a long way in explaining its functioning. However, there are still many undiscovered areas, which makes it one of the fields of biology we should focus on more. With the rise of information and communication technologies, the fascinating processes in the brain can serve as inspiration for building algorithms and information systems. Similarly, our computers help us understand the processes in the brain.

Nevertheless, we should not forget the aspects that make our brain so unique. The human brain uses context when storing and retrieving information, turning everything into one intertwined network. Our brain is still our own, influenced by our subjective perception and emotions. Its plasticity is also vital, playing a huge role in many important processes such as information storage.

To conclude, despite the effort scientists and developers make in imitating it, the human brain is unique and irreplaceable. Data is everywhere around us, but only the human brain can turn it into wisdom.

Resources

Ackoff, R. L. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16, 3–9.

Alberini, C. M. (2009). Transcription factors in long-term memory and synaptic plasticity. Physiological Reviews, 89(1), 121–145. https://doi.org/10.1152/physrev.00017.200

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89–195). Academic Press.

Baddeley, A., & Hitch, G. (1974). Working memory. In G. Bower (Ed.), The psychology of learning and motivation (pp. 47–89). Academic Press.

Baker, E. J. (2012). Biological databases for behavioral neurobiology. Behavioural Brain Research, 227(2), 293–302. https://doi.org/10.1016/j.bbr.2011.12.014

Becker, M. W., Alzahabi, R., & Hopwood, C. J. (2013). Media multitasking is associated with symptoms of depression and social anxiety. Cyberpsychology, Behavior, and Social Networking, 16(2), 132–135.

Boundless. (n.d.). 35.8: How neurons communicate – Synaptic plasticity. Biology LibreTexts. https://bio.libretexts.org/Bookshelves/Introductory_and_General_Biology/General_Biology_(Boundless)/35%3A_The_Nervous_System/35.08%3A__How_Neurons_Communicate_-_Synaptic_Plasticity

Byrne, J. H. (2017, September 17). Introduction to neuroscience [Video lecture]. McGovern Medical School, UTHealth Houston. http://nba.uth.tmc.edu/neuroscience/

Carrillo-Reid, L., & Yuste, R. (2020). What is a neuronal ensemble? In Oxford Research Encyclopedia of Neuroscience. Oxford University Press. https://doi.org/10.1093/acrefore/9780190264086.013.207

Chaffey, D., & Wood, S. (2005). Business information management: Improving performance using information systems. FT Prentice Hall.

Christensen, T. A., D’Ostilio, K., Marinkovic, K., & Halgren, E. (2011). Modulating the focus of attention for spoken words at encoding. Journal of Cognitive Neuroscience, 23(12), 4066–4078.

Clark, R. C., & Mayer, R. E. (2016). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (4th ed.). Wiley.

Dantzig, S. A. (2025, September 17). Sensory processing explained: How the brain interprets information and why it matters [Blog post]. United Cerebral Palsy. https://ucp.org/sensory-processing-explained-how-the-brain-interprets-information-and-why-it-matters/

Drody, A. C., Pereira, E. J., & Smilek, D. (2025). Attention in our digital ecosystem: The five interactive components that drive media multitasking. Psychonomic Bulletin & Review, 32(4), 2454–2471. https://doi.org/10.3758/s13423-025-02722-5

Dudai, Y., Karni, A., & Born, J. (2015). The consolidation and transformation of memory. Neuron, 88(1), 20–32. https://doi.org/10.1016/j.neuron.2015.09.004

Eppler, M., & Mengis, J. (2004). The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. The Information Society, 20, 325–344. https://doi.org/10.1080/01972240490507974

Farkaš, I. (2024). Transforming cognition and human society in the digital age. Cognitive Studies. https://doi.org/10.1007/s13752-024-00483-3

Firth, J., Torous, J., Stubbs, B., Vancampfort, D., Cleare, A., Langan, C., Malouf, P., & Sarris, J. (2019). The online brain: How the internet may be changing our cognition. World Psychiatry, 18(2), 119–129. https://doi.org/10.1002/wps.20617

Frické, M. (2019). The knowledge pyramid: The DIKW hierarchy. Knowledge Organization, 46(1), 33–46. https://doi.org/10.5771/0943-7444-2019-1-33

Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10(6), 278–285.

Goult, B. T. (2021). The mechanical basis of memory – the MeshCODE theory. Frontiers in Molecular Neuroscience, 14, Article 592951. https://doi.org/10.3389/fnmol.2021.592951

Guskjolen, A., & Cembrowski, M. S. (2023). Engram neurons: Encoding, consolidation, retrieval, and forgetting of memory. Molecular Psychiatry, 28(8), 3207–3219. https://doi.org/10.1038/s41380-023-02137-5

Hardt, O., Nader, K., & Nadel, L. (2013). Decay happens: The role of active forgetting in memory. Trends in Cognitive Sciences, 17(3), 111–120. https://doi.org/10.1016/j.tics.2013.01.001

Hawkins, J., George, D., & Niemasik, J. (2009). Sequence memory for prediction, inference and behaviour. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1203–1209. https://doi.org/10.1098/rstb.2008.0322

Hensch, T., & Bilimoria, P. (2012). Re-opening windows: Manipulating critical periods for brain development. Cerebrum, 11.

Henson, R. N., & Gagnepain, P. (2010). Predictive, interactive multiple memory systems. Hippocampus, 20(11), 1315–1326. https://doi.org/10.1002/hipo.20857

Hodges, A. (1992). Alan Turing: The enigma. Vintage.

Hupbach, A., Gomez, R., Hardt, O., & Nadel, L. (2007). Reconsolidation of episodic memories: A subtle reminder triggers integration of new information. Learning & Memory, 14(1–2), 47–53. https://doi.org/10.1101/lm.365707

ITU Online IT Training. (2024, October 7). The four stages of the computing cycle: How computers process data. ITU Online. https://www.ituonline.com/blogs/the-four-stages-of-the-computing-cycle-how-computers-process-data/

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5), 1063–1070. https://doi.org/10.1109/TNN.2004.832719

Kandel, E. R., Koester, J. D., Mack, S. H., & Siegelbaum, S. A. (2013). Principles of neural science (5th ed.). McGraw-Hill.

Kayser, C., & Logothetis, N. K. (2007). Do early sensory cortices integrate cross-modal information? Brain Structure and Function, 212, 121–132. https://doi.org/10.1007/s00429-007-0154-0

Kell, A. J., & McDermott, J. H. (2019). Deep neural network models of sensory systems: Windows onto the role of task constraints. Current Opinion in Neurobiology, 55, 121–132. https://doi.org/10.1016/j.conb.2019.03.006

Kitamura, T., Ogawa, S. K., Roy, D. S., Okuyama, T., Morrissey, M. D., Smith, L. M., Masuho, I., McHugh, T. J., & Tonegawa, S. (2017). Engrams and circuits crucial for systems consolidation of a memory. Science, 356(6333), 73–78. https://doi.org/10.1126/science.aam6808

Kutsokon, N. (2021, March 1). Databases: How data is stored on disk. DEV Community. https://dev.to/nikita_kutsokon/databases-how-data-is-stored-on-disk-22k8

Lind, J., Vinken, V., Jonsson, M., Ghirlanda, S., & Enquist, M. (2023). A test of memory for stimulus sequences in great apes. PLOS ONE, 18(9), Article e0290546. https://doi.org/10.1371/journal.pone.0290546

Magee, J. C., & Grienberger, C. (2020). Synaptic plasticity forms and functions. Annual Review of Neuroscience, 43(1), 95–117. https://doi.org/10.1146/annurev-neuro-090919-022842

Martinelli, K. (n.d.). Understanding sequential memory: Challenges and strategies for support. Understood. Retrieved November 16, 2025, from https://www.understood.org/en/articles/sequential-memory-challenges-strategies

McLeod, S. A. (2025, May 19). Multi-store memory model: Atkinson and Shiffrin. Simply Psychology. https://www.simplypsychology.org/multi-store.html

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.

Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. A. (2013). Playing Atari with deep reinforcement learning. arXiv. https://arxiv.org/abs/1312.5602

Mujawar, S., Patil, J., Chaudhari, B., & Saldanha, D. (2021). Memory: Neurobiological mechanisms and assessment. Industrial Psychiatry Journal, 30(Suppl 1), S311–S314. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8611531/

Müller, S. M., Schiebener, J., Brand, M., & Liebherr, M. (2021). Decision-making, cognitive functions, impulsivity, and media multitasking expectancies in high versus low media multitaskers. Cognitive Processing, 22(4), 593–607.

National Institute of General Medical Sciences. (2024). What is a neurotransmitter? https://nigms.nih.gov/biobeat/2024/08/what-is-a-neurotransmitter/

Oberauer, K. (2019). Working memory and attention—A conceptual analysis and review. Journal of Cognition. https://doi.org/10.5334/joc.58

Organisation for Economic Co-operation and Development. (2011). Tuning-AHELO conceptual framework of expected and desired learning outcomes in economics (OECD Education Working Paper No. 59). OECD. https://one.oecd.org/document/EDU/WKP%282011%295/en/pdf

Ou, W., Xiao, S., Zhu, C., Han, W., & Zhang, Q. (2022). An overview of brain-like computing: Architecture, applications, and future trends. Frontiers in Neurorobotics, 16, Article 1041108. https://doi.org/10.3389/fnbot.2022.1041108

Paller, K. A., & Wagner, A. D. (2002). Observing the transformation of experience into memory. Trends in Cognitive Sciences, 6(2), 93–102. https://doi.org/10.1016/S1364-6613(00)01845-3

Pham, T. Q., Matsui, T., & Chikazoe, J. (2023). Evaluation of the hierarchical correspondence between the human brain and artificial neural networks: A review. Biology, 12(10), Article 1330. https://doi.org/10.3390/biology12101330

Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87. https://doi.org/10.1038/4580

Roediger, H. L., & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 803–814. https://doi.org/10.1037/0278-7393.21.4.803

Rowley, J. (2007). The wisdom hierarchy: Representations of the DIKW hierarchy. Journal of Information Science, 33(2), 163–180. https://doi.org/10.1177/0165551506070706

Schacter, D. L. (2012). Adaptive constructive processes and the future of memory. American Psychologist, 67(8), 603–613. https://doi.org/10.1037/a0029869

Schiera, G., Di Liegro, C. M., & Di Liegro, I. (2020). Cell-to-cell communication in learning and memory: From neuro- and glio-transmission to information exchange mediated by extracellular vesicles. International Journal of Molecular Sciences, 21(1), Article 266. https://doi.org/10.3390/ijms21010266

Schlichting, M. L., & Preston, A. R. (2015). Memory integration: Neural mechanisms and implications for behavior. Current Opinion in Behavioral Sciences, 1, 1–8. https://doi.org/10.1016/j.cobeha.2014.07.005

Shi, C., Yu, L., Wang, N., Cheng, B., & Cao, X. (2020). Effects of social media overload on academic performance: A stressor–strain–outcome perspective. Asian Journal of Communication, 30(2), 179–197. https://doi.org/10.1080/01292986.2020.1748073

Spoerer, C. J., Kietzmann, T. C., Mehrer, J., Charest, I., & Kriegeskorte, N. (2020). Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision. PLoS Computational Biology, 16, Article e1008215. https://doi.org/10.1371/journal.pcbi.1008215

Squire, L. R., & Bayley, P. J. (2007). The neuroscience of remote memory. Current Opinion in Neurobiology, 17, 185–196.

Sweller, J. (2020). Cognitive load theory and educational technology. Educational Technology Research and Development, 68, 1–16. https://doi.org/10.1007/s11423-019-09701-3

Taylor, S., & Workman, L. (2021). Cognitive psychology: The basics. Taylor & Francis Group.

Thornton, B., Faires, A., Robbins, M., & Rollins, E. (2014). The mere presence of a cell phone may be distracting: Implications for attention and task performance. Social Psychology, 45, 479–488. https://doi.org/10.1027/1864-9335/a000216

Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80(5), 352–373. https://doi.org/10.1037/h0033455

tutor2u. (2021, March 22). Coding & encoding. Tutor2u. https://www.tutor2u.net/psychology/reference/coding-encoding

Tyng, C. M., Amin, H. U., Saad, M. N. M., & Malik, A. S. (2017). The influences of emotion on learning and memory. Frontiers in Psychology, 8, Article 1454. https://doi.org/10.3389/fpsyg.2017.01454

Université de Montréal, McGill Centre for Integrative Neuroscience. (n.d.). Tool module – The human brain from top to bottom: L’outil bleu 05 [Web page]. The Brain from Top to Bottom. Retrieved November 22, 2025, from https://thebrain.mcgill.ca/flash/capsules/outil_bleu05.html

University of Utah Genetic Science Learning Center. (n.d.). Sensory systems work together. Learn.Genetics. https://learn.genetics.utah.edu/content/senses/worktogether/

Valadez-Godínez, S., Sossa, H., & Santiago-Montero, R. (2020). On the accuracy and computational cost of spiking neuron implementation. Neural Networks, 122, 196–217. https://doi.org/10.1016/j.neunet.2019.09.026

Van Kesteren, M. T. R., & Meeter, M. (2020). How to optimize knowledge construction in the brain. NPJ Science of Learning, 5(1), Article 5. https://doi.org/10.1038/s41539-020-0064-y

Van Kesteren, M. T., Ruiter, D. J., Fernández, G., & Henson, R. N. (2012). How schema and novelty augment memory formation. Trends in Neurosciences, 35(4), 211–219. https://doi.org/10.1016/j.tins.2012.02.001

Velez Tuarez, M. A., Zamora Delgado, R. I., Torres Teran, O. V., & Moya Martine, M. E. (2019). The brain and its role on learning process. International Journal of Physical Sciences and Engineering, 3(2), 27–33. https://doi.org/10.29332/ijpse.v3n2.326

Wang, X., Zhao, X., & Yu, C. (2025). The influence of information and social overload on academic performance: The role of social media fatigue, cognitive depletion, and self-control. Revista de Psicodidáctica (English ed.), 30(2). https://doi.org/10.1016/j.psicoe.2025.500164

Wiltgen, B. J., & Silva, A. J. (2007). Memory for context becomes less specific with time. Learning & Memory, 14(5), 313–317. https://doi.org/10.1101/lm.430707

Yang, X., & Zhu, L. (2016). Predictors of media multitasking in Chinese adolescents. International Journal of Psychology, 51(6), 430–438.

Zhang, H., Zhen, Y., Yu, S., Long, T., Zhang, B., Jiang, X., Li, J., Fang, W., Sigman, M., Dehaene, S., & Wang, L. (2022). Working memory for spatial sequences: Developmental and evolutionary factors in encoding ordinal and relational structures. Journal of Neuroscience, 42(5), 850–864. https://doi.org/10.1523/JNEUROSCI.0603-21.2021

Zhu, Y., Zeng, Y., Ren, J., Zhang, L., Chen, C., Fernández, G., & Qin, S. (2022). Emotional learning retroactively promotes memory integration through rapid neural reactivation and reorganization. eLife, 11, Article e60190. https://doi.org/10.7554/eLife.60190
