Triple Helix

Working hard or hardly working? How cognitive science can explain your failure to multitask

Written by Nicolas Kim '26

Edited by Surya Khatri '24

Image: Generalized depiction of the brain’s cognitive functions (Source: Atlassian)

Ever tried to convince yourself that you can watch TV and do a homework assignment at the same time? Take a call while shopping and talking to your friends? Cram a math problem set in the hour before it is due? If so, you – and countless others – are no strangers to the woeful limitations of multitasking and your own concentration. Yet most of us have come to accept our failure to multitask and cram as a simple fact of life, seeking no further explanation beyond our own experiences.

Now, advancements in cognitive science have revealed that there is more to our failure to multitask, or to focus on a task, than individual unproductivity. Rather, research suggests that the constraints on our capacity for cognitive control – our ability to dedicate focused effort to a task – arise from the underlying representation and processing of information in our brain [1]. These constraints can be described through the lens of two major dilemmas: first, the tradeoff between learning efficacy (how quickly we learn new things) and processing efficiency (how well we can separate out tasks and dedicate focused effort to each of them); and second, the hypothesis that limits on the amount of control we dedicate to a task arise from a bias towards switching “flexibly” between tasks.

The first dilemma is intimately tied to the way we represent tasks, and stimuli, within our brain. Executing multiple tasks at once requires us to separate the representations of those tasks (like saying a word versus pushing a button), so that control and focus can be allocated to each of them in parallel. Learning tasks quickly, on the other hand, is aided by sharing representations across tasks, which in turn demands that we switch between tasks rapidly so that the representation of one task does not interfere with the others. This rapid switching faces its own tradeoff, between cognitive stability (committing mental effort to maintaining the pursuit of one task) and cognitive flexibility (readily switching to a different task), in addition to possible interference between similar tasks. Existing cognitive theories differ on where these constraints arise – whether from an inherent limitation of a single centralized control mechanism or from multiple, more decentralized control mechanisms allocated to particular tasks – but the outstanding question of how and why we opt to share representations for learning (or not to) remains unanswered [1, 3].

As it turns out, computational studies in semantic cognition have revealed that sharing representations across objects of memory, visual stimuli, and tasks alike allows us to categorize object features in a way that facilitates learning down the road. For example, learning the association between relevant features of a tree and a flower (i.e., a shared representation of both being able to grow) allows us to generalize to questions about objects with similar, but not identical, features. Research using machine learning methods has backed these findings, showing that models trained on multiple tasks (not directly related to the task at hand, but still sharing some representational form with it) end up learning more quickly in domains ranging from computer vision to speech recognition, thus improving the efficacy of learning [1]. Yet sharing representations across tasks (and quickly dealing with them one by one) invites interference between tasks when multitasking – for example, trying to buy groceries and shop for clothes at the same time is tough in spite of their commonalities – and trades away the greater processing efficiency that could be attained by executing the tasks in parallel with separated representations.
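This interference can be illustrated with a toy sketch (the tasks, weights, and stimuli below are invented for illustration, not drawn from the cited studies): two linear “tasks” each map a stimulus to a response, either through their own separate pathway or through one shared set of hidden units. When the pathway is shared, each task’s output is contaminated by the other task’s input – the crosstalk that makes simultaneous execution costly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two invented "tasks": each is a fixed linear mapping from a
# 3-dimensional stimulus to a 3-dimensional response.
W_a = rng.normal(size=(3, 3))
W_b = rng.normal(size=(3, 3))

x_a = rng.normal(size=3)  # stimulus for task A
x_b = rng.normal(size=3)  # stimulus for task B

# Separate representations: each task has its own hidden channel,
# so performing both in parallel leaves each output intact.
out_a_separate = W_a @ x_a
out_b_separate = W_b @ x_b

# Shared representation: both stimuli land on the same hidden units,
# so each task's output is contaminated by the other's input.
h_shared = x_a + x_b
out_a_shared = W_a @ h_shared
out_b_shared = W_b @ h_shared

# Crosstalk: how far task A's output drifts from its clean response.
err_a = np.linalg.norm(out_a_shared - out_a_separate)
print(f"interference on task A under sharing: {err_a:.2f}")
```

In this sketch the error on task A under sharing is exactly the contribution of task B’s stimulus passed through task A’s weights – zero only if the two stimuli never co-occur, which is precisely why rapid serial execution (one stimulus at a time) sidesteps the problem that parallel multitasking runs into.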

The resolution to this conflict can be found in the growing computational and empirical evidence suggesting that humans favor sharing representations among tasks, even for seemingly disparate tasks like color naming and word reading, over separating representations for multitasking [1, 2]. Moreover, the aforementioned studies lend further support to this thesis: trained machine learning models show the same bias, gravitating towards the statistical regularities in the data they are fed.

So what makes cramming and multitasking inefficient? The answer lies not in a limitation of who you are as a person, but in built-in constraints on the way we allocate control to tasks. Our neural architecture is simply biased towards sharing representations across tasks (something we do when we execute tasks quickly one at a time) rather than separating them out for multitasking, and as a result – whether in class, two minutes before an exam, or in a busy grocery aisle – this is a reality we have to accept.



[1] Musslick, S., & Cohen, J. D. (2021). Rationalizing constraints on the capacity for cognitive control. Trends in Cognitive Sciences, 25(9), 757–775.

[2] Petri*, G., Musslick*, S., Dey, B., Öczimder, K., Turner, D., Ahmed, N., . . . Cohen, J. D. (2021). Topological limits to parallel processing capability of network architectures. Nature Physics, 17(5), 646–651. doi: 10.1038/s41567-021-01170-x

[3] Albers, A. M., et al. (2013). Shared representations for working memory and mental imagery in early visual cortex. Current Biology, 23, 1427–1431.


