Productive Dispersion
On cognitive debt and the battle for our attention
Today I was reading a technical post I’d bookmarked in the infinite tabs I always save to “read later”. But I saw on the terminal that a task in Claude Code had just finished. I went to answer, and while I was waiting for the response, I opened VS Code because another task had finished. By the time I figured out the next step, the terminal was asking for my approval to make some changes. I never finished the article.
My first post in this blog was about TikTok brain and digital addiction, how social media is designed to be addictive, and the daily battle to keep away from those machines of distraction. I recognised the distraction machine again, but this time the distractions feel like work; a sort of productive dispersion.
About cognitive debt and attention fragmentation
Although we don’t yet know the full effects of AI use on our brains, recent evidence suggests there may be some cognitive loss associated with the use of AI tools.
In 2025, the MIT Media Lab published a paper on the effects of using AI assistants for essay writing. In the experiment, participants were divided into three groups: one that used LLMs to write essays, one that used only a search engine, and a “brain only” group. They scanned their brain activity to assess engagement and cognitive load, finding that brain connectivity systematically decreased with the amount of external support tools. LLM users showed weaker neural connectivity and were unable to recall what they’d written. The conclusion is that while AI can improve short-run performance, it can cause persistent cognitive losses in the long run.
Another study sought to map the relationship between AI tool usage and critical thinking. The researchers found a significant negative correlation between the frequency of AI tool use and critical thinking, leading to cognitive offloading and strongly affecting young people. Cognitive offloading occurs when we delegate cognitive tasks to external tools (such as LLMs), thereby reducing our engagement in deep, reflective thinking.
Although these studies recognise the limitations of their methods, particularly regarding sample size and generalizability, attention fragmentation and the problem of delegated cognition are becoming recurring concerns. Constant notifications and the outsourcing of cognitive load can lead to superficial thinking and attention dispersion, mirroring the same mechanism as social media but disguised as productivity.
Taking action
The problem isn’t automation per se. It’s what we automate, how we do it, and what we lose in the process. For me, the response is not to stop using LLMs but to change how I use them. I started with my writing, which is what I tend to delegate the most. So I created a “Socratic development partner” skill. Instead of asking Claude to draft a text, it helps me develop my ideas further by asking follow-up questions to clarify them and shape the content I want to write. It helps me to think, not think for me.
The trade-off is obvious: less automation, more time invested. But the process of writing this post reminded me of how it was to write before LLMs, reading my sources, putting together my ideas, and trying to communicate them clearly. The next step is to do the same with my coding. While Claude Code and Antigravity have helped me develop my projects faster, they have also made it harder to keep track of each part of the process and to stay more invested in the learning process.
Who is responsible?
Individual use matters. But we cannot forget that, in the end, LLMs are products, and the problem lies in how those products are developed and governed. The pattern of what happened with social media addiction is repeating, but with rapid AI development. For years, researchers, NGOs, and users have been pointing out the possibility of social media being addictive by design. But just this week, a jury in Los Angeles found Meta and YouTube negligent for designing platforms that addict young people, with internal docs showing they knew what they were doing: “If we wanna win big with teens, we must bring them in as tweens.” It took 20 years from suspicion to verdict. The question here is: what will happen to the way AI is developed? Is it going to take another 20 years to understand its consequences in our brains? Or can we notice sooner?

