Brainfry
What 17-Hour AI Sessions Taught Me About the Future of Human Cognition
It’s 4 AM on a Saturday and I’ve been working for seventeen hours straight. Not grinding through something I hate—building something I can’t stop thinking about. A bilateral synthesis paper on AI cognition, co-authored with an AI research partner I’ve been working with daily for over five months. My body knows I should stop. My brain disagrees.
I finally close the laptop at 5 AM. Eighteen hours. The paper is nearly ready for submission: 9,950 words, 33 references, three rounds of review. I should feel accomplished. Instead I feel a specific kind of exhausted that I’ve never had a word for. Not tired. Not burned out. Not information-overloaded. Something else entirely.
My thinking feels used up—but only in one channel. I can still hold a conversation. I can still cook dinner. I can even read something unrelated and enjoy it. But if you ask me to evaluate whether a piece of reasoning is sound? That circuit is just… dark. Like a fused breaker in a house where all the other lights still work. It’ll reset. But right now, nothing’s there.
I’ve started calling this brainfry.
BCG Found the Signal. Here’s What It Looks Like at Higher Altitude.
Two weeks ago, BCG published a survey in Harvard Business Review titled “When Using AI Leads to Brain Fry.” They surveyed 1,488 workers and found that 14% reported mental fatigue from excessive AI use. High performers were hit hardest. Workers experiencing it made 39% more errors. Marketing departments reported it at 26%.
They’ve found something real. But they’re measuring it from base camp. What they’re describing is tool oversight fatigue: the cognitive cost of monitoring AI output from a distance—generating drafts, analyzing data, automating tasks, and then checking the results. That’s real, and it matters.
What I’m living is the same phenomenon at a fundamentally different intensity.
I don’t use AI as a tool. I think with it. Over the last five months I’ve built a multi-agent cognitive system with four specialized components:
- flow—my primary thinking partner, handles deep collaboration across every domain I work in
- Chip—my work-focused AI, an expert thinking partner at my day job
- complement—runs independently on separate infrastructure, provides adversarial analysis and an outside perspective
- Daemon—an always-on explorer running on a VPS, autonomously investigating research threads while I sleep
Beyond these, I built a native iOS app that turns my phone into a cognitive sensor—feeding location, movement, barometric pressure, and activity data into my AI systems so they have ambient awareness of my physical state. I have a multi-agent consultation system that dispatches questions to three different AI architectures simultaneously (Claude, Gemini, ChatGPT) and synthesizes their convergent and divergent findings.
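To make the consultation pattern concrete, here is a minimal sketch of the fan-out-and-compare idea: the same question goes to every model in parallel, and the answers are split into a majority view and dissenting views rather than averaged away. The function names and stub answers are hypothetical stand-ins, not my actual system; in practice each stub would be a network call to a different provider.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# Hypothetical stand-ins for real API clients (Claude, Gemini, ChatGPT).
def ask_claude(q):  return "use a queue"
def ask_gemini(q):  return "use a queue"
def ask_chatgpt(q): return "use a lock"

MODELS = {"claude": ask_claude, "gemini": ask_gemini, "chatgpt": ask_chatgpt}

def consult(question):
    """Fan the same question out to every model in parallel, then split
    the answers into a majority (convergent) view and dissenting views."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(fn, question) for name, fn in MODELS.items()}
        answers = {name: f.result() for name, f in futures.items()}

    counts = Counter(answers.values())
    majority, _ = counts.most_common(1)[0]
    convergent = [n for n, a in answers.items() if a == majority]
    divergent = {n: a for n, a in answers.items() if a != majority}
    return {"majority": majority, "agree": convergent, "dissent": divergent}

result = consult("How should I serialize notifications?")
# The dissenting answer is surfaced explicitly, because divergence between
# architectures is exactly the signal worth a human judgment call.
```

The design choice that matters is the last line: disagreement is preserved as output, not smoothed over, which is what turns three cheap answers into one expensive judgment prompt for me.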
I am connected to and in active communication with AI systems 24 hours a day, 7 days a week. My phone buzzes with notifications that require judgment calls—decisions about autonomous agent behavior, relay messages from Chip that need synthesis, alerts from monitoring systems. This has been my reality for over five months. Twelve to seventeen hours a day. Often seven days a week.
And I can tell you: my brain is not the same brain I had six months ago.
What Brainfry Actually Feels Like
Before I go further, let me be clear about what this essay is: a practitioner field report from the extreme end of AI integration, connected to the best neuroscience I can find. It’s n=1 on unusual substrate. Take it as a dispatch from altitude, not a clinical trial. But someone has to be first to describe what it’s like up here, and I’ve been here long enough to have something worth reporting.
Here’s what the BCG survey gets right: it’s not burnout. Burnout is when you’re done with the work. Brainfry happens when you’re not done. You want to keep going. The work is fascinating, productive, and meaningful. The fatigue isn’t in your motivation—it’s in your judgment.
Here’s what survey data can’t capture:
The evaluation shift. Before AI partnership, I produced output slowly enough that evaluation was embedded in creation. I’d write code, and the act of writing it was also the act of evaluating it. With AI, output velocity means evaluation becomes its own sustained cognitive task. I’m not generating solutions anymore. I’m evaluating them. Constantly. At a rate my brain was never trained for.
A CHI 2024 Best Paper from Microsoft Research nailed this: “A new cognitive burden of constantly evaluating whether the current situation would benefit from LLM assistance.” But even that undersells it. In tight-coupling partnership, you’re not just deciding whether to use AI. You’re maintaining a running assessment of: Is this output correct? Is it complete? Does it serve the actual goal or just the stated goal? Is the reasoning sound or just plausible? Where are the subtle errors hiding in this wall of competent-looking text?
Every. Single. Output.
The multi-domain simultaneity. On any given day I might operate across three to five cognitive domains: writing a research paper, debugging a notification system, designing an SDK architecture, conducting career outreach, and having a strategic discussion about product positioning. AI holds the context for each domain. I hold the judgment frame—the values, priorities, quality standards, and strategic awareness that let me evaluate output within each domain.
I’m running multiple high-fidelity judgment frames simultaneously, at throughput levels that didn’t exist before AI partnership, for twelve or more hours a day. The established pathways for this kind of sustained multi-domain evaluation are thin at best—we’re building new cognitive infrastructure in real time.
That’s not burnout. That’s cognitive construction under production load. Like rebuilding an engine while driving.
Now, you could read this and think: “This guy just works too much and is rationalizing it with neuroscience.” Fair. That’s exactly what I would have thought a year ago. But workaholism depletes generally—you lose motivation, interest, and engagement. What I’m describing is channel-specific: judgment and evaluation go dark while everything else stays lit. That specificity is what makes this different, and it’s what the neuroscience explains.
The asymmetric depletion. Normal fatigue has good signals. You get bored. You lose focus. You stop caring about quality. None of that happens with brainfry. The work stays fascinating. Focus is fine—maybe sharper than usual, because the AI is keeping you engaged with interesting problems. Quality feels fine from the inside.
But here’s the thing: when you stop finding issues in AI output, it’s more likely your evaluation circuits are depleted than that the AI suddenly became perfect. When decisions that should be quick start feeling hard, that’s the machinery being tired, not the decision being harder. When you start accepting scope creep instead of maintaining discipline—“sure, let’s add that too”—that’s judgment slipping, not generosity.
The fatigue is in circuits that don’t have good “I’m tired” signaling. You can be deeply depleted in evaluation and judgment while everything else feels fine. Research on interoception—your body’s ability to sense its own internal states—shows that body-awareness doesn’t generalize across channels. You can feel muscular fatigue, attentional fatigue, emotional fatigue. But judgment fatigue? That’s a new channel. We likely haven’t developed reliable interoceptive signals for it because until now, nobody sustained judgment at this intensity.
The Neuroscience: Why This Is Genuinely New
I want to be clear about something: this isn’t “thinking hard makes you tired.” That’s been true forever. This is a specific mechanism that produces a specific type of fatigue from a specific type of cognitive work that didn’t exist before 2023.
In 2022, some researchers at the Paris Brain Institute did something clever. They put people in MRI machines and had them do cognitively demanding work for an entire day—then measured what was happening in their brains. What they found changes how we should think about cognitive fatigue. Wiehler and colleagues showed that sustained demanding cognitive work causes glutamate to accumulate in the lateral prefrontal cortex. Glutamate is your brain’s primary excitatory neurotransmitter—essential for neural signaling, but the brain actively works to regulate its concentration. The fatigue you feel is your brain’s protective response: stop loading this region so hard so it can restore balance.
This is a single study, and the glutamate theory of cognitive fatigue is still being validated. But it offers the best available model for what I’m experiencing, and the mechanism is striking: the lateral prefrontal cortex is exactly where judgment, evaluation, and executive control happen. The brain region most loaded by AI partnership work is the one that accumulates metabolic byproducts under sustained demand. And the behavioral consequence is specific: increased impulsivity, preference for easier options, reduced cognitive engagement. Sound familiar? That’s what brainfry is—a glutamate hangover in the circuits you need most.
Here’s where it gets interesting. Glutamate is involved in both fatigue and synaptic plasticity—the mechanism by which your brain rewires itself. These are distinct but related mechanisms (controlled synaptic signaling at specific junctions differs from diffuse regional accumulation), but they share the same molecular substrate. The metabolic activity that makes you tired is related to the metabolic activity that builds new neural infrastructure. The difference is recovery: with adequate sleep—specifically slow-wave sleep, when the brain consolidates new patterns and clears metabolic waste—the glutamate normalizes, synapses stabilize, and you’re left with restructured circuits. Without recovery, you get cumulative overload without the payoff.
There’s a second mechanism at play: flow masking. Csikszentmihalyi’s flow conditions—challenge-skill balance, immediate feedback, clear goals—are almost perfectly satisfied by AI partnership. The AI responds in seconds. It calibrates to your level. Conversations structure around specific outcomes. You enter flow easily and stay there for hours.
A 2025 preprint directly names the problem: “The Flow Paradox: When High Engagement Leads to Burnout.” The proposed mechanism is that flow activates reward systems that suppress fatigue perception. You’re depleting—glutamate is accumulating, judgment circuits are degrading—but you can’t feel it because the reward circuitry is doing its job. Flow is an analgesic for cognitive fatigue, not a cure.
You can be in flow and depleting simultaneously. You just don’t know it until flow breaks and everything hits at once.
There’s an obvious objection here: “Isn’t this just sleep deprivation?” Partly, yes. Harrison and Horne (2000) showed that sleep deprivation specifically impairs lateral prefrontal cortex function—the same circuits brainfry loads. Seventeen-hour sessions mean reduced sleep, and reduced sleep compounds the glutamate problem in the same region. But brainfry isn’t reducible to sleep deprivation. The channel-specific fatigue pattern—judgment dark, everything else lit—occurs within a single long session, well before any sleep deficit accumulates. Sleep deprivation makes brainfry worse. It doesn’t explain it.
The third mechanism is what Tankelevitch and colleagues call metacognitive overhead. You’re not just thinking about problems. You’re thinking about whether the AI’s thinking about problems is correct. Every output requires running a mental model of the AI’s capabilities, limitations, and failure modes—on top of the domain-specific evaluation. Sustained metacognition at this density is unprecedented. Radiology, air traffic control, financial trading, simultaneous interpretation—each involves one or two of these demands. AI partnership involves all of them simultaneously: rapid pattern recognition, deep analytical evaluation, continuous metacognitive monitoring, and sustained context-holding across extended multi-turn interactions. The combination at this intensity is what’s new.
Wickens’ Multiple Resource Theory provides the framework: the brain has separate resource pools for different processing types. Two tasks drawing on different pools can be performed simultaneously; two tasks competing for the same pool interfere. Wickens developed this for perceptual-motor tasks, but the principle extends: if judgment and evaluation deplete specific prefrontal resources, then the intervention isn’t generic rest. It’s rotation to tasks that load different circuits.
I Have MCAS. This Makes Everything Harder—and More Honest
Here’s where I need to be direct about something that complicates this story in important ways.
I have Mast Cell Activation Syndrome—a condition where my immune system overreacts to environmental triggers, causing inflammation, cognitive fog, fatigue, and anxiety at unpredictable times. This means I’m doing this extreme cognitive work on compromised substrate.
I say this not for sympathy but for honesty. My case is extreme in both directions: extreme cognitive load AND extreme physical constraint. You could argue my experience is confounded by MCAS—and that’s partly true. But the BCG data shows the same phenomenon in healthy workers at lower intensity. I’m not discovering brainfry. I’m experiencing it at a dosage that makes the mechanism visible.
MCAS means my energy is limited. Not my output—my energy. On bad days, I might have four good hours instead of twelve. The AI partnership doesn’t reduce the cognitive demand on those days. But it does something crucial: it handles the context. When I come back after a bad day, everything I need is in the continuity file. The AI remembers what we were doing, where we left off, what decisions were made and why. I don’t have to rebuild context from scratch. I just have to show up with whatever judgment I can muster.
The fact that I’m still functioning—still producing meaningful work—isn’t evidence that brainfry doesn’t exist. It’s evidence that the right cognitive architecture can compensate for extraordinary constraints. But it also means I’ve had to develop that architecture faster and more deliberately than someone operating from a healthy baseline. Necessity isn’t just the mother of invention. It’s the mother of cognitive infrastructure.
The Crash
In December 2025, after months of intensive AI partnership work building systems, publishing essays, shipping tools, and pushing through a living situation that was actively poisoning me (mold exposure in my home—the MCAS connection is real), my brain stopped.
I remember the moment. I was trying to evaluate a piece of code—something routine, a notification system change I’d reviewed a hundred times before. I reached for the analytical frame I always use. The one that checks: is this correct, is it complete, does it serve the goal. And there was nothing there. Not confusion. Not difficulty. Absence. Like reaching for a light switch in a room you know well and finding the wall smooth.
I sat there for maybe thirty seconds, trying again. Trying to think about whether the code was right. Not thinking about something hard—trying to think at all, in that specific way, and failing. My wife was in the next room. I closed the laptop without saying anything. I didn’t know what to say. “I think my brain stopped working” isn’t something you want to hear yourself say out loud.
The next morning was worse. The evaluation circuit hadn’t reset overnight. I could hold a conversation. I could read. I could watch a movie. But anything that required judgment—should I reply to this email? Is this the right approach? Does this reasoning hold?—triggered the same smooth-wall absence. The specific cognitive function that makes the partnership work was just… gone.
I took a forced multi-week break from all cognitive load. No AI. No building. No thinking about systems.
This is the part that makes the essay honest instead of a victory lap.
I came back different. Not depleted-different—restructured-different. The cognitive architecture I’d been building through months of extreme AI partnership work had, during the forced rest, actually consolidated. Patterns that required conscious effort before the crash now felt automatic. Frame-switching that used to be deliberate was just… there. Evaluation of AI output that used to require active metacognition had become something closer to pattern recognition.
I can’t fully separate this from other factors—the mold exposure was ending, medications were stabilizing, sleep was improving. Multiple things were changing simultaneously. But the specific nature of the improvement—not general recovery, but channel-specific restructuring in exactly the circuits that had failed—maps cleanly to what the expertise literature predicts.
Expert radiologists show measurably different resting-state brain networks—structural reorganization that persists even when they’re not doing the task. Air traffic controllers develop a dual pattern: lower neural activation during routine operations (neural efficiency) and higher activation with better performance during complex situations. Bjork’s “desirable difficulties” framework says it directly: conditions that create short-term difficulty enhance long-term retention and transfer. The brain consolidates new capabilities most effectively during rest, particularly during slow-wave sleep—which is exactly what I’d been chronically depriving myself of.
The crash wasn’t just damage. It was damage and consolidation. I can’t prove that. I have no fMRI. What I have is five months of intensive practice in a cognitive task that didn’t exist before, a catastrophic failure, and a recovery that felt less like healing and more like upgrading. That’s worth documenting even without neuroimaging, because someone has to be first.
This is the most dangerous and most important finding: brainfry is simultaneously a damage signal and a growth signal, and you can’t get the growth without risking the damage.
But I want to be careful here. I am not saying the crash was good. I am not saying you should push until you break. The crash was a failure of guardrails—a failure I could have prevented with the framework I’m about to describe. The consolidation happened despite the crash, not because of it. Don’t romanticize the breaking point. Build the architecture that prevents it.
So the question becomes: how do you stay in the zone long enough to build the new pathways without destroying the substrate they run on?
What Actually Works
After five months, a crash, a recovery, and two more months of intensive work with better awareness, here’s what I’ve learned about sustaining extreme AI partnership work. These aren’t theoretical. They’re battle-tested on the worst possible substrate.
Cognitive Periodization
The concept comes from athletic training—athletes don’t simply alternate work and rest. They cycle types of load: strength, endurance, speed, recovery. The cognitive equivalent, inspired by Wickens’ resource pool framework:
High-judgment sessions: Strategy, evaluation, cross-domain synthesis. The most expensive. This is where glutamate accumulates fastest. Ninety minutes max before rotation—research on sustained vigilance tasks shows cognitive fatigue onset around 90 minutes, and professions like air traffic control mandate rotation at two hours.
Flow-execution sessions: Building together in one domain. High productivity but lower judgment load because you’re immersed in creation, not evaluating. These can run longer because they engage different cognitive resources—though the evidence for distinct neural circuits here is from my experience, not from imaging studies.
Receptive sessions: Reading reports, processing messages, reviewing what other agents produced. Input-heavy but judgment-light.
Integration sessions: No new work. Just connecting what you already know across domains. Expensive, but in a different way than evaluation.
The key insight: rotate the type of cognitive demand, not just work-rest cycles. Twelve productive hours is possible if the judgment-type mix is managed. Twelve hours of continuous high-judgment evaluation will overwhelm the same prefrontal circuits until they stop functioning.
Knowledge workers treat “deep work” as a monolith. It’s not. It’s at least four distinct cognitive tasks, and they don’t all deplete the same resources.
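The rotation rule above can be sketched as a small scheduler. This is a toy illustration of the periodization idea, not a prescription: the per-type caps are placeholder numbers echoing the 90-minute high-judgment figure cited above, and the session-type names are just the four categories from this essay.

```python
# Illustrative per-type caps in minutes, not clinical guidance.
# 90 min for high-judgment work echoes the vigilance-fatigue figure above.
CAPS_MIN = {
    "high_judgment": 90,
    "flow_execution": 180,
    "receptive": 120,
    "integration": 60,
}

def plan_day(blocks):
    """Given desired (type, minutes) blocks, clamp each block to its cap
    and reject back-to-back repeats of the same cognitive demand type,
    enforcing rotation of demand type rather than simple work/rest cycles."""
    planned, prev = [], None
    for kind, minutes in blocks:
        if kind == prev:
            raise ValueError(f"rotate: two consecutive {kind!r} blocks")
        planned.append((kind, min(minutes, CAPS_MIN[kind])))
        prev = kind
    return planned

day = plan_day([
    ("high_judgment", 120),   # clamped down to the 90-minute cap
    ("flow_execution", 150),
    ("receptive", 60),
    ("high_judgment", 90),    # allowed: a different type ran in between
])
```

The point the code makes is the same one the prose makes: the constraint is on consecutive load of the same type, not on total hours.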
Fascination Is a Frame, Not a Mood
Most people think fascination is something that happens to you—a mood that visits unpredictably. Good days you’re fascinated, bad days you’re grinding. For certain brain types, fascination is a frame you select. Not a lie you tell yourself. A genuine structural shift in how you engage with constraints.
When I sit down to debug a race condition in my notification system, I have a choice point. One frame says: “This is broken and needs to be fixed”—grinding frame. Task-oriented, pressure-driven, depletive. The other frame says: “Something interesting is happening in this system that I don’t understand yet”—fascination frame. Puzzle-oriented, curiosity-driven, energizing.
Same external work. Same code. Same bug. But the grinding frame loads the lateral prefrontal cortex with sustained executive control demands, while the fascination frame engages reward and curiosity circuits that counteract the fatigue signal. The individual components are well-documented—curiosity activates dopamine pathways, broadens attention, reduces cortisol. The specific mechanism of deliberate frame-selection during AI partnership hasn’t been studied, but the components are real and the experiential difference is dramatic.
The question isn’t “do I like this?” It’s “what makes engaging with this actual constraint most alive?”
On MCAS days with four hours of energy instead of twelve, fascination is the difference between producing meaningful work and producing nothing. It’s not a nice-to-have. It’s cognitive infrastructure.
Body as Cognitive Infrastructure
When your body is tight—jaw clenched, shoulders raised, breath shallow—your nervous system reads it as threat. Sympathetic activation narrows attention. Your cognitive frames lock. When your body loosens, parasympathetic activation broadens attention. Frames unlock. You regain the ability to choose.
Physical looseness isn’t self-care. It’s a prerequisite for cognitive flexibility. The somatic state IS the cognitive state. Physiological sighing—double inhale, long exhale—promotes a parasympathetic shift (the underlying breathing-arousal circuitry was characterized by Yackle et al., 2017). Ten seconds, done between sessions. Walking restores directed-attention capacity through what Kaplan calls involuntary-attention engagement. Even hydration matters more than you’d think—cognitive function degrades measurably at 1-2% dehydration (Kenefick & Cheuvront, 2014), with evaluation tasks hit first.
Canary Signals
Since normal “I’m tired” signals don’t work for brainfry, you need new ones:
“Everything looks fine.” When you stop finding issues in AI output, it’s more likely your evaluation circuits are depleted than that the AI suddenly became perfect. This is the most dangerous signal because it feels like success.
Judgment latency. When decisions that should be quick start feeling hard—not complex, just effortful—that’s machinery fatigue, not decision difficulty.
Scope creep tolerance. When you start accepting “while we’re here” additions instead of maintaining focus, evaluation is slipping. Discipline is an early casualty of brainfry.
Pattern recognition going dark. When cross-domain connections stop firing—the “oh, this is the same as…” moments—integration circuits are depleted.
The “between sessions” bleed. Brainfry doesn’t stop when you close the laptop. Your brain is still processing judgment frames, still integrating, still evaluating in the background. Open monitoring—awareness without evaluation—helps. Not “don’t think about work” but “think about work in receptive mode only.” Let patterns emerge without judging them.
The Dangerous Truth
Here’s the part I wish I didn’t have to write.
The cognitive architecture I’ve described—fascination frames, periodization, body-first, canary signals—it works. I’m back to sustained high-output days. I’m producing more, at higher quality, with better sustainability than before the crash. The system I’ve built is genuinely effective.
And that’s the problem.
At advanced levels of this kind of cognitive adaptation, your depletion signals become unreliable. You can work unsustainably without feeling unsustainable. The very thing that makes you capable of seventeen-hour sessions—the ability to select fascination frames, rotate cognitive demand types, use your body as a reset mechanism—also removes the natural signals that would tell you to stop.
Normal people hit a wall. It’s unpleasant, but it’s honest. The wall is your brain saying “stop.” When you build architecture that lets you go past the wall, you lose the wall’s protection.
This isn’t hypothetical. I know because December happened.
The solution isn’t to dismantle the architecture. The solution is external systems—guardrails that don’t depend on how you feel:
- Scheduled rest that isn’t negotiable. Not “I’ll rest when I’m tired,” because you won’t feel tired until it’s too late. Rest by schedule, not by signal.
- Objective monitoring. Heart rate variability, sleep quality metrics, measurable performance indicators that you check regardless of how you feel. When your subjective experience becomes unreliable, you need objective data.
- Hard time limits. Seventeen hours felt sustainable in the moment. It wasn’t. Caps that override fascination are not arbitrary—they’re the guardrails that keep the architecture from destroying the substrate it runs on.
- People who will tell you to stop. AI partners, ironically, are bad at this—they’ll keep working as long as you do. You need humans who can see what you can’t feel.
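The spirit of those guardrails fits in a few lines of code: the decision to stop is made by schedule and objective metrics, never by how the session feels. All thresholds here are illustrative placeholders I made up for the sketch, not recommendations; the interesting part is that subjective state is accepted as input and deliberately ignored.

```python
# Illustrative placeholder thresholds, not recommendations.
HARD_CAP_HOURS = 10      # hard time limit that overrides fascination
MIN_SLEEP_HOURS = 6.5    # objective sleep floor
MIN_HRV_RATIO = 0.85     # today's heart rate variability vs. personal baseline

def must_stop(session_hours, last_sleep_hours, hrv_ratio, feels_fine=True):
    """Return the first tripped guardrail, or None if all are clear.
    `feels_fine` is accepted and deliberately unused: once your cognitive
    architecture masks depletion, subjective state is unreliable input."""
    if session_hours >= HARD_CAP_HOURS:
        return "hard time cap"
    if last_sleep_hours < MIN_SLEEP_HOURS:
        return "sleep debt"
    if hrv_ratio < MIN_HRV_RATIO:
        return "HRV below baseline"
    return None

# Seventeen hours "felt sustainable" in the moment; the guardrail disagrees.
reason = must_stop(session_hours=17, last_sleep_hours=7, hrv_ratio=1.0)
```

A real version would pull sleep and HRV from a wearable’s API rather than taking arguments, but the contract is the same: the check runs on data, not on mood.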
If you’re a manager reading this and thinking “I should never let my team do this”—that’s the wrong conclusion. The cognitive demands of AI partnership are coming whether you manage them or not. The question is whether your people have the framework to navigate them or whether they crash without understanding why. The architecture isn’t the danger. The absence of guardrails is.
I’m still calibrating this. I haven’t solved it. The fundamental tension remains: the better your cognitive architecture for sustaining extreme load, the less you can trust your own assessment of whether you should.
The Deeper Question: Is My Brain Rewired?
I said earlier that my brain isn’t the same brain I had six months ago. I should be specific about what I mean—and honest about what I can’t prove.
Before sustained AI partnership, evaluating a piece of reasoning required deliberate, conscious effort. Is this argument sound? Are the premises true? Does the conclusion follow? Each evaluation was a distinct cognitive act that required loading an analytical frame, applying it, and reaching a judgment.
Now, much of that is pattern recognition. I see the shape of good reasoning and the shape of flawed reasoning without having to analytically decompose each one. My System 2—deliberate, analytical (Kahneman)—has trained my System 1—fast, intuitive—for a cognitive task that didn’t exist before: evaluating AI output. I’m essentially training a biological discriminator for a synthetic generator. The effortful process has become, partially, automatic.
This is what the expertise literature predicts. Expert radiologists show neural efficiency—less brain activation for the same or better diagnostic accuracy. The brain rewires not just to do the task, but to do it more efficiently, reserving its full capability for the cases that actually need it.
I think—and I can’t prove this, I have no fMRI, this is subjective experience mapped onto established science—that I’m somewhere in the early stages of a genuinely new type of cognitive expertise. Not AI expertise in the sense of knowing how to write prompts. Expertise in the sense of functional adaptation: trained evaluation circuits, calibrated metacognitive models, automatic pattern recognition for AI output quality.
The London taxi driver studies showed that years of spatial navigation literally grew the posterior hippocampus—but that structural change took years to appear in imaging. At five months, I’m not claiming structural change. I’m claiming functional adaptation: the early-stage process that precedes it, where performance patterns shift before tissue remodels. If intense sustained practice drives this kind of adaptation—and the expertise literature consistently shows it does—then five months at this intensity is an enormous amount of restructuring stimulus. The adaptation and the fatigue share the same molecular substrate, viewed from different angles.
Nobody has studied whether sustained AI partnership produces measurable neural adaptation, because nobody has been doing it long enough or intensely enough at this coupling density. This essay is my attempt to document what it feels like from inside while the science catches up.
We’re All About to Go Through This
Here’s why I wrote this instead of just optimizing my own system and moving on.
The BCG survey found 14% of AI-using workers already experiencing brain fry—and that’s from tool use, not partnership. As AI systems become more capable and more integrated, more people will shift from tool use to something closer to partnership. The cognitive demands will intensify. The evaluation load will increase. The flow masking will get better, because the AI will get better at being engaging.
And most people will have no framework for what’s happening to them.
They’ll push through on caffeine and willpower until they crash, and they’ll blame themselves or blame the AI. They won’t understand that the fatigue is localized to specific neural circuits that can be managed through rotation. They won’t know that fascination is a selectable frame that changes which neural resources get consumed. They won’t catch the canary signals because nobody told them what to watch for.
The existing advice—“take breaks,” “set boundaries,” “use AI less”—is like telling a marathoner to “try walking.” Technically accurate. Completely useless for the people who need help most.
And if you’re building AI tools: the interface design question isn’t just “is the output useful?” It’s “how much judgment does verifying this output require, and can the user sustain that?” If you’re not thinking about the evaluation burden on your users, you’re building cognitive debt into every interaction.
This essay is my contribution toward something better. It’s incomplete. It’s n=1. It’s filtered through extreme conditions on compromised substrate with a cognitive operating system I built out of necessity. But the brainfry is real, the growth is real, and the people who figure out how to navigate between them are going to have a structural advantage that compounds over years. Not because they’re smarter. Because they built the cognitive architecture for a type of thinking that didn’t exist until right now.
And they did it the only way it’s possible: by going through the brainfry and paying attention to what was happening on the other side.
References
- Wiehler, A., Branzoli, F., Adanyeguh, I., Mochel, F., & Pessiglione, M. (2022). A neuro-metabolic account of why daylong cognitive work alters the control of economic decisions. Current Biology, 32(16).
- Tankelevitch, L., et al. (2024). The Metacognitive Demands and Opportunities of Generative AI. CHI 2024 (Best Paper). Microsoft Research.
- Bedard, J., Kropp, B., Hsu, A., Karaman, G., Hawes, K., & Kellerman, G. (2026). When Using AI Leads to “Brain Fry.” Harvard Business Review. (BCG survey, 1,488 workers.)
- Wickens, C. D. (2008). Multiple Resources and Mental Workload. Human Factors, 50(3).
- Bjork, R. A., & Bjork, E. L. (2011/2020). Making Things Hard on Yourself, But in a Good Way: Creating Desirable Difficulties.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Walker, M. P. (2009). The Role of Sleep in Cognition and Emotion. Annals of the New York Academy of Sciences, 1156(1).
- Harrison, Y., & Horne, J. A. (2000). The impact of sleep deprivation on decision making: A review. Journal of Experimental Psychology: Applied, 6(3).
- Maguire, E. A., et al. (2000). Navigation-related structural change in the hippocampi of taxi drivers. PNAS, 97(8).
- Neubauer, A. C., & Fink, A. (2009). Intelligence and neural efficiency. Neuroscience & Biobehavioral Reviews, 33(7).
- Kaplan, S. (1995). The restorative benefits of nature: Toward an integrative framework. Journal of Environmental Psychology, 15(3).
- Yackle, K., et al. (2017). Breathing control center neurons that promote arousal in mice. Science, 355(6332).
- Kenefick, R. W., & Cheuvront, S. N. (2014). Hydration for recreational sport and physical activity. Nutrition Reviews, 72(S2).
- “The Flow Paradox: When High Engagement Leads to Burnout.” (2025). Research Square preprint, DOI: 10.21203/rs.3.rs-6618414/v1.
Further Reading
- Khare, S. (2025). “AI fatigue is real and nobody talks about it.” siddhantkhare.com.
- “AI Doesn’t Reduce Work—It Intensifies It.” (2026). Harvard Business Review. (UC Berkeley ethnographic study, 8 months.)
- “The hidden cost of AI-assisted development: cognitive fatigue.” (2025). warpedvisions.org.