The Cost of Every Yes
I’ve been writing about AI as an amplifier for weeks now. About how the bubble of what you can get done is effectively infinite. About how removing the natural speed limits on knowledge work creates a burnout trap. About how innovation requires cultures that can absorb friction and how the right tools keep replacing the frameworks that preceded them.
Most of what I write here is about personal projects — this site, side tools, health tracking, agentic workflows. But I use AI far more at work than I do here. The scale is different. The stakes are different. And the opportunity cost problem hits harder when a wrong turn doesn’t just cost you an afternoon — it costs a developer sprint.
There’s a thread running through all of it that I haven’t named directly until now: the more you can do, the more it costs you to choose.
Seth Godin puts it cleanly:
Every choice comes with a cost. When we spend an hour reading a book, it’s an hour we didn’t spend listening to speed metal. When we take on one client, we’ve chosen not to pursue a different option. Opportunity cost is real, and as we’ve been given more access, more tools, and more opportunities, the cost continues to increase.
That’s always been true. But AI has made it viscerally, operationally true in a way it wasn’t before.
The FOMO Is Real and It’s Structural
When I could only do what I personally had time and skill for, the boundaries were clear. I couldn’t write a GitHub Action, draft a blog post, run an SEO audit, and generate hero images all in the same afternoon — so the question of which one to do first was constrained by obvious limits. There wasn’t much to agonize over. The bottleneck was me.
Now the bottleneck has shifted. I have agents that can run tasks in parallel. I have a Telegram-based command center that lets me dispatch work from my phone. I have tools that organize their own documentation and track my health data without me touching a keyboard. The capability ceiling has moved so far up that the practical constraint isn’t “can this get done?” anymore. It’s “should this get done right now, instead of that?”
That shift creates a new kind of pressure. Not the pressure of scarcity — I can’t do enough. The pressure of abundance — I could do all of it, so why am I not?
That’s the FOMO. And it’s not irrational. It’s a rational response to an environment where capability has outpaced strategy.
Saying No Is the Strategy
Godin finishes the thought:
You’re spending your time whether you realize it or not. And without a strategy, the time you spent is wasted.
I keep coming back to this because it reframes the entire AI productivity conversation. The narrative right now is about what AI enables. What new things are possible. What workflows you can automate. What tasks you can delegate to agents. And all of that is real — I’ve written about it extensively because I’m living it.
But enabling isn’t the hard part. Choosing is the hard part.
Every yes to an AI-assisted task is a no to something else. Every agent dispatched on a coding task is context and attention not spent on a strategic decision. Every hour spent configuring a new integration is an hour not spent on the work that integration was supposed to support. The tool makes execution cheap, but it doesn’t make prioritization easier. If anything, it makes prioritization harder, because the cost of execution has dropped so low that everything feels worth doing.
I wrote about this dynamic in “The Paradoxical Art of Doing and Not Doing” — the HBR study that found AI workers burned out not because the tools failed, but because they succeeded too well. People took on tasks they would have previously outsourced, deferred, or avoided. The capability removed the natural limits. Without deliberate strategy to replace those limits, more capability just meant more exhaustion.
What it looks like in practice: I pick up a ticket, notice it’s missing documentation, and start filling the gap. Reasonable. Then I’m analyzing the underlying e-commerce logic to make sure the docs are accurate. Still reasonable. Then I’m refactoring the whole thing into Behat Gherkin — 12 features, over 150 user scenarios — because the format will make the requirements clearer for developers. Technically excellent. But was it the priority? That’s the question I didn’t stop to ask. I entered problem-solving mode and lost track of what I was actually supposed to be doing. The work was good. The sequence was wrong.
The discipline isn’t in learning to use the tools. It’s in learning to say no to what the tools make possible — and to stop before a valuable tangent becomes the whole day.
Opportunity Cost Has Two Layers Now
Here’s the part that’s genuinely new about AI and opportunity cost. There used to be one layer: your time. Now there are two.
Layer one: your time and attention. This is the classic opportunity cost. An hour spent on project A is an hour not spent on project B. Your cognitive capacity is finite. Your energy depletes. This hasn’t changed — if anything, the Yerkes-Dodson dynamics make it more important to protect this resource, not less.
Layer two: your token budget. This is the new one. Every task you delegate to an AI agent costs tokens. Tokens cost money. As Sam Altman has framed it, intelligence is becoming a metered utility. When I wrote about budgeting intelligence — tiering models by complexity, caching aggressively, scoping context tightly — I was describing the mechanics. But the strategic frame is opportunity cost: every token spent on a low-value task is a token not spent on a high-value one.
Prioritization isn’t just about managing your calendar anymore. It’s about managing a literal cost function. Which tasks justify a frontier model? Which ones should route to a lighter model? Which ones shouldn’t involve AI at all? These aren’t just efficiency questions. They’re strategy questions. They’re choices about where to allocate a finite — and paid — resource.
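That routing decision can be made mechanical. Here is a minimal sketch of a tiered router, assuming a rough 0-to-1 complexity estimate per task; the tier names and per-million-token prices are illustrative placeholders, not real model rates:

```python
# Illustrative model tiers: (name, USD per million tokens, typical use).
# Prices and names are assumptions for the sketch, not real rates.
TIERS = [
    ("none",     0.0,  "deterministic script, or no AI at all"),
    ("light",    0.25, "classification, extraction, short summaries"),
    ("frontier", 15.0, "open-ended reasoning, novel design work"),
]

def route(task_complexity: float, tokens_needed: int):
    """Pick the cheapest tier that plausibly covers the task.

    task_complexity: rough 0..1 estimate of how hard the task is.
    tokens_needed: expected total tokens for the task.
    Returns (tier_name, estimated_cost_usd).
    """
    if task_complexity < 0.1:
        name, price_per_million, _ = TIERS[0]
    elif task_complexity < 0.6:
        name, price_per_million, _ = TIERS[1]
    else:
        name, price_per_million, _ = TIERS[2]
    return name, price_per_million * tokens_needed / 1_000_000

print(route(0.05, 2_000))    # trivial task: skip the model entirely
print(route(0.40, 8_000))    # routine task: light model
print(route(0.90, 20_000))   # hard task: frontier model
```

The thresholds are the strategy encoded as code: moving one cutoff changes where your whole token budget flows.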
That cost function is about to get more complicated. Project Glasswing, a just-announced coalition of Anthropic, AWS, Apple, Google, Microsoft, Cisco, and others, introduces Claude Mythos, a model that Anthropic says surpasses all but the most skilled humans at finding and exploiting software vulnerabilities. It has already surfaced a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg that automated testing had missed for decades. It’s not publicly available. At $25/$125 per million input/output tokens, it costs more than Opus. Access, for now, is reserved for large organizations and government partners doing cybersecurity research.
That gap between what’s possible and what’s accessible is its own form of opportunity cost — a systemic one. And the question it raises is bigger than any individual token budget: if AI can now find and exploit vulnerabilities faster than humans can patch them, what does it mean to release the next frontier model before the infrastructure it will run on is secured? Every computer and every system attached to the internet is part of that equation. That’s not a prompt. That’s a civilization-scale prioritization problem.
Parallel Processing Doesn’t Eliminate the Choice
One of the things I’ve learned running agents in parallel is that concurrency doesn’t remove the prioritization problem. It transforms it.
Yes, I can dispatch five agents simultaneously. One drafts a blog post. One runs an SEO audit. One refactors a codebase. One tracks sodium intake. One organizes documentation. That’s real — it’s happening in my workflow right now. The bubble diagram from my earlier post wasn’t hypothetical. The outer bubble really is bigger than one person could fill.
But parallel processing introduces its own costs:
- Attention fragmentation. Five things running means five things to review, five results to evaluate, five potential rabbit holes. Your role shifts from executor to air traffic controller. That’s a cognitive load of its own.
- Context cost. Every parallel agent needs context. Memory files, system prompts, project state. The more agents running, the more tokens being consumed across all of them simultaneously.
- Integration overhead. The outputs of parallel tasks eventually need to converge. The blog post needs the image the other agent generated. The SEO audit needs the content the writer produced. Coordination between parallel streams is its own form of work.
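The air-traffic-controller pattern above can be sketched in a few lines. The "agents" here are placeholder coroutines standing in for real agent calls, and the task names are hypothetical; the point is that dispatch is one line, while the convergence step is where the integration overhead lives:

```python
import asyncio

async def agent(name: str, seconds: float) -> str:
    """Placeholder for a real agent task; sleep stands in for the work."""
    await asyncio.sleep(seconds)
    return f"{name}: done"

async def main() -> list[str]:
    # Dispatch genuinely independent streams concurrently.
    tasks = [
        agent("draft-post", 0.03),
        agent("seo-audit", 0.01),
        agent("refactor", 0.02),
    ]
    # gather() returns results in input order, which keeps the
    # convergence step predictable even when tasks finish out of order.
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)
```

Note what the sketch leaves out: reviewing each result, feeding one output into another, recovering from a failed stream. That omitted part is the attention cost the bullets describe.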
The distinction I keep returning to — “what I can do” versus “what can get done” — holds up. But it needs an addendum: what should get done, and in what order, and at what cost?
That’s the strategic layer. Agents and people helping you don’t eliminate the need for strategy. They make strategy the primary skill.
Time Is the Only Non-Renewable Resource
Godin again:
When we recognize that time today is the investment we make to transform our lives tomorrow, the invisible axis becomes even more obvious.
Token budgets can be replenished. Agent capacity can scale. You can always spin up another session, route to another model, add another tool to the chain.
You cannot get back Tuesday afternoon.
This is the thing I keep having to remind myself. The tools are seductive precisely because they compress time — what used to take a week takes an afternoon. But that compression creates its own trap. If every task now fits in an afternoon, you can fill every afternoon with tasks. And if you do, you’ve traded the time compression for time saturation. You saved hours on each thing and spent those hours on more things. Net change in margin: zero.
The people who will benefit most from AI aren’t the ones who figure out how to do the most with it. They’re the ones who figure out what not to do — and protect the time they reclaim for thinking, resting, and making better choices about what comes next.
What I’m Practicing
I don’t have this figured out. I’m actively working through it. But a few principles are emerging:
Prioritize before you prompt. The moment you send a task to an agent, you’ve committed attention and resources. Decide whether this is the highest-value use of both before you hit send. The speed of execution makes it tempting to skip this step. Don’t.
Let AI do the unglamorous work — and accept that it’s useful, not lazy. Some of the most effective things I do at work feel embarrassingly simple: asking the Claude Chrome extension how to navigate a complicated vendor application, or prompting it to explain what a dense configuration screen is actually doing. It feels like taking a shortcut. It’s not. It’s appropriate tool use. The cognitive load saved goes somewhere better.
Build documentation that earns its keep. The work I find most satisfying lately is ticket intake — analyzing requirements, cross-referencing related work, surfacing gaps, and generating AI-augmented developer suggestions that include pseudo code or Behat Gherkin. When that documentation develops and self-sustains as you use it, it multiplies the value of every subsequent ticket. But it requires one non-negotiable discipline: scrutiny before it reaches a developer. An AI-assisted ticket that goes in the wrong direction doesn’t just waste tokens. It wastes a sprint. The review gate is the strategy.
Use parallel processing for independent streams, not for avoiding decisions. Running five things at once is powerful when the five things are genuinely independent and all worth doing. It’s expensive and distracting when you’re running them because you can’t decide which one matters most.
Say no more than yes. This is the hard one. The capability is real. The FOMO is real. The pressure — internal, external, ambient — to do more because you can do more is real. But every yes is a choice, and every choice has a cost. The strategy is in the nos.
Godin’s framing has been rattling around in my head because it names something I’ve been feeling but hadn’t articulated: the cost of having more options isn’t just the options you don’t pick. It’s the cognitive weight of knowing you could have picked them. AI has given us access to an extraordinary set of capabilities. The opportunity cost of that access is higher than it’s ever been.
The question isn’t what you can do. It’s not even what you can get done. It’s what you’re willing to not do — and whether you’ve chosen that deliberately, or just let the current carry you.
That’s strategy. And right now, it’s the thing most of us are missing.
Related Reading
- It’s Not What You Can Do — It’s What You Can Get Done — The infinite bubble, and why delegation is a skill
- The Paradoxical Art of Doing and Not Doing — Why AI burnout is real and what neuroscience says about it
- Innovation Breaks Things. Culture Eats Strategy. — The organizational version of this same tension
- I Don’t Need OpenClaw Anymore — When saying yes to the better tool means saying no to the one you built
About the Author
Kevin P. Davison has over 20 years of experience building websites and figuring out how to make large-scale web projects actually work. He writes about technology, AI, leadership lessons learned the hard way, and whatever else catches his attention — travel stories, weekend adventures in the Pacific Northwest like snorkeling in Puget Sound, or the occasional rabbit hole he couldn’t resist.