In March 2026, I asked Claude to analyze my code projects, all 16 repos, and generate a structured intelligence report. Complexity estimates, patterns across projects, the AI model evolution visible in my GitHub history over eight months.
The report came back while I was doing something else. I opened it later and it was exactly what I’d asked for: organized by project, cross-referenced for patterns, with an executive summary and a “content flywheel” diagram that traced how several of my projects fed into each other.
The file header says: “Report generated by Claude in Cowork mode — 2026-03-24.”
I want to talk about what that meant, and why it changed how I think about which tasks are worth assigning to agents.
What Cowork actually is
Cowork is Anthropic’s managed-agent platform. The relevant difference from a regular Claude Code session: agents run in the cloud. Your laptop doesn’t need to be on. You can close the lid, and the work continues.
Claude Code Routines, launched April 14, 2026, currently in research preview, is Anthropic’s answer to the same problem at the task-scheduling level. A Routine is a saved configuration: a prompt, one or more source repos, a set of connectors. You set a trigger (scheduled, API, GitHub webhook) and it runs without you.
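To make that concrete, here is a rough sketch of the pieces a Routine bundles together. This is not Anthropic's actual schema; the field names below are my own shorthand for the prompt, repos, connectors, and trigger described above.

```python
# Hypothetical sketch of a Routine's shape. These field names are
# my shorthand, not Anthropic's actual configuration schema.
from dataclasses import dataclass, field

@dataclass
class Routine:
    name: str
    prompt: str                 # the standing instructions
    repos: list[str]            # one or more source repos
    connectors: list[str] = field(default_factory=list)  # e.g. GitHub, Slack
    trigger: str = "manual"     # "cron:<expr>", "api", or "github-webhook"
```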
OpenAI Codex has had similar functionality in its Automations (scheduled tasks, API triggers, parallel execution), which went generally available in April 2026. Three major platforms converged on the same pattern at almost the same time: define the agent, set the trigger, let the cloud handle execution.
The architectural convergence is not a coincidence. The constraint being removed is real.
The constraint I didn’t know I had
Before cloud execution, every agent task required a running session. Which meant: me at my computer, a terminal open, actively waiting.
That constraint shaped what I asked agents to do more than I realized. A 40-minute research task is different when you have to sit there than when you can walk away. A task that requires fetching and processing 16 repos is something you’d think twice about on a Wednesday afternoon but wouldn’t hesitate to kick off before dinner if you know it’ll be done by morning.
I wasn’t making a principled decision to only assign certain tasks to agents. I was making a friction-based decision. The friction was masquerading as judgment.
When the friction goes away, you start discovering the actual scope of what’s worth automating. For me, that meant asking agents to do things I’d previously considered too long to wait for: comprehensive project analyses, multi-source research syntheses that pull from a dozen different documents, weekly reporting that takes 30+ minutes but runs on a schedule.
What’s now scheduled
My current Routines setup (still expanding): the daily research pipeline runs on a cron schedule. Dwight scans Reddit trending topics, produces research briefs, and updates the intel briefing. That used to require me to kick it off. Now it runs whether I’m at my desk or not.
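In terms of the sketch above, that Routine looks roughly like this. The prompt text and connector names are paraphrased from memory, not copied from my actual config:

```python
# The daily research pipeline, expressed with the Routine sketch above.
# Prompt text and connector names are paraphrased, not my real config.
daily_research = Routine(
    name="dwight-daily-research",
    prompt=(
        "Scan Reddit for trending topics, write research briefs, "
        "and update the intel briefing."
    ),
    repos=["intel"],
    connectors=["reddit", "filesystem"],
    trigger="cron:0 6 * * *",   # every morning at 06:00, before I read briefings
)
```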
Routine limits differ by plan. The base allowance is 5 Routines/day, which used to come with Claude Code Pro; Routines have since been pulled from the $20 tier, so Max at $100/month is now the minimum. Max 5x allows 15. Team and Enterprise go to 25. For my use case, 5 is enough for daily operations. If I were running this for a team, I’d want more.
The practical ceiling isn’t the plan limit, it’s what you can realistically review. A Routine that runs and produces output you don’t read isn’t useful. I check my briefings every morning. Five scheduled tasks I actually read beats twenty I scroll past.
What it means for how you think about agent work
The mental shift I didn’t expect: when tasks run asynchronously, you start thinking about agents more like employees and less like tools.
When you’re running a session and watching the output appear, you’re supervising. You’re in the loop at each step. The agent is running for you, in your presence.
When you set a Routine to run overnight and review the output in the morning, the dynamic shifts. The agent did work. You’re reviewing what it produced. That’s a management relationship, not a supervision relationship.
Shubham Saboo, the originator of the Chief of Staff pattern I’ve built on, frames it as treating agents like new hires: give them a workspace, scoped credentials, clear instructions, and a review cadence. The cloud execution model makes that analogy more literal. The new hire didn’t need you watching over their shoulder all day. They just needed to know what to do.
The honest limitations
Async execution introduces new failure modes. A synchronous task fails and you see it immediately. An async task fails and you might not find out until you look at the output the next morning.
Stopping conditions matter more. Max turns. Token budgets. Clear success criteria in the Routine’s prompt. Without those, a stuck agent can run for a long time before you notice.
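Here is the kind of guard-rail logic I mean, as a generic sketch rather than anything Routines exposes directly. The names `run_with_limits`, `agent_step`, and `is_done` are mine, invented for illustration:

```python
# Generic stopping-condition sketch for an agent loop. None of these
# names are a real Routines API; they illustrate the three guard rails:
# max turns, a token budget, and explicit success criteria.
def run_with_limits(agent_step, is_done, max_turns=30, token_budget=200_000):
    tokens_used = 0
    for turn in range(max_turns):
        result = agent_step()            # one turn: model call plus tool use
        tokens_used += result.tokens
        if is_done(result):              # explicit success criteria, not vibes
            return result
        if tokens_used >= token_budget:  # hard ceiling so a stuck agent halts
            raise RuntimeError(f"token budget exhausted after {turn + 1} turns")
    raise RuntimeError(f"no success after {max_turns} turns")
```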
I’ve had two failed overnight Routines in the last two months. One got stuck in a loop fetching a URL that kept timing out. One produced output that was technically correct but missed the scope: it answered the question I asked rather than the question I meant to ask. Both were harness failures: the first needed a retry limit, the second a better task boundary.
I fixed both. They’re in the AGENTS.md now.
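The retry fix is the easier one to show. Something like this bounded fetch, sketched with Python’s standard library and illustrative numbers, is what the first failure needed:

```python
# Bounded fetch: cap retries and per-request timeout so one flaky URL
# can't hold an overnight Routine hostage. Numbers are illustrative.
import time
import urllib.request

def fetch_with_limit(url, retries=3, timeout=30, backoff=5):
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:           # covers timeouts and connection errors
            if attempt == retries:
                raise RuntimeError(f"gave up on {url} after {retries} tries") from err
            time.sleep(backoff * attempt)  # linear backoff between attempts
```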
The report that started this
That code-projects analysis from March, 16 repos distilled into a structured intelligence report while I was away from my desk, was the moment that made this concrete for me. Not an exciting demo. Just a task I’d been meaning to do for months that was too tedious to sit through, now done.
The file is at intel/data/aj-code-projects.md. I still reference it. It’s where the model evolution table in Post 6 of this series comes from.
Cloud execution didn’t change what agents can do. It changed which tasks I was willing to ask them to do. That’s a bigger shift than it sounds.
