
12 April 2026

Anthropic Managed Agents: Real Test, Real Limitations

Anthropic's Managed Agents went live in April 2026. Here's an honest breakdown of what it does, where it falls short, and how it compares to n8n and Claude Code.


I’ve been running a Mac mini as an always-on server for months. It handles my Claude Code agents, cron jobs, and scheduled workflows — and it keeps running when my laptop is closed. It’s the setup I landed on because nothing else gave me the same combination of local file access, persistent scheduling, and full control over credentials.

Then Anthropic launched Managed Agents, and I wanted to know: does this change things?

Short answer: it’s a strong foundation with some gaps that matter. Here’s the honest version.

By the end of this post you’ll know exactly what Managed Agents is, what the current limitations are, how it compares to tools you’re probably already using, and where I think Anthropic is heading with it.


What Managed Agents Actually Is

Managed Agents is Anthropic’s cloud infrastructure for running Claude agents. It went into public beta on April 8, 2026. The idea is simple: you define the agent logic, Anthropic runs the container. No server to provision, no machine to keep on.

You write a system prompt, connect tools via MCP servers (up to 20 per agent), and the agent executes tasks autonomously inside a secure cloud container on Anthropic’s infrastructure. Every session is logged, every tool call is visible in the debug view, and the whole thing runs without your machine being involved once you trigger it.

One thing to get out of the way early: this is a console.anthropic.com product. Your claude.ai Pro or Max subscription doesn’t cover it. You need an API key with its own billing — $0.08 per session-hour plus token costs. It’s pay-as-you-go, and costs are low for most use cases (a full day of testing and iteration during my build came to around $2.40), but it is a separate billing relationship from whatever you’re paying for claude.ai.
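To make the billing model concrete, here's a rough cost sketch. The $0.08 per session-hour figure is the published beta rate mentioned above; the per-token rates are placeholders I've made up for illustration, not Anthropic's actual pricing.

```python
# Rough cost model for a Managed Agents session.
# $0.08/session-hour is the published beta rate; the token rates
# below are ILLUSTRATIVE placeholders, not Anthropic's real pricing.

SESSION_RATE = 0.08        # USD per session-hour
INPUT_RATE = 3.00 / 1e6    # assumed USD per input token
OUTPUT_RATE = 15.00 / 1e6  # assumed USD per output token

def estimate_cost(session_hours: float, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one session, rounded to cents."""
    cost = (session_hours * SESSION_RATE
            + input_tokens * INPUT_RATE
            + output_tokens * OUTPUT_RATE)
    return round(cost, 2)

# e.g. a 2-hour test run that burned 400k input / 40k output tokens
print(estimate_cost(2, 400_000, 40_000))  # → 1.96
```

With those assumed rates, a couple of hours of iteration lands in the same low-dollar ballpark as my $2.40 testing day — the session-hour charge is rarely the dominant term.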

If you haven’t used the Claude API Console before, I covered the difference between Claude Code and the API layer in the Claude Is Shipping So Fast video — worth a watch for context.


The Demo: Weekly AI Digest to Telegram

To test the platform I built a weekly research digest agent. Every Friday it searches Anthropic’s blog, TechCrunch, ArXiv, and a handful of subreddits — then sends a formatted update to Telegram.

Here’s the setup flow:

  1. Create the agent — name it, write the system prompt, describe what it does. The console creates the agent and provisions a cloud environment for it.
  2. Connect MCP servers — this is where you add tools. Web search, Telegram, whatever you need. You’ll see the Credentials Vault here — more on that in a moment.
  3. Trigger a test run — the Transcript view shows every search and tool call in real time. The Debug view shows token count and latency per step.
  4. See the output — the Telegram message arrives. Formatted, sourced, sent autonomously. Nothing on my machine was running.

The important thing to understand about the MCP pattern is that Telegram is just one output destination. The same agent logic could route to Slack, Notion, ClickUp, Airtable — anywhere with an MCP server. You swap the output tool, the rest of the agent stays the same.
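That swap is easiest to see as configuration. The field names below are hypothetical — the console manages the real configuration for you — but the shape is the point: the output destination is just one entry in the agent's MCP server list.

```python
# Hypothetical agent config to illustrate the MCP pattern.
# Field names are invented for this sketch; the console UI
# manages the actual configuration.

digest_agent = {
    "name": "weekly-ai-digest",
    "system_prompt": "Every Friday, compile an AI research digest...",
    "mcp_servers": [
        {"name": "web-search", "role": "input"},
        {"name": "telegram", "role": "output"},  # the only Telegram-specific piece
    ],
}

def swap_output(agent: dict, new_output: str) -> dict:
    """Replace the output MCP server; everything else is untouched."""
    servers = [s for s in agent["mcp_servers"] if s["role"] != "output"]
    servers.append({"name": new_output, "role": "output"})
    return {**agent, "mcp_servers": servers}

slack_agent = swap_output(digest_agent, "slack")
print([s["name"] for s in slack_agent["mcp_servers"]])  # → ['web-search', 'slack']
```

The system prompt, the search tools, the schedule — none of it changes. Only the one output entry does.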

For a deeper look at how I think about research agents in general, the Research Agent video covers the underlying pattern well.


Four Limitations Worth Knowing Before You Build

This is the section most reviews skip. Here’s what I actually ran into.

1. The credentials vault is not a general secrets manager. The vault stores MCP server credentials — OAuth tokens and static bearer tokens for MCP integrations. That’s it. Arbitrary API keys — Telegram bot tokens, webhook secrets, third-party API keys — can’t go in the vault. They go in the system prompt. That’s a real security consideration for anything you’d hand to a client or run in a sensitive production context.
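To be concrete about what that means in practice, here's the shape of the workaround with an obviously fake token. Anything interpolated into the system prompt travels with the prompt itself, outside the vault.

```python
# The workaround today: non-MCP secrets get interpolated into the
# system prompt. The token below is fake; the pattern is the point.

TELEGRAM_BOT_TOKEN = "123456:FAKE-TOKEN-FOR-ILLUSTRATION"

def build_system_prompt(bot_token: str) -> str:
    # The secret becomes part of the prompt text — visible in logs,
    # transcripts, and to anyone who can read the agent definition.
    return (
        "You are a weekly research digest agent.\n"
        f"Send the final digest via the Telegram bot token {bot_token}.\n"
    )

prompt = build_system_prompt(TELEGRAM_BOT_TOKEN)
print(TELEGRAM_BOT_TOKEN in prompt)  # → True
```

Compare that with vault-held MCP credentials, which never appear in the prompt at all. That gap is the reason I wouldn't hand this pattern to a client yet.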

2. No local file system access. The agent runs in Anthropic’s cloud container. It can’t see your files, your folders, your local database — nothing on your machine. Right now there’s no context outside what you build inside the console environment. If your workflow depends on local documents or internal data, this isn’t the right tool yet.

3. The interesting features aren’t in the standard beta. Multi-agent coordination, persistent memory, and outcomes are all listed as “research preview.” You have to request access separately. What you get out of the box is the base layer — useful, but not the full picture Anthropic has shown in demos.

4. It’s still early. This is a public beta. Anthropic has flagged that behaviours may be refined before full release. I wouldn’t build a critical production workflow on it yet — especially anything that involves sensitive credentials in system prompts.


How It Compares: The Table That Actually Matters

There are three Anthropic products doing agentic tasks right now, and they’re built very differently:

|                       | Claude Code (server) | Claude Cowork         | Managed Agents        |
|-----------------------|----------------------|-----------------------|-----------------------|
| Where it runs         | Your machine         | Your machine          | Anthropic cloud       |
| Local file access     | Yes                  | Yes                   | No                    |
| Machine must stay on  | Yes                  | Yes                   | No                    |
| Scheduling            | Git-based triggers   | Built-in (machine on) | API-triggered         |
| API key required      | No (subscription)    | No (subscription)     | Yes (console billing) |
| Credential management | .env / system        | .env / system         | Vault (MCP only)      |

I’ve covered both Claude Code and Cowork in detail — the Cowork Setup video and the Cowork Full Walkthrough are the best starting points if you want to understand where those fit.

Against n8n and Make: Those tools have one significant advantage right now — a mature credential management system and hundreds of pre-built integrations. Managed Agents gives you more flexible reasoning and tool-chaining, but you’re in a more raw environment. Until the credential story improves, n8n is still the safer choice for production workflows that involve sensitive keys. Storing a bot token in a system prompt is not a production-grade solution.


Where I Think This Is Going

Anthropic now has three separate agentic products with three different scheduling models. None of them talk to each other in any meaningful way — yet. Claude Code schedules via git triggers, Cowork schedules tasks on your computer but the machine must stay on, Managed Agents runs cloud-side. They’re fragments of a system that doesn’t fully exist yet.

But the trajectory is clear, and the cloud container model is the right foundation. My read on where Anthropic is heading:

  • Claude Code integration — hand off long-running tasks from Claude Code to a Managed Agent so your machine doesn’t need to stay on
  • File and folder context — give the cloud agent access to context outside the console environment
  • SSH-style access — the logical end state of the container model is an agent that behaves like a dedicated remote machine
  • A real credentials vault — one that holds any secret your agent needs, not just MCP tokens

MCP is clearly the platform bet. Every integration is being built on that standard. When the credential and context gaps close, this becomes genuinely useful for production business systems.

For now, the Mac mini stays. But I’ll be watching this closely.


Final Thoughts

Managed Agents removes the part of building AI agents that most people find painful — the hosting, the backend setup, the infrastructure. What remains is the agent logic, which is the interesting part anyway.

If you’re not technical and want to start experimenting with AI agents without setting up a server, this is a reasonable place to start. If you’re already running Claude Code on a dedicated machine, Managed Agents isn’t a replacement yet — it’s a different tool with a different trade-off set.

Try it at console.anthropic.com — you’ll need API billing set up, not your claude.ai subscription.

Watch the full walkthrough to see each step live → YouTube

Want AI working in your business?

Book a free discovery call — no pitch, no jargon. Just a conversation about where AI fits for you.

Book a Free Discovery Call