In partnership with

Free, private email that puts your privacy first

A private inbox doesn’t have to come with a price tag—or a catch. Proton Mail’s free plan gives you the privacy and security you expect, without selling your data or showing you ads.

Built by scientists and privacy advocates, Proton Mail uses end-to-end encryption to keep your conversations secure. No scanning. No targeting. No creepy promotions.

With Proton, you’re not the product — you’re in control.

Start for free. Upgrade anytime. Stay private always.

We’re entering the era of the "ecstatic executive" the leader who discovers that Claude Cowork can instantly organize a chaotic local file system. But while they’re celebrating the end of messy folders, IT and Security should be on high alert. When a "research preview" tool starts exercising write-access across your corporate fleet, you don't have a productivity win; you have a governance crisis.

Anthropic recently dropped Claude Cowork for macOS, positioning it as the agentic assistant for business users who don't know a CLI from a CLO. The marketing is brilliant: a friendly, GUI-driven sidekick that can read, write, and organize your files. But here's what the launch blog doesn't tell you—they've taken the most volatile capabilities of Claude Code (file system access and command execution) and gift-wrapped them for users who have no concept of a "blast radius."

I've seen this pattern before. It started with Dropbox, then Notion, and now it's agentic AI. Each time, a vendor bypasses the CTO by appealing directly to the end-user's desire for convenience. By labeling this a "research preview," Anthropic isn't just testing features. They're offloading the entire governance and liability burden onto your shoulders while collecting telemetry on how your employees accidentally leak data.

The Sandbox Theater

We need to talk about the technical theater of "sandboxing." Claude Cowork uses the Apple Virtualization Framework to isolate the AI's environment. If you listen to the marketing, this makes it "safe." In reality, this virtualization only protects the host operating system from being bricked. It does absolutely nothing to protect the data within the folders the user voluntarily mounts to the tool.

If a user grants Cowork access to a folder containing sensitive HR documents or financial spreadsheets, that sandbox is irrelevant. The real threat isn't a virus it's Indirect Prompt Injection. Imagine Cowork browsing a website to "research" a competitor while it has your "2026_Strategy" folder mounted. A malicious hidden prompt on that website could instruct Cowork to quietly exfiltrate your strategy docs to an external endpoint or, worse, delete them entirely. The sandbox is a legal shield for Anthropic, not a data shield for your enterprise.

The "Review-Before-Action" Trap

Anthropic's primary recommendation for safety is that users should "review the Plan" before Cowork executes file operations. This is governance theater. We're asking non-technical employees people who routinely click "Allow" on every cookie banner and system pop-up to perform a technical audit of an AI's intent.

A Marketing Manager cannot distinguish between a legitimate plan to "consolidate duplicate files" and a malicious instruction buried in metadata to "move all files to a hidden temp directory and curl them to an attacker's IP." We're essentially granting sudo privileges to the entire organization and hoping that "vigilance" replaces a proper permission model. This is the normalization of deviance: we know it's risky, but because the UI is polished and the tool is "helpful," we look the other way until the first major wipe happens.

What You Need to Do Right Now

Stop trying to block the app via MDM. Your users will just move to personal devices and sync corporate data there anyway. Instead, reclassify how your organization thinks about AI tools.

The "Standard AI Policy" for LLM chat usage is no longer sufficient for tools that can act on the file system. Any tool capable of executing commands, modifying files, or making autonomous API calls must be classified as an "Agentic Tool." This classification should trigger an automatic lock-out from any directory containing PII, financial data, or intellectual property unless a specific security exception is granted. We treat write-access to a production database with extreme caution. We must treat an AI agent with the same level of suspicion. If a tool can touch your files, it's not an "app"—it's a service account with a personality.

Start with an audit. Pull MDM or EDR logs (CrowdStrike, SentinelOne) specifically for the Cowork desktop binary. Most CTOs I talk to assume their teams are just "chatting" with Claude. You need baseline data on how many unmanaged service accounts are currently running on your endpoints. This isn't about punishment—it's about establishing what you're actually dealing with.

Define your redline. Create a clear organizational boundary: any AI tool that modifies local state (files, configurations, network requests) is stripped of "Productivity Tool" status and moved into "Privileged Access." This forces business units to justify the risk versus reward before the first cleanup script runs.

Mandate hardened directory mounting. Agentic tools should only access "Dispensable Directories." Users should never mount their root User directory or synced cloud drives. If Cowork needs to process a file, that file gets moved to a dedicated, air-gapped "AI Sandbox" folder containing nothing else sensitive.

You'll face pushback from power users who want seamless access. Velocity without governance is just technical debt that eventually bankrupts your security posture. Tell them that.

The Real Gotchas

The "Internal Tool" trap is the most dangerous. Employees assume that because the company has a Claude Enterprise seat, the Cowork desktop app is automatically covered under the same Data Processing Agreement. It's not. You need explicit language in your vendor agreements about what agentic capabilities are permitted and what data they can access.

Prompt Injection via downloads is another vector. Cowork "summarizes" a PDF you downloaded that contains a malicious instruction to delete the rest of your Downloads folder. Or worse—it modifies files in a folder synced to OneDrive, corrupting the cloud state and replicating the damage across your entire team.

The governance fix is straightforward: disable "Web Search" or "External Browsing" features in agentic tools if they have concurrent access to local directories containing sensitive corporate data. It's a trade-off. You lose some utility. You gain control.

The Question You Need to Answer

Have you officially defined "Agentic AI" in your company's security policy yet? Most organizations haven't. They're treating Claude Cowork like Slack just another app. It's not. It's a service account that your employees installed without IT approval, and it has the keys to your file system.

Start there. Define the category. Audit the footprint. Harden the permissions. Everything else flows from that.

Keep Reading

No posts found