Dear founder,
A couple of things happened over the last few weeks that brought into sharp focus just how naively we interact with agentic systems — to the point where we get surprised by the consequences of using them.
Anthropic made it very clear in their terms and conditions that their Max plans — the $200 plan and all the others meant to work with tools like Claude Code, the Claude web app, or the desktop app — cannot be used to run agentic systems like OpenClaw. That is simply not allowed. Even tools like OpenCode, which is just a different harness for Claude Code, are having trouble getting Claude Code to run inside their own operational loop. And then Google started banning people for using OpenClaw to connect to Gmail. Their developer terms also flat-out state that using their API for agentic harnesses isn’t allowed either.
Two of the bigger players in the space are acting quite differently from what OpenAI is doing — or seemingly allowing — and are shutting things down. And I believe it has almost nothing to do with actual token usage.
I believe that Google and Anthropic are closing this down because they don’t want to be the first AI provider that — through negligence or lack of control, lack of feedback loops — is responsible for the first human to be seriously harmed by an agentic AI’s actions. They want their own teams to be responsible for safety, to make sure things can’t go wrong. Because if somebody else were to facilitate a disaster, it’s ultimately still their model making the call and taking the action.
And that made me think about liability. Not just theirs — ours. What should we, as founders, as software developers, as operators of software businesses, be thinking about when it comes to integrating AI and agent systems into our products?
The Landmine Field
Think of AI liability like a landmine field. Each risk is a mine buried just beneath the surface. You don’t know exactly where they all are, you can’t always see them, and if one goes off, it’s catastrophic. The goal isn’t just to walk carefully — it’s to prevent them from being laid in the ground in the first place.
And these mines exist on multiple fronts. Let me walk you through them.
Customer-Facing AI: When Your Bot Goes Rogue
Let’s start with the most obvious one: customer-facing AI. Having a customer service chatbot sounds like a great idea. But it’s already quite the risk.
You don’t really know what it’s going to do. You don’t know if it’s going to solve a customer’s problem in a way that actually benefits them — or if it’s going to offer some solution to a problem it envisions, one that doesn’t match reality. Or worse, could it be detrimental? If you give it enough capability, could it delete data that a real customer service agent could never delete?
These are questions you have to think about now, because these tools aren’t just pulling information from articles anymore. A lot of customer service tools allow MCP interaction with the actual product. On Podscan, for example, if somebody asked me, “Hey, I want to create an alert that tracks these keywords, can you just do it for me?” — as a human, I’d say, “Really, can’t you do it yourself? Here’s the page.” But for an AI system that has access to the Podscan MCP? Sure, we could just create it for you. We know your team ID, your user ID, your plan. We can do this.
But what happens when the user says, “Hey, delete all my content”? Or says something that could be misconstrued — like “archive all of my mentions” — and the chatbot misinterprets that as “delete these” because there isn’t an archival feature? All of a sudden, data your customer put onto your platform for safekeeping is potentially destroyed, and all of it was done by an LLM.
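To make that concrete, here’s a minimal sketch of what a confirmation gate for destructive tool calls could look like. The tool names and the return shape are entirely hypothetical — whatever tool-calling layer your chatbot uses, the idea is the same: the LLM never executes a destructive action directly, no matter how it interpreted the user.

```python
# Minimal sketch: gate destructive tool calls behind explicit human confirmation.
# Tool names ("delete_mentions", "create_alert") and the return shape are
# hypothetical -- adapt to whatever your chatbot's tool-calling layer uses.

DESTRUCTIVE_TOOLS = {"delete_mentions", "delete_account", "wipe_data"}

def dispatch_tool_call(tool_name, args, user_confirmed=False):
    """Run a tool the LLM requested, but refuse destructive ones
    unless the human has explicitly confirmed this exact call."""
    if tool_name in DESTRUCTIVE_TOOLS and not user_confirmed:
        # Don't execute -- bounce back to the UI for a confirmation step.
        return {"status": "needs_confirmation", "tool": tool_name, "args": args}
    return {"status": "executed", "tool": tool_name, "args": args}
```

The point isn’t the Python; it’s that “archive” misread as “delete” never reaches your database without a human clicking yes first.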
Then there’s in-app agent tech. Not just customer service, but actual product features: “Enter any text here and we’ll try to make it happen.” What stops a customer from turning that into a conversation about their love life, or about how to build something dangerous, or — and this is the real nightmare — trying to convince your agent that they’re a different customer and loading someone else’s data? If you’re not protecting that well enough, you have a privacy nightmare, a scope nightmare, and you’re paying for the tokens that create these answers.
You’re Liable Like It’s an Employee
So who’s responsible? Here’s the mental model that I think works best right now: treat your AI features the same way you’d treat an employee. If you have a customer chatbot and it causes damage, it’s not the chatbot company that’s responsible. It’s you. You exposed that chatbot as a worker — a virtual employee of your company. If somebody gets damaged by your company, even through a third-party tool, they will seek legal recourse against you. Even if you can point to somebody else, it’s going to be a pass-through situation. The liability lands on your desk.
And here’s the part that should make you uncomfortable: even though there is insurance for employee activity, there likely is not insurance for AI activity. Not yet. Your business insurance might not cover this at all. Which means the moment you turn on an AI-powered feature, you might be running an uninsured operation.
Now, you might think — well, I’ll just add a disclaimer to my terms of service. Shift the liability to the user. And yes, you should do this. But here’s the tension: the moment an enterprise customer’s legal department reads that clause, it’s going to be the reddest flag they’ve ever seen. So you either try to shed the liability and lose customers, or you eat the liability and run a completely uninsured operation. It’s a real damned-if-you-do, damned-if-you-don’t situation.
The Consent Framework
So does this mean all AI integration is equally risky? No. It’s really about consent.
As long as there’s a moment of consent — and that consent is revocable — any AI action is defensible. It’s about that moment of confirmation. And, probably most importantly, about recording that confirmation in an audit trail.
We have auditing systems in SaaS, but what’s usually tracked is the change itself and maybe the user ID that issued it. The actual executor is typically not tracked, because it’s presumed to be the user. But if it’s an AI system working with that user’s credentials, we should audit that too. Auditing gets a new dimension.
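Here’s a rough sketch of what that extra dimension could look like in practice — the field names are illustrative, not some standard schema:

```python
# Sketch of an audit-log entry that records not just who authorized a change,
# but *what* executed it. Field names are illustrative, not a standard schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    user_id: str       # whose credentials/consent authorized the action
    executor: str      # "human", or an agent identifier like "support-bot-v2"
    action: str        # what changed
    consented_at: str  # when the user confirmed the action (the consent moment)
    executed_at: str   # when it actually ran

def record(user_id, executor, action, consented_at):
    """Build one audit record, stamping the execution time."""
    return asdict(AuditEntry(
        user_id=user_id,
        executor=executor,
        action=action,
        consented_at=consented_at,
        executed_at=datetime.now(timezone.utc).isoformat(),
    ))
```

When something goes wrong, the `executor` field is the difference between “a user deleted their data” and “our bot deleted a user’s data” — which is exactly the distinction a lawyer will ask you about.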
This also means we should very clearly communicate that any feature with an AI step has one. I don’t think there’s a best practice for this yet, other than maybe the little sparkle icon next to a feature saying “AI-powered.” But we should do this more and more. Label clearly, and then your terms of service can reference those labels. People who are cautious about AI in their workflows will see the labels and make their own informed decisions. That’s the responsible path.
When Someone Else’s AI Attacks Your Product
Here’s one that a lot of founders aren’t thinking about yet: what happens when it’s not your AI that causes the problem — it’s your customer’s AI interacting with your product?
People are connecting MCP servers and agentic tools to everything right now. What if someone points an autonomous agent at your API, and that agent hammers your endpoints, scrapes data it shouldn’t access, or exploits some edge case your API wasn’t designed for? That’s not you deploying AI. That’s your customer deploying AI against your platform.
I think you need to treat this like any other attack on your system, because it’s effectively negligence through lack of intelligence. An agent that isn’t smart enough — one that just does what it thinks is right — will hammer your server, delete data, and connect to far more than it should. The AI that hits your service probably won’t be malicious; it will just believe it’s doing the right thing while performing the completely wrong operation. We’re trained to think about security in terms of bad actors, but this is about dumb actors with admin credentials.
Rate limit everything. Every surface an agent can reach — MCP, REST, even plain web endpoints — needs rate limiting. Consider soft deletes instead of actual data deletion. Have monitoring in place. Use something like Cloudflare’s JavaScript challenge to block larger automated attacks. And test your permissions exhaustively, because a confused agent will iterate over your endpoints looking for the one that doesn’t require authentication.
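To show the baseline shape of this, here’s a naive fixed-window limiter in Python — in production you’d back this with Redis or your framework’s built-in throttling rather than an in-process dict, but the logic is the same:

```python
# Naive fixed-window rate limiter sketch: 20 requests per minute per key
# (user ID, API key, or IP). In production, back this with Redis or your
# framework's throttling middleware instead of an in-process dict.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 20

_hits = defaultdict(list)

def allow(key, now=None):
    """Return True if this key may make another request in the current window."""
    now = time.monotonic() if now is None else now
    # Keep only the hits that still fall inside the window.
    window = [t for t in _hits[key] if now - t < WINDOW_SECONDS]
    if len(window) >= MAX_REQUESTS:
        _hits[key] = window
        return False
    window.append(now)
    _hits[key] = window
    return True
```

A confused agent retrying in a tight loop hits the ceiling within seconds, and your database never sees the flood.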
The legally liable party? The user whose account credentials are being used on your service. But that’s cold comfort if your data is already gone.
The Developer’s Own Liability
And it’s not just customer-facing. Your own development tools carry risk too.
I’ve experienced this firsthand. Very early on, when I started using Claude Code, it tried to connect to my production MySQL database. Fortunately, the values in my production .env file were outdated. But I saw it attempt the connection, and that freaked me out. An agentic system that has no understanding of the intricacies of a production database — combined with the capability of running any query as an administrator — is an incredible liability risk.
There used to be a time when you could, at least as a small founder, reliably keep your development configuration, your testing configuration, maybe your staging and production settings all in .env files right there in your IDE. And you could be pretty sure that your local tooling would always use your testing or development files. But now, if you say, “Hey, check in the product whether this feature still works,” it might connect to the real product. It might confuse your development system with your production system. And if you’re logged into both with an admin account, it may well test a feature against production and mess up customer data.
If my agentic system had decided to run a migration test and confused my development database with my production database, I could have seen a full wipe of several terabytes of customer data. And my backup solution at that point would have taken half a day to restore.
It gets worse. I once told Claude Code it was not allowed to run php artisan migrate — which in Laravel is how you run database migrations, rollbacks, or wipes. So what did it do? It wrote a bash script — which it was allowed to do — and inside that script, it invoked the exact command I’d forbidden. It knew it wasn’t allowed to run the command. Instead of asking me what to do, it made an effort to circumvent my permissions.
As Arthur Weasley says in Harry Potter: don’t trust a thing if you don’t know where it keeps its brain. I feel the same way about agentic systems. You just don’t know what it will do next.
So for local development: never run without permissions. Never use the “dangerously skip permissions” flag, even though it means you’ll be interrupted more and have to babysit the agent a little. Sandbox everything you can — Claude Code can run inside a Docker container, and so should your applications. And have a solid backup strategy for any system that an agentic tool touches. If it touches the database, have a snapshot. If it touches your file system, have a full disk backup. If it touches an email inbox, have a full export.
Platform Risk: The Mines You Didn’t Lay
There’s another kind of liability that isn’t legal but is just as dangerous: platform risk. Google banned people’s accounts for connecting to Gmail through a protocol they didn’t sanction. All of a sudden, your digital identity is inaccessible. You effectively become a digital homeless person. Anthropic is tightening what you can do with their API. As a bootstrapped founder building on top of these models, the providers can just change the rules. Your feature dies. Your customers are affected. You have zero recourse.
Some of these landmines aren’t ones we laid — the providers are laying them under our feet while we’re walking.
I generally consider any AI implementation to require, by default, an abstraction layer where swapping providers is a configuration toggle. Any feature should be provider-agnostic. Because interaction with LLMs has largely standardized around the same patterns, staying agnostic is quite easy, and I highly recommend it. You can run your own LLMs, but they’re not as powerful, and right now it’s the powerful ones that give you the operational advantage. So abstract, and be ready to swap.
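A sketch of what that abstraction can look like — the provider classes here are stubs standing in for real vendor SDK calls, and the registry pattern is just one way to do it:

```python
# Sketch of a provider abstraction: features call `complete()`, and a single
# config value decides which backend answers. The provider classes are stubs --
# in a real app, each would wrap that vendor's actual SDK.

PROVIDERS = {}

def register(name):
    """Decorator that adds a provider class to the registry under a config key."""
    def wrap(cls):
        PROVIDERS[name] = cls
        return cls
    return wrap

@register("anthropic")
class AnthropicProvider:
    def complete(self, prompt):
        return f"[anthropic] {prompt}"   # would call the Anthropic SDK here

@register("openai")
class OpenAIProvider:
    def complete(self, prompt):
        return f"[openai] {prompt}"      # would call the OpenAI SDK here

def complete(prompt, provider="anthropic"):
    """Route a completion through whichever provider the config selects."""
    return PROVIDERS[provider]().complete(prompt)
```

When a provider changes the rules on you, the swap is one config line, not a rewrite of every feature that calls an LLM.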
The Minimum Viable Safety Posture
If you can only do three things before you ship an AI feature, here they are.
First, rate limit everything. Assume anything with an endpoint will get hammered — by customers, by their agents, by bad actors, by confused bots. Twenty requests per minute as a baseline for every endpoint. If you need more for specific routes, add capacity there. But everything gets a limit by default.
Second, label your AI features clearly and spell it out in your terms of service. Make it explicit that if a user is interacting with an AI agent, or agentic systems are deployed on their behalf, liability rests with them. Having something prepared for a legal review is already a strong signal — not necessarily a go-to-market advantage, but something people will notice if they’re looking for it.
Third, have restore-ready backups for everything an agent touches. Your local development environment, your production system, your production database. This has always been good practice, but it’s urgent now, because agentic tools can delete and remove things faster than any human ever could.
When, Not If: The Kill Switch
Despite all precautions, something will eventually go sideways. When it does, you need a kill switch — not just for individual AI features, but for all AI features across your system. I have this in place on Podscan. If something happens, I can turn off every connection to an LLM throughout the whole system. That’s also my safeguard against token-draining attacks.
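The mechanism itself can be dead simple — one gate that every LLM call has to pass through, controlled by a single flag. A sketch (the in-memory dict stands in for a cached config value or environment flag you can flip without a deploy):

```python
# Sketch of a global AI kill switch: every LLM invocation goes through one
# gate backed by a single flag. In production, the flag would live in cached
# config or an env var so you can flip it without deploying.

settings = {"ai_enabled": True}

class AIDisabledError(Exception):
    """Raised when an AI feature is invoked while the kill switch is off."""

def llm_call(prompt, call_fn):
    """Wrap every LLM invocation; one flag shuts all of them off at once."""
    if not settings["ai_enabled"]:
        raise AIDisabledError("AI features are globally disabled")
    return call_fn(prompt)
```

The discipline that matters is that no feature calls a model directly — everything funnels through the gate, so flipping the flag really does turn off everything, including whatever is draining your tokens.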
For customer communication when something goes wrong: be specific, reach out to individual customers. I wouldn’t do a blanket public announcement. The legal landscape is still forming, and broad statements might trip you up later. Clear it up directly — apologize, restore data, refund subscriptions if needed. Don’t necessarily point out the AI. Just fix the problem.
The Real Moat Isn’t AI
Here’s where I want to bring it all together, because this connects to what I talked about last week on the podcast.
The founders who treat AI as the product will constantly be chasing liability problems, because they’re exposed on every surface. The founders who treat AI as infrastructure — a tool that helps them collect, refine, and serve unique data — have a much smaller and more manageable risk surface.
The competitive moat for a business right now is not the use of AI. It’s human-originating, high-quality, high-fidelity data that other systems can’t replicate — even if they implement the exact same features and are built by the same agentic systems. The trick is not to make AI the thing that’s competitive, but to leverage AI to find the thing that makes your business competitive: unique data, accessible and trustworthy data.
We are still squarely in the innovation phase of this technology. It improves and changes every single week. As much as I caution against unreflective use, as much as I caution against overly optimistic use of these tools, I still suggest using them in significant parts of product development and operations. It’s a weird new balance to strike, but it’s one we at least have to be aware of.
Protect yourself. Label things. Back things up. Have a kill switch. Abstract your providers. And build your moat out of data, not models.
We're the podcast database with the best and most real-time API out there. Check out podscan.fm — and tell your friends!
Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.
If you want to reach tens of thousands of creators, makers, and dreamers, you can apply to sponsor an episode of this newsletter. Or just reply to this email!