Why AI-Generated Code Hurts Your Exit — The Bootstrapped Founder 418


Dear founder,

We’re living through a fascinating moment in software development. AI coding tools can build features faster than ever before. They can scan entire codebases, spot things we might miss, and implement changes across dozens of files in seconds. It’s incredible.

But there’s something we need to talk about. Something that’s quietly accumulating in our projects while we marvel at how quickly we can ship features.

This is called comprehension debt.

A quick word from our Sponsor, Paddle.com. I use Paddle as my Merchant of Record for all my software projects. They take care of all the taxes, the currencies, tracking declined transactions and updated credit cards so that I can focus on dealing with my competitors (and not banks and financial regulators). If you think you’d rather just build your product, check out Paddle.com as your payment provider.

A New Kind of Technical Debt

Comprehension debt is related to technical debt, but it’s fundamentally different. Technical debt is about choices developers consciously make to defer work until a later stage of the business. You know you’re taking a shortcut. You know you’ll have to come back and fix it. That’s traditional technical debt.

Comprehension debt is different. It’s when we don’t comprehend what the system does anymore. We don’t understand it because we never built the understanding in the first place. And this is happening more and more with AI-generated code.

The Theory of Your Codebase

There’s a concept I came across from Peter Naur, the computer scientist behind the Backus-Naur form. He calls it “theory building.” The idea is that a program effectively dies when the team that holds the theory, the mental model, of that particular piece of code is dissolved.

The code will keep running and producing results, sure. But the moment you try to modify it, you’re in trouble. You don’t have the theory to actually work with it. Naur says a new team has to rebuild not just the program, but also its underlying theory.

This has always been a challenge in software projects. But AI makes it exponentially more complex.

How AI Builds and Destroys Understanding

Here’s what happens when you use an AI coding tool. You give it a prompt for a feature. The AI assembles an internal mental model of what needs to be done and what needs to be changed. You see this in all these wonderful coding tools: they pull together things from different parts of the codebase. They figure it out. They know what to change. They even see connections that you as a developer might have missed, because they can scan through the whole codebase and see all the little moving parts.

So the AI assembles this theory, probably incomplete in some ways, but adequate for this particular task. It builds a theory of the codebase, of the product. Then it implements the feature.

And if you’re lucky, if it’s in your system prompt, you tell it to write tests or document the behavior in some way.

But here’s the problem: the actual model that it created, the mental model, the theory of the product, is never persisted. It’s immediately lost. Once the prompt is run through and the context of that conversation is done, the model is gone. Nobody knows it.

So as you keep adding more and more features to your product, you’re facilitating the creation of a new model every single time you prompt. A model that is built up, used for the implementation, and then torn down.

And if you don’t read the code, if you don’t understand what is added and why, you will not develop this model as a developer. You will never comprehend the underlying theory of the product.

The Maintenance Problem

This creates a real problem. If the AI, for whatever reason, is ever unable to rebuild the understanding of the codebase that it had in the past, it will not be able to modify and change the system adequately and correctly.

I think one solution being developed right now is context persistence in our agentic coding systems. Not just about the prompt you’re currently giving, but a deep understanding of the codebase maintained over time. A document, if you want to call it that, in which you work.
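To make that concrete, here is a minimal sketch of what such a persisted document could look like, kept in the repository and fed into every agent session. The file name, the sections, and their wording are all hypothetical; the point is simply that the theory lives somewhere durable instead of inside a throwaway conversation.

```
# THEORY.md (hypothetical file name)
# The living mental model of this codebase, maintained over time and given to every agent session.

## What this product does
One or two paragraphs, in your own words, about what the product is and who it serves.

## Architectural choices and why they were made
- The patterns every feature is expected to follow.
- The shortcuts that were taken deliberately, and when they should be revisited.

## Decision log
- Date, the prompt or ticket that triggered a change, and the reasoning behind it.

## Known fragile areas
- Places where the theory is thin and a human should review every change.
```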

But if you are a developer who is already capable of building mental models from reading code, then you really have to do code review. If you just want to build something quickly and you don’t care about this, then using AI agentic systems to create code for you is fine. But you will find that eventually, the ownership of the theory of the codebase needs to lie with you.

The Business Transfer Problem

This becomes especially critical if you want your business to be acquired or if you want to hire.

You have to be able to transfer not just the raw codebase over to somebody. You have to be able to transfer the internal knowledge of the codebase to this person. Usually this happens through training—training somebody to replace you in a business, or as you grow, hiring your CTO, your lead developers, junior developers.

Everybody who gets to work on the codebase is trained into it. You transfer parts of, or the whole of, your mental model to those people. You transfer it over so that they can eventually integrate the model into their own brain and keep the theory alive.

But what happens when you don’t have that theory yourself?

Before, it was impossible to build a functional software business without deeply understanding the software of your product. Now, there’s a chance that someone could buy a product that nobody understands. Nobody.

The Acquisition Risk

This adds a very real layer of risk. You have time bombs unintentionally built into the product that will quickly surface once someone buys the business, because the new owners will want to make changes to underlying systems and structures that the previous owner, the person who prompted all of this into existence, never thought about.

When you integrate a new software as a service business into your portfolio as a private equity company, or you integrate it into your existing system as a strategic buyer, there are things you will touch that the person who built this never considered. That’s inevitable. You buy a business because it has components that your current system lacks, and your current system has features that this new system lacks.

So the chance of quickly finding a hidden time bomb is quite high.

This is likely going to be priced into the acquisition, the money they pay for it. If you cannot show them that the code is well documented, that these time bombs are avoidable, manageable, or non-existent, or that you are still in complete control of the codebase and hold the theory of that codebase, of the product, you’ll see it reflected in the valuation.

I think buyers will become very, very intimately aware of this in the future, to the point that they will be demanding not just a codebase, but a kind of documentation system, a traceable and trackable history of the projects and the choices that were made within it. They will need some way to train their developers or the agentic systems that they have established internally on the correct functionality of your codebase and how to best work with it.

At some point, you will not be available to them anymore. And if the product was all AI-generated, you might not even know how it works yourself by the time you sell the business.

Fighting Comprehension Debt

So what do we do about this?

Right now, to deal with this kind of comprehension debt, I highly recommend making it part of your system prompt, for all these agentic tools that you use, to comment very deliberately. Have them put comments into the system that facilitate a theory and a mental model of your codebase. Make it maintainable. Have them write code that helps the thing stay maintainable.

Document the choices that were made along the way. Maybe even document the prompts that went into the codebase. Keep your prompts logged, so you can step through them historically and see what changes were made and what choices were made for and through those changes.
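Here is a minimal sketch of what that logging could look like, assuming you keep the log as a file in your repository. The file path, the entry fields, and the log_prompt helper are illustrative, not part of any existing tool.

```python
# A minimal sketch of a prompt log, kept next to the code it produced.
# The "docs/prompts.log.jsonl" path and the entry fields are assumptions, not a standard.
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("docs/prompts.log.jsonl")  # one JSON object per line


def log_prompt(prompt: str, summary: str) -> None:
    """Append the prompt, a one-line summary of the change, and the current
    git commit, so you can later step through what was asked and why."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "commit": commit,
        "prompt": prompt,
        "summary": summary,
    }
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_prompt(
        "Add CSV export to the invoices list, matching the other list views.",
        "Added an export button and a CSV endpoint for invoices.",
    )
```

Because every entry records the commit it produced, you can later walk the git history and the prompt history side by side and reconstruct not just what changed, but why.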

A Tool Idea

This is also an interesting field to build a tool in. It would be very interesting to offer a service or tool that constantly checks the internal theory, as much of it as can be read from the codebase and deduced by reading through it with AI tooling.

This tool would constantly keep the theory available to AI systems, kind of as an MCP server with a description of code examples or code locations, and, over time, track when the theory changes.

For example, let’s say you’ve determined, as the program’s designer, that in every list you create, people can sort the list, filter the list, and export the list as a CSV file. That’s just a choice you made. That’s the theory of your product—it does this. In every list, you have shared components, shared views, shared design patterns.

Over time, you’re prompting and prompting, and all of a sudden, the AI chooses to build a list that is sortable and filterable but doesn’t have an export. At that point, the theory of your system changes from “every list has these features” to “some lists have this feature.”

At that point, you should either be alerted (that would be an interesting business to build) or your agentic system should be told that there is a discrepancy, so it can bring the code back in line with the theory it had before.

I think that would be an interesting tool. Kind of an internal consistency checker, a code quality maintenance tool. We don’t really have many tools right now that deal with code quality before errors happen. But I think this would be one of them.
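As a rough illustration, here is what a first, deliberately crude version of that check could look like for the list example above. The directory layout, the *List*.tsx naming convention, and the keyword matching are assumptions about a hypothetical codebase; a real tool would lean on AI or the syntax tree rather than string matching.

```python
# A minimal sketch of a "theory drift" check for the rule that every list view
# supports sorting, filtering, and CSV export. Paths and naming are hypothetical.
from pathlib import Path

COMPONENTS_DIR = Path("src/components")
REQUIRED_CAPABILITIES = ["sort", "filter", "export"]  # the theory: every list has these


def check_list_components() -> list[str]:
    """Return a warning for every list component that drifts from the theory."""
    warnings = []
    for path in COMPONENTS_DIR.rglob("*List*.tsx"):
        source = path.read_text(encoding="utf-8").lower()
        missing = [cap for cap in REQUIRED_CAPABILITIES if cap not in source]
        if missing:
            warnings.append(f"{path}: missing {', '.join(missing)}")
    return warnings


if __name__ == "__main__":
    for warning in check_list_components():
        print(warning)
```

Run something like this in CI, and the moment a new list ships without an export, somebody, whether human or agent, is told that the theory of the product just changed.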

The Founder’s Responsibility

So maybe comprehension debt can be fought with a deeper and more explicit understanding of the underlying theory in our software businesses. As founders, we have to recognize that taking on comprehension debt is an additional risk, one that an acquirer, or our future selves, will have to pay for at some point.

Really be careful with how much of the theory building you allow your AI to do and control. Or at least be intentional about how much of this theory you retain.

Because at the end of the day, your business isn’t just the code. It’s the understanding of what that code does, why it does it, and how it all fits together.

That understanding is becoming more valuable than ever.

Thank you for listening to The Bootstrapped Founder.

Find me on Twitter @arvidkahl.

Attention founders, PR experts, and marketing teams: Are you missing critical conversations about your brand? Podscan.fm monitors over 4 million podcasts in real-time, alerting you when influencers, customers, or competitors mention you. Turn unstructured podcast chatter into competitive intelligence, PR opportunities, and customer insights with our powerful API.

And if you’re a founder searching for your next venture, discover validated problems straight from the market at ideas.podscan.fm - where AI identifies startup opportunities from hundreds of hours of expert discussions daily, so you can build what people are already asking for.

Share this with anyone who needs to turn conversations into competitive advantage.

Thank you for listening. Have a wonderful day, and bye bye.


