From Code Writer to Code Editor: My AI-Assisted Development Workflow — The Bootstrapped Founder 395


Dear founder,

If you're into PHP, Laravel, and how Taylor Otwell turned an open-source framework into a thriving multi-million dollar empire, check out this week's interview episode of my podcast!

There’s something deeply unsettling about being dramatically more productive while feeling like you’re barely working. That’s the strange paradox I find myself in every day now with AI-assisted coding. My output has multiplied over the last few months, yet I often feel like I’m under-utilizing my time. It’s probably the most interesting and confusing part of how I build software today.

This shift has been so massive that I can barely recognize how I used to work. The difference isn’t just in the tools—it’s in the entire role I play as a developer. And I want to share exactly how this works, because I think we’re witnessing a fundamental transformation in what it means to build software.

The Voice-to-Code Pipeline

Let me walk you through what I actually did today, because it perfectly illustrates this new workflow.

Whenever I need to build something that extends existing code—whether I wrote it or AI wrote it previously—I’ve found the most effective approach is to draft a very well-defined specification of what I want. But here’s the key: I don’t type these specifications. I speak them.

I have a microphone attached to my computer for podcasting, and it’s always on. I use a tool called Whisper Flow on my Mac that lets me hit a key command, start speaking, and when I hit the finish command, the full transcript gets pasted into whatever input field is active. Whether that’s ChatGPT, Claude, Perplexity, or my coding assistant—anywhere I want to put text.

This is so much faster than typing, and the transcription quality is excellent because Whisper Flow has an AI step at the end that smooths out the transcript. Instead of just raw transcription, it reduces misspellings and makes the text more coherent.

So when I have a coding task for Juni—that’s the AI coding assistant for PhpStorm, my IDE of choice—I just start talking. I might switch between windows, look up articles, research related information, all while verbalizing my thoughts. Once I’m done speaking, I go back to my IDE, select the Juni window, and paste what I said. That becomes my prompt.

The Anatomy of an Effective Prompt

These spoken prompts typically follow a specific structure. I start by speaking through where we are right now—what’s the current status of the code I want changed, which files are relevant, what business logic will be impacted. Then I describe what I want the changes to look like: interface components, different wording, new logic, different outcomes.

I try to prompt outcomes first. I give as much detail about the desired outcomes as possible. About half the time, I also provide detailed implementation steps, because sometimes I know exactly what the solution should look like—I just don’t want to type it out. I’ll say something like, “Here’s the class I would create, and here’s the job I want you to create for this, and the job gets dispatched at this point in that file.”

I usually don’t give line numbers. I just name the functions where changes should happen, and that has a pretty high rate of correct completion.
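To make that concrete, here is a condensed, hypothetical version of what one of these prompts might look like once transcribed. The class, job, and file names are invented for illustration, not taken from my actual codebase:

```
Current status: Episode import lives in ImportService::importEpisode(), and the
episode model is app/Models/Episode.php. Right now, transcript processing happens
inline during import, which blocks the request.

What I want: When an episode is imported, transcript processing should run in a
queued job. The import response must not change for API consumers. If processing
fails, the episode stays in a "pending" state and the failure gets logged.

Implementation (if helpful): Create a ProcessEpisodeTranscript job, dispatch it at
the end of importEpisode(), and move the processing logic out of ImportService.
Search for every place that reads the transcript directly so nothing else breaks.
```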

The 40/20/40 Framework

After developing this workflow over months, I’ve noticed my time breaks down into a consistent pattern: roughly 40% setting up the prompt, 20% waiting while code is generated, and 40% reviewing and verifying the code.

That 40% upfront investment in the prompt is crucial. The more context you give the AI, the less likely you are to run into unexpected errors. A highly contextualized prompt will generate code that doesn’t surprise you.

I’ve found that being verbose helps tremendously. I often repeat myself when describing what I want, especially for critical business logic. If I’m dealing with important data that could be corrupted or mishandled, I’ll lay out every single scenario, what the data should look like, what’s allowed, what’s not allowed. I make it almost repetitive to ensure the AI system understands every case.

This level of detail pays off because the AI takes that same care and projects it into other parts of the application that you might not have even thought would be involved in the changes.

Beyond the Code Generation

When I know that multiple files and complex interactions are required, I give the AI explicit instructions to be thorough in the planning stage. I’ll tell it to search for all files where certain code might be relevant, or where models that are being changed are used. I want it to find every place that needs modification instead of jumping to the first place and forgetting others.

Inside my IDE, I open two or three core files that I know will be involved before running the AI. This gives the agent anchor points—it doesn’t have to search the whole codebase, but can start with these key files and find references from there. I’ve found much better success with this approach than giving the AI either the entire codebase or nothing at all.

Usually, the AI runs for five to ten minutes, depending on the task complexity. Sometimes for very large features requiring dozens or hundreds of file changes, I need to come back and say “continue” to finish the implementation. But for normal or mid-scope features, one shot is usually enough.

The Most Important Part: Code Review

Then comes the 40% that’s probably the most crucial part of this workflow: code review. This is where I investigate every change, line by line. Looking at code you didn’t write requires intense focus, but it certainly beats having to write all the code yourself.

Since I’ve given such a specific, scoped definition of what I want, the generated code usually aligns well with my expectations. But I have one non-negotiable rule: I must understand every single line of code written by an AI agent. Even if it looks good and works, I always test changes immediately to catch any logic errors.

Most of the time—about 80% in my experience—the code works on the first try. When there are issues, they’re usually small: a forgotten import, slightly incorrect syntax, minor logic errors. These typically take no more than two or three changes to fix.

Code review gets harder when entirely new files are created. When there’s a completely new job being defined, I have to actually read and understand the entire definition. I can’t just look at what changed and verify if it looks right—I need to dive deeply into the logic.

Guidelines and Best Practices

One of the most powerful features of these AI coding assistants is the ability to set guidelines. Juni, for example, lets you define coding standards that get applied every time the agent runs. You can instruct it to create unit tests for every new method, integration tests for every new job, define your testing suite, specify your coding style—all of this gets automatically applied.

In their documentation, they even suggest having the AI create these guidelines by investigating your existing codebase. You can task it to understand how you currently build your product, how your code is structured, how you approach jobs, database connectivity, all of that. Then it codifies this understanding into guidelines.

I recommend always using guidelines. They’re useful not just for code quality, but for providing architectural insight. You can tell the system: “My backend is Laravel, my frontend is Vue.js, we try to use as few libraries as possible, we prefer external services for certain features.” This kind of context makes integration decisions much more intelligent.
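For illustration, this is roughly the shape such a guidelines file could take. It’s a hypothetical sketch based on the preferences above, not my actual file, and Juni’s documentation is the place to check for where guidelines should live and how they should be formatted:

```
# Project guidelines

- Backend: Laravel (PHP). Frontend: Vue 3 with the Composition API.
- Use as few external libraries as possible; prefer first-party Laravel features.
- Prefer external services over home-grown solutions for commodity features
  (email delivery, payments, transcription).
- Every new method gets a unit test; every new queued job gets an integration test.
- Run the existing test suite after changes and fix failures before finishing.
- Follow the existing coding style and match the conventions of surrounding code.
```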

The Broader AI Toolkit

Of course, agentic coding isn’t the only way I use AI in development. For less technical issues—operational challenges, server errors, data manipulation—I use conversational AI like Claude in the browser.

When my server starts throwing 502 errors, I can ask: “What could be the reason? Where should I start looking? Which log files should I investigate?” When I have a large JSON object and need to extract data with a bash command, or need a script to convert CSV to JSON, I handle these through back-and-forth conversation rather than integrated agents.
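To give a sense of scale, these are usually tiny throwaway scripts. Here’s a minimal sketch of the CSV-to-JSON case in PHP, assuming a simple comma-separated file with a header row (illustrative only, not the exact script from such a conversation):

```php
<?php
// csv_to_json.php - convert a CSV file with a header row into a JSON array.
// Usage: php csv_to_json.php input.csv > output.json

$path = $argv[1] ?? null;
if ($path === null || !is_readable($path)) {
    fwrite(STDERR, "Usage: php csv_to_json.php <input.csv>\n");
    exit(1);
}

$handle = fopen($path, 'r');
$header = fgetcsv($handle); // first row becomes the JSON keys
$rows   = [];

while (($line = fgetcsv($handle)) !== false) {
    // Skip malformed rows whose column count doesn't match the header
    if (count($line) !== count($header)) {
        continue;
    }
    $rows[] = array_combine($header, $line);
}

fclose($handle);

echo json_encode($rows, JSON_PRETTY_PRINT | JSON_UNESCAPED_UNICODE) . PHP_EOL;
```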

Recently, I used Claude’s artifacts feature for prototyping. I was working on analytics visualization for Podscan—we track podcast charts and rankings over time. I took example data straight from production, pasted the JSON into Claude, and asked it to generate three different ways of visualizing this data as live React components.

Claude created three different interactive components that I could actually use and test. Once I found the one I liked, I had it convert that into a Vue.js composition API component for my project. Then I threw that component into my coding agent and had it integrate everything properly. It’s an incredibly powerful workflow for prototyping and iteration.

Documentation as Code

One of the most impressive applications has been documentation generation. This week, I overhauled the documentation for the Podscan Firehose API after a customer mentioned some parts were outdated.

The Firehose API is a webhook-based data stream that sends information about every podcast episode we transcribe the moment we finish processing it. It contains the full transcript, host and guest identification, sponsor mentions, main themes—basically everything we analyze, dispatched as a sizeable JSON object.

To update the documentation, I took my existing markdown documentation from Notion, then turned on the Firehose for a test webhook to collect real data. After getting about 30-40 episodes worth of actual data, I exported it as CSV and had Claude create a script to condense the transcript portions so I could fit more examples into the AI context.
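The condensing script was nothing fancy either. Here’s a minimal sketch of the idea in PHP, assuming the export has a column literally named "transcript" and truncating it to a fixed length (both the column name and the cutoff are my assumptions, for illustration):

```php
<?php
// condense_transcripts.php - truncate the transcript column of a CSV export
// so more example episodes fit into an AI model's context window.
// Usage: php condense_transcripts.php episodes.csv > condensed.csv

$maxChars = 500; // arbitrary cutoff; tune it to your context budget

$in  = fopen($argv[1], 'r');
$out = fopen('php://stdout', 'w');

$header = fgetcsv($in);
fputcsv($out, $header);

// Find the transcript column by its header name
$transcriptIndex = array_search('transcript', $header, true);

while (($row = fgetcsv($in)) !== false) {
    if ($transcriptIndex !== false && isset($row[$transcriptIndex])) {
        // Keep only the beginning of the transcript as a representative sample
        $row[$transcriptIndex] = mb_substr($row[$transcriptIndex], 0, $maxChars);
    }
    fputcsv($out, $row);
}

fclose($in);
fclose($out);
```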

Then I told Claude: “Here’s the existing documentation, here’s real data showing the actual structure and field variations. Update the documentation to be comprehensive and accurate.”

A minute later, I had extended documentation that included everything from my prior version but replaced the simple examples with a full table-based index of every field, their types, when they’re present, and their purposes. It was about 95% correct, and the remaining 5% took very little time to fix.

The Psychological Shift

Here’s what’s most fascinating about this entire transformation: those 20% moments when the AI is generating code still feel like cheating. It feels like somebody else is doing the work for me. I have to remind myself that without this process of specifying, executing, and reviewing, I wouldn’t get half the things done that I accomplish in a day.

I’m often humbled by the speed at which these systems generate code. It’s not always right, but neither is mine. I’m confused when people complain about AI writing code that doesn’t work—they seem to forget that they themselves write code that doesn’t work immediately and needs debugging. Coding, for most of us, has always been trial and error until the errors become small enough not to matter.

AI systems work the same way internally. They have self-checking loops. I often see an agentic system test its own work, realize something didn’t compile, and try again until it finds the right approach. That’s exactly what a developer would do.

From Writer to Editor

What we’re witnessing is the transformation from being code writers to code editors. We’re no longer the writers of code—we’re the editors of code. What we call a “code editor” might as well be redefined: not a program that allows us to type things, but a tool where we say “Yes, I approve this code.”

It’s a fascinating redefinition of terms in our industry. And I think this is becoming the norm very quickly, because we need to understand something important: today’s version of AI-assisted coding is the worst version we’ll ever see again. It might be the best we’ve had so far, but it’s also the worst we’ll have going forward. Tomorrow’s version will be better, and the day after that will be even better.

These systems will become more reliable and autonomous. We’ll see fewer interventions needed and more “yes, this works” responses. Particularly as more people come to understand MCP (the Model Context Protocol) and integrate external verification systems, AI will be checking its own work through external tools.

The Skills That Matter Now

If you’re still writing code entirely by hand, that’s fine—it’s still something we need to be able to do. I occasionally take a step back and code manually, then quickly get frustrated with my own limitations, which have always been there. It’s not that I forgot how to code; it’s that it’s always been a struggle to get things just right.

But if the alternative is telling someone to write code for you, then understanding that code and saying “Yes, this is exactly what I would have written” or “No, try again, you missed this thing”—that’s a superpower all in itself.

We thought coding was the superpower, but it turns out the typing part never mattered. What matters is understanding what good code looks like, what it does, what it doesn’t do. Being able to discriminate good code from bad code suddenly becomes much more important than being able to type code into an editor.

You still need to understand algorithmic complexity and basic algorithms so you can prompt effectively, but you don’t necessarily need to implement everything by hand anymore.

Looking Forward

This obviously translates to other fields as well. You can apply this same 40/20/40 approach—taking time to get the prompt right, giving as much context as possible, then expecting mistakes and approaching it from a corrective, verification perspective—to writing, sales, outreach, research, composition, really anything.

This feels like the inevitable conclusion of automated software development. We’re experiencing something that certainly wasn’t possible a couple of years ago, and it feels like we’re just getting started.

If you’re not already experimenting with AI-assisted development, I encourage you to give it a try. See if it fits somewhere into your workflow. Start small, maybe with documentation or simple scripting tasks, and work your way up to more complex features.

The future belongs to those who can effectively collaborate with AI systems, and the best way to learn that collaboration is to start practicing it today. The tools will only get better, but the fundamental skill of knowing how to specify what you want, review what you get, and iterate toward better results—that’s something you can start building right now.

What matters isn’t whether you can type faster or remember more syntax. What matters is whether you can think clearly about problems, communicate effectively with AI systems, and recognize quality solutions when you see them. Those are the skills that will define the next generation of builders.

We're the podcast database with the best and most real-time API out there. Check out podscan.fm — and tell your friends!

Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.


Podcast, YouTube, Blog Dear founder, Before we get into this week's AI-centric issue, I want to let you know that I recently recorded an in-depth session for Rob Walling's SaaS Launchpad. This module is called AI in SaaS, and I share over a dozen pitfalls, unobvious risks, and hard-earned insights from my own AI SaaS building journey. The course was already great, but... you know... ;) Use the code ARVID150 to get $150 off until June 8th. Alright, let's get to it. There’s a term I’ve been...