Dear founder,
Picture this: An engineer gets called to fix a massive industrial machine that’s been down for hours, costing the company thousands. She walks in, looks around for maybe two minutes, picks up a hammer, and gives one precise tap to a specific component. The machine roars back to life.
The bill? $10,000.
The company is outraged. “All you did was hit it once with a hammer! How can you charge $10,000 for that?”
The engineer calmly explains: “The hammer tap was $5. Knowing exactly where to hit? That’s the other $9,995.”
This story perfectly captures what I want to talk about today – the nature of expertise and a growing concern I have about how AI tools might be fundamentally changing our ability to develop it.
The Friction Problem
I recently read an article that got me thinking about one of AI’s most celebrated features – its ability to remove friction from processes that used to be quite difficult. This sounds wonderful, right? Who doesn’t want things to be easier?
But here’s the thing that’s been nagging at me: what if the removal of friction also removes our capacity for expertise?
Think about what happens when we use AI systems – particularly the sophisticated agentic ones – to do work for us. The planning, the execution of individual steps, the verification of those steps, and the adaptive behavior needed to overcome problems and regressions – all of this gets abstracted away from us.
This makes me wonder: can expertise be formed without friction at all? Is friction a necessary requirement for expertise, or are there ways to become an expert even when the struggle of understanding nuance and running into problems over and over again is missing?
When it comes to software development, entrepreneurship, being a founder – so much of that process, that lifestyle, requires dealing intensely with setbacks, failures, mistakes, and errors. If we have tooling and automations that take every single pain point, every single point of friction, away from us, what happens to our development?
My assumption is that we either become much slower at developing the skills these tools now exercise for us, or we’re prevented entirely from building the repertoire of understanding and behavior that we’d call expertise or professional experience.
And in the absence of that expertise, we might not be good founders, good entrepreneurs, or good developers.
The Good Get Better, the Bad Get Worse
I came across this phrase recently – I’m not sure who said it originally, might have been me thinking out loud – but it goes: “With AI, people who are already good at a thing get better, and people who are not yet good at a thing get worse.”
The core of this statement lies in how we work through friction: through criticism, experimentation, failing and trying other ways, reading about other people’s experiences and trying to integrate them into our own – failing at that too, and then finding a way that actually works for us.
It’s through all this friction that we build what I call the muscle of execution. We build the muscle of comprehension that allows us to develop judgment, discernment, and the capacity to tell good from bad.
What Expertise Really Means
Most of the time, when we think about an expert, we think of someone capable of executing a certain task in a particular field to a standard of excellence. But I think that’s too narrow a definition.
An expert is a person who has developed taste. A person who has the capacity to judge tasteful from tasteless, good from bad, beautiful from lacking beauty. These aren’t things that are innate to us – an expert has developed taste in their industry, developed the capacity for discernment, and can very quickly apply that taste, that discernment, that judgment, to any new situation.
Going back to our engineer story – that’s exactly what expertise looks like. It’s knowing what works, knowing what doesn’t, and being able to almost immediately discard the many approaches that won’t work, so that whatever options remain are mostly ones that will. That’s discernment. That’s the capacity to make a judgment call.
How the Already-Good Use AI
Let me give you a personal example. I’ve been a developer for 20 years at this point, so I have some inkling of what good code might look like, and I know how a seasoned software developer would approach solving a problem.
When I prompt an AI system, I don’t tell it to “build me this” or “build me that.” I give it a scoped definition of what I want the same way that I, as a developer, would want it. I tell it: this is the application this feature is in, this is the kind of customer it’s for, this is the data it will work on, here are the things it will need to interface with, here’s the input, here are the outputs, here’s probably how I would do it, here are the steps I’d think about, here are edge cases to consider.
I have a lot of understanding of existing codebases and how new code is written. In my prompt, I give this information to the AI system. I don’t tell it to “write me a game where wizards fight monsters.” I would tell it to build me a roguelike side-scroller that uses sprites from a certain sprite database and has a specific level design system that I’d then detail extensively.
I have the phrasing, the understanding, the capacity of knowing what another developer would need to build this well. That’s how I use the tool – just as if I were to outsource this work and externalize it.
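To make that concrete, here’s a rough sketch of the difference – a vague prompt versus a scoped one, sent through an LLM API. Everything in it is illustrative: the model name, the client library, the field names, and the feature being described are stand-ins I made up for this example, not the actual prompts or stack behind PodScan.

```python
# A minimal sketch: vague prompt vs. scoped prompt, both sent to an LLM API.
# The model name, client, field names, and the feature itself are illustrative
# stand-ins, not the actual PodScan prompts or codebase.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vague = "Build me a feature that alerts users about new podcast mentions."

scoped = """Context: a web app that monitors podcast transcripts for keyword mentions.
Customer: founders tracking their brand; they expect near-real-time alerts.
Data: transcripts arrive as JSON with episode_id, published_at, and segments[].
Interfaces: the existing AlertRule model and the queued SendAlert job.
Input: one new transcript payload. Output: zero or more queued alerts.
How I'd do it: match active rules against segments, dedupe per episode, enqueue.
Steps: validate the payload, load rules, match, dedupe, enqueue.
Edge cases: empty segments, duplicate webhooks, rules that use regex patterns."""

for prompt in (vague, scoped):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content[:300], "\n---")
```

The second prompt does what I’d do when handing work to another developer: it front-loads the judgment so the tool only has to execute.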
This makes me faster because I can do this in parallel with many different systems. I can spend five minutes scoping and then wait ten minutes for a result, versus five minutes scoping for myself and then implementing for thirty minutes. The machine writes code quicker than I do, and even if there are errors, it’s quicker to track down the bug, find the problematic line, and fix it.
But here’s the crucial part: once I look at the code that comes out, I can discern whether it looks like code I would have written. I can judge its quality. I get to benefit from this AI system in a way that somebody who can’t tell whether that code works simply can’t.
When AI Doesn’t Help
I recently talked to a customer who’s not technical, but they’ve been trying to build applications using Lovable, which is a tool where you can prompt your way to a fully capable application. Now, I’ve used Lovable before – if you look at the PodScan homepage at podscan.fm, you’ll see what a day or so of hacking around in Lovable can produce. It’s really cool what I could build just by prompting my way through it.
But this person has been trying to integrate the exact same API that I’ve been using for PodScan into their own Lovable application, and they’ve been struggling. Whenever new data comes in that doesn’t perfectly adhere to the format they prompted the system with, it fails.
They’ve spent hundreds of thousands of tokens trying to get it to work by just telling it to “try it differently” – not by knowing what the code should look like or how the data might be structured, but by just hoping to get it right.
That’s a waste of time and money. It took somebody who otherwise probably wouldn’t have spent two days trying to figure this out, made them spend that time and extra money anyway, and made their life objectively worse. This AI tool isn’t useful for them because it just doesn’t get the results they need.
I could probably help and prompt that tool out of the abyss back into a functioning system, but the tool itself isn’t the magical component here. In all things AI, the magical component is the person capable of prompting it effectively.
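To show what that difference looks like in practice, here’s the kind of defensive handling an experienced developer would know to ask for up front – code that doesn’t assume incoming API data always matches one perfect shape. The field names below are invented for illustration; they’re not the actual PodScan API schema.

```python
# Illustrative sketch: tolerant parsing of an API payload whose shape can vary.
# Field names are invented for this example, not the real PodScan API schema.
from typing import Any


def extract_mentions(payload: dict[str, Any]) -> list[dict[str, str]]:
    """Pull mentions out of a payload without assuming every field is present."""
    # The list might live under "results" or "data", or be missing entirely.
    items = payload.get("results") or payload.get("data") or []
    mentions = []
    for item in items:
        if not isinstance(item, dict):
            continue  # skip malformed entries instead of crashing the whole run
        mentions.append({
            "podcast": item.get("podcast_name", "unknown"),
            "episode": item.get("episode_title", "unknown"),
            "quote": (item.get("quote") or "").strip(),
        })
    return mentions
```

Knowing to ask for that kind of tolerance up front – instead of re-prompting “try it differently” a hundred times – is exactly the judgment the tool can’t supply on its own.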
The Learning Dilemma
So if AI systems make good people better and bad people worse at coding, writing, creating art, creating insights – basically creating anything – what should we do?
Honestly, I wouldn’t want to learn how to code today by using vibe coding or prompt-centric software development tools. I would definitely want to learn to code using AI – having it help me build things, maybe building them with me, but not building them for me.
Because if I don’t know how a thing is built, how can I judge the quality of the building? If I don’t know how this code came to be, what the alternatives would have been, how can I say that this is the optimal choice, or that this choice has a good reason behind it?
This is something we need to be really careful with when attempting mastery of anything – outsourcing the act of struggling through our first, and even later, experiments in that field.
The Necessity of Friction
We need to have friction. We need to fail a little bit every now and then to develop the capacity to overcome challenges, to understand why things work one way and don’t work another way, and to build the taste that’s required to discern a good thing from a bad thing.
By working through challenges, we build the muscle of comprehension.
This muscle allows us to build judgment, discernment, and the capacity to tell good from bad. It’s what separates someone who can merely execute a task from someone who has true expertise.
Moving Forward Thoughtfully
So whenever you think about outsourcing something to AI, consider that this very act might make it harder for you to ever learn to do the thing yourself.
This doesn’t mean we should avoid AI tools – they’re incredibly powerful and can make us significantly more productive. But we need to be intentional about how we use them.
Use AI as a collaborator, not a replacement for your own thinking and struggle. Let it handle the tedious parts while you maintain involvement in the conceptual and creative aspects. Keep enough friction in your process to continue building your expertise muscle.
Because at the end of the day, the most valuable thing you can develop isn’t just the ability to get AI to do things for you – it’s the judgment to know whether what it produced is any good.
The hammer tap is easy. Knowing where to hit – that’s the expertise that no amount of automation can replace.
That’s what I’ve been thinking about lately. I’d love to hear your thoughts on this – are you finding that AI tools are making you better at what you do, or are they creating a dependency that might be limiting your growth? Hit me up on Twitter @arvidkahl or send me an email.
We're the podcast database with the best and most real-time API out there. Check out podscan.fm — and tell your friends!
Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.
If you want to reach tens of thousands of creators, makers, and dreamers, you can apply to sponsor an episode of this newsletter. Or just reply to this email!