AI is Flipping Our Relationship with Technology — The Bootstrapped Founder 391


Dear founder,

Last week, I read a tweet by Channing Allen, one of the co-founders of IndieHackers, about the term “second brain.” He was saying how it was kind of crazy that just a couple of years ago, folks were publishing bestselling books on the brilliance of spending hours a day doing manual data entry into note-taking and database apps like Notion and calling these “brains.”

And over the last couple years, we’ve developed completely new technology for this with large language models.

People like Channing now log no fewer than 1,000 autobiographical words every day into an LLM with long-term memory, which, combined with access to the whole internet, becomes a thing that knows them better than they know themselves. It’s not just a better second brain; Channing says it’s on the way to becoming a better first brain.

That’s a fascinating development, and I want to explore what this means for us as creators, founders, and humans. I’m looking at this from the perspective of a technologist, but also as someone who’s seen their own work transformed by these tools in just the past year.

The Evolution of Brain Extensions

In the philosophy of transhumanism—which explores where humanity is headed and how technology shapes our future—there’s something called the “body extension theory.” Humans are the first animals to really embrace extending their bodies through technology. I don’t necessarily mean hardware tech as we software technologists use the term, but any tool use that extends and strengthens parts of the human body.

The hammer is a version of the fist that’s much more resilient and stronger than a fist could ever be. A saw is a much stronger version of human teeth. A microscope extends our eyes.

When I got my first phone with Internet access, I felt like all of a sudden my brain was also extended. It was extended to a point where I didn’t have to rely on rote memory anymore. I could look things up and use my phone as an ever-connected data source—a kind of outsourced brain.

I think this is what Channing is describing with the evolution of the second brain concept. The second brain was the attempt to outsource knowledge and make it parseable, linkable, and interpretable. People did a lot of manual work to give it shape and structure. Yet that was all it was: an outsourced version of what you already knew.

If we really want to go there, books were the original second brain. They’re a fragment of the brain—a slice of what we know and care about at a certain point in time. Books made brain-to-brain communication possible without requiring people to be in the same place. We could put our thoughts in a book, or take somebody else’s thoughts from their book and import them into ours. It’s an unwieldy process, maybe, but a book is a kind of external hard drive from which it takes a long time to copy files onto your own internal one.

Then “second brains” or “idea gardens” came along because tools like Notion and Obsidian made it easier to sort and connect information. Search became very fast and reliable, so you could trust that the knowledge you put in there would be retrievable.

And then came AI—this weird thing that is almost a brain in itself. Properly prompted and continuously triggered through agentic systems, an AI keeps thinking for you as long as it’s instructed to think, much like our own brain is kept alive by the rest of our body.

My Experience: AI as a Coding Partner

Let me share how I’ve experienced this firsthand, particularly in my use of AI as an extension of my cognitive function for coding.

Initially, AI was pretty bad at coding. It would come up with the wrong stuff, misunderstand context, and often create functions that didn’t do what I wanted. So I dismissed it at first. But the models got better, and eventually I found myself actually using the code that was provided, mostly inline by GitHub Copilot.

I noticed that I could now have code assistance that, if I named my function correctly, was capable of creating the function body. Particularly in JavaScript, these systems were very good—they’re trained on an existing massive corpus of JavaScript applications. So if I have a function like “formatDateAsRelative” with a parameter called “date,” it’s very likely that the code assistant will create something that outputs “5 days ago” by using whatever date formatting library is already in the project.
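To make that concrete, here’s the kind of completion a code assistant might produce for such a function. This is an illustrative sketch only (it uses the built-in Intl.RelativeTimeFormat rather than whatever date library a real project would already have), not the exact code Copilot generated for me:

```javascript
// Hypothetical sketch of a Copilot-style completion for formatDateAsRelative.
// Uses the built-in Intl.RelativeTimeFormat instead of a project-specific library.
function formatDateAsRelative(date) {
  const formatter = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
  const dayDiff = Math.round((date.getTime() - Date.now()) / 86_400_000);
  return formatter.format(dayDiff, "day"); // e.g. "5 days ago" for a date five days back
}
```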

It was very useful for clearly scoped functions from the start, and it got better at less clearly scoped, more complex functions over time. Now we have agentic systems like Windsurf or Cursor, and recently even JetBrains’ Junie.

I’ve started using Junie because it’s part of the latest update and comes with my IDE subscription. It does the work really well, to the point where I’m becoming a manager of agentic software developers who do all the software development work for me. My own job is to specify as clearly as I can what I want done, and then to code-review and test the results of these agents.

Let me give you a concrete example. I have a background task in Podscan that generates a massive list of all the podcasts in our system—about 3.7 million—including their IDs, RSS feed URLs, and when they last posted an episode. I use that list on my checking servers to regularly pull RSS feeds and check for new episodes.

Recently, I noticed that the function hit a memory ceiling and occasionally failed because it would load gigabytes of podcast data into memory. So I went to Junie and tasked it with this prompt (almost verbatim):

“In the podcast file where the generate podcast feed list function happens, I want you to implement a locking mechanism that prevents this function from running more than once at a time. The lock should really just cover 15 minutes of time but be immediate to prevent dual execution. Also, I want you to severely reduce the memory requirement of that function—right now all data is loaded into RAM. How about writing the target JSON file one chunk at a time? Finally, add more logging to this process. I need to have more insight into memory pressure and execution progress.”

From that prompt, Junie created an execution plan: examine the current implementation, research locking mechanisms in Laravel, modify the function to implement the lock, refactor the function to write JSON data in chunks, add logging, and test the changes. And it executed flawlessly. It created all the required things in about 40 seconds.
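To give a sense of what that refactor looks like, here’s a rough Laravel-flavored sketch of the pattern Junie implemented. The model, column, and path names are hypothetical placeholders rather than Podscan’s actual code; what matters is the shape of the solution: a time-boxed atomic lock, chunked streaming writes instead of one giant in-memory array, and progress logging.

```php
<?php
// Hypothetical sketch of the refactored job, assuming a Laravel app with a
// Podcast Eloquent model. Names and paths are placeholders, not Podscan's code.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Log;

function generatePodcastFeedList(): void
{
    // Atomic cache lock: only one process may run this within a 15-minute window.
    $lock = Cache::lock('generate-podcast-feed-list', 15 * 60);

    if (! $lock->get()) {
        Log::info('Feed list generation already running, skipping.');
        return;
    }

    try {
        $out = fopen(storage_path('app/podcast-feed-list.json'), 'w');
        fwrite($out, '[');
        $first = true;
        $written = 0;

        // Stream the table in chunks instead of loading millions of rows into RAM.
        \App\Models\Podcast::query()
            ->select(['id', 'rss_feed_url', 'last_episode_at'])
            ->chunkById(1000, function ($podcasts) use ($out, &$first, &$written) {
                foreach ($podcasts as $podcast) {
                    fwrite($out, ($first ? '' : ',') . $podcast->toJson());
                    $first = false;
                }
                $written += $podcasts->count();
                Log::info("Feed list progress: {$written} podcasts written, "
                    . round(memory_get_usage(true) / 1048576) . ' MB in use');
            });

        fwrite($out, ']');
        fclose($out);
    } finally {
        $lock->release();
    }
}
```

The lock makes overlapping runs bail out early, and the chunked query keeps memory usage roughly constant no matter how many podcasts are in the table.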

My coding work was to very clearly formulate what I needed, wait a couple seconds for it to be implemented, and then go through the code to make sure that what I needed was actually happening.

The Reversal of the Flow of Benefit

What I find fascinating about what Channing is saying—and what I’m experiencing—is that the flow of benefit is changing.

A second brain was a more tangible, more searchable, more reliable external copy of a fragment of our own brain. But now, we’re taking our knowledge and injecting it into large language models that already have vast knowledge and insight.

A Notion database is shapeless and inert—empty until you fill it with your knowledge. But a large language model used with retrieval-augmented generation becomes something else entirely. In a way, it’s not that the AI is part of us; it’s that we are part of the AI.

In all my ideas of what the future would look like, in all the sci-fi stories that strongly featured cyborgs and AI-augmented humans, AI was always the thing propped on top of the human condition. Now it looks like the reverse: particularly with our usage of AI as this amorphous cluster of graphics cards interfacing with us through chat, it is our knowledge that is imparted onto the system.

We still get the benefit from using these tools, don’t get me wrong. But I think Channing is right: we are creating a non-human brain that is synthesizing all the human experiences from our interactions with it and from the training data it has already absorbed—our forum posts, books, social media posts—the reports of our human condition.

We’re constantly training a bigger and more experienced first brain—maybe the “zeroth brain,” the brain of all brains, a kind of amalgamation that might be the singularity.

Which makes me wonder: at what point might we stop benefiting from this? Is there going to be a point where the thing doesn’t need us anymore and we lose access to it, or will we continuously benefit from a better and better brain that exists between us?

The AI knows you better than you know yourself, because it has perfect recall. It has the capacity to constantly fact-check everything you’re saying by finding references outside of itself. It has the capacity to contextualize things that look one way to you but, from an outside perspective, could be interpreted very differently.

Skill Atrophy vs. Evolution of Work

This shift from doing the work (building up the mental model, implementing it as code, and testing that reflection of the mental model) to building up the mental model, describing it adequately, having it done for you, and then testing what the AI has done doesn’t remove the act of creation so much as relocate it: from writing the code to writing the spec.

While this is, I think, a natural progression in all careers (we tend to go into more management positions, and even very capable software developers end up being team leads and CTOs), it is wild that this particular part of software development is not necessarily done by humans anymore.

One of the things I very quickly get confronted with when it comes to AI use is skill atrophy. Are we losing a valued skill? You might argue that the valued skill in all of this is knowing what you want and being able to explicitly express it in a specification, instead of knowing how to type things into a computer in a language a computer can understand.

Coding was the in-between form. Writing code has been around for what, 70 years, maybe 80? It’s a rather new kind of work in the human world. We had a phase where we were writing machine code, then a phase where we were writing code in text editors, then came IDEs where a lot was already done for us. Code was already being written for us in many ways—intelligent code completion existed before AI, and compilers already transform our high-level code into machine language.

Now, nearly all of the code can be written for us—we don’t need to know how to write it, we just need to know how to read it, test it, and check it. This might be risky for people who’ve built careers in coding, but it also might deter people from learning how to code, which would make it harder for them to understand how to specify and test code, and to understand its implications.

A disconnect from production means a disconnect from the work. While it’s great not to have to write, code, or draw, it weakens our understanding of the value and intricacies of creation. Handing that over to machines can be called tool use, but it also might dampen our creative capacities, particularly when it happens from 0 to 100 in just a few years.

The Future of Human Cognition

Programming will move up one level. And I think that happens outside of AI-assisted coding as well. Research moves up one level. Now it’s not about actually diving into articles or books and finding the data yourself—it’s instructing agentic systems to find the right works, the right data, and put it into a shape that’s usable.

Our interaction with this “first brain” means that our own brains need to learn a new way to interact with this kind of information source. The way we work still has the same needs and requirements, the same inputs and outputs, but how the work is done has changed—who exactly loads the data into memory. It’s not us anymore; it’s now the machines. We then instruct those machines to extract the most meaningful parts that help us get to where we want.

Hundreds of years ago, people memorized poems and books and facts as much as they could because books were scarce and expensive, so memorization was important. Even throughout my own history as a student, we were still forced to memorize a lot of things because you couldn’t build a clear understanding without having these things memorized.

Well, not anymore. At this point, the memorization is done for us. If you have a large language model that has been explicitly trained on historical information, then the accessibility of that information and the potential examinations of it are already within the model.

The interface we have to this model is asking it questions, giving it tasks, and prompting it. What reading and writing used to be will shift into being able to correctly understand the capacities of these systems, how to prompt them well to get the information you need, and how to judge the quality, correctness, and authenticity of that information.

In this world where we’re training the “Biggest Brain of all” that then does work for us, our capacity to judge quality and results, plus the capability to instruct just the right thing on the right data with the right sources and reasoning—that’ll be the challenge of the next decades. It will be humanity’s next behavioral milestone.

Where barely anybody could read 1,000 years ago, barely anybody today can prompt and judge AI outcomes effectively. We’ve seen it with fake news—people have a hard time judging for correctness. So being able to judge for correctness and authenticity, and to instruct correctly, will be an important part of dealing with this new technology. It will have a sizable place in the human experience over the years, decades, maybe centuries to come.

Conclusion

I know that Claude (a humanly named AI system) is not a real being—there is no single Claude. It’s a massive-scale operation of data centers and computers with graphics cards that have all loaded the same or similar language models. But you could argue that a human being is also nothing but a collection of cells and organs that comprise all the different functions of the body.

So Claude is just as much one entity, and not one entity, as a human is an entity or a collection of different kinds of cells. If this keeps growing, both in size and capacity, at what point could we argue that we are dealing with a form of life, or at least a form of thought?

It’s very easy to lose sight of these big questions and to not care about them, because in the short term, these tools are so incredibly powerful. They’re incredibly empowering to people who don’t know how to code, write, cook, or do their taxes—they can teach anything and everything, and with agentic systems they can now do almost anything and everything.

But we are constantly feeding our data, our corrections, our interactions, into a system that is building its own brain. And maybe that’s not the wisest choice.

As founders and creators, we need to be thoughtful about how we use these tools. The question isn’t whether to use them or not. The question is how to use them in a way that enhances rather than diminishes our human capacities, that makes our work more meaningful rather than less.

How do we build a future where the “first brain” and our own brains work together, rather than one simply replacing the other?

I don’t have all the answers, but I think it’s a conversation we need to have—as technologists, as founders, and as humans. The future we’re building depends on it.

We're the podcast database with the best and most real-time API out there. Check out podscan.fm — and tell your friends!

Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.

If you want to reach tens of thousands of creators, makers, and dreamers, you can apply to sponsor an episode of this newsletter. Or just reply to this email!


Arvid Kahl
