Dear founder,
I think we’re on the cusp of a pretty significant change in how we build software products. Obviously, the recent ascent of vibe coding and all the agentic coding tools we find so useful and effective has already changed how we approach building products. But there’s another change - not just in how we build, but in who these products are for.
This issue is sponsored by Paddle.com, my Merchant of Record payment provider of choice. They handle sales tax, failing credit card payments, all of that. I highly recommend it, so please check out Paddle.com.
The Evolution of Software Consumers
There used to be a time when websites were mostly for people. People browsed websites, read websites, hand-wrote websites in HTML in text editors. The late 90s and early 2000s were pretty much that.
Then there was a time when APIs became all the rage and applications became programmable by each other. Applications would exchange information and orchestrate each other to move data from one point to another. First, software was made for people; then software was made for machines to consume.
Now we have this weird hybrid of man and machine - the agentic AI. It follows human logic and human approaches to process, discovery, and exploration, but it is effectively a machine that can do all of this much faster, much more precisely targeted, and maybe also much more dangerously than most humans could.
We’re looking at the age of building interfaces for AI systems.
The Progression of Concerns
It’s always a progression, right? When you look at user interfaces - browsers or desktop applications - there was a progression towards making these things computer-accessible. Browsers consumed HTML documents or JavaScript applications, while the backend exposed an API with a more structured approach - maybe a JSON API, a GraphQL API, or some kind of remote procedure call system. That layer was more technical, more focused on what machines like to consume.
But I think there was a migration of concerns from “this is what a human wants to accomplish” towards “this is what a human wants a machine to accomplish.”
And now there’s a part of the interface that isn’t consumed by humans, but isn’t consumed by predictable, reliable machine systems either. Now we have agentic systems that try stuff, just like a human explorer would, to see if they can get some information. If they can’t, they try alternative approaches. They might have different ways of solving a problem. They might improvise new solutions to challenges nobody has encountered or even discovered before.
The agentic system will require its own user interface, in a way. What the website or the rendered DOM was for the browser - the thing that let us visualize information - and what the API endpoint was for machines, we now have to figure out for AI systems, both right now and in the future.
Current Solutions and Future Needs
Right now, these AI systems will likely use our existing APIs. The Model Context Protocol - MCP - is a first attempt at this. It’s an API of sorts: a wrapper around a service that translates model requests into API calls on the backend, making that service accessible to LLMs.
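To make that concrete, here’s a minimal sketch of such a wrapper using the official TypeScript MCP SDK. The tool name and the searchEpisodes function are hypothetical stand-ins for whatever your backend already exposes:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical stand-in for an existing backend search API.
async function searchEpisodes(query: string): Promise<string[]> {
  return [`Example episode mentioning "${query}"`];
}

const server = new McpServer({ name: "podcast-search", version: "1.0.0" });

// The LLM sees a described tool; the wrapper translates its calls into our API.
server.tool(
  "search_episodes",
  { query: z.string().describe("Full-text search across podcast episodes") },
  async ({ query }) => {
    const hits = await searchEpisodes(query);
    return { content: [{ type: "text" as const, text: hits.join("\n") }] };
  }
);

await server.connect(new StdioServerTransport());
```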
But there might be a future where this wrapper is not flexible enough. We want systems to be able to quickly integrate with the capabilities of our services - the businesses we run, the platforms we run, the data we make available to our customers and users. We want systems flexible enough that new ideas don’t depend on us first implementing some kind of endpoint.
Funnily enough, the GraphQL idea was aimed more at this - make an endpoint available and let the consumer configure exactly what data they want. I see something similar in the query DSL for Elasticsearch, which I use quite a lot for PodScan. It frightened me when I first encountered it years ago, and it has lingered as something intimidating in the back of my head ever since. It’s just such a complicated way of expressing highly complex queries. But AI has really helped me dig into it and build working queries for me, so now I actually understand the DSL better.
The DSL is so flexible that I don’t think you could enumerate the full spectrum of possible search queries. It’s effectively infinitely flexible, which is exactly what we will need from AI-facing systems in the future.
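As an illustration of that flexibility, here’s a sketch of a composable query using the official Elasticsearch JavaScript client. The index and field names are made up; the point is that bool clauses nest arbitrarily deep, so the space of expressible queries is effectively unbounded:

```typescript
import { Client } from "@elastic/elasticsearch";

const client = new Client({ node: "http://localhost:9200" });

// Hypothetical index and fields: full-text match, a date filter, and a
// nested bool that itself combines alternatives - and any of these clauses
// could contain further bool queries, indefinitely.
const result = await client.search({
  index: "episodes",
  query: {
    bool: {
      must: [{ match: { transcript: "bootstrapping" } }],
      filter: [
        { range: { published_at: { gte: "now-30d/d" } } },
        {
          bool: {
            should: [
              { term: { language: "en" } },
              { range: { listener_count: { gte: 1000 } } },
            ],
          },
        },
      ],
    },
  },
});

console.log(result.hits.hits);
```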
The Security Challenge
At the same time, that flexibility is also a security risk. When we allow the systems a lot of flexibility, we suddenly have a boundary to defend - the point where our system and our data end and where consumption by an external agentic AI begins, however intensely and however maliciously it may try to access our system. We have to defend that boundary while still making most of our data available to everybody who uses the system benevolently.
It feels problematic because we have very established security systems for APIs and for browsers. We have cookies and sessions in browsers. For APIs, we have tokens, authentication systems, OAuth, secrets passed in HTTP headers that communicate reliably and confidentially, encryption, all of that.
It’s going to be harder to do this with agentic systems. They will obviously need permission, authorization, and authentication systems. Authentication will look very similar to what we currently have - that won’t change much. But in a world of flexible data use, permission to use a piece of data or some kind of interactive capability might not be a binary choice anymore.
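What might non-binary permissions look like? Here’s a hypothetical sketch - every name and number in it is made up - where a grant carries conditions and budgets instead of a simple allow/deny:

```typescript
// A hypothetical permission grant for an agentic consumer: instead of a
// binary allow/deny, each capability carries conditions and budgets.
type AgentGrant = {
  capability: "read_availability" | "book_slot" | "cancel_slot";
  maxCallsPerHour: number; // throttle exploration, not just access
  maxPendingActions: number; // e.g. at most 2 unconfirmed bookings at once
  requiresHumanConfirmation: boolean; // pause the agent before side effects
  expiresAt: Date; // grants decay instead of living forever
};

function isAllowed(
  grant: AgentGrant,
  callsThisHour: number,
  pendingActions: number
): boolean {
  return (
    grant.expiresAt > new Date() &&
    callsThisHour < grant.maxCallsPerHour &&
    pendingActions < grant.maxPendingActions
  );
}
```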
Real-World Complexity: The Calendar Booking Example
Let’s imagine we’re integrating with somebody else’s calendar booking system. We run a company that offers, I don’t know, virtual meeting rooms, and we want to integrate not just with somebody else’s calendar but with their agentic booking system, where software mediates booking an event between any number of parties.
Now, if we give access to that system, how do we structure it reliably so that these agentic systems work properly even in novel situations? Somebody might say, “Well, I have time, but only for one hour on that day, depending on whether somebody books something else the day before, or the time slot before. If that is booked by Friday, then I’m available here.”
People have very imprecise ways of describing their availability. An API would impose a very rigid, very strict contract - “tell me exactly when you will be available” - but an agentic system might be more flexible with that, or at least try to be.
How do we build systems that allow for this without causing errors, mis-bookings, over-bookings, or maybe overloading the system? Because all of a sudden there’s a recursive loop in the agentic system: it checks everything, books something, checks something else, unbooks because the slot doesn’t fit anymore, then it fits again, and it books again.
Situations like this are not just a rate-limiting issue. They’re about protecting yourself against novel situations your system has not yet encountered, because you haven’t encoded them in its logic or in the consumption patterns you expected from your consumers.
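One sketch of a defense against exactly that loop - everything here is hypothetical - is to give each agent a mutation budget per booking conversation and collapse repeated identical requests with an idempotency key, so a check-book-unbook spiral runs out of budget instead of thrashing the calendar:

```typescript
// Hypothetical guard against the check/book/unbook loop described above:
// each agent gets a mutation budget, and repeated identical requests are
// collapsed via an idempotency key so retries don't mutate state twice.
const attemptBudget = new Map<string, number>(); // agentId -> remaining mutations
const seenRequests = new Map<string, string>(); // idempotencyKey -> prior result

function guardBookingMutation(
  agentId: string,
  idempotencyKey: string
): "ok" | "replay" | "budget_exhausted" {
  if (seenRequests.has(idempotencyKey)) return "replay"; // same request, same answer
  const remaining = attemptBudget.get(agentId) ?? 10;
  if (remaining <= 0) return "budget_exhausted"; // loop detected: stop thrashing
  attemptBudget.set(agentId, remaining - 1);
  seenRequests.set(idempotencyKey, "reserved");
  return "ok";
}
```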
The Version Management Problem
There are many other problems. If an LLM has been trained on an old version of an interface and isn’t instructed to fetch and work with the updated version, it is going to try to use the old one - which might require some kind of translation layer or a rerouting system on our end.
These are not really novel problems, but they have a different flavor this time around, a different connotation, because we’re now working with much less precise inputs - not as typed, not as binary, not as clearly right or wrong as they used to be. The moment we interact with systems that are supposed to draw discrete information from non-discrete descriptions, it’s going to be an interesting challenge.
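A sketch of what such a translation layer could look like, with hypothetical request shapes: detect callers still speaking the old version by the shape of their payload and upgrade it, instead of rejecting them outright:

```typescript
// Hypothetical translation layer: an agent trained on v1 of our interface
// still sends v1-shaped payloads, so we upgrade them before they reach v2.
type BookingV1 = { start: string; lengthMinutes: number }; // old shape
type BookingV2 = { startsAt: string; endsAt: string }; // current shape

function upgradeV1toV2(req: BookingV1): BookingV2 {
  const start = new Date(req.start);
  const end = new Date(start.getTime() + req.lengthMinutes * 60_000);
  return { startsAt: start.toISOString(), endsAt: end.toISOString() };
}

function handleBooking(payload: BookingV1 | BookingV2): BookingV2 {
  // Detect stale callers by payload shape instead of failing them.
  return "lengthMinutes" in payload ? upgradeV1toV2(payload) : payload;
}
```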
The New Consumer: Neither Human nor Machine
Besides the human user and the machine user - the API-centric, well-defined information-exchange kind of consumer - we now have to deal with a consumer that is much faster, far more capable of experimenting in a very short amount of time, and therefore far more likely to overload our services, create junk data in them, or misuse and misappropriate them than any consumer we’ve dealt with before.
This makes it very complicated to even communicate clearly what a software product can and cannot do, because we’ll effectively try to open as much surface as possible to these agentic tools and rely on them to use the platform appropriately. That is particularly complicated when we’re used to either human speed or machine speed.
When a human uses a product, you have an understanding of how long they will wait, what should be synchronous and what should be asynchronous, what data to present and what data not to present. The same goes for an API when you have a clearly defined contract between service and consumer - a transactional exchange of sorts.
Then all of a sudden we have agentic users who act like a human in one way and like a machine in others. A lot of the tools out there have human-in-the-loop features where the execution of a workflow is paused until a human interacts. So now, what is a session with these kinds of consumers? Is it a connection? A request? A series of requests within a certain timeframe, or a certain order of requests that includes - or maybe doesn’t include - a human in the loop?
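Here’s one hypothetical way to pin that down - a session as a series of requests sharing a conversation within an idle window, with human-in-the-loop pauses tracked explicitly. All names are made up:

```typescript
// One hypothetical definition of a "session" for an agentic consumer:
// requests sharing a conversation id within an idle window, where a
// human-in-the-loop pause keeps the session open instead of expiring it.
type AgentSession = {
  conversationId: string;
  requestTimestamps: Date[];
  idleTimeoutMinutes: number; // close the session after this much silence
  awaitingHuman: boolean; // paused for human-in-the-loop confirmation
};

function isSessionOpen(s: AgentSession, now: Date): boolean {
  if (s.awaitingHuman) return true; // a human pause shouldn't expire the session
  const last = s.requestTimestamps.at(-1);
  if (!last) return false;
  return (now.getTime() - last.getTime()) / 60_000 < s.idleTimeoutMinutes;
}
```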
It becomes hard to explain and communicate how exactly the borders of the service work when it’s not even clear how much machine and how much human is in the consumer.
The Economic Risks
I know this might all sound solvable. Obviously, we’ve had rate limits before, and we’ve had the concept of well-defined inputs and outputs before. We had patterns like Postel’s law - be conservative in what you output and be liberal in what you accept into a system. All of this existed before.
But with a technology that is developing as drastically and as quickly as agentic consumer systems - and the technology on top of which they sit - I don’t really know anymore. It feels like we are looking at a world, an industry reality, where product consumption has become less predictable, less plannable, and less reliable.
On top of that, AI-centric products are really prone to disaster when maliciously abused. If we open up our APIs to flexible systems and they overconsume - forcing us to send API call after API call to an upstream system that charges us for each of those calls - then being flexible and letting agentic systems do their work on us suddenly means incurring a massive potential cost that we may not want to incur.
And that cost may appear even when nobody intended to cause it, simply because the system went haywire, got stuck in a loop, or effectively launched an unintentional denial-of-service attack through its own internal parallelized architecture.
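A sketch of one mitigation - a spending circuit breaker that meters every proxied upstream call against a per-agent cap and fails fast once it’s crossed. The prices and names here are made up:

```typescript
// Hypothetical spend circuit breaker: every upstream call is metered, and
// once an agent's accumulated cost crosses a cap we refuse further calls
// instead of letting a runaway loop turn into a surprise bill.
const COST_PER_CALL_USD = 0.002; // made-up upstream price per call
const spendByAgent = new Map<string, number>();

function chargeOrBreak(agentId: string, capUsd: number): boolean {
  const spent = spendByAgent.get(agentId) ?? 0;
  if (spent + COST_PER_CALL_USD > capUsd) return false; // breaker open
  spendByAgent.set(agentId, spent + COST_PER_CALL_USD);
  return true;
}

// Usage: guard every proxied upstream call with the agent's cap.
if (!chargeOrBreak("agent-123", 5.0)) {
  throw new Error("Spending cap reached; pausing agentic access");
}
```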
There are so many interesting new problems that face us as we open up our applications to AI systems.
The Path Forward
I think it’s generally a good idea at this point to look into the Model Context Protocol because it’s the first attempt at understanding how products and systems can be used by and through large language models and the applications that are based on them.
I’ve personally been experimenting with this kind of thing for PodScan, because I know it’s interesting for my customers to be able to use PodScan from inside their agentic loops and their conversational AIs. It hasn’t resulted in anything public just yet, but I know that within a couple of years, we’ll have systems like this built into every single platform and product out there.
These interfaces are simply going to be unavoidably required. Just like Laravel - the framework I use - has a web-facing component that lets people use an application, and an API component with Sanctum, a first-party package that handles API token authentication, there will probably be something built into the framework quite soon, because Laravel has been really intensifying its AI efforts, both in how you build Laravel apps and in what Laravel can support.
Soon there will be a third option: something that allows LLMs programmatic access to any application through a normalized internal interface. I find this fascinating.
Conclusion: Embracing the Unknown
So, who knows where we’re going? We have to think about the AI agent as a consumer - a qualified consumer of our applications - and protect ourselves from the potential downsides of allowing machines to think and act like humans while still being machines.
We’re entering uncharted territory here. The age of AI consumers is upon us, and as founders and developers, we need to start thinking about how our products will serve not just humans, not just traditional APIs, but these new hybrid entities that combine human-like reasoning with machine-like speed and scale.
The challenges are real - from security to cost management to simply understanding what a session means anymore. But so are the opportunities. Those who figure out how to build robust, flexible, and secure interfaces for AI agents will be at the forefront of the next wave of software innovation.
It’s time to start experimenting, learning, and preparing for a world where our biggest users might not be human at all.
We're the podcast database with the best and most real-time API out there. Check out podscan.fm — and tell your friends!
Thank you for reading this week’s essay edition of The Bootstrapped Founder. Did you enjoy it? If so, please spread the word and share this issue on Twitter.
If you want to reach tens of thousands of creators, makers, and dreamers, you can apply to sponsor an episode of this newsletter. Or just reply to this email!