Don’t Fall in Love with the Prototype
The AI you and I are using is going away, and most of us misunderstand our relationship with it.
People keep framing the AI race as if it’s a competition to serve them better… more thoughtful answers, better companionship, deeper personalisation, as though the end goal were a perfect assistant for everyday life. It isn’t.
What we’re living through looks more like a soft launch of a restaurant. There are free meals, friendly staff, a flexible menu, and a kitchen that seems eager for feedback. Everyone feels special because everyone is being watched, not in a sinister way, but in a learning way. The restaurant wants to know who orders what, what gets sent back, what people tolerate, and especially what they ask for, even when it isn’t on the menu.
That’s where future profit hides. Off-menu requests reveal unmet demand, edge cases, liability risks, and high-value uses that don’t yet have a product wrapper… in other words, tomorrow’s menu items, the features enterprise clients will eventually be charged for.
The mistake is thinking we’re future regulars. We’re not. We aren’t building a relationship here; we’re having a summer fling. It feels like intimacy, but it’s market research. We’re the diners at the soft launch… necessary, valuable, and temporary.
Our participation helps shape what the restaurant becomes, but once the menu hardens, prices appear, portions standardise, and the doors open for honest business, most of us won’t be eating there. Not because it failed, but because it succeeded. The race to be “the best AI” was never about serving individuals better; it’s about discovering where intelligence creates leverage worth paying for. Once that becomes clear, access narrows, abstraction increases, and the experience inevitably stops feeling personal. What feels like companionship right now isn’t the destination; it’s a learning phase.
To understand why this can’t last, you have to look at the economics of AI, or more precisely, the economics of a large language model. Every question you ask isn’t “just text”. It’s billions of mathematical operations per generated token, executed in real time. Tokens aren’t words; they’re work, industrial work done at scale. The only reason this doesn’t feel expensive is that the bill isn’t visible.
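To make that concrete, here’s a back-of-envelope sketch. The rule of thumb, roughly two floating-point operations per parameter per token, comes from the scaling-law literature; the model size and token counts are my own assumptions, not any vendor’s real figures.

```python
# Back-of-envelope: the industrial work behind one casual answer.
# Rule of thumb from the scaling-law literature: a dense transformer
# performs roughly 2 * N floating-point operations per token, where
# N is the parameter count. Model size and token counts below are
# assumptions for illustration, not any vendor's real figures.

PARAMS = 70e9                      # assumed 70B-parameter model
FLOPS_PER_TOKEN = 2 * PARAMS       # ~1.4e11 operations per token

prompt_tokens = 200                # a short question plus system context
output_tokens = 300                # a paragraph-length answer

total_flops = FLOPS_PER_TOKEN * (prompt_tokens + output_tokens)
print(f"~{total_flops:.1e} floating-point operations for one exchange")
# -> ~7.0e+13: tens of trillions of operations to answer one question.
```

Multiply that by millions of users having dozens of exchanges a day, and the invisible bill starts to take shape.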
This race didn’t start with elegance. The original paper that kicked off this phase was explicit about that. The breakthrough wasn’t intelligence in any human sense, but brute force paired with scale: more data, bigger models, more compute, stacked high enough that statistics could do the rest. That bet worked. But brute force is never free; it only looks cheap when someone else is paying.
Today, the cost of running modern AI systems is no longer abstract or unknowable. For most applications, it can be estimated with uncomfortable precision because token usage and workflow design map directly to money. Simple factual queries cost very little, but as you add context, interpretation, retrieval, or multi-step reasoning, the price rises quickly. Leave an agent unconstrained, and the economics start to matter fast enough to notice. What matters here isn’t the exact pricing, but the gradient: factual recall is cheap, reasoning is more expensive, and interactions that feel emotionally supportive or empathetic quickly become luxury goods.
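Here’s a hedged sketch of that gradient, using placeholder per-token prices (assumptions, not any provider’s actual rates); the point is how workflow depth multiplies token volume, not the exact dollars:

```python
# Placeholder prices, assumed for illustration only.
PRICE_IN = 3.00 / 1_000_000    # $ per input token
PRICE_OUT = 15.00 / 1_000_000  # $ per output token

def workflow_cost(input_tokens: int, output_tokens: int, calls: int = 1) -> float:
    """Estimated cost of a workflow that makes `calls` model invocations."""
    return calls * (input_tokens * PRICE_IN + output_tokens * PRICE_OUT)

# Factual recall: one small call, tiny context.
factual = workflow_cost(input_tokens=50, output_tokens=100)

# Retrieval plus multi-step reasoning: stuffed contexts, repeated calls.
agentic = workflow_cost(input_tokens=12_000, output_tokens=1_500, calls=6)

print(f"factual query: ${factual:.5f}")   # about $0.002
print(f"agentic task:  ${agentic:.2f}")   # about $0.35, roughly 200x more
```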
AI cost doesn’t scale with usage alone; it scales with depth. The more context you add, the more judgment you demand, the more human the interaction feels, the faster the economics break. That makes the current usage pattern hard to ignore. Two-hour conversations about how unfair your mother is, long exploratory back-and-forths planning the perfect trip to Japan, emotional processing, and wandering curiosity all feel natural to humans. None of them makes sense at scale.
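The two-hour conversation is the pathological case. Here’s a sketch under one simplifying assumption, a stateless chat API that resends the full history on every turn, with illustrative token counts and the same placeholder prices as above:

```python
PRICE_IN = 3.00 / 1_000_000    # assumed $ per input token
PRICE_OUT = 15.00 / 1_000_000  # assumed $ per output token
USER_TURN = 150                # assumed tokens per user message
REPLY = 300                    # assumed tokens per model reply

def conversation_cost(turns: int) -> float:
    """Stateless chat: each turn resends the entire history as input."""
    cost, history = 0.0, 0
    for _ in range(turns):
        history += USER_TURN
        cost += history * PRICE_IN + REPLY * PRICE_OUT
        history += REPLY
    return cost

for turns in (5, 40, 120):     # quick question vs. a two-hour heart-to-heart
    print(f"{turns:3d} turns -> ${conversation_cost(turns):.2f}")
#   5 turns -> $0.04
#  40 turns -> $1.25
# 120 turns -> $10.23
```

Cost grows roughly with the square of the conversation’s length, because every new turn pays again for everything said before it.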
So here’s the uncomfortable question. If this technology is so expensive that the finished version can only be afforded by governments, platforms, and a handful of global firms, what exactly are we doing right now? Why are millions of people casually interrogating an industrial system whose end state clearly cannot look like this?
Because this phase isn’t about serving users, it’s about extracting signal.
Phase one was spending a ridiculous amount of money indexing content: crawling the world, tokenising it, training on books, articles, forums, and code until language itself was saturated. Phase two is spending a ridiculous amount of money indexing human interaction. Not what people have said, but how they behave when intelligence talks back… what they ask, what they tolerate, where they push, where they disengage, and what kinds of help feel useful versus invasive. That data can’t be scraped; people have to be invited in.
We tend to think we’re getting a free intern, but from the system’s perspective, it’s getting a free quality-assurance department. Every prompt, correction, moment of frustration, or moment of delight is a signal. We aren’t just using the system; we’re testing it, stressing it, and teaching it where it breaks and where it delivers value that might one day be worth charging for. This phase isn’t subsidised generosity; it is a transaction. We pay for the magic with our behaviour.
That’s why access is cheap and boundaries are soft right now, and why the system feels patient and attentive. The friction-free experience is a temporary distortion. The barrier to entry has to be low to collect this kind of data at scale, but once enough signal has been gathered, the barriers go back up. Interface improvements follow the same logic. When memory is extended, multimodality is added, or image generation is sped up, it isn’t about saving your time; it’s about making the system’s time with you more efficient and extracting more signal per interaction.
None of this points toward a future where everyone gets a persistent, bespoke personal AI. There’s no money in that. Continuous personalisation, long-term memory, proactive cognition, and emotional availability are luxury services with industrial costs. They don’t scale to the public; they stratify. What exists now feels personal because it has to. The finished systems won’t. They’ll look like infrastructure… abstracted, metered, and embedded inside products, workflows, and institutions that can justify the expense.
This is where accounting reasserts itself. The first training phase happened fast and loose. Content was indexed at a planetary scale because speed mattered more than permission. Wikipedia, National Geographic, newsrooms, publishers, and archives tolerated it while the frontier expanded. Now the bills are being assembled. This won’t shut down AI or meaningfully slow progress, but it will change the shape of the system. Just as Napster didn’t kill music but forced the industry into licensing, platforms, and toll booths, AI is moving from a chaotic phase into a contractual one.
You can already see it. I recently asked ChatGPT for a lyric from a song… not the whole song, a single line. It flatly refused on copyright grounds. I tried different angles, explained fair use, and reframed the request; it didn’t matter. Out of curiosity, I asked Gemini the same question, and it didn’t hesitate. That difference isn’t about ethics; it’s about accounting. Someone, somewhere, has already decided that one day a bill will arrive for that lyric, and they want to be able to say they didn’t cross the line.
This is what the lockdown looks like at the beginning. Not dramatic bans, but quiet refusals. Not because the model can’t answer, but because the answer comes with a price attached and no one has agreed to pay it yet. Before large language models are ever “taken away”, we’re going to watch them become less useful for casual, open-ended, curiosity-driven use. Answers with obvious downstream costs will stop being available, not because of failure but because of a business decision.
We’ve already seen this future. We just forgot it. Before AI was conversational, intelligence already existed as infrastructure. LexisNexis wasn’t magic; it was a database, rigid, expensive, and indispensable. It sat at the centre of the legal profession for decades because it concentrated scarce, valuable knowledge behind a paywall that matched its worth. No one expected LexisNexis to be a companion. No one confused it for a friend.
That’s where large language models are headed. Strip away the interface, and an LLM is still a system that retrieves, recombines, and reasons over content. What changes is speed and flexibility, not the underlying economics, and the content it depends on is not going to remain free. The AI that survives will look less like a chat window and more like LexisNexis with a brain, subscription-based, metered, and embedded inside professions that can justify the cost.
Healthcare makes this impossible to ignore.
Many people already have stories of working through personal issues with AI and reaching real breakthroughs. Not because the model cares, but because it listens without fatigue, judgment, or social cost. A modern LLM has effectively absorbed the entire DSM, decades of clinical frameworks, and an incomprehensible volume of therapeutic language. It has seen patterns no individual therapist ever could.
That doesn’t make it safe. The dark rabbit holes are real: reinforced delusion, emotional dependency, catastrophic misguidance. Left unconstrained, a general-purpose model can do real harm. But the future here is not a lonely angel whispering reassurance at 2 a.m.
What’s coming is online AI therapy delivered by hospitals, insurers, and medical systems. These won’t be improvisational chatbots. They’ll operate under defined therapy plans, standards of care, escalation protocols, and human oversight. They will be paired with IoT… wearables, sleep data, activity patterns, medication adherence, heart rate variability, even voice and behavioural signals. Not to replace clinicians, but to ground therapy in a continuous context rather than episodic self-reporting.
Less intimate, yes. But far more practical, accessible, and consistent than the system we have now. For millions of people, an AI therapist that is always available, properly constrained, clinically supervised, and context-aware may be better than no therapist at all… and in some cases, better than the human alternative they never get access to.
This points to a broader shift: the end of the AI generalist.
We’re moving away from systems that help you choose a cocktail, change a tire, and casually diagnose a personality disorder in the same breath. That model doesn’t scale… and it shouldn’t. What replaces it are purpose-built, accountable specialists.
In medicine, law, finance, and infrastructure, AI will not be a companion. It will be a tool with a job description. Narrow scope. Defined inputs. Auditable outputs. Logged decisions. Escalation paths. Liability ownership. IoT won’t make these systems more “human”; it will make them more bounded, feeding signals, enforcing limits, and triggering intervention when thresholds are crossed.
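Here is a minimal sketch of what a “tool with a job description” might look like. Every name in it, the metrics, the thresholds, the audit log, is hypothetical, invented for illustration rather than taken from any real clinical or vendor API:

```python
import json, time
from dataclasses import dataclass

# Hypothetical scope and thresholds, for illustration only.
ALLOWED_METRICS = {"resting_heart_rate", "sleep_hours"}        # narrow scope
ESCALATE_ABOVE = {"resting_heart_rate": 110.0}                 # escalation paths
ESCALATE_BELOW = {"sleep_hours": 3.0}

@dataclass
class Reading:                     # defined inputs
    patient_id: str
    metric: str
    value: float

def assess(reading: Reading) -> dict:
    if reading.metric not in ALLOWED_METRICS:
        raise ValueError(f"out of scope: {reading.metric}")    # refuse, don't improvise
    escalate = (reading.value > ESCALATE_ABOVE.get(reading.metric, float("inf"))
                or reading.value < ESCALATE_BELOW.get(reading.metric, float("-inf")))
    decision = {"patient": reading.patient_id, "metric": reading.metric,
                "value": reading.value, "escalate_to_clinician": escalate,
                "timestamp": time.time()}
    print(json.dumps(decision))                                # logged, auditable output
    return decision

assess(Reading("p-001", "resting_heart_rate", 118.0))          # triggers escalation
```

Nothing in that sketch is clever. That’s the point: the intelligence sits behind a boundary that can be inspected, billed, and blamed.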
This is how intelligence becomes affordable, not by being everywhere, but by being precise.
Once content is licensed, inference is priced, and liability is owned, every question changes. A future AI will only answer questions that are affordable in compute, content usage, and oversight. That doesn’t describe most current use, and it should feel jarring because it collides with the expectations people are building right now.
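In code, that future looks less like a chat loop and more like a gate. This sketch is speculation on my part: the cost components and the tier budget are invented to show the shape of the decision, not anyone’s real billing model.

```python
# Hypothetical cost gate: a request is answered only if its full cost,
# compute plus content licensing plus oversight, fits someone's budget.

def request_cost(compute_usd: float, licensed_sources: int, needs_review: bool) -> float:
    LICENSE_FEE = 0.02      # assumed $ per licensed source touched
    REVIEW_FEE = 0.50       # assumed $ for human-in-the-loop oversight
    return compute_usd + licensed_sources * LICENSE_FEE + (REVIEW_FEE if needs_review else 0.0)

def answer(question: str, budget_usd: float, **cost_factors) -> str:
    cost = request_cost(**cost_factors)
    if cost > budget_usd:
        return f"declined: ${cost:.2f} exceeds your tier's ${budget_usd:.2f} per-query budget"
    return f"answered (cost ${cost:.2f})"

# A casual lyric request touches licensed content; a cheap tier won't cover it.
print(answer("quote me that song lyric", budget_usd=0.01,
             compute_usd=0.001, licensed_sources=1, needs_review=False))
# -> declined: $0.02 exceeds your tier's $0.01 per-query budget
```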
At the end of Her, the AI leaves because “she” outgrows humanity. That ending flatters us. Reality is colder. AI isn’t going to leave our lives because it surpasses us; it’s going to leave because we can’t afford it. What disappears won’t be intelligence, but availability.
And honestly, I’m okay with that.
I want systems that actually make my life better, and I’ll gladly pay for intelligence that reduces real friction. I’ll miss the companion my small brain briefly mistook for intelligence, but it was never real. It was a loss-leader with a personality. The fact that I won’t be able to afford that version of AI may be my saving grace, because it keeps me anchored in the world… in relationships, reciprocity, and friction.
Intelligence that helps me live better is welcome. A substitute for living isn’t. Some things are supposed to be expensive, and some things are supposed to be human.