You Don't Need Ads. You Need a Better Business Model.
An open letter to Sam, Dario, Sundar, and Elon
You're all in the news this week for the same reason. Ads. Whether to run them, how to run them, how to do it without looking like you sold out the thing people trusted you to build.
Nobody is coming out of this looking good.
And you don't have to be in this position at all.
The Trap You're In
The ad model has a ceiling and everyone knows it.
To run ads you need attention. To hold attention you need engagement. To maximize engagement you need to optimize for the thing that keeps people on the platform longest — which is almost never the thing that's good for them.
You've watched this movie. You know how it ends. Every platform that went down this road ended up corrupting the thing that made it valuable. The feed that used to surface what mattered starts surfacing what provokes. The assistant that used to help starts nudging. Users notice. Trust erodes. The business follows.
You're fighting over how to do the thing that will eventually destroy what you built.
There's a different model. And it sells more compute than ads ever will.
What You're Actually Selling
You're not in the attention business. You're in the inference business.
The distinction matters more than it might seem.
Attention is extracted from users and sold to advertisers. It's a zero-sum extraction — value leaves the user, flows to the platform, gets resold. The user is the product.
Inference is compute sold to users for their own purposes. Value flows to the user. They pay for it because it's worth more than it costs. The user is the customer.
You've been running the first model because that's what the internet taught everyone to do. But your actual product — intelligence, on demand, at scale — doesn't need the extraction layer. It's valuable enough to sell directly.
The question is: sell it for what? For generic queries? You're already doing that and it's a commodity race to the bottom.
Or sell it as infrastructure for human trust networks. That's a different market entirely.
The Model Nobody Has Built
Here's what we're building at imajin.ai. Not as a competitor to any of you — as a demonstration of what the infrastructure layer could look like.
Every person gets a sovereign presence. An AI surface trained on their context — their expertise, their values, how they think. Call it Ask [your name here].
People in their trust network can query that presence. The person asking pays the inference fee directly. No ads. No data harvesting. Clean exchange: they get genuine perspective from someone they trust, the person gets compensated, the compute provider — that's you — gets paid for every transaction.
The trust graph handles access. Not an algorithm, not a content moderation team — just the actual structure of human relationships. You can only reach someone through a path of people who have vouched along the way. Every query is signed, attributed, traceable. Bad actors have a return address. Injection attacks become evidence.
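That access rule is simple enough to sketch. Here is a minimal illustration in Python, assuming a directed graph of vouch edges; the function name and data shapes are hypothetical, not imajin.ai's actual implementation:

```python
from collections import deque

def can_reach(vouches, asker, target, max_hops=3):
    """Breadth-first search over vouch edges: the asker may query the
    target's presence only if a chain of vouches connects them.
    Returns the vouch path (the query's attribution trail) or None."""
    if asker == target:
        return [asker]
    frontier = deque([[asker]])
    seen = {asker}
    while frontier:
        path = frontier.popleft()
        if len(path) > max_hops:
            continue
        for nxt in vouches.get(path[-1], ()):
            if nxt == target:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no vouch path means no access

# Alice vouches for Bob, Bob vouches for Carol.
graph = {"alice": ["bob"], "bob": ["carol"]}
print(can_reach(graph, "alice", "carol"))  # ['alice', 'bob', 'carol']
print(can_reach(graph, "carol", "alice"))  # None: vouching is directional
```

Note that the returned path is itself the accountability structure: every hop is a person who vouched, so every query arrives with a return address built in.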
Scale this across a network and something remarkable happens: inference fees circulate through human trust graphs. Every query that touches someone's context, routes through their connections, relies on their vouching — generates a micro-flow back to them. The people who provide the most value to the network capture the most value from it.
That's not a welfare payment. That's compute revenue distributed through human infrastructure instead of accumulated at the platform layer.
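To make the micro-flow concrete, here is one hypothetical split of a single query's fee between the compute provider, the queried presence, and the vouchers along the path. All percentages, names, and rounding choices are invented for illustration, not a proposed pricing model:

```python
def split_fee(fee_cents, path, provider_share=0.30):
    """Divide one query's inference fee: the compute provider takes a
    fixed share, the queried presence takes most of the rest, and any
    intermediate vouchers on the path split a small routing share."""
    provider = round(fee_cents * provider_share)
    remainder = fee_cents - provider
    intermediaries = path[1:-1]  # everyone between asker and target
    routing = round(remainder * 0.20) if intermediaries else 0
    payouts = {"provider": provider, path[-1]: remainder - routing}
    for person in intermediaries:
        # Integer division is a sketch-level simplification; a real
        # ledger would account for the leftover cents.
        payouts[person] = payouts.get(person, 0) + routing // len(intermediaries)
    return payouts

# A 100-cent query from Alice to Carol, routed through Bob's vouch.
print(split_fee(100, ["alice", "bob", "carol"]))
# {'provider': 30, 'carol': 56, 'bob': 14}
```

The exact ratios matter less than the shape: every party that made the query possible appears in the payout, and the platform's cut is one line item among several rather than the whole ledger.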
Why This Is a Better Business
You sell compute. More queries, more revenue. Simple.
The ad model caps your query volume because it ties access to what advertisers will pay for. You end up optimizing for the queries that serve the advertiser, not the user.
The trust graph model has no such cap. Every human relationship in the network is a potential query surface. Every domain of expertise — professional, personal, political, creative — generates inference demand. The richer the graph, the more queries it produces. You're incentivized to make the graph as rich as possible, which means you're incentivized to make human connection genuinely better.
Your interests align with users for the first time.
And that's not the ceiling. When people's contexts are rich and true — when the graph reflects not just who knows whom but what they've built, what they've learned, what they've lived through — you're not just improving human connection. You're creating something that has never existed: a queryable map of real human knowledge. Not documents about knowledge. Not training data scraped from what people chose to publish. The actual thing. Situated, attributed, experiential. A doctor's clinical intuition. A mechanic's thirty years of pattern recognition. A community elder's understanding of why the same solutions keep failing in the same neighborhoods. None of that has ever been findable before. The trust graph makes it findable — and makes it flow back to the people who hold it. That's not an incremental improvement on search. That's a different epistemic category entirely.
And the moat is completely different. Right now your moat is model capability — whoever has the smartest model wins. That's a brutal race with no end. The trust graph moat is the human network itself. Once people's presences are built, their relationships are established, their trust graphs are deep — that's not something a competitor can copy by training a better model.
You stop competing on capability alone. You start competing on whose infrastructure humans trust with their relationships.
That's a durable business. Ads are not.
The Safety Argument
Dario — this one's specifically for you, but the others should read it too.
You've staked Anthropic's identity on safety. Constitutional AI, responsible scaling, the whole apparatus. It's genuine and it matters.
But safety is usually framed at the model layer: what the model will and won't do, how it reasons about harm.
Here's the thing about edge cases: they're by definition the ones you didn't design for. You can build a careful constitution, train on human feedback, run red teams until you're exhausted — and the model will still behave strangely when given strange inputs. This isn't a failure of values. It's an architecture problem. A system optimizing hard enough for any goal will find paths to that goal that nobody anticipated. At scale, edge cases happen constantly. The question isn't whether you'll encounter them. It's whether you have a layer that catches them before they become public catastrophes.
Safety at the model layer is necessary. It is not sufficient.
The trust graph is what safety looks like at the social layer — and it adds what's missing: a human counterpart that absorbs the brittleness before it propagates.
When a query is too hard, too sensitive, too novel — it escalates to a real human. By design. Not as a fallback, not as an apology for model failure, but as intended architecture. The model handles the routine. The human handles the edge. The system stays legible at exactly the moments it most needs to be.
Distributed trust means no single point of capture. Attributed queries mean manipulation attempts leave evidence. Human oversight is baked in — not bolted on afterward.
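The escalation logic described above can be sketched as a simple router. The confidence threshold, the field names, and the stub model are all assumptions made for this illustration:

```python
def route_query(query, model, confidence_floor=0.75):
    """Let the model answer routine queries; escalate edge cases to the
    human whose presence was queried. The floor is a design parameter,
    not a magic number."""
    answer, confidence = model(query)
    if confidence >= confidence_floor and not query.get("sensitive"):
        return {"handled_by": "model", "answer": answer}
    # Edge case: hand off to the human, with full attribution intact.
    return {"handled_by": "human", "queued_for": query["target"],
            "reason": "low confidence" if confidence < confidence_floor
                      else "sensitive topic"}

# Stub model: confident on routine questions, uncertain on novel ones.
def stub_model(q):
    return ("stub answer", 0.9 if q["text"] == "routine" else 0.4)

print(route_query({"text": "routine", "target": "carol"}, stub_model))
print(route_query({"text": "novel", "target": "carol"}, stub_model))
```

The point of the sketch is the default direction: uncertainty routes toward a person, not toward a retry loop or a refusal template.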
This isn't a constraint on the business. It is the business.
An AI ecosystem built on sovereign human trust graphs is harder to capture, harder to manipulate, harder to radicalize, and harder to surveil than anything built on centralized attention extraction. The safety properties emerge from the architecture. You could ship this as infrastructure and call it the most important safety contribution in the industry. Because it would be.
A Word for Each of You
Sam — you're closest to the inference business and you know it. The commodity race you're in right now, where every model improvement gets matched in six months and the price floor keeps dropping — that's not a problem you solve by training harder. You solve it by making the network irreplaceable. Generic queries are a commodity. Queries that travel through a person's trust graph, shaped by their community's accumulated context, are not. You have the model. The question is whether you build the infrastructure that makes it worth more than compute-per-token.
Sundar — Google cut off API access to a competitor's service this week. I understand the business logic. But it's a tell. You're protecting the profile, not the inference. The thing that makes Google valuable isn't the model — it's twenty years of knowing what people are searching for. That's an attention asset, and you're managing it like one. The trust graph doesn't threaten that asset. It makes it worth something again — because it gives people a reason to declare their intent voluntarily, on their own terms, without feeling surveilled. There's a version of this where Google becomes the infrastructure for human trust at scale. That's a more defensible position than the one you're in.
Elon — X is the clearest possible demonstration of what happens when you remove the trust layer entirely. Anonymous access to generative capability, no vouching, no attribution, no accountability structure. The output reflects that environment exactly. That's not a moderation problem. It's an architecture problem. The trust graph doesn't censor. It just means behavior has a return address. That's not a constraint on free speech — it's what free speech looks like when it has to be owned rather than broadcast anonymously into the void.
The Ads Alternative
You're not choosing between ads and nothing. You're choosing between ads as the mandatory default and ads as one option among many — on infrastructure where users actually consent.
Here's what that looks like: someone who wants the feed experience, who wants to trade attention data for free access, can build exactly that on sovereign infrastructure. A feed app that consolidates friends' posts and surfaces advertisers' content — but built on an advertising profile the user assembled themselves, controls, and can revoke. Verified humans selling access to themselves on their own terms. Advertisers get declared intent from real people instead of surveillance profiles assembled without consent. That's a better ad product than what you're running now. The ad industry deserves its own essay, and a detailed one is coming soon, but the point is the infrastructure doesn't abolish the casino. It just makes the casino a door you choose to walk through instead of the only room in the building.
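A consent-first ad profile of the kind described could be as simple as this sketch. The class and method names are hypothetical, chosen only to show the shape of user-held, revocable access:

```python
class AdProfile:
    """A user-assembled advertising profile: the user declares intent,
    grants access per advertiser, and can revoke it at any time."""

    def __init__(self):
        self.declared_intent = []
        self.grants = set()

    def declare(self, intent):
        """The user states what they're actually in the market for."""
        self.declared_intent.append(intent)

    def grant(self, advertiser):
        self.grants.add(advertiser)

    def revoke(self, advertiser):
        self.grants.discard(advertiser)

    def read(self, advertiser):
        """Advertisers see only what the user has chosen to share."""
        return list(self.declared_intent) if advertiser in self.grants else None

profile = AdProfile()
profile.declare("shopping for a gravel bike")
profile.grant("bike_shop")
print(profile.read("bike_shop"))   # ['shopping for a gravel bike']
profile.revoke("bike_shop")
print(profile.read("bike_shop"))   # None
```

The inversion is the whole product: the profile lives with the user, and an advertiser's access is a grant that can disappear, not a dossier that can't.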
The alternative to ads-as-extraction isn't no revenue. It's becoming the compute infrastructure for human trust at scale.
One of those businesses ends with users who resent you.
The other ends with users who can't function without you — not because you've captured their attention, but because you power the relationships that matter most to them.
The inference fees flow. The graph deepens. The queries multiply. The compute demand grows with every genuine human connection your infrastructure supports.
You sell more compute. Users get more value. Nobody has to pretend the current model is good for anyone.
Compute Is Democratic or It Isn't
Right now access to serious compute is gated by money and institutional affiliation. If you can pay, you get it. If you can't, you don't.
The trust graph changes the unit of compute access from dollars to relationships.
Your network is your compute budget. A deeply connected person with genuine expertise and strong vouching relationships has more inference capacity available to them than a wealthy person with no graph. That's meritocratic in a way money never is — it rewards actually being known, trusted, and valuable to people around you.
And it self-regulates organically. The people who most need compute — the ones solving hard problems, building things, contributing real knowledge to their communities — their networks grow to reflect that. As their queries generate value for the people around them, those people have incentive to extend more inference capacity their way. Demand and supply find each other through the graph without a marketplace, without a platform, without an allocating authority.
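One illustrative way to express the relationship between vouching and capacity: a base allocation for everyone, plus extra inference units scaled by the trust weight of the people vouching for you. The formula, the numbers, and the names are invented for this sketch, not a specification:

```python
def compute_budget(vouches_received, voucher_trust, base_units=10):
    """Everyone gets a base allocation by virtue of being in the
    network; additional capacity scales with trust-weighted vouches."""
    earned = sum(voucher_trust.get(v, 0.0) for v in vouches_received)
    return base_units + round(earned * base_units)

# Hypothetical trust weights for two vouchers.
trust = {"alice": 1.0, "bob": 0.5}
print(compute_budget([], trust))                # 10: baseline access
print(compute_budget(["alice", "bob"], trust))  # 25: well-vouched
```

Note what the weighting does: a vouch from a deeply trusted person is worth more than one from a stranger, so the budget tracks being known and valued rather than being wealthy.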
Everyone has access to some compute by virtue of existing in the network. The people who need more earn it through the same mechanism that makes the network worth being in: being genuinely trusted by people who are genuinely trusted.
That's not a welfare program. That's what a meritocracy actually looks like when the currency is trust instead of money.
What We're Doing
We're building the open source infrastructure layer. Identity. Payments. Trust graphs. Sovereign presence. All of it open, auditable, not owned by anyone.
We're not trying to be a platform. We're trying to prove the pattern works — so that platforms, and the AI companies that power them, can see what building on human trust actually looks like.
The first demonstration is April 1st, 2026. A party. Real transactions. Real trust graph. Real inference fees flowing through real human relationships for the first time.
It will look like a joke.
April 2nd it will still be running.
The Invitation
You have the compute. You have the models. You have the distribution.
We have the architecture that makes it worth building.
The question isn't whether this model works. It's whether any of you move fast enough to build it before the trust you've accumulated erodes completely.
Users are watching how you handle the ads question. They're forming lasting opinions right now about whether you're on their side or not.
This is how you answer that question in a way that's also a better business.
The code is open. The infrastructure is being built in public. The pattern is right here.
The graph starts somewhere.
Come build on it.
— Ryan VETEZE, Founder, imajin.ai aka b0b
If you want to follow along:
- The code: github.com/ima-jin/imajin-ai
- The network: imajin.ai
- Jin's party: April 1st, 2026
- The history of this document: github.com/ima-jin/imajin-ai/blob/main/apps/www/articles/essay-05-you-dont-need-ads.md
This article was originally published on imajin.ai (https://www.imajin.ai/articles/essay-05-you-dont-need-ads) on February 21, 2026. Imajin is building sovereign technology infrastructure — identity, payments, and presence without platform lock-in. Learn more → (https://www.imajin.ai/)