
Is Compute the New Airtime? Paying for AI and the Perpetuation of a Stratified Digital Economy

Author: Dr. Jonathan Donner

A decade ago, I wrote about the metered mindset and how mobile data pricing shaped the way billions of people engaged with the internet. Then, as now, prepaid, mobile-first users don’t scroll freely. They “dip and sip,” ever mindful of their airtime or data balance.

Now the global majority may have another thing to ration. In this case, the resource isn’t airtime or data, it’s “compute”: the processing power that fuels foundational generative AI models like ChatGPT, Claude, and Gemini.

Large language models (LLMs) are ushering in a new era of digital engagement. OpenAI claims 800 million users—a staggering number, to be taken with several grains of salt. But assuming those users are being drawn from the world’s more prosperous residents, OpenAI and its peers may soon hit the ceiling of people who can pay directly for compute, at least via monthly subscriptions and credit cards.

As with mobile airtime and mobile data before it, access to compute is stratifying. Just as mobile users adapted to prepaid plans, a growing number of LLM users are navigating a fragmented, unequal landscape. There are (at least) six ways people access (and sometimes pay) for LLMs—and these pathways differ not just in cost, but in experience, user agency, and inclusion.

Six ways people access compute today

Two ways are straightforward: there is a clear exchange of money for compute between the user and the LLM provider.

1. The developer’s path: via API (B2B, pay-as-you-go)
Companies and coders pay per token to access models via APIs. This model is usage-based and ideal for embedding AI into apps or workflows. It scales, but it costs.

2. The consumer path: subscription plans (B2C)
Beyond a capped or limited “free tier,” flat monthly fees provide bundled access to many of the leading models (e.g., ChatGPT Plus, Claude Pro, Gemini Advanced). But access isn’t truly unlimited; tiers, model differences, and performance caps still apply. 
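To make the trade-off between these two direct payment paths concrete, here is a minimal sketch of the arithmetic. All prices and usage figures are invented placeholders for illustration, not any provider’s actual rates:

```python
# Hypothetical comparison of pay-as-you-go API pricing vs. a flat
# monthly subscription. Every number below is a made-up placeholder.

def api_cost(tokens_in: int, tokens_out: int,
             price_in_per_m: float, price_out_per_m: float) -> float:
    """Monthly pay-as-you-go cost in dollars, priced per million tokens."""
    return (tokens_in / 1_000_000) * price_in_per_m \
         + (tokens_out / 1_000_000) * price_out_per_m

SUBSCRIPTION = 20.00                # flat monthly fee (placeholder)
PRICE_IN, PRICE_OUT = 3.00, 15.00   # $/million tokens (placeholders)

# A moderate chat user: ~200 conversations a month,
# ~2,000 tokens sent and ~1,000 tokens received per conversation.
monthly_in = 200 * 2_000
monthly_out = 200 * 1_000

cost = api_cost(monthly_in, monthly_out, PRICE_IN, PRICE_OUT)
print(f"API cost: ${cost:.2f} vs subscription: ${SUBSCRIPTION:.2f}")
# → API cost: $4.20 vs subscription: $20.00
```

Under these invented rates, a moderate user would pay far less per token than through a subscription, which is one reason cost-sensitive users shop across access modes rather than defaulting to a flat fee.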

Four other ways are more roundabout. The exchange of money for compute is hidden or removed. But it is precisely because these exchanges are hidden that it is important to include them in the typology. Doing so underscores the complexity of AI access and highlights how pathways of access can impact dynamics of inclusion and exclusion.

3. The power-user path: self-hosted (DIY enthusiasts and enterprises)
Some organizations and individuals download open models (like LLaMA or Mistral) and run them locally. This path is flexible, inexpensive, and private—but hardware intensive and technically demanding. Though few take this route, it supports innovation, protects privacy, and strengthens and diversifies the broader AI and digital ecosystem.

4. The organization-sponsored path: institutional access (somebody else pays for a user’s subscription)
Employees of large enterprises, civil servants, and even some university students can access advanced models through their institutions’ licenses. From the user’s perspective, the experience may feel free or unlimited, even though someone upstream is paying for the compute. Some, including OpenAI’s Sam Altman, have floated the idea of governments providing this kind of access more widely as a kind of digital public good.

5. The indirect path: pass-through (users access AI via other apps)
Here, users encounter AI embedded in third-party apps and services. One way or another, the app maker pays the LLM provider by the token, but the user may experience the AI as “included” or “free.” Sometimes there’s a paywall, sometimes a freemium tier, and sometimes the cost is hidden entirely. This is AI as a feature, layered into innumerable tools from indie startups to enterprise platforms.

6. The platform path: baked into platforms (users receive “free” AI from big tech providers)
AI features are increasingly central elements of the experiences on major platforms from Meta (WhatsApp), Microsoft (Office 365), and Google (Gemini in Search and Google Docs). In these cases, end users don’t pay directly; the platform absorbs the cost or monetizes access another way. The AI may feel free, but actually reflects a tangle of API costs, cross-subsidies, and loss-leader calculations designed to help the user, or at least to keep them engaged with what’s in front of them. When ads show up, they’ll be here, too.

Prices shape behaviors

As any economist will remind you, prices and scarcity shape behavior. So it’s no surprise that as users decide whether and how to engage with LLMs, they’re already navigating pricing tiers, free trials, embedded tools, and technical workarounds to make the most of what’s available. As with mobile data and home bandwidth caps, today’s LLM users are sensitive to cost and optimizing across access modes, sometimes by avoiding API fees and looking for free compute.

But the outcomes of that optimization aren’t always visible—or equivalent. Two users of the same brand of AI might interact with different model tiers (and thus get different results) depending on how they pay, what interface they use, and whether their access is subsidized.

Inclusion doesn’t hinge only on whether someone uses AI, but also on how that use is shaped by cost, constraints, and context. It’s an emergent result of economic signals and behavioral adaptation. This may be the next digital divide: beyond access to AI to include variations in experience, shaped by who’s optimizing, who’s subsidized, and who’s left behind.

Signals to watch in the compute economy

This is not about hype cycles or AI risks, or change that’s so exponential as to render the economy unrecognizable. It’s just about fairness and access. The cost contours of the intelligence economy are still taking shape. But as this long list of “ways to pay” makes clear, a few signals and tensions are emerging, with implications for access and equity. 

  • How much advantage will platforms with LLMs maintain? The durability and centrality of the “platform access” way of paying are especially important for the future of competition. Google, Meta, and (soon) OpenAI can offer “free” AI at scale because they control both the models and the interfaces. Innumerable smaller players—like Slack or Notion—typically don’t. Instead, they rely on upstream providers and pay by the token. That distinction shapes who can compete, who can innovate, and who can afford to experiment. Consumer price sensitivity affects user behaviors and feeds back into durable (arguably monopolistic) platform power.
  • Will subscriptions shift toward true pay as you go?
    Current consumer plans are bundled and capped. Will we see finer-grained pricing that resembles prepaid mobile data and true sachet bundles? Who benefits if we do? Or are some users unlikely to ever pay separately for compute, with subsidized, embedded access becoming the dominant mode?
  • How visible will model differences be to users?
    Will people know when they’re using a recent model versus a fallback? Will they have the skills and literacy to discern when their queries are throttled or degraded? Transparency could become a new axis of trust and control.
  • What happens when ads enter the mix?
    Ads in chatbot results could offset costs for some users. Will this create new strata of premium AI for some, extractive AI for others? There may be an optimal segment for ad-based AI: users with enough disposable income to be worth advertising to, but not enough to opt out via subscription. Those with existing ad marketplaces (Google, Amazon, Meta) may do well with this model; others like OpenAI would have to build an ad marketplace from the ground up.  
  • How much will model selection (and pricing) matter to the majority of users? For some tasks, a strong open 2024 model might suffice. For others, the marginal power of a cutting-edge 2025 model will matter. Can users discern those differences, and when should they care?
  • Will some users go cyborg while others dip and sip?
    I use OpenAI for everything from editing and research to movie picks. I’d be using AI very differently—and much less—if I didn’t subscribe. AI under scarcity will be the norm for many. It’s time for policy and innovation communities to build that into their assumptions.
  • Will geopolitics shape access to affordable compute?
    Chinese models like DeepSeek offer aggressive pricing and open weights, making them appealing to enterprises and consumers in emerging markets. Meanwhile, Mistral, backed by French public funding, is being developed as a deliberate alternative to US-based LLMs. As LLMs become foundational to digital engagement, new intersections between sovereignties, epistemologies (systems and representations of meaning), and business models are emerging. The implications extend well beyond technology adoption, and this is increasingly a matter of statecraft and global economic alignment.

Where next

As generative AI spreads, this look at the variety of payment models and users’ responses to them underscores that the questions extend beyond whether access exists to how that access is created, and on what terms.

As with previous paradigms (landlines, dial-up internet, airtime, broadband, and pay-as-you-go mobile data), pricing models, user behaviors, and platform incentives will shape a layered, uneven AI landscape. Moving forward, the digital development community needs to pay close attention to how compute is rationed, subsidized, and optimized. This is, of course, in addition to all the other inequalities afoot: differences in language capabilities, inbuilt bias in training data and responses, and the potential for surveillance and control. But pricing is another thing regulators, policymakers, and the broader digital community need to focus on, as the terms of access are not predetermined and can be shaped by careful, coordinated action. 

We’ve been here before. And as we plan for a future with AI embedded in everything we do, it’s worth remembering: the meter is still running.

Author

Dr. Jonathan Donner, Chief Knowledge Officer (CKO)
