The era of free or cheap AI is ending as the race to build AI infrastructure has become increasingly expensive. With AI infrastructure and data centers reaching a projected $6.3 trillion in scale, tech companies are moving toward monetizing directly from users through price increases, feature restrictions, and, in some cases, advertising within platforms.
For years, many AI services were offered for free or at low cost, positioning AI as a “privilege” technology. That picture is changing as companies seek to recoup major investments in infrastructure and model development.
Earlier this month, millions of OpenClaw users reported receiving notices that the tool was restricted by Anthropic. Users who want to continue using Claude to operate virtual assistants would need to pay substantially more. A company spokesperson said existing subscription packages are not designed for high-intensity use by third parties, and that growth control is necessary to sustain long-term operation.
This is not isolated. OpenAI and Anthropic have also adjusted pricing strategies, including higher fees, feature restrictions, and advertising on their platforms.
The shift reflects investor pressure for profitability after an earlier phase focused on expanding user bases. The pattern resembles earlier technology cycles: subsidies to gain market share, followed by price increases to monetize. However, the investment scale in AI is far larger, and the pressure to break even is more intense.
According to Gartner, total investment in AI data centers from 2024 to 2029 could reach $6.3 trillion. To avoid losses, companies need a return on invested capital of around 25%. If returns fall below 12%, investors lose interest; below 7%, the position could become unrecoverable.
To meet those thresholds, the AI industry would need to generate nearly $7 trillion in revenue by 2029—equivalent to about $2 trillion per year by the end of the period—placing direct pressure on end users.
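The arithmetic behind those thresholds can be sketched in a few lines, treating Gartner's $6.3 trillion figure as total invested capital and the percentages as simple annual returns (a back-of-envelope illustration, not a financial model):

```python
# Back-of-envelope check of the thresholds above (a sketch, not a financial model).
CAPEX_TRILLION = 6.3  # Gartner's projected AI data-center investment, 2024-2029

def required_annual_return(roic: float) -> float:
    """Annual return, in trillions of dollars, implied by a given return on invested capital."""
    return CAPEX_TRILLION * roic

for label, rate in [("break-even", 0.25), ("investor exit", 0.12), ("disaster", 0.07)]:
    print(f"{label:>13}: {rate:.0%} ROIC -> ${required_annual_return(rate):.2f}T per year")
```

Note that a 25% return on $6.3 trillion is about $1.6 trillion per year in profit, not revenue; revenue must be higher still to cover operating costs, which is why the industry-wide revenue target is larger.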
AI business models are closely tied to tokens, the units of text that models process. Every prompt and response consumes tokens, and tokens are also the basis on which companies charge.
Google says it processes 1.3 quadrillion tokens a month. Industry estimates put total usage at 100–200 quadrillion tokens per year. To reach the revenue targets above, that would need to rise to about 10 sextillion tokens per year, an increase of 50,000 to 100,000 times.
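The implied jump can be checked directly with the article's own round numbers (a sketch; these figures are industry estimates, not measurements):

```python
# The article's round numbers, checked directly (estimates, not measured data).
QUADRILLION = 10 ** 15

current_low  = 100 * QUADRILLION   # estimated tokens processed per year (low end)
current_high = 200 * QUADRILLION   # high end of the same estimate

growth_low, growth_high = 50_000, 100_000  # implied growth multiples

# Both ends of the range converge on the same target volume.
target = current_low * growth_high         # equals current_high * growth_low
print(f"target: {target:.0e} tokens/year")
```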
The challenge is that current infrastructure cannot support such a scale. Even if it were possible, operating costs—from electricity to building data centers and training models—are extremely high, eroding profit margins.
A key paradox emerges: companies need users to generate more tokens to increase revenue, but more tokens also raise costs. The problem is especially acute for inference and AI agents that can carry out many steps. Modern AI agents can process thousands or tens of thousands of tokens for a single request. With millions of daily users, inference costs can quickly exceed training costs.
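The scaling described above can be made concrete with illustrative numbers. Every figure below (tokens per agent run, request rates, user counts, the blended per-token price) is an assumption chosen for illustration, not vendor data:

```python
# Illustrative figures only: all volumes and prices here are assumptions.
TOKENS_PER_AGENT_REQUEST = 20_000      # one multi-step agent run (assumed)
REQUESTS_PER_USER_PER_DAY = 10         # assumed
DAILY_ACTIVE_USERS = 5_000_000         # assumed
PRICE_PER_MILLION_TOKENS = 3.00        # USD, assumed blended rate

daily_tokens = TOKENS_PER_AGENT_REQUEST * REQUESTS_PER_USER_PER_DAY * DAILY_ACTIVE_USERS
daily_cost = daily_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"{daily_tokens:,} tokens/day -> ${daily_cost:,.0f}/day")
```

Even at these modest assumed rates, inference runs to millions of dollars per day, which is why sustained agent workloads can eclipse one-off training costs.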
This cost structure helps explain why providers are tightening policies. Anthropic restricts third-party tools such as OpenClaw, while OpenAI has experimented with ads on its platform.
Enterprises are also adjusting. Some firms are moving toward open-source or self-hosted AI to control costs. Others combine expensive and cheaper models depending on the task. However, the trade-off is not straightforward: even a 1% drop in output quality can affect customer experience.
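The mixed-model approach can be sketched as a simple router that sends hard tasks to an expensive frontier model and routine ones to a cheaper model. The model names and the difficulty heuristic here are hypothetical stand-ins, not any vendor's actual API:

```python
def route(task: str, hard_keywords=("prove", "debug", "legal", "multi-step")) -> str:
    """Pick a model tier with a crude difficulty heuristic (hypothetical model names)."""
    is_hard = len(task) > 500 or any(k in task.lower() for k in hard_keywords)
    return "frontier-model-expensive" if is_hard else "small-model-cheap"

print(route("debug this failing integration test"))  # routed to the expensive tier
print(route("summarize this email"))                 # routed to the cheap tier
```

In practice, routing heuristics are where the quality trade-off the article describes lives: a router that misclassifies even 1% of hard tasks sends them to a weaker model, degrading output quality.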
Some companies report token usage rising 100x over the course of a year, which drives costs up and forces ongoing trade-offs between performance and budget.
Competition among AI providers is intensifying as companies roll out new models and features while trying to retain users within their ecosystems. Yet switching costs are virtually zero, making it easier for users to move to competing platforms and increasing competitive pressure.
Forecasts suggest the AI market could consolidate strongly in the near future, with no more than two major suppliers in each region. Companies that lack financial resources or cannot optimize their business models risk being pushed out of the market.
In the short term, end users may benefit from the pace of innovation. Over time, as profitability pressure rises, costs are likely to be shifted to customers.
Experts believe that within the next 24 months the AI industry will enter a crucial transition—from subsidy-driven growth toward a sustainable operating model. The question may shift from whether AI is effective to who can afford it.
In a broader framing, the industry faces a “Stegosaurus paradox”: a large system that requires enormous energy to survive. Without stable and sufficient revenue streams, the ecosystem may need to recalibrate expectations—meaning the true cost of artificial intelligence will become clearer for both companies and users.
