Thursday, September 25, 2025

"The $4trn accounting puzzle at the heart of the AI cloud"

A question related to the one we raised when introducing September 23's "OpenAI in talks to lease Nvidia chips instead of buying them, Information says".

From The Economist, September 18:

A beancounter’s look at the hyperscalers’ balance-sheets 

IN ARTIFICIAL INTELLIGENCE, billions are so 2022. Three years after ChatGPT ignited the AI boom, the business is all about trillions. The market value of Microsoft, whose Azure cloud has a glistening AI lining, is not far off its recent $4trn record. Alphabet, which is giving Google an AI makeover, has just become a $3trn company. On a good day, Amazon, with a rival cloud, rounds to that. Meta, as much about AI these days as about social media, is now firmly a $2trn firm, plus or minus. Last week Oracle, challenging Alphabet, Amazon and Microsoft in the AI cloud, made a dash for $1trn. If some stock sales go to plan, OpenAI, ChatGPT’s maker, and its two rivals, Anthropic and xAI, could together be worth this much, give or take, by the end of the year.

AI revenues and expenditures are likewise a 13-figure affair nowadays. Worldwide spending on AI hardware and software nudged $1trn last year, according to Gartner, a research firm. This is likely to double to $2trn in 2026. Between 2024 and 2026 the five listed AI powerhouses will have splurged over $1trn on capital investments, chiefly AI data centres. A slug will end up with Nvidia and Broadcom, which furnish them and others with AI semiconductors. The duo (combined market capitalisation: $6trn) are together forecast to book almost $1trn in sales over that period.

Such a tally of zeros is normally reserved for official statisticians in G20 economies. In business, it is dizzying. All the more so given that examining the AI titans’ finances can be like staring into a black hole. Whether or not it promotes long-termism, President Donald Trump’s latest suggestion that American companies move from quarterly to half-yearly financial reporting is liable to make things blacker still. Pity the analyst without the levellest of heads and the brightest of torchlights.

Fortunately, some are equipped with both. And they are beginning to raise questions about aspects of AI book-keeping. Depending on the precise answer, the impact on the AI champions’ value could itself be measured in trillions of dollars.

The most pressing set of questions concerns the tech giants’ assets—specifically, the longevity of all those fancy AI chips they are installing in their data centres. Last year Nvidia, which makes most of them, said it would unveil a fresh AI chip every year rather than every couple of years. In March its boss, Jensen Huang, remarked that “when Blackwell starts shipping in volume, you couldn’t give Hoppers away,” referring to Nvidia’s latest chips and their predecessors, respectively.

No one took Mr Huang literally: that would imply that the useful life of his firm’s pricey products is now a mere 12 months. Rightly so, for he was clearly being tongue-in-cheek. And his biggest customers have in recent times been raising their servers’ lifetimes, reducing depreciation charges in their accounts. Microsoft pushed it up from four to six years in 2022. Alphabet did the same in 2023. Amazon and Oracle changed it from five to six in 2024. And in January Meta moved from five to five and a half years.

In the same month Amazon reversed course and moved back to five years for some kit, noting this would cut operating profit in 2025 by $700m, or about 1%, owing to a higher depreciation expense. Given the rapid advances in chipmaking, that seems optimistic. And Amazon’s AI rivals clinging to their elongated depreciation schedules look Pollyannaish. In July Jim Chanos, a veteran short-seller, posted that if the true economic lifespan of Meta’s AI chips is two to three years, then “most of its ‘profits’ are materially overstated.” A recent analysis of Alphabet, Amazon and Meta by Barclays, a bank, estimated that higher depreciation costs would shave 5-10% from their earnings per share.
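The mechanics behind these accounting changes are plain straight-line depreciation: stretching the assumed useful life shrinks the annual charge and flatters operating profit. A minimal sketch; the $60bn fleet cost is a made-up figure for illustration, not drawn from any filing:

```python
# Illustrative sketch of why extending a server's assumed useful life lowers
# the annual straight-line depreciation charge (and so lifts reported
# operating profit). The $60bn fleet cost is hypothetical, not company data.

def annual_depreciation(cost: float, useful_life_years: float, salvage: float = 0.0) -> float:
    """Straight-line depreciation: spread (cost - salvage) evenly over the life."""
    return (cost - salvage) / useful_life_years

server_fleet_cost = 60e9  # hypothetical $60bn of servers

charge_5yr = annual_depreciation(server_fleet_cost, 5)
charge_6yr = annual_depreciation(server_fleet_cost, 6)

print(f"5-year life: ${charge_5yr / 1e9:.0f}bn/yr")   # $12bn/yr
print(f"6-year life: ${charge_6yr / 1e9:.0f}bn/yr")   # $10bn/yr
print(f"Annual profit flattered by: ${(charge_5yr - charge_6yr) / 1e9:.0f}bn")  # $2bn
```

Run in reverse, the same arithmetic explains Amazon's move: shortening the assumed life raises the annual charge, hence the $700m hit to 2025 operating profit.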

Apply this logic to the entire AI big five and the potential overall hit to the bottom line is huge. The companies do not report the net book value of their computing infrastructure. But you can get a rough idea by multiplying the gross value, which each discloses in its annual filings, by the ratio of net to gross book value of all plant and equipment (excluding land, which is not subject to depreciation). Next assume that half the resulting sum is tied up in servers, in line with their estimated share of the global AI capital binge. Then, if all these servers lose their value in three years rather than however many each company assumes, their combined annual pre-tax profit would fall by $26bn, or 8% of last year’s total.

At the five companies’ current ratio of market capitalisation to pre-tax profit, this would amount to a $780bn knock to their combined value. Redo the sums depreciating the servers over two years instead of three and the size of the hit rises to $1.6trn. Take Mr Huang literally, and you get a staggering $4trn, equivalent to one-third of their collective worth.
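The back-of-envelope method above can be sketched in code. The inputs below are placeholders for a hypothetical company (the $400bn gross P&E and 0.6 net-to-gross ratio are our illustrative assumptions, not filings data); the 50% server share and the roughly 30x market-cap-to-pre-tax-profit multiple ($780bn / $26bn) come from the article itself:

```python
# Sketch of the article's back-of-envelope estimate of the profit hit from
# shorter server lives. Gross P&E and the net-to-gross ratio are placeholder
# inputs; the 50% server share and ~30x multiple are taken from the article.

def extra_depreciation(gross_ppe: float,
                       net_to_gross: float,
                       server_share: float,
                       assumed_life: float,
                       true_life: float) -> float:
    """Added annual depreciation if servers really last `true_life` years
    instead of the `assumed_life` used in the accounts."""
    net_servers = gross_ppe * net_to_gross * server_share  # est. net book value of servers
    return net_servers / true_life - net_servers / assumed_life

# Hypothetical company: $400bn gross P&E, 60% not yet depreciated,
# half of that in servers; books assume 6-year lives, true life 3 years.
hit = extra_depreciation(400e9, 0.6, 0.5, 6, 3)
print(f"Annual pre-tax profit hit: ${hit / 1e9:.0f}bn")  # $20bn

valuation_knock = hit * 30  # ~30x multiple implied by the article's $26bn / $780bn pair
print(f"Implied valuation knock: ${valuation_knock / 1e9:.0f}bn")  # $600bn
```

Shortening `true_life` from three years to two, or to Mr Huang's tongue-in-cheek twelve months, scales the hit accordingly, which is how the article gets from $780bn to $1.6trn to $4trn.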

The cloud-depreciation society
In reality, not all of the AI quintet’s servers would be useless after three years, let alone 12 months. They can keep performing oodles of non-AI work. Some AI tasks can also be farmed out to older processors....
....MORE 

 The Economist homepage

Nvidia's development cycle is currently around eighteen months, shorter than the two years Moore's Law famously observed for the doubling of transistor counts in an integrated circuit.

I don't know how long Nvidia can maintain that pace, but Mr. Huang is pushing to shorten the development cycle further, making earlier generations of the company's GPUs obsolete even faster. It's a point we were pitching as a positive earlier this year regarding leakage of state-of-the-art chips from Middle Eastern buyers to China:

On the one hand, with that many chips floating around that part of the world there is no way to keep a bunch of them from ending up in China. On the other hand, Nvidia's development cycle is focused on releasing new, more powerful chips every 12–14 months, meaning the current smoking-hot H100 chips will have been superseded by two cycles at the end of the contract period.

The first point, that chips will get to China, is borne out by the recent news that $1 billion worth of chips had been smuggled into China in the three months after the export ban on the more powerful Nvidia chips.

Note: the smuggled chips were not the ones destined for the UAE.

The second point is that the real technology-transfer deterrent is the pace of NVDA's development cycle. Although there are hiccups (most recently, server racks overheating from the sheer amount of electricity flowing through the systems), the overarching goal is an almost metronomic rhythm to the development of new chips, such that the H20s will be outdated in under two and a half years.
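The cadence arithmetic behind that claim is worth making explicit. A trivial sketch (the function name is ours, purely for illustration):

```python
# Sketch of the cadence arithmetic: with a new chip generation shipping every
# 12-14 months, any given GPU is two generations behind within roughly
# 24-28 months; that is, under two and a half years.

def months_until_superseded(cycle_months: float, generations: int = 2) -> float:
    """Months until `generations` newer chip generations have shipped."""
    return cycle_months * generations

print(months_until_superseded(12))  # 24: fast end of the cycle
print(months_until_superseded(14))  # 28: slow end, still under 2.5 years
```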

If interested in a deeper dive into the pace of development see also:

Nvidia Earnings Call Transcript: Q2 2026 (August 27, 2025)