Saturday, January 3, 2026

Nvidia's Jensen Huang: "King of Cannibal Island"

From the London Review of Books, Vol. 47 No. 23 · 25 December 2025:

The Thinking Machine: Jensen Huang, Nvidia and the World’s Most Coveted Microchip 
by Stephen Witt.
Bodley Head, 248 pp., £25, April, 978 1 84792 827 6
The Nvidia Way: Jensen Huang and the Making of a Tech Giant 
by Tae Kim.
Norton, 261 pp., £25, December 2024, 978 1 324 08671 0
Empire of AI: Inside the Reckless Race for Total Domination
by Karen Hao.
Allen Lane, 482 pp., £25, May, 978 0 241 67892 3

The tulip bubble is the most famous financial bubble in history, but as historical examples go it is also, in one crucial respect, misleading. That’s because anyone can see the flagrant irrationality which was at work. At peak tulip madness in 1637, rare bulbs were so expensive that a single one was worth as much as a fancy canalside house in Amsterdam. You don’t have to be Warren Buffett to see that the disconnect between price and value was based on delusional thinking.

Most bubbles aren’t like that. Even the South Sea Bubble, the event which gave its name to financial bubbles, had an underlying rationale: who can deny that the expansion of global networks of trade and capital turned out to be a vitally important and vastly lucrative event? Even if all the investors in the original bubble – including Isaac Newton, who realised it was a bubble, but got caught up in the excitement anyway – lost their shirts. The historical pattern is typically that a big, genuine innovation is spotted on the horizon. Money floods in to take advantage. Too much money. The flow of capital is so great that it is impossible to allocate it correctly, and distinctions disappear between what is likely and what is impossible, what is prudent and what is reckless, what might happen and what never could. After the flood of money, the doubts; after the doubts, the crash; and after the crash, the gradual emergence of the phenomenon that got all the speculators so excited in the first place. It happened with the South Sea Bubble, with the many railway manias of the mid-19th century, with the electrification mania of fifty years later and with the dot-com bubble at the turn of this century.

That is where we are now with AI. In the deep historical past of 2018, Apple became the first public company in the world to have a market capitalisation of more than a trillion dollars. Today, each of the ten biggest companies in the world is worth more than $1 trillion. Only one of them, the Saudi oil monopoly, Aramco, has nothing to do with the future value of AI. The top company, Nvidia, is worth $4.45 trillion. Not by coincidence, Nvidia shares are the purest bet you can make on the impact of AI. The leading firms are lending money to one another in circular patterns, propping up turnover and valuations. Colossal amounts of money are pouring in. Is it a bubble? Of course it’s a bubble. The salient questions are how we got here, and what happens next.

How did we get here? That story is among other things a narrative about two men, who gratifyingly correspond to the two main character types of the tech age: academically overachieving immigrant (Elon Musk, Sergey Brin, Sundar Pichai, Satya Nadella) and US-born college dropout (Steve Jobs, Bill Gates, Mark Zuckerberg). Companies founded or run by such men are the first, second, third, fourth, fifth and seventh most valuable in the world. Their combined value is $20.94 trillion – one sixth of the entire world economy.

Let’s begin in medias res. In the spring of 1993, three nerds visited a lawyer in Silicon Valley with the intention of setting up a company to make computer chips. The men were Curtis Priem, Chris Malachowsky and the person they had chosen to be their CEO, Jensen Huang, a Taiwanese-born electrical engineer with a talent for management and business. Priem and Malachowsky, according to Stephen Witt’s Thinking Machine, had complementary skills – they were, respectively, an architect and a chip mechanic. They wanted to make a new kind of chip, optimised for a rapidly growing sector: video games. Their employer, the large chip company LSI Logic, didn’t like the idea, so the three men cooked up a business plan, working mainly in a branch of the 24-hour chain restaurant Denny’s that was accessorised with bullet holes from drive-by shootings. Huang didn’t think the new company was worth launching until they had a credible chance of making $50 million a year in revenue. Fiddling with spreadsheets over long sessions at Denny’s, he eventually made the numbers add up. The three amigos went to see Jim Gaither, a lawyer well known in the Valley. Gaither filled out the paperwork, with the company’s name left as NV, for New Venture. Malachowsky and Priem were entertained by that: they had been playing around with company names that suggested their chip would leave competitors sick with envy. The coincidence was too good to resist. They decided to call their company Nvision. When the lawyer checked, it turned out that Nvision was already taken. They chose a backup: Nvidia.

Good choice of CEO, good choice of name. A third of a century later, Huang is the longest-serving CEO in the industry and Nvidia is the most valuable company in the world. Nvidia’s share of global stock market value is historically unprecedented: its shares make up a greater part of global indices than the entire UK stock market.

Huang had a hard start in life. He arrived in the US in 1973 aged nine, small for his age and not speaking much English. His parents, Hokkien-speakers from Tainan who had emigrated to Bangkok, had attempted to teach him and his brothers English by making them learn ten words a day, chosen at random from the dictionary. They sent Huang to the Oneida Baptist Institute in Kentucky under the mistaken impression it was a posh boarding school. In fact, it was a reform school for unruly boys whom the regular US education system couldn’t handle. Huang’s academic abilities meant that he was put in a class with boys a year older. If you were designing a formula to make a child a target for bullying, you couldn’t do much better. On his first night, Huang’s roommate pulled up his shirt to show him the scars he had accumulated from knife wounds. The newcomer, who stayed at school during the holidays because he had nowhere else to go, was given the job of cleaning the toilets.

This might sound like a deprivation narrative. Huang doesn’t tell it that way. He taught his roommate to read, and his roommate taught him to do push-ups – a hundred a day. The bullies stopped trying to tip him off the rope bridge he had to cross on the way to school. Huang says it toughened him up and, according to Witt, in a commencement speech in 2020 he ‘said that his time at the school was one of the best things ever to happen to him’. After two years in Kentucky, Huang moved to Oregon, where his parents had settled. He went to school and university and married there, before starting his career at the Silicon Valley microchip design company AMD. Many promotions and one job move later, he met Malachowsky and Priem through LSI.

The trio’s new venture was far from an overnight success. There were at least 35 companies competing to build a specialised chip for video games, and it was evident that most of them were going to fail. When Nvidia’s first chip, the NV1, bombed, it looked as if their company was going to be one of them. ‘We missed everything,’ Huang later said. ‘Every single decision we made was wrong.’ He laid off most of the company’s employees and bet the shop on the successful design of their next chip, the NV3. (The NV2 was cancelled before launch.) Rather than build the chip the traditional way – they couldn’t do that, because they would have run out of money before it was finished – they used an emulator, a machine designed to mimic chip designs in software rather than silicon, to test it virtually. When the first real NV3 chip arrived, there was a crucial test. If even one of the 3.5 million transistors on the chip was flawed, it would be dead on arrival and Nvidia would disappear. It wasn’t and it didn’t. ‘To this day we are the largest user of emulators in the world,’ Huang says.

By this point, in 1997, Huang had made two big bets: one on video games’ insatiable demand for better graphics, and one on the emulator. Those successful bets kept Nvidia alive, and growing. He would make three more. The first was on a type of computing known as parallel processing. A traditional computer chip, such as the one inside the laptop I’m using, runs with a Central Processing Unit, a CPU, which works through computations in sequence. As chips have grown in power, the length and complexity of the computations have too. But the transistors on those chips had become so small that they were starting to run up against the laws of physics, putting a ceiling on how much faster a single sequential processor could get.

Parallel processing instead performs calculations not in sequence, but simultaneously. Rather than working through one huge calculation, it works through lots of small calculations at the same time. On YouTube, you can find the MythBusters, an excitable duo of American science-explainers, demonstrating the difference at an Nvidia conference in 2008 (Huang commissioned the demo). The MythBusters set up a robot gun to fire paintballs at a canvas. The first run works like a CPU: the robot fires a rapid sequence of blue paintballs, adjusting its aim after every shot to paint a smiley face. It takes about thirty seconds. Then they set up another robot gun, this time shooting 1100 paintballs simultaneously. The guns cough and in a fraction of a second – eighty milliseconds, to be precise – on the canvas appears a paintball copy of the Mona Lisa. The instant Mona Lisa is a visual metaphor for the way the new chips worked: instead of huge calculations done in sequence, a vast number of short calculations done at the same time. Parallel processing.
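
To make the paintball metaphor concrete in code, here is a minimal sketch in Python – my own illustration, not something from either book. The ‘CPU’ version works through a list of small, independent calculations one after another; the ‘parallel’ version hands the same calculations out to a pool of workers all at once. A GPU does the same thing with thousands of lightweight threads rather than a handful of processes, but the principle is identical.

    # Illustration only: many small, independent calculations, done one
    # at a time (CPU-style) versus all at once (GPU-style).
    from multiprocessing import Pool

    def place_paintball(pixel):
        """One small, self-contained calculation - a stand-in for
        colouring a single 'pixel' of the picture."""
        x, y = pixel
        return (x, y, (x * 31 + y * 17) % 256)  # arbitrary per-pixel arithmetic

    if __name__ == "__main__":
        pixels = [(x, y) for x in range(100) for y in range(100)]

        # Sequential, CPU-style: one calculation after another.
        one_at_a_time = [place_paintball(p) for p in pixels]

        # Parallel: the same independent calculations handed out all at once.
        # A GPU does this with thousands of threads rather than a few processes.
        with Pool() as pool:
            all_at_once = pool.map(place_paintball, pixels)

        assert all_at_once == one_at_a_time  # same picture, different schedule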

The video games industry loved the new chips, and demanded an update every six months, to render the ever more complex visual environments inside their games. Keeping up with that appetite was demanding and expensive, but it took Nvidia to a leading position in the chip industry. In The Nvidia Way, Tae Kim describes Huang’s relentlessness in keeping ahead of the competition. ‘The number one feature of any product is the schedule,’ Huang said, marking a difference between engineering elegance and Nvidia’s emphasis on getting-it-done, getting-it-shipped. The company’s chips were by this point so powerful that it began to seem bizarre that their only use was in allowing people to go online and shoot one another in increasingly complex and well-rendered sci-fi settings. At this point, Huang made another of his bets. He set Nvidia to develop a new kind of chip architecture, which he gave the deliberately obscure name CUDA, an acronym of Compute Unified Device Architecture.

The term doesn’t really mean anything, which was part of the point – Huang didn’t want the competition to realise what Nvidia was doing. Its engineers were developing a new kind of architecture for a new kind of customer: ‘doctors, astronomers, geologists and other scientists – highly educated academic specialists who were skilled in specific domains, but who maybe didn’t know how to code at all’. In Witt’s metaphor, the CPU is like a kitchen knife, ‘a beautiful multipurpose tool that can make any kind of cut. It can julienne, batonnet, chop, slice, dice, or hack ... but the knife can only ever chop one vegetable at a time.’ Nvidia’s processor, which the company was now calling a GPU, or Graphics Processing Unit, was more like a food processor: ‘loud, indelicate and power-intensive. It cannot chiffonade tarragon or score a crosshatch on a tube of calamari. But to mince a bunch of vegetables quickly, the GPU is the tool.’ The CUDA architecture took this tool and repurposed it for a new audience. In effect, gamers were paying for the chip development costs of the scientific users who Huang believed would show up. It was a version of ‘if you build it, they will come.’
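
What that repurposing looks like from the scientist’s side can be suggested with a small, hypothetical sketch – mine, not Witt’s – using CuPy, an open-source library built on top of CUDA that deliberately mirrors the familiar NumPy array interface. The researcher writes ordinary array arithmetic; CUDA quietly spreads the work across the GPU’s many cores.

    # Hypothetical illustration using CuPy, a NumPy-compatible library built
    # on CUDA (my example; not discussed in the books under review).
    # Requires an Nvidia GPU and the cupy package.
    import numpy as np
    import cupy as cp

    # A million independent "measurements" to process, first on the CPU...
    readings_cpu = np.random.rand(1_000_000)
    result_cpu = np.sqrt(readings_cpu) * 2.5 + 1.0

    # ...then the identical arithmetic on the GPU. No kernels, no threads,
    # no graphics: the parallelism hides behind the same array interface.
    readings_gpu = cp.asarray(readings_cpu)
    result_gpu = cp.sqrt(readings_gpu) * 2.5 + 1.0

    # Same answers; only the hardware doing the work differs.
    assert np.allclose(result_cpu, cp.asnumpy(result_gpu))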

They didn’t, or not in sufficient numbers to make CUDA a success. Demand failed to surge, and so did the company’s share price. There are many examples in the history of technology of an invention waiting for a ‘killer app’ – an application or function that suddenly gives the invention an irresistibly compelling purpose. The killer app for the PC, for instance, was the spreadsheet: overnight, a new technology that allowed a user to experiment with numbers and parameters and see what would happen if you tweaked a and b with the intention of arriving at z. It’s no exaggeration to say that spreadsheets remade capitalism in the 1980s by making it easy to run multiple alternative business scenarios and continue until you’d come up with something that made sense. Nvidia’s amazing new chips and their CUDA architecture were waiting for a killer app.

Salvation arrived in the form of an unfashionable branch of computing called neural networks. This was a field dedicated to the idea that computers could copy the structure of the brain by creating artificial neurons and connecting them in networks. Early neural networks were trained on labelled datasets, where the correct answer for each example – each image, say – was known in advance. The network made a prediction, compared it with the correct label, and adjusted itself using an algorithm called backpropagation (a toy version of that loop is sketched in code after the quotation below). The major breakthrough came when researchers learned how to train networks with many layers of artificial neurons – ‘deep learning’. These deep networks could detect increasingly complex patterns in data, which led to dramatic progress in image recognition and many other areas. A computer scientist at Google, for instance,

fed their deep learning net a random sampling of ten million still images taken from YouTube and let it decide which patterns occurred frequently enough for the net to ‘remember’ them. The model was exposed to so many videos of cats that it independently developed a composite image of a cat’s face without human intervention. From then on, it could reliably identify cats in images that were not part of its training set.
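
Here is a rough sketch of what ‘made a prediction, compared it with the correct label, and adjusted itself’ means in practice – again my own toy illustration in Python, not code from any of these books. A tiny two-layer network learns the XOR function from four labelled examples by repeatedly nudging its weights in whichever direction reduces the error, which is all backpropagation is.

    # Toy illustration of training on a labelled dataset with backpropagation:
    # a tiny two-layer network learns XOR (my sketch, not from the books).
    import numpy as np

    rng = np.random.default_rng(0)

    # Labelled dataset: four inputs, each with a known correct answer.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Two layers of artificial "neurons"; the weights start out random.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0  # learning rate: how big a nudge each mistake produces
    for step in range(10_000):
        # Forward pass: the network makes a prediction.
        hidden = sigmoid(X @ W1 + b1)
        pred = sigmoid(hidden @ W2 + b2)

        # Compare with the correct labels and propagate the error backwards,
        # working out how much each weight contributed to the mistake...
        d_pred = (pred - y) * pred * (1 - pred)
        d_hidden = (d_pred @ W2.T) * hidden * (1 - hidden)

        # ...then adjust every weight a little in the direction that helps.
        W2 -= lr * (hidden.T @ d_pred)
        b2 -= lr * d_pred.sum(axis=0)
        W1 -= lr * (X.T @ d_hidden)
        b1 -= lr * d_hidden.sum(axis=0)

    print(pred.round(2).ravel())  # should end up close to the labels: 0 1 1 0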

Three things came together: algorithms, datasets and hardware. Computer scientists had developed the first two. It was Nvidia’s chips that brought the third – because, as it happened, the parallel processing of these chips was perfectly adapted for the new El Dorado of deep learning. Multiple small calculations happening at the same time is exactly what running a neural net consists of. These neural nets are the foundational technology for what was once called machine learning and is now generally referred to as AI. (Machine learning is a more accurate and helpful term, in my view, but that’s a topic for another day.)
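
The fit is easy to show – one last small sketch of my own. One layer of a neural net turns an input vector into an output vector, and every output neuron’s value is its own little dot product, with no dependence on any of the others; the calculations can be done in any order, or all at once, which is precisely what a GPU’s thousands of cores are built to do.

    # Sketch: one neural-net layer is just many small, independent calculations.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=256)          # activations flowing into the layer
    W = rng.normal(size=(512, 256))   # weights: 512 neurons, 256 inputs each

    # Each output neuron is its own little dot product. No iteration of this
    # loop depends on any other, so the order in which they run is irrelevant...
    one_by_one = np.array([W[i] @ x for i in range(512)])

    # ...which is why the whole layer can be computed "all at once" as a single
    # matrix-vector product - the operation a GPU spreads across its many cores.
    all_at_once = W @ x

    assert np.allclose(one_by_one, all_at_once)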

The head scientist at Nvidia was a man called David Kirk. As he told Witt,

‘with parallel computing, it really took us a fair amount of convincing to talk Jensen into it ... Same with CUDA. We really had to make the business case.’ But with AI, Huang experienced a Damascene epiphany. ‘He got it immediately, before anybody ... He was the first to see what it could be. He really was the first.’

Huang reasoned that if neural nets could solve visual learning, they had the potential to solve everything else too. He sent out a company-wide email one Friday saying that Nvidia were no longer a graphics company. One colleague recalled: ‘By Monday morning, we were an AI company. Literally, it was that fast.’ That was in 2014. It was the fifth and most successful of Huang’s five bets, and the one that has made Nvidia the planet-bestriding colossus it is today.

Soon after this, if you were a nerd or nerd-adjacent, you started to hear about AI. Talking to people who knew more than I did about technology and economics, I would often ask questions along the lines of ‘What’s next?’ or ‘What’s the next big thing?’ and increasingly the answer that came back would involve AI. I have a particular memory of talking to an astute tech investor a few days after the Brexit vote, and asking him what he thought was going to happen. I’ve forgotten the details of his answer – we were drinking martinis – but the gist was AI advances in China. What struck me most was that I was asking about Brexit, but he regarded AI as so much more important that it didn’t even occur to him that was what I meant....

....MUCH MORE