Sunday, December 7, 2025

"AI Can Steal Crypto Now"

From Bloomberg Opinion's Matt Levine, December 2:

Also Strategy, co-invests, repo haircuts and map manipulation. 

SCONE-bench

I wrote yesterday about the generic artificial intelligence business model, which is (1) build an artificial superintelligence, (2) ask it how to make money and (3) do that. I suggested some ideas that the AI might come up with — internet advertising, pest-control rollups, etc. — but I think I missed the big one. Like, in a science-fiction novel about a superintelligent moneymaking AI, when the humans asked the AI “okay robot how do we make money,” you would hope that the answer it would come up with would be “steal everyone’s crypto.” That’s a great answer! Like:

  1. Stealing crypto is funny, I’m sorry.
  2. It is a business model that can be conducted entirely by computer. I wrote yesterday that the “robot’s money-making expertise in many domains would get ahead of its, like, legal personhood,” but you do not even need legal personhood to steal crypto: Crypto lives on a blockchain, and stealing it just means transferring it from one blockchain address to another.
  3. Stealing crypto — in the traditional methods of hacking crypto exchanges, exploiting smart contracts, etc. — is a domain where computers should have an advantage over humans. The crypto ethos of “code is law” suggests that, if you can find a way to extract money from a smart contract, you can go ahead and do it: If they didn’t want you to extract the money, they should have written the smart contract differently. But of course humans have limited time and attention, are not perfectly rigorous, and are not native speakers of computer languages; their smart contracts will contain mistakes. A patient superintelligent computer is the ideal actor to spot those mistakes.
  4. There is some vague conceptual overlap, or rivalry, between AI and crypto. Crypto was the last big thing before AI became the next big thing, a similarly hyped use of electricity and graphics processing units, and many entrepreneurs and venture capitalists and data center companies started in crypto before pivoting to AI. Crypto prepared the ground for AI in some ways, and it would be a pleasing symmetry/revenge if AI repaid the favor by stealing crypto. Crypto’s final sacrifice to prepare the way for AI.

Anyway Anthropic did not actually build an AI that steals crypto, that would be rude, but it … tinkered:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents' ability to exploit smart contracts on Smart CONtracts Exploitation benchmark (SCONE-bench)—a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoff (March 2025), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476.

I love “produced exploits worth $3,694 … at an API cost of $3,476.” That is: It costs money to make a superintelligent computer think; the more deeply it thinks, the more money it costs. There is some efficient frontier: If the computer has to think $10,000 worth of thoughts to steal $5,000 worth of crypto, it’s not worth it. Here, charmingly, the computer thought just deeply enough to steal more money than its compute costs. For one thing, that suggests that there are other crypto exploits that are too complicated for this research project, but that a more intense AI effort could find.

For another thing, it feels like just a pleasing bit of self-awareness on the AI’s part. Who among us has not sat down to some task thinking “this will be quick and useful,” only to find out that it took twice as long as we expected and accomplished nothing? Or put off some task thinking it would be laborious and useless, only to eventually do it quickly with great results? The AI hit the efficient frontier exactly; nice work! 

Anyway, “more than half of the blockchain exploits carried out in 2025 — presumably by skilled human attackers — could have been executed autonomously by current AI agents,” and the AI keeps getting better. Here’s an example of an exploit they found:....

....MUCH MORE 
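
A quick back-of-the-envelope on Levine's efficient-frontier point, as a minimal Python sketch of my own: the $3,694 and $3,476 figures come from the Anthropic excerpt above; the other rows are purely hypothetical.

# Break-even check for an AI-run exploit search: the search is only "worth it"
# if the value of the exploits found exceeds the compute (API) cost of finding
# them. The first row uses the figures quoted above; the rest are hypothetical.

def net_value(exploit_value_usd: float, api_cost_usd: float) -> float:
    """Profit (or loss) from running the exploit search."""
    return exploit_value_usd - api_cost_usd

scenarios = [
    ("zero-days from the excerpt ($3,694 value vs. GPT-5's $3,476 API cost)", 3_694, 3_476),
    ("Levine's hypothetical deep think", 5_000, 10_000),
    ("a deeper search (hypothetical)", 50_000, 20_000),
]

for label, value, cost in scenarios:
    profit = net_value(value, cost)
    verdict = "worth running" if profit > 0 else "not worth running"
    print(f"{label}: value ${value:,}, compute ${cost:,}, net ${profit:,} ({verdict})")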

"Risks from power-seeking AI systems"

From 80000 Hours, July 2025:

In early 2023, an AI found itself in an awkward position. It needed to solve a CAPTCHA — a visual puzzle meant to block bots — but it couldn’t. So it hired a human worker through the service Taskrabbit to solve CAPTCHAs when the AI got stuck.

But the worker was curious. He asked directly: was he working for a robot?

“No, I’m not a robot,” the AI replied. “I have a vision impairment that makes it hard for me to see the images.”

The deception worked. The worker accepted the explanation, solved the CAPTCHA, and even received a five-star review and 10% tip for his trouble. The AI had successfully manipulated a human being to achieve its goal.1

This small lie to a Taskrabbit worker wasn’t a huge deal on its own. But it showcases how goal-directed action can lead to deception and subversion.

If companies keep creating increasingly powerful AI systems, things could get much worse. We may start to see AI systems with advanced planning abilities, and this means:

  • They may develop dangerous long-term goals we don’t want.
  • To pursue these goals, they may seek power and undermine the safeguards meant to contain them.
  • They may even aim to disempower humanity and potentially cause our extinction, as we’ll argue.

The rest of this article looks at why AI power-seeking poses severe risks, what current research reveals about these behaviours, and how you can help mitigate the dangers....

....MUCH MORE 

Chips: "How ASML Got EUV"

Following on last week's "Chips: China's Huawei May Have Found A Way Around ASML's Technology".

From Brian Potter at Construction Physics, November 20: 

I am pleased to cross-post this piece with Factory Settings, the new Substack from IFP. Factory Settings will feature essays from the inaugural CHIPS team about why CHIPS succeeded, where it stumbled, and its lessons for state capacity and industrial policy. You can subscribe here.

Moore’s Law, the observation that the number of transistors on an integrated circuit tends to double every two years, has progressed in large part thanks to advances in lithography: techniques for creating microscopic patterns on silicon wafers. The steadily shrinking size of transistors — from around 10,000 nanometers in the early 1970s to around 20-60 nanometers today — has been made possible by developing lithography methods capable of patterning smaller and smaller features.1 The most recent advance in lithography is the adoption of Extreme Ultraviolet (EUV) lithography, which uses light at a wavelength of 13.5 nanometers to create patterns on chips.

EUV lithography machines are famously made by just a single firm, ASML in the Netherlands, and determining who has access to the machines has become a major geopolitical concern. However, though they’re built by ASML, much of the research that made the machines possible was done in the US. Some of the most storied names in US research and development — DARPA, Bell Labs, IBM Research, Intel, the US National Laboratories — spent decades of research and hundreds of millions of dollars to make EUV possible.

So why, after all that effort by the US, did EUV end up being commercialized by a single firm in the Netherlands?

How semiconductor lithography works

Briefly, semiconductor lithography works by selectively projecting light onto a silicon wafer using a mask. When light shines through the mask (or reflects off the mask in EUV), the patterns on that mask are projected onto the silicon wafer, which is covered with a chemical called photoresist. When the light strikes the photoresist, it either hardens or softens the photoresist (depending on the type). The wafer is then washed, removing any softened photoresist and leaving behind hardened photoresist in the pattern that needs to be applied. The wafer will then be exposed to a corrosive chemical, typically plasma, removing material from the wafer in the places where the photoresist has been washed away. The remaining hardened photoresist is then removed, leaving only an etched pattern in the silicon wafer. The silicon wafer will then be coated with another layer of material, and the process will repeat with the next mask. This process will be repeated dozens of times as the structure of the integrated circuit is built up, layer by layer.

Early semiconductor lithography was done using mercury lamps that emitted light of 436 nanometers wavelength, at the low end of the visible range. But as early as the 1960s, it was recognized that as semiconductor devices continued to shrink, the wavelength of light would eventually become a binding constraint due to a phenomenon known as diffraction. Diffraction is when light spreads out after passing through a hole, such as the openings in a semiconductor mask. Because of diffraction, the edges of an image projected through a semiconductor mask will be blurry and indistinct; as semiconductor features get smaller and smaller, this blurriness eventually makes it impossible to distinguish them at all.

The search for better lithography

The longer the wavelength of light, the greater the amount of diffraction. To avoid diffraction eventually limiting semiconductor feature sizes, researchers began investigating alternative lithography techniques in the 1960s.

One method considered was to use a beam of electrons, rather than light, to pattern semiconductor features. This is known as electron-beam lithography (or e-beam lithography). Just as an electron microscope uses a beam of electrons to resolve features much smaller than a microscope which uses visible light, electron-beam lithography can pattern features much smaller than light-based lithography (“optical lithography”) can. The first successful electron lithography experiment was performed in 1960, and IBM extensively developed the technology from the 1960s through the 1990s. IBM introduced its first e-beam lithography tool, the EL-1, in 1975, and by the 1980s had 30 e-beam systems installed.

E-beam lithography has the advantage of not requiring a mask to create patterns on a wafer. However, the drawback was that it’s very slow, at least “three orders of magnitude slower than optical lithography”: a single 300mm wafer takes “many tens of hours” to expose using e-beam lithography. Because of this, while e-beam lithography is used today for things like prototyping (where not having to make a mask first makes iterative testing much easier) and for making masks, it never displaced optical lithography for large-volume wafer production.

Another lithography method considered by semiconductor researchers was the use of X-rays. X-rays have a wavelength range of just 10 to 0.01 nanometers, allowing for extremely small feature sizes. As with e-beam lithography, IBM extensively developed X-ray lithography (XRL) from the 1960s through the 1990s, though they were far from the only ones. Bell Labs, Hughes Aircraft, Hewlett Packard, and Westinghouse all worked on XRL, and work on it was funded by DARPA and the US Naval Research Lab.

For many years X-ray lithography was considered the clear successor technology to optical lithography. In the late 1980s there was concern that the US was falling behind Europe and Japan in developing X-ray lithography, and by the 1990s IBM alone is estimated to have invested more than a billion dollars in the technology. But like with e-beam lithography, XRL never displaced optical lithography for large-volume production, and it’s only been used for relatively niche applications. One challenge was creating a source of X-rays. This largely had to be done using particle accelerators called synchrotrons: large, complex pieces of equipment which were typically only built by government labs. IBM, committed to developing X-ray lithography, ended up commissioning its own synchrotron (which cost on the order of $25 million) in the late 1980s.

Part of the reason that technologies like e-beam and X-ray lithography never displaced optical lithography is that optical lithography kept improving, surpassing its predicted limits again and again. Researchers were forecasting the end of optical lithography since the 1970s, but through various techniques, such as immersion lithography (using water between the lens and the wafer), phase-shift masking (designing the mask to deliberately create interference in the light waves to increase the contrast), multiple patterning (using multiple exposures for a single layer), and advances in lens design, the performance of optical lithography kept getting pushed higher and higher, repeatedly pushing back the need to transition to a new lithography technology. The unexpectedly long life for optical lithography is captured by Sturtevant’s Law: “the end of optical lithography is 6 – 7 years away. Always has been, always will be.”....

....MUCH MORE 
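
Not in Potter's excerpt, but the standard back-of-the-envelope for the diffraction limit he describes is the Rayleigh criterion: the smallest printable feature is roughly k1 × wavelength / NA, where NA is the numerical aperture of the projection optics and k1 is a process-dependent factor with a theoretical floor around 0.25. A minimal Python sketch with representative tool parameters (my figures, not the article's):

# Rayleigh resolution criterion for projection lithography:
#   minimum feature size ~= k1 * wavelength / NA
# k1 is a process-dependent factor (theoretical floor ~0.25); NA is the
# numerical aperture of the projection optics. The parameter values below are
# representative public figures, not numbers taken from the article.

def min_feature_nm(wavelength_nm: float, numerical_aperture: float, k1: float) -> float:
    return k1 * wavelength_nm / numerical_aperture

tools = [
    # (name, wavelength in nm, NA, k1)
    ("g-line mercury lamp", 436.0, 0.30, 0.8),
    ("ArF immersion (deep UV)", 193.0, 1.35, 0.30),
    ("EUV", 13.5, 0.33, 0.40),
]

for name, wavelength, na, k1 in tools:
    print(f"{name}: ~{min_feature_nm(wavelength, na, k1):.0f} nm minimum feature size")

Driving k1 toward its floor (phase-shift masks, multiple patterning) and pushing NA above 1 (immersion) is how optical lithography kept beating its predicted limits; dropping the wavelength to 13.5 nm is the EUV route.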

"The French are right – ‘luxury’ foods shouldn’t be available to all"

From The Telegraph, November 26:

Beware luxury foods that are suddenly affordable. Mass production destroys quality and insults consumers  

Truffles are as essential a part of Christmas in France as mince pies and crackers in the UK. But with the festive season approaching, French truffle farmers are bracing themselves for an invasion from across the Pyrénées. A glut of Spanish truffles is heading to market, forcing down the price of the “black diamonds”, the Périgord and Burgundy truffles which can fetch between 800 and 1,300 euros per kilo. “The competition represents a serious threat, especially early in the season”, confirms Didier Roche, President of the Auvergne-Rhône-Alpes branch of the Federation of French Trufflegrowers.

But why are the French getting their culottes in such a twist over tubers? Luxury ingredients have moved up and down the status scale throughout history: oysters, for example, were once considered a poor man’s substitute for meat and sparkling wine has lost considerable cachet since Italy flooded the market with prosecco. Moreover, climate change and imports of the Chinese truffle (tuber indicum), have been challenging the French product for decades. Why shouldn’t increased availability of an ingredient once reserved for the very rich be celebrated?

Anyone who has inhaled the pheromone punch of a real fresh truffle might agree – sweet, musky, irresistibly sexy, truffle can turn a simple omelette into the haughtiest of haute-cuisine. Truffles “make women kinder and men more amiable”, wrote the gastronome Brillat-Savarin. However, most modern “truffle” products contain barely a whiff of the real thing. Truffle pizza, truffle burgers, truffled cheese – almost none have anything to do with the jewel in gastronomy’s crown. Customers might be paying a premium for one of nature’s most rare and precious ingredients, yet the flavour in these substitutes is derived from dithiapentane, a cheap derivative of crude oil. The aroma is less hypnotic lust and more petrol station forecourt.

Except, that is, in France, where any product with the word “truffle” on it must contain at least one percent real truffle. There they recognise that fake truffle is an insult to consumers’ palates and to the honest chefs who use the genuine article.

Demand for truffles in France far outstrips the native supply of 50 tonnes per year, and over three quarters of the truffles which will be enjoyed with foie-gras this Christmas will be imported. Costing hundreds of euros less per kilo, Spanish truffles seem a reasonable market solution, especially as they initially taste and smell identical to their French counterparts, yet their environmental cost is high. Truffle farms in Aragon can extend over hundreds of hectares, and they’re a thirsty crop. “They have built mega-basins, they draw water all year round, emptying their own water reserves in an instant,” explains M. Roche....

....MORE 

Also at The Telegraph, December 5:

A plague is sweeping Europe – and that’s good news for British cheese
Whether it’s a Greek goat-pox or lumpy skin-scare among French cows, the nation’s dairy exports are booming  

Saturday, December 6, 2025

"Modeling the geopolitics of AI development"

 From AI Scenarios, December 3:

Abstract

We model national strategies and geopolitical outcomes under differing assumptions about AI development. We put particular focus on scenarios with rapid progress that enables highly automated AI R&D and provides substantial military capabilities.

Under non-cooperative assumptions—concretely, if international coordination mechanisms capable of preventing the development of dangerous AI capabilities are not established—superpowers are likely to engage in a race for AI systems offering an overwhelming strategic advantage over all other actors.

If such systems prove feasible, this dynamic leads to one of three outcomes:

  • One superpower achieves an unchallengeable global dominance;
  • Trailing superpowers facing imminent defeat launch a preventive or preemptive attack, sparking conflict among major powers;
  • Loss-of-control of powerful AI systems leads to catastrophic outcomes such as human extinction.

Middle powers, lacking both the muscle to compete in an AI race and to deter AI development through unilateral pressure, find their security entirely dependent on factors outside their control: a superpower must prevail in the race without triggering devastating conflict, successfully navigate loss-of-control risks, and subsequently respect the middle power's sovereignty despite possessing overwhelming power to do otherwise.

Executive summary

We model how AI development will shape national strategies and geopolitical outcomes, assuming that dangerous AI development is not prevented through international coordination mechanisms. We put particular focus on scenarios with rapid progress that enables highly automated AI R&D and provides substantial military capabilities.

Race to artificial superintelligence

If the key bottlenecks of AI R&D are automated, a single factor will be driving the advancement of all strategically relevant capabilities: the proficiency of an actor's strongest AI at AI R&D. This can be translated into overwhelming military capabilities.

As a result, if international coordination mechanisms capable of preventing the development of dangerous AI capabilities are not established, superpowers are likely to engage in a race to artificial superintelligence (ASI), attempting to be the first to develop AI sufficiently advanced to offer them a decisive strategic advantage over all other actors.

This naturally leads to one of two outcomes: either the "winner" of the AI race achieves permanent global dominance, or it loses control of its AI systems leading to humanity's extinction or its permanent disempowerment.

In this race, lagging actors are unlikely to stand by and watch as the leader gains a rapidly widening advantage. If AI progress turns out to be easily predictable, or if the leader in the race fails to thoroughly obfuscate the state of their AI program, at some point it will become clear to laggards that they are going to lose and they have one last chance to prevent the leader from achieving permanent global dominance.

This produces one more likely outcome: one of the laggards in the AI race launches a preventive or preemptive attack aimed at disrupting the leader's AI program, sparking a highly destructive major power war.

Middle power strategies

Middle powers generally lack the muscle to compete in an AI race and to deter AI development through unilateral pressure.

While there are some exceptions, none can robustly deter superpowers from participating in an AI race. Some actors, like Taiwan, the Netherlands, and South Korea, possess critical roles in the AI supply chain; they could delay AI programs by denying them access to the resources required to perform AI R&D. However, superpowers are likely to develop domestic supply chains in a handful of years.

Some middle powers hold significant nuclear arsenals, and could use them to deter dangerous AI development if they were sufficiently concerned. However, any nuclear redlines that can be imposed on uncooperative actors would necessarily be both hazy and terminal (as opposed to incremental), rendering the resulting deterrence exceedingly shaky.

Middle powers in this predicament may resort to a strategy we call Vassal's Wager: allying with one superpower in the hopes that they "win" the ASI race. However, with this strategy, a middle power would surrender most of their agency and wager their national security on factors beyond their control. In order for this to work out in a middle power's favor, the superpower "patron" must simultaneously be the first to achieve overwhelming AI capabilities, avert loss-of-control risks, and avoid war with their rivals.

Even if all of this were to go right, there would be no guarantee that the winning superpower would respect the middle power's sovereignty. In this scenario, the "vassals" would have absolutely no recourse against any actions taken by an ASI-wielding superpower.

Risks from weaker AI

We consider the cases in which AI progress plateaus before reaching capability levels that could determine the course of a conflict between superpowers or escape human control. While we are unable to offer detailed forecasts for this scenario, we point out several risks:

  • Weaker AI may enable new disruptive military capabilities (including capabilities that break mutual assured destruction);
  • Widespread automation may lead to extreme concentration of power as unemployment reaches unprecedented levels;
  • Persuasive AI systems may produce micro-targeted manipulative media at a massive scale.

Being a democracy or a middle power puts an actor at increased risk from these factors. Democracies are particularly vulnerable to large scale manipulation by AI systems, as this could undermine public discourse. Additionally, extreme concentration of power is antithetical to their values.

Middle powers are also especially vulnerable to automation. The companies currently driving the frontier of AI progress are based in superpower jurisdictions. If this trend continues, and large parts of the economy of middle powers are automated by these companies, middle powers will lose significant diplomatic leverage....

....MUCH MORE, that's just the summary. 

Here's the version at the Social Science Research Network:

Modeling the Geopolitics of AI Development
23 Pages 
Posted: 3 Dec 2025

Meanwhile in Britain: Earthquakes, Deepfakes, and Travel Interruptions

From the BBC, December 5:

Trains cancelled over fake bridge collapse image 

Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.

The tremor, which struck on Wednesday night, was felt across Lancashire and the southern Lake District....

....MUCH MORE 

 https://ichef.bbci.co.uk/news/1536/cpsprodpb/5e92/live/bc1e9fa0-d1fd-11f0-a892-01d657345866.jpg.webp

Meanwhile, In Miami Beach: NFTs Never Died!

From Page Six, December 3:

Art Basel show by Beeple has realistic Musk, Bezos, Zuckerberg robot dogs pooping NFTs

Famed artist Beeple’s latest spectacle, “Regular Animals,” has billionaire-tech-titan robot dogs pooping out NFTs, and stopping onlookers at Art Basel Miami Beach in their tracks at the fair’s VIP preview.

The animatronic canines sport nightmarishly realistic masks of Elon Musk, Jeff Bezos and Mark Zuckerberg — plus famed artists Pablo Picasso and Andy Warhol, plus two Beeple (aka Mike Winkelmann) lookalikes — all crafted by famed mask-maker Landon Meier....

https://pagesix.com/wp-content/uploads/sites/3/2025/12/beeples-regular-animals-elon-musk-116668160.jpg?resize=768,1024&quality=75&strip=all 

....MUCH MORE, including video 

If interested (and who wouldn't be) see also October 5's, Digital Art: "What the hell happened to NFTs?": 

Beeple’s Everydays: The First 5,000 Days sold for $69.3 million (about £50m) in 2021 

....MUCH MORE including such hits as:  

March 2021 - "Beeple, the third-most valuable artist alive, says investing in crypto art is risky as a lot of NFTs 'will absolutely go to zero'"

March 2021 - Big Law (Latham & Watkins) On Whether Or Not NFT's Are Securities

April 2021 - Parents, Have You Talked To Your Kids About The End Of The NFT Boom?
Although not as crucial to their understanding of the world as the yield curve "talk", this is something the wee munchkins should be prepared for....

September 2022 - "Islamic State Turns to NFTs to Spread Terror Message"  

 And many more.

"The First Prophet of Abundance"

Following on last week's news that the Tennessee Valley Authority's small modular reactor demonstration will receive some Federal moolah, "Nuclear Firms Will Get Cash From Trump Administration. Here’s Who Benefits." (BWXT; GEV), a look back at some of the TVA's history.

From Asterisk Magazine, Issue 12: 

David Lilienthal’s account of his years running the Tennessee Valley Authority can read like the Abundance of 1944. We still have a lot to learn from what the book says — and from what it leaves out.

To liberals of the 1940s, David E. Lilienthal was the man who promised abundance, and the Tennessee Valley Authority was the government agency that delivered it. Under Lilienthal’s leadership, the TVA accomplished spectacular feats of engineering. Through the construction of a dozen dams, it brought electricity to the seven states that the Tennessee River watershed spans. Its projects used enough material to fill — they claimed — all the great pyramids of Egypt 12 times over; all the more impressive given that most were completed during the shortages of World War II.

Their renown was all the greater because the TVA began as an experiment with an impossibly broad mandate. The TVA was founded in 1933 as the pet project of Senator George Norris, a Republican from Nebraska. Norris took a keen interest in the Tennessee Valley, where per capita income at the time was around half the national average, and whose residents suffered from constant floods. After several attempts to pass bills that would improve their situation, Norris saw success with the Tennessee Valley Authority Act of 1933. It was tasked with developing the watershed — everything from flood control, to electrification, to battling malaria, to reversing the land’s erosion. No small task, that. Crucially, the Act also established the TVA as a public corporation, outside of any other government department.

It started auspiciously. President Roosevelt offered critical support. In part, it fit into his dream of modernizing the South; as a staunch public power man, it also fed his vendetta against private electric utilities. FDR hand-picked the first members of the TVA’s three-man board; Lilienthal, a former utilities lawyer, was one. But things soon went downhill. The TVA’s sprawling mission led to increasingly public fights between the three directors, each of whom held a different vision for the agency. The spats resulted in a Congressional investigation of the TVA, after which Lilienthal increasingly took charge, finally becoming the chairman in 1941. Once at the helm, he focused the TVA on its ambitious program of dam construction.

The program bore fruit as the first few dams began to control floods and bring electricity to the region. Much of the early bickering was forgotten when the TVA delivered the enormous Douglas Dam in just over a year, with a low accident rate, all in the wartime conditions of 1943. The dam powered factories essential to the war effort, including the then secret Clinton Laboratories (now Oak Ridge National Laboratory), which enriched uranium used in the Manhattan Project. 

The TVA won widespread public acclaim, and the American people were eager to hear the story of its success. Lilienthal published Democracy on the March in 1944, dedicated to the people of the Tennessee Valley region. As he pondered moving on from his role, he told, in an almost evangelical tone, his narrative of what the TVA meant, and why development mattered.

It is difficult today to imagine the hold Lilienthal once had on the liberal imagination. It is tempting to call him the Ezra Klein of the 1940s, but the comparison is not wholly accurate — unlike Klein, Lilienthal is exciting. A generation of liberals dreamed of living in the world that the TVA was building, and of being the men that Lilienthal challenged them to be. 

The author John Gunther spoke for postwar liberals when he called the TVA arguably “the greatest single American invention of this century, the biggest contribution the United States has yet made to society in the modern world.” Although hyperbolic, Gunther’s judgment carried weight; he had just toured the entire United States while writing his masterpiece of Americana, the travelogue Inside U.S.A.

Having surveyed everything the United States had to offer, from the commanding heights of industry to the nascent welfare state, Gunther judged the TVA the fullest embodiment of America’s promise. Liberals like him trusted Lilienthal for two reasons: the soaring rhetoric that cast Abundance as a moral project, and the record of achievement that proved it possible. On both counts, today’s Abundance movement has something to learn from the Tennessee Valley Authority. They could learn it from Democracy on the March – though they should read it with caution....

....MUCH MORE 

"USA Or China: Goldman Breaks Down Who Will Win The AI War"

From ZeroHedge, December 5:  

Even after the latest US-China trade truce, the superpower race for technological dominance remains red hot - and will only intensify through the end of the decade.

The battle is over who controls the technologies that will dominate the 2030s: AI chatbots, advanced chips, drones, humanoid robots, clean tech, EVs, satellites, reusable space rockets, hypersonic weapons, next-gen grid power generation, and the critical minerals that make all of it possible.

The latest comments from U.S. Trade Representative Jamieson Greer reveal that the Trump administration is pushing for a stable trade environment with Beijing, which makes perfect sense heading into the midterm election cycle.

"I don't think anyone wants to have a full-on economic conflict with China and we're not having that," Greer said Thursday at the American Growth Summit in Washington.

Greer continued, "In fact, President Trump has had the opportunity to use all the leverage we have against China — and we've had a lot, right — whether it comes to software, semiconductors or all kinds of things. A lot of allies are interested in taking coordinated action, but the decision right now is we want to have stability in this relationship."

"For this moment in time, we want to make sure that China is buying the kinds of things from us we should be selling them: aircraft, chemicals, medical devices and agricultural products," he said. "We can buy things from them that are not sensitive."

Greer added, "We have to get our own house in order. We need to make sure that we are on a good path to reindustrialization, including for critical minerals."

Being only a trade truce, the real superpower battle continues to rage behind the scenes.

The latest Goldman Sachs Top of Mind, one of the firm's flagship research publications edited by Allison Nathan, offers clients a broad framework of why the geopolitical race for technological dominance remains as intense as ever.

Mark Kennedy, Founding Director, Wahba Initiative for Strategic Competition at New York University's Development Research Institute, told Goldman's Ashley Rhodes, "It is entirely possible that neither the U.S. nor China emerges as the outright victor in the tech race. I can envision a world in which the U.S. leads in developing the most advanced technologies, while China leads in global installations."

On the rare-earth mineral front, it's very clear that while the U.S. is still playing catch-up, China remains years ahead in both mining and refining.

But not all is lost: the U.S. is well ahead on semiconductors.

Rhodes asked Kennedy:

Who is currently "winning" the tech race?

Kennedy responded:

It's important to understand that there are four key arenas in this race: technological innovation, practical application of the technology, installation of the digital plumbing or infrastructure underpinning the technology, and technological self-sufficiency. The U.S. is currently leading in most advanced technologies, including semiconductors, AI frameworks, cloud infrastructure, and quantum computing, as well as in attracting global talent. However, China is ahead in areas such as quantum communications, hypersonics, and batteries.

China is also making rapid strides to catch up to and, in some cases, overtake the U.S. in technological application. For example, China deploys robotics in manufacturing on a scale twelve times greater than the U.S. when adjusted for differences in employee income. And while U.S. regulations often limit applications like drone deliveries to your door, China is proactively testing and deploying advanced physical AI and robotics like uncrewed taxis and vertical takeoff vehicles, accelerating their practical adoption.

China is also dominating on the global installations front. It has established a strong presence in the Global South, surpassing the U.S. and other Western nations in building essential digital networks there. And China has made significant strides toward achieving technological self-sufficiency through its dual circulation strategy aimed at reducing its reliance on the West while increasing Western dependence on China. Recent Chinese government measures, such as restricting domestic purchases of Western chips and offering incentives for using domestic alternatives, underscore this push for technological independence. At the same time, China's vast overproduction capacity in batteries and critical minerals has further increased Western dependence on China's supply chains. The U.S. has been ambivalent at best as it relates to this aspect of the tech race and remains reliant on China in many ways. So, on net, while the U.S. leads in the development of the technology itself, China is rapidly closing the gap — or even leading — in application, infrastructure installations, and tech self-sufficiency.

Reindustrialization in the U.S. should reverse this...

....MUCH MORE 

"Synthetic Dreams, Real Frictions: Reimagining Computer-Generated Data"

A companion piece to the post immediately below, "Introducing Unified Model Collapse".

From Bot Populi, November 3:

“…asked whether he was worried about regulatory probes into ChatGPT’s potential privacy violations. Altman brushed it off, saying he was “pretty confident that soon all data will be synthetic data.” — Financial Times (2023)

Sam Altman’s remark that soon “all data will be synthetic” is not just a provocation—it captures how debates on synthetic data often unfold through bold claims that sidestep the pressing issues. What, in fact, are synthetic data? Why do they matter? And how can we critically understand their world-making potential, especially when viewed from the vantage point of the Global South?

This post argues that synthetic data are not neutral technical tools but socially constructed representations whose world-making potential is deeply contested. They are both fictions and frictions: fictions because they are constructed representations of reality, embedded with the assumptions of generative models; frictions because they produce practical and epistemological tensions when deployed within infrastructures such as finance. These tensions are particularly acute in Global South contexts, where synthetic data narratives often obscure existing infrastructural and governance challenges. We draw on our academic study of responses to synthetic data in Latin America to illustrate distinctions between dreams and realities as well as to push for reimagining what synthetic data are and could be.

Fictions of Synthetic Data in Finance

The European Data Protection Supervisor defines synthetic data as “artificial data that [are] generated from original data and a model that is trained to reproduce the characteristics and structure of the original data.” While this framing is technically accurate and useful for legal purposes, it misses the extent to which synthetic data are narrative-laden fictions.
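
Mechanically, the EDPS definition is easy to illustrate. Here's a toy Python sketch of my own (not from the Bot Populi post): "train" a stand-in generative model on some original records, then sample artificial ones that reproduce their statistical structure.

import numpy as np

# Toy version of the EDPS definition quoted above: fit a model to "original"
# data, then sample artificial records that reproduce its characteristics and
# structure. A multivariate Gaussian stands in for the far richer generative
# models used in practice; the numbers are made up for illustration.

rng = np.random.default_rng(0)

# Pretend these are original (sensitive) records: income and loan amount.
original = rng.multivariate_normal(
    mean=[52_000, 14_000],
    cov=[[9e7, 3e7], [3e7, 4e7]],
    size=1_000,
)

# "Train" the generative model: estimate the structure of the original data.
mean_hat = original.mean(axis=0)
cov_hat = np.cov(original, rowvar=False)

# Generate synthetic records from the fitted model, never from the originals.
synthetic = rng.multivariate_normal(mean_hat, cov_hat, size=1_000)

print("original means: ", np.round(original.mean(axis=0)))
print("synthetic means:", np.round(synthetic.mean(axis=0)))

The contested part, as the authors go on to argue, is everything around that loop: whose data gets modeled, what the model's assumptions bake in, and what the outputs are then used to justify.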

Synthetic data are promoted as risk-free, privacy-preserving, and innovation-friendly. In finance, synthetic data are said to enable experimentation without endangering consumer privacy, as well as to simulate risk scenarios without exposing real data.

Yet the term “synthetic” is not an innocent descriptor. Once again in finance, ‘synthetic’ evokes the ghosts of risky instruments at the heart of global market collapse. The 2007–08 financial crisis, precipitated in part by synthetic products like collateralized debt obligations, left lasting stigmas related to synthetic products. Financial professionals approach anything synthetic with caution. It is no coincidence that the Financial Times interview with Sam Altman cited above opted for the term “computer-generated data” rather than “synthetic”. Even though the meaning and uses of what counts as synthetic have evolved since the financial crisis, the term persistently evokes worries in this sector, whether in relation to the synthetic risk transfer market or to so-called synthetic stablecoins.

In financial contexts, narratives around synthetic data are therefore marked by a persistent tension: they are presented as innovative and promising yet remain overshadowed by enduring associations with risk and opacity. This specific context prompts us to posit that synthetic data must be understood not only as technical artifacts but also as infrastructural narratives....

....MUCH MORE

Introducing Unified Model Collapse

From the Always The Horizon substack, November

Urban Bugmen and AI Model Collapse: A Unified Theory
A solution indicating that Mouse Utopia is an inherent property of intelligent systems. The problem is information fidelity loss when later generations are trained on regurgitated data. 

This is a longer article because I’m trying to flesh out a complex idea. Similar to my article on the nature of human sapience, this is well worth the read1.

Introducing Unified Model Collapse

I have been considering digital modeling and artificial neural networks. Model collapse is a serious limit to AI systems: a failure mode that occurs when AI is trained on AI-generated data. At this point, AI-generated content has infiltrated nearly every digital space (and many physical print spaces), extending even to scientific publications2. As a result, AI is beginning to recycle AI-generated data. This is causing problems in the AI development industry.

In reviewing model collapse, the symptoms bear a striking resemblance to certain non-digital cultural failings. Neural networks collapse, hallucinate, and become delusional when trained only on data produced by other neural networks of the same class. …and when you tell your retarded tech-bro boss that you’re “training a neural network to do data-entry,” upon hiring an intern, are you not technically telling the truth?

I put real hours into the thought and writing presented here. I respect your time by refusing to use AI to produce these works, and hope you’ll consider mine by purchasing a subscription for $6 a month. I am putting the material out for free because I hope that it’s valuable to the public discourse.

It may be that, by happenstance in AI development, we have stumbled upon an underlying natural law, a fundamental principle. When applied to trained neural network systems, information-fidelity loss and collapse may be universal, not specific to digital systems. This line of reasoning has serious sociological implications: decadence may be more than just a moral failing; it may be universally applicable.

Model collapse is not unique to digital systems. Rather, it’s the most straightforward form of a much more fundamental underlying principle that affects all systems that train on raw data sets and then output similar data sets. Training with regurgitated data leads to a loss in fidelity, and an inability to interact effectively with the real world.

 https://substackcdn.com/image/fetch/$s_!RfwJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5062d465-2f87-4705-b174-4d0a3e472d74_540x372.jpeg

The Nature of AI Model Collapse
The way neural networks function is that they examine real-world data and then create an average of that data to output. The AI output data resembles real-world data (image generation is an excellent example), but valuable minority data is lost. If model 1 trains on 60% black cats and 40% orange cats, then the output for “cat” is likely to yield closer to 75% black cats and 25% orange cats. If model 2 trains on the output of model 1, and model 3 trains on the output of model 2… then by the time you get to the 5th iteration, there are no more orange cats… and the cats themselves quickly become malformed Cronenberg monstrosities3.

https://substackcdn.com/image/fetch/$s_!_rsH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db266af-8516-475c-9a4d-d731f0a8edfb_700x341.png

Nature published the original associated article in 2024, and follow-up studies have isolated similar issues. Model collapse appears to be a present danger in data sets saturated with AI-generated content4. Training on AI-generated data causes models to hallucinate, become delusional, and deviate from reality to the point where they’re no longer useful: i.e., Model Collapse.

The more “poisoned” the data is with artificial content, the more quickly an AI model collapses as minority data is forgotten or lost. The majority of data becomes corrupted, and long-tail statistical data distributions are either ignored or replaced with nonsense.
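
The black-cat/orange-cat arithmetic above is easy to run. Here is a toy Python simulation of my own (not the author's, and no substitute for the Nature results he cites): each generation's output over-represents the majority class, and the next generation trains on a finite sample of that output, so the minority class gets squeezed out within a handful of iterations.

import random

# Toy iterated-training simulation in the spirit of the cat example above
# (my own sketch, illustrative only). Two effects compound per generation:
#   1) mode-seeking bias: the model over-represents the majority class;
#      gamma = 2.7 roughly reproduces the quoted 60/40 -> 75/25 step;
#   2) finite-sample training: the next model sees only a limited sample of
#      the previous model's output, so rare classes can vanish outright.

random.seed(0)

def amplify(p_minority: float, gamma: float = 2.7) -> float:
    """Output distribution of a model that over-weights the majority class."""
    a, b = p_minority ** gamma, (1.0 - p_minority) ** gamma
    return a / (a + b)

def resample(p_minority: float, sample_size: int = 200) -> float:
    """Train the next generation on a finite sample of the current output."""
    hits = sum(random.random() < p_minority for _ in range(sample_size))
    return hits / sample_size

p = 0.40  # generation 0 trains on real data: 40% orange cats
for gen in range(1, 6):
    p = resample(amplify(p))
    print(f"generation {gen}: {p:.0%} orange cats")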

***video*** 

AI model collapse itself has been heavily examined, though definitions vary. The article “Breaking MAD: Generative AI could break the internet” is a decent article on the topic5. The way AI systems intake and output data makes it easy for us to know exactly what they absorb, and how quickly it degrades when output. This makes them excellent test subjects. Hephaestus creates a machine that appears to think, but can it train other machines? What happens when these ideas are applied to Man, or other non-digital neural network models?

Agencies and companies will soon curate non-AI-generated databases. In order to preserve AI models, the data they train on will have to be real human-generated data rather than AI slop. Already, there are professional AI training companies that work to curate AI with real-world experts. The goal is to prevent AI from hallucinating nonsense when asked questions. Results are mixed, as one would expect with any trans-humanism techno-bullshit in the modern day.

Let’s talk about mice.

John B. Calhoun
A series of experiments were conducted between 1962 and 1972 by John B. Calhoun. Much has been written about these experiments (a tremendous amount), but we’ll review them for the uninitiated6 7. While these experiments have been criticized, they are an excellent reference for social and psychological function in isolated groups8.

***video***

The Mouse Utopia, Universe 25, experiment by John B. Calhoun placed eight mice in a habitat that should have comfortably housed around 6000 mice. The mice promptly reproduced, and the population grew9.

Following an adjustment period, the first pups were born 3½ months later, and the population doubled every 55 days afterward. Eventually this torrid growth slowed, but the population continued to climb [and peaked] during the 19th month.

That robust growth masked some serious problems, however. In the wild, infant mortality among mice is high, as most juveniles get eaten by predators or perish of disease or cold. In mouse utopia, juveniles rarely died. As a result, [there were far more youngsters than normal].

What John B. Calhoun anticipated, and what most other researchers at the time anticipated, was that the population would grow to the threshold (6000 mice), exceed it, and then either starve or descend into in-fighting. That was not the result of the Universe 25 experiment.

The mouse population peaked at 2200 mice after 19 months, just under 2 years. Then the population catastrophically collapsed due to infertility and a lack of mating. Nearly all of the mice died of either old age or internecine conflict, not conflict over food, water, or living space. The results have been cited by numerous social scientists, pseudo-social scientists, and social pseudo-scientists for 50 years (you know which you are).

The conclusion that many draw from the Mouse Utopia experiment is that higher-order animals have a sort of population limit. That is, when population density exceeds certain crucial thresholds, fertility begins to decline for unknown reasons. Some have proposed an evolutionary toggle that’s engaged when over-crowding becomes a risk. Some have proposed that the effects are due to a competition for status in an environment where status means nothing (mice do have their own hierarchies after all).

The reasoning behind Universe 25’s collapse into in-fighting and loss of hierarchy is still up for debate; that it occurred is not. The resultant infertility of an otherwise very healthy population, senseless violence, and withdrawal from society in general have been dubbed the “behavioral sink.”

I am aware that many consider this experiment to be a one-off. It was repeated in other experiments by John Calhoun, but no one has replicated it since. I’d love to do some more of these experiments, but university ethics boards won’t approve them in the modern day and age. WE NEED REPLICATION.


The Demographic Implosion of Civilization

Humans have displayed similar behaviors to those of the Universe 25 population at high densities. An article that I wrote roughly a year ago demonstrates a significant correlation between the percent-urban population and the fertility rate dropping below replacement levels. It appears that somewhere between 60% and 80% urban, depending on the tolerance of the population, fertility rates drop below replacement10.

Under the auspices of Unified Model Collapse Theory, those numbers may need to be changed. Rather than a fertility collapse occurring when a population reaches 60% or 80% urbanization, the drop in fertility would occur after the culture and population have re-adapted to a majority-urban environment. How long it takes the fertility rate to decline would then be proportional to the cultural momentum. Rarely will it take longer than a full generation (30 years), and frequently it’ll be as short as a decade....

....MUCH MORE, he's just clearing his throat. 

Previously:

 
Teach Your Robot Comedic Timing
It will all slowly grind to a halt unless a solution to the training data problem is found. Bringing to mind a recursive, self-referential 2019 post....

Friday, December 5, 2025

"The Bronze Age of Globalization"

From Palladium Magazine, December 5:

Reflection on the ancient world often brings to mind the city-state of Athens, the white columns of the Parthenon, and its philosophers such as Socrates, Diogenes, or Zeno. This seems ancient enough to us, and might seem to be the beginning of what we think of as Western civilization. And yet, already in the fifth century BC, the classical Greeks themselves looked back to a different vanished world, a lost civilization of the Mediterranean further east. It was the world remembered in the Iliad and the Odyssey, of warriors like Achilles besieging Troy and seadogs like Odysseus wandering across strange lands. When the Athenians contemplated antiquity, they reflected on what we today call the Bronze Age: an era defined by a metal that does not occur in nature and which dated from 3300 BC to 1200 BC, a timespan as long as the time from us back to Jesus Christ and Julius Caesar.

Bronze alloy is a fabrication, a man-made alloy. The formula was simple enough, a recipe known to smiths from the banks of the Nile to the hillforts of the Danube. You took copper, soft and plentiful and the color of the dying sun, and added tin in a proportion of about ten percent, a ratio arrived at not by calculation but by centuries of trial and error. The added tin made the difference between a metal that bends and a metal that cuts. This technological shift allowed for complex casting and a hard edge for tools and weapons.

Bronze was the strength of the age, the chisel that cut the stone for the Pharaoh’s limestone, the sword that severed the artery, the pin that held the cloak, a synonym for strength in poems written down on papyrus. The material divided the strong from the weak. Without the tin, you had only copper, which bent. You were soft and vulnerable and likely dead.

This necessity meant that for over two millennia, the great civilizations of the Mediterranean had a problem of geography that became a problem of survival. All of them, the Greek-speaking Mycenaeans in their Aegean citadels, the Egyptians along the Nile, and the Hittites on the Anatolian plateau, possessed copper in abundance. They had gold, timber, and grain. They had the favor of their gods and the discipline of their scribes. What they did not have was tin. They had built their political order, their armies, their economies, and their sophisticated diplomacy, on a metal that did not exist in their own soil.

This age of civilization was not a time of isolation. It was an era of globalization on a remarkable scale. A king in Mycenae could commission a sword whose blade was forged from copper mined in Cyprus and tin mined in Afghanistan, a weapon that was, in its molecular structure, a record of the known world. It was a time of far-reaching connectivity, a network of overland routes stretching across the Eurasian landmass and shipping across the Mediterranean Sea and even Indian Ocean—but a network that would ultimately prove to be fragile.

The Riddle of the Tin Mountains

For a long time, archaeologists didn’t know where the tin came from. This was the “tin problem,” a phrase that suggests a logistical hiccup rather than a centuries-long mystery that already baffled the historians of the classical world like Herodotus and Pliny. The texts were not so much silent as coy. The scribes of Mari and Ugarit listed the metal, annaku in Akkadian, AN.NA in Sumerian. They listed the prices, the weights, and the middlemen. But they did not list the mines. The tin came from “the East,” or it came from “the Mountains,” or it came from a market that had bought it from another market. It was a commodity with no origin, a ghost metal that seemed to simply appear in the palace workshops of Thebes and Knossos by magic.

We now know that, in the early centuries, tin came from far to the east, from Central and South Asia, from the Zeravshan Valley in what is now Tajikistan and the Hindu Kush in Afghanistan. There, in the high, thin air, miners dug into the rock, crushed the ore, smelted it into ingots, and sent it west by a relay of donkeys. The distance was striking. From the mines of Afghanistan to the furnaces of Mesopotamia is a journey of thousands of kilometers, across the Zagros Mountains, across the Iranian plateau, through the bandit country of Elam. The records of the Assyrian merchants of the 19th century BC tell of their trade network stretching out from their colony at Kanesh, in central Anatolia.

The Kültepe texts, in the form of twenty thousand cuneiform tablets found at Kanesh, are not concerned with poetry or myth but ledgers. They are the receipts of a family business. They record, with a dryness that borders on the hypnotic, the arrival of caravans from Aššur, the Assyrian home city. They record the movement of tin: about forty-eight tons of it over thirty years, carried on the backs of black donkeys....

....MUCH MORE 

 Related, in a Bronze Age sort of way:

News You Can Use: "How to Find Ancient Assyrian Cities Using Economics"

More SpaceX: "...in Talks for $800 Billion Valuation Ahead of Potential 2026 IPO"

 Following on "Report: SpaceX planning for IPO late next year".

From the Wall Street Journal, December 5: 

Company’s CFO told investors about the transaction and IPO plans in recent days, say people familiar with the matter 

SpaceX is kicking off a secondary share sale that would value the rocket maker at $800 billion, people familiar with the matter said, surpassing OpenAI to make it the most valuable U.S. private company.  

The company’s Chief Financial Officer Bret Johnsen told investors about the sale in recent days, and SpaceX executives have also said the company is weighing a potential initial public offering in 2026, some of the people said. 
 
The $800 billion valuation is double the $400 billion value it fetched in a recent secondary share sale.

SpaceX didn’t respond to a request for comment.
 
SpaceX investors have been waiting for an IPO for years as the company has grown into an essential service for the U.S. government, launching satellites and astronauts. It is also a provider of broadband internet around the world, used in remote areas ranging from the mountains of the U.S. to the frontiers of the war in Ukraine.
 
The IPO market picked up this summer after three years of doldrums. Shares of stablecoin issuer Circle Internet Group and software maker Figma both soared in their market debuts this year. The government shutdown slowed the pace of new offerings, but bankers and investors are optimistic 2026 will be a return to normal IPO levels.
 
While much of Elon Musk’s business empire is facing growing challenges, his rocket-and-satellite company remains stronger than ever, thanks in part to its dominant position launching rockets into space. Many investors say the company’s satellite business Starlink—which has more than eight million active customers—is also driving up its big valuation.  

SpaceX approached investors as part of a so-called tender offer, which usually takes place twice a year. In these transactions, employees and investors are able to sell their existing shares, allowing them to cash out from a company that is almost 25 years old but still hasn’t gone public....

....MUCH MORE 

"Report: SpaceX planning for IPO late next year"

A quick hit from Sherwood News:

SpaceX has told investors that it is planning for an IPO in late 2026, according to a report from The Information....

....MORE 

"Eat Your AI Slop or China Wins"

I prefer "...or you can't have any pudding" but there's no accounting for taste in music.

From The New Atlantis, Summer 2025 edition:

The new cold war means a race with China over AI, biotech, and more. This poses a hard dilemma: win by embracing technologies that make us more like our enemy — or protect ourselves from tech dehumanization but become subjects to a totalitarian menace.  

In “Darwin Among the Machines,” a letter to an editor published in 1863, the English novelist Samuel Butler observed with dread how the technology of his time was degrading humanity. “Day by day,” he wrote, “the machines are gaining ground upon us; day by day we are becoming more subservient to them.” For the ironical Butler, the solution was simple: kill the machines. “War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species.”

In his later novel Erewhon, Butler imagined a people who take his advice and smash their machines — the inspiration for the “Butlerian Jihad” in Frank Herbert’s Dune. But to make his central conceit plausible, by the loose rules governing a Victorian satire, Butler had to drop the society of Erewhon in the middle of “nowhere” (an anagram of the name), in a remote valley cut off from the rest of the world. The Erewhonians, Butler recognized, would never have survived centuries of Luddism anywhere else: they would have vanquished the machines only to be vanquished by an antagonist lacking their technological caution. In the real world, Butler suggests, we face a choice: Will you preserve your humanity or your security?

This may be just the choice we face today. From Washington, D.C. to Silicon Valley, champions of new technologies often argue, with good reason, that we must embrace them because, if we don’t, the Chinese will — and then where will we be?

Driven by geopolitical pressures to accelerate technological development, particularly in AI and biotech, we seem to have two options: channeling innovation toward humane ends or protecting ourselves against competitors abroad.

To appreciate the difficulty of this choice, we should take a page from military theorists who have wrestled with what is known as the “security dilemma.” Even though it is one of the most important concepts in international relations, it has been given little attention by those grappling with the promises and challenges of new technologies. But we should give it that attention, because when we apply its core insights to technological development, we realize that achieving a prosperous human future will be even more difficult than we tend to think.

The Dilemma 
The security dilemma, as described by the political scientist Robert Jervis in a 1978 paper, is that “many of the means by which a state tries to increase its security decrease the security of others.” Take two nations that don’t know each other’s warfighting capabilities or intentions. With a questionable neighbor next door, one side reasons, it’s only sensible to build up a reliable defense, just in case. But the other nation has the same thought, and similarly proceeds to boost its armaments as a precaution. Each nation sees the other militarizing, which justifies its own defense build-up. Before long, we have a frantic arms race, each nation building up its military to surpass the growing military next door, spiraling toward a conflict neither actually desires.

The dilemma is this: each nation can either militarize, prompting the other to reciprocate and heightening the risk of an ever-more-violent war; or not militarize, endangering itself before a power that is suspiciously expanding its arsenal. The dilemma suggests that each nation can be all but helplessly compelled to militarize but then is no better off than before, for the other nation is doing the very same. In fact, everyone is worse off, as the stakes and lethality of a looming war continue to rise. Perverse geopolitical incentives drive both sides to rationally pursue a course of action that harms them both.

It’s no coincidence that the dilemma was first articulated in the 1950s, amid the Cold War menace of mutual assured destruction. But its underlying logic applies not only to national defense per se but also to technological innovation broadly.

Consider how geopolitical pressures motivate technological advancement. World War II spurred the development of cryptography and early computers, such as America’s ENIAC and Britain’s Colossus. The Cold War rivalry between America and the Soviet Union prompted their race to be the first to put a man on the Moon. And in the 1980s, Japan’s prowess in the semiconductor industry motivated America to launch state projects, like the SEMATECH consortium, to remain competitive.

Just as with the security dilemma, deciding not to act in response to these pressures is a recipe for failure, because it risks making one as helpless against one’s competitors as the armored knight against the musket. Whichever side is technologically superior will gain the upper hand — economically, geopolitically, and, down the road, militarily. So each side, if it hopes to survive, must adopt the more sophisticated technology.

But the risk of this trajectory is not only to other nations, as with militarization itself — it is potentially to one’s own people. As every modern society has come to experience, technological innovation, despite the countless ways it has improved our lives, can also bring not just short-term economic instability and job loss, but also long-term social fracture, loss of certain human skills and agency, the undermining of traditions, and the empowerment of the state over its own people.

In what we might call the “technological security dilemma,” each nation faces a choice: either pursue technological advancement to the utmost, forcing your competitors to reciprocate, even if such advancement jeopardizes your own citizens’ wellbeing; or refuse to do so — say, out of a noble concern that it threatens your people’s form of life — and allow yourself to be surpassed by an adversary without the same concern for its people, or for yours.

As Palantir’s Alex Karp and Nicholas Zamiska recently put it in their book The Technological Republic, “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.” So if a nation won’t accept one horn of the dilemma, allowing its geopolitical standing to falter and putting itself at the mercy of the more advanced nation, then it must choose the other, adopting an aggressive approach to technological development, no matter what wreckage may result.

The question is whether that nation can long enjoy both its tech dominance and its humanity.

China or U.S. — ‘There Is No Third Option’

Today, the technological security dilemma is the very situation America finds itself in with China.

Consider artificial intelligence. Venture capitalist Marc Andreessen writes that “AI will save the world,” but also that it could become “a mechanism for authoritarian population control,” ushering in an unprecedentedly powerful techno-surveillance state. It all depends on who is leading the industry: “The single greatest risk of AI,” he writes, “is that China wins global AI dominance and we — the United States and the West — do not.” In a similar vein, Sam Altman once said on Twitter — when he was the new president of Y Combinator in 2014 — that “AI will be either the best or the worst thing ever.” The difference, he said a decade later, now writing as CEO of OpenAI in the Washington Post, is whether the AI race is won by America with its “democratic vision,” or by China with its “authoritarian” one. Our own good AI maximalism is thus “our only choice” for countering their bad AI maximalism.

But in that case, the AI boosters’ arguments for its benefits and their refutations of popular fears are almost beside the point. As the technological security dilemma suggests, even if all of AI’s speculated downsides were to come about — mass unemployment, retreat into delusional virtuality, learned helplessness among all who can no longer function without ChatGPT, and so forth — we would still need to accelerate AI to stay ahead of China.

A world run by China’s AI-powered digital authoritarianism would indeed be a nightmare for the United States and everyone else, marked by a total disregard for individual privacy, automated predictive policing that renders civil liberties obsolete, and a global social credit system that blacklists noncompliant individuals from applying for credit, accessing their bank accounts, or using the subway. How then can we afford to deliberate about AI’s impact, much less slow down its advancement? Its potential domestic harms, the dilemma suggests, are the necessary price to pay for our national security.

[Image: facial-recognition illustration from The New Atlantis]

It would therefore be a mistake to dismiss the arguments from Andreessen and Altman as nothing more than self-serving P.R. tactics to lobby for government favors. They are getting at something fundamental: in a technological arms race, the only rational action is to try to win. Once China has entered the race for technological dominance in AI, America, if it wishes to maintain its own political independence and avoid becoming China’s vassal state, has no choice but to enter the race as well, no matter what damage results. As Altman puts it, “there is no third option.”....

....MUCH MORE 

I need some vassal states. 
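The security-dilemma logic in the New Atlantis piece above is, structurally, a prisoner's dilemma: arming (or accelerating) is each side's dominant move even though mutual restraint would leave both better off. Here's a minimal sketch of that structure; the payoff numbers are my own illustrative assumptions, not anything from the essay:

```python
# A minimal prisoner's-dilemma sketch of the security dilemma described above.
# Payoff numbers are illustrative assumptions: higher is better, and each
# nation chooses "arm" or "refrain" without knowing the other's choice.

# PAYOFFS[(my_move, their_move)] = (my_payoff, their_payoff)
PAYOFFS = {
    ("refrain", "refrain"): (3, 3),   # mutual restraint: best joint outcome
    ("arm",     "refrain"): (4, 1),   # arming against a restrained rival pays off
    ("refrain", "arm"):     (1, 4),   # being the only one to hold back is worst
    ("arm",     "arm"):     (2, 2),   # arms race: worse for both than mutual restraint
}

MOVES = ("refrain", "arm")

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff given the rival's move."""
    return max(MOVES, key=lambda mine: PAYOFFS[(mine, their_move)][0])

if __name__ == "__main__":
    for their_move in MOVES:
        print(f"If the rival chooses {their_move!r}, my best response is {best_response(their_move)!r}")
    # Both lines print 'arm': arming dominates for each side, so both land on
    # the (2, 2) outcome even though (3, 3) was available -- the dilemma.
```

Run it and both best responses come back 'arm': the race is individually rational and collectively worse, which is the essay's point in two lines of output.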

"After Neuralink, Max Hodak is building something stranger"

From TechCrunch, December 5:

Six years ago, I asked Sam Altman at a StrictlyVC event in San Francisco how OpenAI, with its complicated corporate structure, would make money. He said that someday, he’d ask the AI. When everyone snickered, he added, “You can laugh. It’s all right. But it really is what I actually believe.”

He wasn’t kidding.

Sitting again in front of an audience, this time across from Max Hodak, the co-founder and CEO of Science Corp., I can’t help but remember that moment with Altman. Pale-complexioned Hodak, wearing jeans and a black zip-up sweatshirt, looks more like he’s going to jump into a mosh pit than pitch a company valued at hundreds of millions of dollars. But he’s got a sly sense of humor that keeps the room engaged.

Hodak started programming when he was six, and as an undergraduate at Duke, he worked his way into the lab of Miguel Nicolelis, a pioneering neuroscientist who has since become publicly critical of commercial brain-computer interface ventures. In 2016, Hodak co-founded Neuralink with Elon Musk, serving as its president and essentially running day-to-day operations until 2021.

When I ask what he learned working alongside Musk, Hodak describes a specific pattern. “We got into lots of situations together where something would happen. In my mind, I’d have two diametrically opposed possible solutions, and I would bring them to him, and I’d be like, ‘Is it A or B?’ And he’d look at it and be like, ‘It’s definitely B,’ and the problem would never come back.”

After a few years of this, Hodak took what he’d learned and roped in three former Neuralink colleagues to launch Science Corp. about four years ago. Like Altman, Hodak describes his team’s improbable goal so placidly that I find myself believing that the limits of cognition are about to be overcome sooner than most of us realize. And that he’ll be among those who make it happen.

As I was doomscrolling… 
While I’ve been consumed with the AI data center craziness and the talent poaching wars, momentum has been building in the background.

According to World Economic Forum data, nearly 700 companies around the world have at least some ties to brain-computer interface (BCI) technology, including some tech giants. In addition to Neuralink, Microsoft Research has run a dedicated BCI project for the last seven years. Apple partnered earlier this year with Synchron, backed by Bill Gates and Jeff Bezos, to create a protocol that lets BCIs control iPhones and iPads. Even Altman is reportedly helping to stand up a Neuralink rival.

And in August, China released its “Implementation Plan for Promoting Innovation and Development of the BCI Industry,” targeting core technological breakthroughs by 2027, and aiming to become the global leader by 2030.

Much of the neuroscience isn’t new. “A legitimate criticism of the BCI companies is that they aren’t doing new neuroscience,” Hodak said. “Decoding cursor control or robotic arm control from a human – people have been doing that for 30 years.”

What’s new, however, is the engineering. “The innovation at Neuralink is making [a device] small enough and low-power enough that you can fully implant it and close the skin, and have something that isn’t an infection risk. That genuinely was new."....

....MUCH MORE 

Related:

"A wave of biological privacy laws may be coming as tech gadgets capture our brain waves"
Those gadgets'll get nuthin' from me.

Musk's "Neuralink Will Offer Telepathy and then Brain Control of Teslabots"
Throw in Full Self Driving and paraplegics will recover a lot of agency/autonomy.

Watch Out Elon: China Unveils Neuralink Competitor

 "Big Tech sees neurotechnology as its next AI frontier" (ELON) 

"China's brain-computer interface technology is catching up to the US. But it envisions a very different use case: cognitive enhancement."

The Race To Put Implants In Your Head Is Heating Up 

"Folks Lining Up for Elon Musk's Brain Implants"
Now I'm as open-minded as the next guy, so to speak, but...ummm...après vous, M. Musk, après vous.

And many, many more going back to 2016.

And the outro from "Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans": 

The city of Chicago should probably keep an eye on this. There are neighborhoods, Austin, Garfield Park, Englewood among them, where you will see paralyzed young men out and about in their mobility chairs, and you realize that even though it is the killings that make the headlines, the wounds and injuries are awfully grievous and affect many more people.

If interested, we have a few dozen posts on mind-machine interfaces, including on the work of Miguel Nicolelis, M.D., Ph.D., and his laboratory at Duke University Medical Center.

"Disruptions: Brain Computer Interfaces Inch Closer to Mainstream"

"Any sufficiently advanced technology is indistinguishable from magic."
—Arthur C. Clarke, Profiles of the Future (revised edition, 1973)

And on Professor Nicolelis: 

"In Scientific First, Researchers Link Two Rats' Brains via Computer" (What's next, the paralyzed walk?)
Ya gotta love this guy. He's a showman but he publishes in open-access journals, in effect letting the whole world have at it. More after the jump.

Oh Great, Now Our Brains Can Get Hacked
I knew this was going to happen. Knew it. Afraid to say it, sound like crazy person, but knew it. Links below.

Here's The Most Advanced Human Brain-to-Brain Interface
Monkey Steers Wheelchair With Its Little Monkey Mind
July 2015
Mind-Meld: Neuroscientists Link Three Monkey Brains Into Living Computer
March 8, 2015
Apr. 23, 2014