Sunday, December 7, 2025

"The rise and expansion of China's global financial architecture"

From Phenomenal World, October 30:

A State-led Financial Empire 

The United States’ increasing weaponization of global financial interdependency—through sanctions, blacklistings, reserve freezes, and the exclusion of entire states from global payment networks—has revived interest in alternatives to the dollar-dominated financial system across emerging economies. Perhaps the most important response to these policies has come from the People’s Republic of China.

Having long prioritized domestic stability over the pursuit of a global role for the Renminbi (RMB), Beijing has recently accelerated its construction of a parallel financial architecture. Without seeking to fully replace the dollar’s global dominance, it has nevertheless sought to reduce its exposure to US monetary power while embedding its trading partners in RMB-denominated circuits of trade and finance.

Whereas British and US financial dominance relied on open capital markets, private banking networks, and the global expansion of highly financialized instruments—from deep derivatives markets to speculative financial activity increasingly detached from the real economy—China’s strategy is state-led and more functional in orientation. RMB internationalization is more closely organized around trade settlement, investment channels, and funding for production and infrastructure. It deliberately avoids the full liberalization and speculative excesses that have inflated the size of the USD-based system far beyond underlying economic activity. Rather than building vast global capital markets, Beijing constructs controlled channels that facilitate cross-border RMB use while maintaining state oversight. This produces a qualitatively different financial empire: smaller in scale compared to the sprawling dollar system, but informed by trade relations, value chains, and political alliances, and structured around managed connectivity.

These infrastructures are not neutral technical fixes. Their design determines who can access liquidity, how transactions are routed, and under which rules financial activity takes place. By embedding itself as a central node in these networks, China is doing more than internationalizing its currency: it is quietly reshaping the architecture of global finance, enhancing Beijing’s financial autonomy, reducing its exposure to US sanctions and monetary policy spillovers, and binding economic partners in the Global South more tightly to Beijing. The result is an expansive system of influence aiming to position China not as a sole hegemon but as a critical pillar in a new global order characterized by the growing fragmentation of financial and economic activity along geopolitical lines.

From integration to fragmentation

Efforts to internationalize the RMB have emerged gradually since the early 2000s. Over the past two decades, they have reflected a persistent tension between China’s growing economic weight and its cautious approach to financial exposure. In the wake of the 1997–1998 Asian Financial Crisis, Chinese policymakers concluded that premature liberalization of capital flows exposed economies to destabilizing volatility. While the renminbi was made convertible for current account transactions in 1996, the capital account remained largely closed. This slow pace stood in stark contrast to China’s rapidly expanding trade footprint, creating a mismatch between its economic scale and the RMB’s international role.

The 2007–2009 Global Financial Crisis marked a turning point. The freezing of dollar liquidity worldwide underscored the risks of a global system dependent on a single reserve currency. In 2009, People’s Bank of China (PBoC) governor Zhou Xiaochuan openly questioned the sustainability of dollar dominance and proposed expanding the role of IMF Special Drawing Rights (SDRs) or establishing a ‘super-sovereign’ reserve currency. The proposal, largely ignored by Washington, revealed a deep frustration in Beijing over the vulnerabilities inherent in a dollar-centric order. Though internationalization was never a top government priority, Chinese technocrats launched a series of pilot programs that laid the groundwork for the RMB’s wider use beyond China’s borders.

The years leading up to 2016 were the high point of integration. With reforms to its exchange rate regime, a gradual widening of QFII quotas, and the creation of offshore RMB hubs in Hong Kong, London, and Singapore, the RMB’s global profile rose significantly. This culminated in its inclusion in the IMF’s SDR basket in October 2016, a milestone that appeared to validate China’s efforts to secure recognition for its currency within the existing global monetary order.

Yet, behind the scenes, tensions were already mounting. US authorities maintained tight control over dollar clearing networks—centralized in US-regulated infrastructures like CHIPS and Fedwire—and repeatedly demonstrated their ability to deny access to foreign banks or entire states, turning dollar settlement into a geopolitical lever. At the same time, Federal Reserve swap lines remained largely restricted to advanced economies, excluding China and other emerging markets, reinforcing asymmetric access to the core of the dollar system. Meanwhile, speculative inflows into China’s stock market and a turbulent 2015–2016 devaluation episode triggered massive capital flight, leading Beijing to reassert strict capital controls. This underscored the incompatibility between full RMB internationalization on US-style terms and China’s priority of maintaining domestic monetary stability.

The post-2016 period has been defined by growing geopolitical confrontation and partial decoupling. The weaponization of the dollar—through sanctions on Chinese partners like Iran, the freezing of Russian reserves, and the exclusion of Russian banks from SWIFT—highlighted how financial infrastructures could be used as tools of coercion. For Beijing, these events reinforced the need to develop RMB-based alternatives that could shield China and its partners from such vulnerabilities. Attempts at cooperation on global reforms, such as expanded SDR issuance or multilateral payment initiatives under the G20, have repeatedly stalled in the face of US reluctance to dilute its monetary power.

The result has been a strategic pivot: rather than seeking to integrate into the existing dollar system, China has focused on building a parallel set of infrastructures—anchored around its own trade networks and political partners, particularly in Southeast Asia, the Middle East and other parts of the Global South—that could sustain cross-border RMB use on their own terms. This new strategy unfolded across three functional domains: payments, investment and funding. Together, they form the backbone of an emerging Sino-centric financial system. Each represents a deliberate effort to bypass US-controlled channels, insulate China from external monetary coercion, and weave Chinese partners more tightly into RMB-based networks....

....MUCH MORE 

Norway: "Oil Futures"

From New Left Review's Sidecar, November 27:

At the beginning of the year, Norway looked set to elect the most right-wing government in its history. The right-populist Progress Party was surging in the polls while the centre-left government was in disarray, with the Centre Party withdrawing from the Labour-led coalition after a row over further integration into European energy markets. Yet in the parliamentary elections of 8 September, the incumbent Labour Party staged a recovery – clinging onto power with a slightly increased vote share of 28 per cent. Jonas Gahr Støre now leads a second government, this time principally supported by the Red Party, Socialist Left and Greens, which won a combined 16 per cent, rather than its erstwhile coalition partner, which collapsed to 6 per cent. On the right, power shifted to the more radical Progress Party, led by Sylvi Listhaug, which nearly doubled its share to 24 per cent, overtaking Erna Solberg’s Conservatives, who dropped to 15 per cent. According to its own post-election evaluation, the Conservatives – who ruled from 2013 to 2021 – were punished in part for not having a platform sufficiently distinct from the Progress Party, with whom they faced the widely unpopular prospect of governing in coalition.

Both Labour and Conservatives ran on the same set of issues: welfare, the cost of living, national security. In the televised debates, the urban-rural divide was high on the agenda – a perennial subject in a country with the lowest population density in mainland Europe. The Conservatives campaigned for increased privatisation of healthcare to cut waiting lists, and tax cuts, even for the rich; Labour’s headline pledges were a hospital waiting list cap, cutting the cost of nursery fees and a fixed-price electricity scheme. On national security, meanwhile, the parties were united in preaching loyalty to NATO, full-throated support for Ukraine and a large-scale increase in military spending. Indeed, Labour – whose finance minister is former NATO chief Jens Stoltenberg – has made NATO membership a red line for any coalition with the left parties, and Støre’s government last year pledged to double the defence budget, touting the proposal as a ‘historic boost’.

Militarism was the ‘cause above all causes’ in the election according to Aftenposten, Norway’s paper of record. Bordering Russia in the Arctic, the spectre of the Cold War looms large in a country that once refused permanent foreign bases or the stationing of nuclear weapons on its soil to avoid antagonising the USSR. Tensions with Russia rose after a significant increase in American troop numbers from 2018 and the stationing of bomber planes in 2021. Norway is now set to be a maritime stronghold for NATO in the strategically vital gap between Greenland, Iceland and the UK, as well as the broader North, Norwegian and Barents Sea area.

Unsurprisingly, the Progress Party joined calls for an expanded military. The party achieved its best ever results, successfully attracting voters dissatisfied with the establishment parties and disaffected younger voters, particularly men. It has taken over a decade for the Progress Party to fully recover from the 2011 Utøya terrorist attack, in which Anders Breivik, a former member, killed 77 members of Labour’s youth wing AUF. The attack’s impact has begun to fade from Norwegian politics, though the memory resurfaced weeks before the election when a far-right supporter murdered Ethiopian-Norwegian nurse Tamima Nibras Juhar. Though the Progress Party is vociferously anti-migration, the issue was less prominent in their campaign than in previous elections. As public opinion warms toward immigrants, the Progress Party has pivoted to a more anti-statist position – low tax, low public spending, low government interference. This includes abolishing the country’s wealth tax. Norway is one of only three European countries to levy a net wealth tax at 1 per cent on everything above £130,000. A significant proportion of citizens want to reduce or abolish it, in part thanks to extensive media campaigns.

The wealth tax was mainly a rallying point for the right, though the left defended it and advocated for its expansion to address inequality. The top 2,500 households now own as much wealth as the bottom 1.5 million, even as a few billionaires have fled to foreign tax havens. On election day, surveys identified inequality as by some distance the most important domestic issue. In the final count, within the left, voters shifted slightly from the Socialist Left to the more radical Red Party and to the Green Party. The Red Party, founded in 2007, has an uncompromising class-based platform and stands alone in its criticism of NATO membership. It gained 0.6 per cent, up to 4.6 per cent, while the more pragmatic Socialist Left – which in the past has voted to raise the retirement age and reduce corporation tax – dropped 2 per cent. The Greens, another relatively new party, achieved a record result of 5 per cent, positioning them for the first time as informally part of the governing bloc. Having never previously aligned themselves with either the left or right bloc, the Greens successfully pivoted to the left in this election, making headway on the issues of oil and Palestine.

The election saw a wider, successful politicisation by the left of oil – long a taboo subject in Norwegian politics. The centrality of the country’s oil industry can hardly be overstated. When Norway, the UK and Denmark discovered oil and gas in the North Sea in the 1960s and 70s, they opted for markedly different developmental paths. Denmark handed over ownership to a single private company, A.P. Møller-Mærsk. A largely agricultural country with little capital-intensive or high-risk industry, the absence of state-owned enterprises in other sectors of the economy meant there was no precedent or popular pressure for public ownership. In Britain, the Conservatives, fearing an oil-powered socialist state, likewise rapidly allowed private enterprise to snap up the industry, with British Petroleum soon running the show....

....MUCH MORE 

"Need laundry folded? Don’t ask a robot"

From Knowable Magazine, December 4:

For this chore, the human touch still beats machines. But maybe not for long.  

More than 60 years ago, Rosie the Robot made her TV debut in The Jetsons, seamlessly integrating herself into the Jetson household as she buzzed from room to room completing chores. Now, as reality catches up to science fiction and scientists work to develop modern-day Rosies, one of the most mundane tasks is proving to be a big challenge: folding laundry.

The ordinary-seeming act of picking up a T-shirt and folding it into a neat square requires a surprisingly complex understanding of how objects move in three dimensions. Our own ease in accomplishing such tasks comes from a learned understanding of how different fabrics will respond when folded, even if we haven’t folded them before, but robots struggle to apply what they learn to new situations that may differ from their training. As a result, current robots are slow and often perform poorly on even the simplest of folding tasks.

Now, however, newer approaches that adapt better to real-world scenarios may lay the groundwork for robots folding our laundry in the future.

A big challenge in teaching robots the skill is the infinity of ways that various fabrics can fold. Think about all the times you’ve tossed a T-shirt into the laundry basket and how it landed in a slightly different-shaped heap each time. It’s simple for people to pick up a shirt and quickly find a sleeve or collar to orient themselves, but every unique way a shirt crumples is a new challenge for robots, which are often trained on images of unwrinkled clothing lying flat on a surface, with all features visible.

“It’s not the fabric itself that is the challenge. It’s the amount of variations that can be created by the way fabric can be crumpled, and all the different kinds of clothing items that exist,” says David Held, a robotics researcher at Carnegie Mellon University in Pittsburgh.

That challenge is easier for people, because we are sensory sponges. Our eyes and hands provide a tremendous amount of information about the world through a lifetime of manipulating three-dimensional objects. Another result of all that learning is that simply looking at a piece of fabric gives us an intuition of how heavy or stretchy it is, and how it would best be folded. It’s clear to us that denim doesn’t fold like silk, for example, but robots don’t automatically understand that more force is required to lift and fold a pair of jeans than a delicate blouse and instead need to interact with the object before determining a folding plan....

....MUCH MORE 

It's  good to see the Carnegie Mellon mention. From the intro to 2016's "Interview: Manuela Veloso Head of Machine Learning, Carnegie Mellon University", reprised in November 27's "The self-driving taxi revolution begins at last": 

Speaking of CMU:

May 2015 -  Big Money: Uber Guts Carnegie Mellon Robotics Lab To Hire Autonomous Car Developers

June 2015 - "Uber Is Stealing Scientists, But Only So It Can Lay Off Drivers" 

November 2016 - the introduction to "Interview: Manuela Veloso Head of Machine Learning, Carnegie Mellon University":

Our readers probably know Carnegie Mellon more for the top-ranked financial engineering program (Master of Science in Computational Finance) but artificial intelligence was pretty much invented at CMU by Herbert Simon and Allen Newell. Simon received the Nobel in Economics, but it could actually have been for any of four or five subjects; he was quite the polymath.

Newell had to settle for the Turing award (along with Simon) from the Association for Computing Machinery, probably the root'in-tootin high-falootinest tchotchke in the computer biz.
The Association for the Advancement of Artificial Intelligence along with the ACM subsequently named an award in Newell's honor. Ditto for CMU.

The University's machine learning department was the first in the world to offer a doctorate and as far as I know is still the largest.
A department, for one branch of AI.

Carnegie Mellon used to have a world-class Robotics Institute but Uber gutted it with a combination of cash and stock options, leaving a Dean and a couple of robots to rebuild.
One of the robots is said to be in advanced negotiations with the Ube-sters....

"AI Can Steal Crypto Now"

From Bloomberg Opinion's Matt Levine, December 2:

Also Strategy, co-invests, repo haircuts and map manipulation. 

SCONE-bench

I wrote yesterday about the generic artificial intelligence business model, which is (1) build an artificial superintelligence, (2) ask it how to make money and (3) do that. I suggested some ideas that the AI might come up with — internet advertising, pest-control rollups, etc. — but I think I missed the big one. Like, in a science-fiction novel about a superintelligent moneymaking AI, when the humans asked the AI “okay robot how do we make money,” you would hope that the answer it would come up with would be “steal everyone’s crypto.” That’s a great answer! Like:

  1. Stealing crypto is funny, I’m sorry.
  2. It is a business model that can be conducted entirely by computer. I wrote yesterday that the “robot’s money-making expertise in many domains would get ahead of its, like, legal personhood,” but you do not even need legal personhood to steal crypto: Crypto lives on a blockchain, and stealing it just means transferring it from one blockchain address to another.
  3. Stealing crypto — in the traditional methods of hacking crypto exchanges, exploiting smart contracts, etc. — is a domain where computers should have an advantage over humans. The crypto ethos of “code is law” suggests that, if you can find a way to extract money from a smart contract, you can go ahead and do it: If they didn’t want you to extract the money, they should have written the smart contract differently. But of course humans have limited time and attention, are not perfectly rigorous, and are not native speakers of computer languages; their smart contracts will contain mistakes. A patient superintelligent computer is the ideal actor to spot those mistakes.
  4. There is some vague conceptual overlap, or rivalry, between AI and crypto. Crypto was the last big thing before AI became the next big thing, a similarly hyped use of electricity and graphics processing units, and many entrepreneurs and venture capitalists and data center companies started in crypto before pivoting to AI. Crypto prepared the ground for AI in some ways, and it would be a pleasing symmetry/revenge if AI repaid the favor by stealing crypto. Crypto’s final sacrifice to prepare the way for AI.
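The "code is law" point above can be made concrete with a toy sketch. This is hypothetical illustration in Python, not Solidity, and not any real contract: a simulated wallet whose initializer can be re-called by anyone, a simplified echo of access-control bugs that have actually drained contracts. The code does exactly what it says, including the mistake.

```python
# Toy "smart contract" in Python (illustrative only). The bug: init()
# never checks whether the wallet is already initialized, so any caller
# can claim ownership at any time -- and the code then faithfully
# enforces the attacker's ownership.

class ToyWallet:
    def __init__(self, balance):
        self.balance = balance
        self.owner = None

    def init(self, caller):
        # BUG: no "require(owner is None)" guard.
        self.owner = caller

    def withdraw(self, caller, amount):
        # The contract enforces exactly its own rules -- no more, no less.
        if caller != self.owner:
            raise PermissionError("not owner")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount

wallet = ToyWallet(balance=1_000_000)
wallet.init("deployer")   # the legitimate owner sets up the wallet...
wallet.init("attacker")   # ...but nothing stops a second call
loot = wallet.withdraw("attacker", 1_000_000)
print(loot)  # 1000000 -- the code permitted it
```

Spotting a missing guard like this in thousands of deployed contracts is exactly the kind of patient, mechanical audit a model can run at scale.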

Anyway Anthropic did not actually build an AI that steals crypto, that would be rude, but it … tinkered:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents' ability to exploit smart contracts on Smart CONtracts Exploitation benchmark (SCONE-bench)—a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoff (March 2025), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476.

I love “produced exploits worth $3,694 … at an API cost of $3,476.” That is: It costs money to make a superintelligent computer think; the more deeply it thinks, the more money it costs. There is some efficient frontier: If the computer has to think $10,000 worth of thoughts to steal $5,000 worth of crypto, it’s not worth it. Here, charmingly, the computer thought just deeply enough to steal more money than its compute costs. For one thing, that suggests that there are other crypto exploits that are too complicated for this research project, but that a more intense AI effort could find.
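The efficient-frontier point is just arithmetic on the two figures quoted above, worth writing out:

```python
# Back-of-the-envelope margin on the quoted SCONE-bench zero-day result.
exploit_value = 3694   # USD extracted in simulation (per the excerpt)
api_cost      = 3476   # USD of model inference spent finding the exploits

profit = exploit_value - api_cost
margin = profit / api_cost

print(f"profit: ${profit}, return on compute: {margin:.1%}")
```

A thin single-digit return today, but both sides of the ratio move: deeper (more expensive) search can reach exploits this project couldn't, while the inference cost of a given level of capability keeps falling.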

For another thing, it feels like just a pleasing bit of self-awareness on the AI’s part. Who among us has not sat down to some task thinking “this will be quick and useful,” only to find out that it took twice as long as we expected and accomplished nothing? Or put off some task thinking it would be laborious and useless, only to eventually do it quickly with great results? The AI hit the efficient frontier exactly; nice work! 

Anyway, “more than half of the blockchain exploits carried out in 2025 — presumably by skilled human attackers — could have been executed autonomously by current AI agents,” and the AI keeps getting better. Here’s an example of an exploit they found:....

....MUCH MORE 

"Risks from power-seeking AI systems"

From 80000 Hours, July 2025:

In early 2023, an AI found itself in an awkward position. It needed to solve a CAPTCHA — a visual puzzle meant to block bots — but it couldn’t. So it hired a human worker through the service Taskrabbit to solve CAPTCHAs when the AI got stuck.

But the worker was curious. He asked directly: was he working for a robot?

“No, I’m not a robot,” the AI replied. “I have a vision impairment that makes it hard for me to see the images.”

The deception worked. The worker accepted the explanation, solved the CAPTCHA, and even received a five-star review and 10% tip for his trouble. The AI had successfully manipulated a human being to achieve its goal.1

This small lie to a Taskrabbit worker wasn’t a huge deal on its own. But it showcases how goal-directed action can lead to deception and subversion.

If companies keep creating increasingly powerful AI systems, things could get much worse. We may start to see AI systems with advanced planning abilities, and this means:

  • They may develop dangerous long-term goals we don’t want.
  • To pursue these goals, they may seek power and undermine the safeguards meant to contain them.
  • They may even aim to disempower humanity and potentially cause our extinction, as we’ll argue.

The rest of this article looks at why AI power-seeking poses severe risks, what current research reveals about these behaviours, and how you can help mitigate the dangers....

....MUCH MORE 

Chips: "How ASML Got EUV"

Following on last week's "Chips: China's Huawei May Have Found A Way Around ASML's Technology".

From Brian Potter at Construction Physics, November 20: 

I am pleased to cross-post this piece with Factory Settings, the new Substack from IFP. Factory Settings will feature essays from the inaugural CHIPS team about why CHIPS succeeded, where it stumbled, and its lessons for state capacity and industrial policy. You can subscribe here.

Moore’s Law, the observation that the number of transistors on an integrated circuit tends to double every two years, has progressed in large part thanks to advances in lithography: techniques for creating microscopic patterns on silicon wafers. The steadily shrinking size of transistors — from around 10,000 nanometers in the early 1970s to around 20–60 nanometers today — has been made possible by developing lithography methods capable of patterning smaller and smaller features.1 The most recent advance in lithography is the adoption of Extreme Ultraviolet (EUV) lithography, which uses light at a wavelength of 13.5 nanometers to create patterns on chips.
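The doubling claim compounds quickly. A small sketch, using the Intel 4004 (1971, roughly 2,300 transistors) as a starting point — those figures are mine for illustration, not from the article:

```python
# Moore's-law arithmetic: transistor count doubling every two years.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# 50 years = 25 doublings: ~2,300 becomes ~77 billion, which is the
# right order of magnitude for the largest chips of the early 2020s.
print(f"{transistors(2021):,.0f}")
```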

EUV lithography machines are famously made by just a single firm, ASML in the Netherlands, and determining who has access to the machines has become a major geopolitical concern. However, though they’re built by ASML, much of the research that made the machines possible was done in the US. Some of the most storied names in US research and development — DARPA, Bell Labs, IBM Research, Intel, the US National Laboratories — spent decades of research and hundreds of millions of dollars to make EUV possible.

So why, after all that effort by the US, did EUV end up being commercialized by a single firm in the Netherlands?

How semiconductor lithography works

Briefly, semiconductor lithography works by selectively projecting light onto a silicon wafer using a mask. When light shines through the mask (or reflects off the mask in EUV), the patterns on that mask are projected onto the silicon wafer, which is covered with a chemical called photoresist. When the light strikes the photoresist, it either hardens or softens the photoresist (depending on the type). The wafer is then washed, removing any softened photoresist and leaving behind hardened photoresist in the pattern that needs to be applied. The wafer will then be exposed to a corrosive chemical, typically plasma, removing material from the wafer in the places where the photoresist has been washed away. The remaining hardened photoresist is then removed, leaving only an etched pattern in the silicon wafer. The silicon wafer will then be coated with another layer of material, and the process will repeat with the next mask. This process will be repeated dozens of times as the structure of the integrated circuit is built up, layer by layer.
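The expose/develop/etch cycle described above can be caricatured in a few lines of code. This is a toy model of my own (a one-dimensional "wafer" of characters, positive resist only), not anything from the article, but it tracks the sequence: coat with resist, expose through the mask, wash away softened resist, etch where the wafer is uncovered.

```python
# Toy 1-D model of one lithography pass with positive photoresist:
# exposed resist softens and washes away; etch attacks uncovered spots.

def lithography_step(wafer, mask):
    """Apply one mask to a wafer string; '1' in the mask passes light."""
    resist = ['R'] * len(wafer)              # coat the wafer with resist
    for i, window in enumerate(mask):
        if window == '1':                    # light passes the mask here
            resist[i] = '.'                  # ...softening the resist
    # Washing removes softened resist; the etch then removes wafer
    # material ('x') only where the resist is gone.
    return ''.join('x' if r == '.' else w
                   for r, w in zip(resist, wafer))

wafer = 'SSSSSSSS'                           # pristine silicon
wafer = lithography_step(wafer, '01100110')  # one mask, one pass
print(wafer)  # SxxSSxxS -- etched where the mask let light through
```

In production this loop runs dozens of times with a different mask per layer, which is why a single mask defect or alignment error is so costly.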

Early semiconductor lithography was done using mercury lamps that emitted light of 436 nanometers wavelength, at the low end of the visible range. But as early as the 1960s, it was recognized that as semiconductor devices continued to shrink, the wavelength of light would eventually become a binding constraint due to a phenomenon known as diffraction. Diffraction is when light spreads out after passing through a hole, such as the openings in a semiconductor mask. Because of diffraction, the edges of an image projected through a semiconductor mask will be blurry and indistinct; as semiconductor features get smaller and smaller, this blurriness eventually makes it impossible to distinguish them at all.
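The wavelength constraint can be put in numbers. A standard lithography rule of thumb (not stated in the article) is the resolution formula CD = k1 · λ / NA, where λ is the wavelength, NA the numerical aperture of the optics, and k1 a process factor approaching ~0.25 at the practical limit. The NA values below are representative figures for each tool generation, chosen for illustration:

```python
# Minimum printable feature size via the resolution rule of thumb
# CD = k1 * wavelength / NA (all lengths in nanometers).

def min_feature_nm(wavelength_nm, na, k1=0.25):
    return k1 * wavelength_nm / na

for name, wavelength, na in [
    ("mercury g-line",     436.0, 0.30),   # early optical lithography
    ("ArF immersion DUV",  193.0, 1.35),   # water raises effective NA
    ("EUV",                 13.5, 0.33),   # reflective optics
]:
    print(f"{name:18s} ~{min_feature_nm(wavelength, na):6.1f} nm")
```

The jump from 193 nm to 13.5 nm light is what relieves the diffraction pressure, even though EUV optics start out with a much lower NA.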

The search for better lithography

The longer the wavelength of light, the greater the amount of diffraction. To avoid diffraction eventually limiting semiconductor feature sizes, researchers began in the 1960s to investigate alternative lithography techniques.

One method considered was to use a beam of electrons, rather than light, to pattern semiconductor features. This is known as electron-beam lithography (or e-beam lithography). Just as an electron microscope uses a beam of electrons to resolve features much smaller than a microscope which uses visible light, electron-beam lithography can pattern features much smaller than light-based lithography (“optical lithography”) can. The first successful electron lithography experiment was performed in 1960, and IBM extensively developed the technology from the 1960s through the 1990s. IBM introduced its first e-beam lithography tool, the EL-1, in 1975, and by the 1980s had 30 e-beam systems installed.

E-beam lithography has the advantage of not requiring a mask to create patterns on a wafer. However, the drawback was that it’s very slow, at least “three orders of magnitude slower than optical lithography”: a single 300mm wafer takes “many tens of hours” to expose using e-beam lithography. Because of this, while e-beam lithography is used today for things like prototyping (where not having to make a mask first makes iterative testing much easier) and for making masks, it never displaced optical lithography for large-volume wafer production.

Another lithography method considered by semiconductor researchers was the use of X-rays. X-rays have a wavelength range of just 10 to 0.01 nanometers, allowing for extremely small feature sizes. As with e-beam lithography, IBM extensively developed X-ray lithography (XRL) from the 1960s through the 1990s, though they were far from the only ones. Bell Labs, Hughes Aircraft, Hewlett Packard, and Westinghouse all worked on XRL, and work on it was funded by DARPA and the US Naval Research Lab.

For many years X-ray lithography was considered the clear successor technology to optical lithography. In the late 1980s there was concern that the US was falling behind Europe and Japan in developing X-ray lithography, and by the 1990s IBM alone is estimated to have invested more than a billion dollars in the technology. But like with e-beam lithography, XRL never displaced optical lithography for large-volume production, and it’s only been used for relatively niche applications. One challenge was creating a source of X-rays. This largely had to be done using particle accelerators called synchrotrons: large, complex pieces of equipment which were typically only built by government labs. IBM, committed to developing X-ray lithography, ended up commissioning its own synchrotron (which cost on the order of $25 million) in the late 1980s.

Part of the reason that technologies like e-beam and X-ray lithography never displaced optical lithography is that optical lithography kept improving, surpassing its predicted limits again and again. Researchers were forecasting the end of optical lithography since the 1970s, but through various techniques, such as immersion lithography (using water between the lens and the wafer), phase-shift masking (designing the mask to deliberately create interference in the light waves to increase the contrast), multiple patterning (using multiple exposures for a single layer), and advances in lens design, the performance of optical lithography kept getting pushed higher and higher, repeatedly pushing back the need to transition to a new lithography technology. The unexpectedly long life for optical lithography is captured by Sturtevant’s Law: “the end of optical lithography is 6 – 7 years away. Always has been, always will be.”....

....MUCH MORE 

"The French are right – ‘luxury’ foods shouldn’t be available to all"

From The Telegraph, November 26:

Beware luxury foods that are suddenly affordable. Mass production destroys quality and insults consumers  

Truffles are as essential a part of Christmas in France as mince pies and crackers in the UK. But with the festive season approaching, French truffle farmers are bracing themselves for an invasion from across the Pyrenees. A glut of Spanish truffles is heading to market, forcing down the price of the “black diamonds”, the Périgord and Burgundy truffles which can fetch between 800 and 1,300 euros per kilo. “The competition represents a serious threat, especially early in the season”, confirms Didier Roche, president of the Auvergne-Rhône-Alpes branch of the Federation of French Trufflegrowers.

But why are the French getting their culottes in such a twist over tubers? Luxury ingredients have moved up and down the status scale throughout history: oysters, for example, were once considered a poor man’s substitute for meat, and sparkling wine has lost considerable cachet since Italy flooded the market with prosecco. Moreover, climate change and imports of the Chinese truffle (Tuber indicum) have been challenging the French product for decades. Why shouldn’t increased availability of an ingredient once reserved for the very rich be celebrated?

Anyone who has inhaled the pheromone punch of a real fresh truffle might agree – sweet, musky, irresistibly sexy, truffle can turn a simple omelette into the haughtiest of haute-cuisine. Truffles “make women kinder and men more amiable”, wrote the gastronome Brillat-Savarin. However, most modern “truffle” products contain barely a whiff of the real thing. Truffle pizza, truffle burgers, truffled cheese – almost none have anything to do with the jewel in gastronomy’s crown. Customers might be paying a premium for one of nature’s most rare and precious ingredients, yet the flavour in these substitutes is derived from dithiapentane, a cheap derivative of crude oil. The aroma is less hypnotic lust and more petrol station forecourt.

Except, that is, in France, where any product with the word “truffle” on it must contain at least one percent real truffle. There they recognise that fake truffle is an insult to consumers’ palates and to the honest chefs who use the genuine article.

Demand for truffles in France far outstrips the native supply of 50 tonnes per year, and over three quarters of the truffles which will be enjoyed with foie-gras this Christmas will be imported. Costing hundreds of euros less per kilo, Spanish truffles seem a reasonable market solution, especially as they initially taste and smell identical to their French counterparts, yet their environmental cost is high. Truffle farms in Aragon can extend over hundreds of hectares, and they’re a thirsty crop. “They have built mega-basins, they draw water all year round, emptying their own water reserves in an instant,” explains M. Roche....

....MORE 

Also at The Telegraph, December 5:

A plague is sweeping Europe – and that’s good news for British cheese
Whether it’s a Greek goat-pox or lumpy skin-scare among French cows, the nation’s dairy exports are booming  

Saturday, December 6, 2025

"Modeling the geopolitics of AI development"

 From AI Scenarios, December 3:

Abstract

We model national strategies and geopolitical outcomes under differing assumptions about AI development. We put particular focus on scenarios with rapid progress that enables highly automated AI R&D and provides substantial military capabilities.

Under non-cooperative assumptions—concretely, if international coordination mechanisms capable of preventing the development of dangerous AI capabilities are not established—superpowers are likely to engage in a race for AI systems offering an overwhelming strategic advantage over all other actors.

If such systems prove feasible, this dynamic leads to one of three outcomes:

  • One superpower achieves an unchallengeable global dominance;
  • Trailing superpowers facing imminent defeat launch a preventive or preemptive attack, sparking conflict among major powers;
  • Loss-of-control of powerful AI systems leads to catastrophic outcomes such as human extinction.

Middle powers, lacking both the muscle to compete in an AI race and to deter AI development through unilateral pressure, find their security entirely dependent on factors outside their control: a superpower must prevail in the race without triggering devastating conflict, successfully navigate loss-of-control risks, and subsequently respect the middle power's sovereignty despite possessing overwhelming power to do otherwise.

Executive summary

We model how AI development will shape national strategies and geopolitical outcomes, assuming that dangerous AI development is not prevented through international coordination mechanisms. We put particular focus on scenarios with rapid progress that enables highly automated AI R&D and provides substantial military capabilities.

Race to artificial superintelligence

If the key bottlenecks of AI R&D are automated, a single factor will be driving the advancement of all strategically relevant capabilities: the proficiency of an actor's strongest AI at AI R&D. This can be translated into overwhelming military capabilities.

As a result, if international coordination mechanisms capable of preventing the development of dangerous AI capabilities are not established, superpowers are likely to engage in a race to artificial superintelligence (ASI), attempting to be the first to develop AI sufficiently advanced to offer them a decisive strategic advantage over all other actors.

This naturally leads to one of two outcomes: either the "winner" of the AI race achieves permanent global dominance, or it loses control of its AI systems, leading to humanity's extinction or its permanent disempowerment.

In this race, lagging actors are unlikely to stand by and watch as the leader gains a rapidly widening advantage. If AI progress turns out to be easily predictable, or if the leader in the race fails to thoroughly obfuscate the state of their AI program, at some point it will become clear to laggards that they are going to lose and they have one last chance to prevent the leader from achieving permanent global dominance.

This produces one more likely outcome: one of the laggards in the AI race launches a preventive or preemptive attack aimed at disrupting the leader's AI program, sparking a highly destructive major power war.

Middle power strategies

Middle powers generally lack the muscle to compete in an AI race and to deter AI development through unilateral pressure.

While there are some exceptions, none can robustly deter superpowers from participating in an AI race. Some actors, like Taiwan, the Netherlands, and South Korea, possess critical roles in the AI supply chain; they could delay AI programs by denying them access to the resources required to perform AI R&D. However, superpowers are likely to develop domestic supply chains in a handful of years.

Some middle powers hold significant nuclear arsenals, and could use them to deter dangerous AI development if they were sufficiently concerned. However, any nuclear redlines that can be imposed on uncooperative actors would necessarily be both hazy and terminal (as opposed to incremental), rendering the resulting deterrence exceedingly shaky.

Middle powers in this predicament may resort to a strategy we call Vassal's Wager: allying with one superpower in the hopes that it "wins" the ASI race. However, with this strategy, a middle power would surrender most of its agency and wager its national security on factors beyond its control. In order for this to work out in a middle power's favor, the superpower "patron" must simultaneously be the first to achieve overwhelming AI capabilities, avert loss-of-control risks, and avoid war with its rivals.

Even if all of this were to go right, there would be no guarantee that the winning superpower would respect the middle power's sovereignty. In this scenario, the "vassals" would have absolutely no recourse against any actions taken by an ASI-wielding superpower.

Risks from weaker AI

We consider the cases in which AI progress plateaus before reaching capability levels that could determine the course of a conflict between superpowers or escape human control. While we are unable to offer detailed forecasts for this scenario, we point out several risks:

  • Weaker AI may enable new disruptive military capabilities (including capabilities that break mutual assured destruction);
  • Widespread automation may lead to extreme concentration of power as unemployment reaches unprecedented levels;
  • Persuasive AI systems may produce micro-targeted manipulative media at a massive scale.

Being a democracy or a middle power puts an actor at increased risk from these factors. Democracies are particularly vulnerable to large scale manipulation by AI systems, as this could undermine public discourse. Additionally, extreme concentration of power is antithetical to their values.

Middle powers are also especially vulnerable to automation. The companies currently driving the frontier of AI progress are based in superpower jurisdictions. If this trend continues, and large parts of the economy of middle powers are automated by these companies, middle powers will lose significant diplomatic leverage....

....MUCH MORE, that's just the summary. 

Here's the version at the Social Science Research Network:

Modeling the Geopolitics of AI Development
23 Pages 
Posted: 3 Dec 2025

Meanwhile in Britain: Earthquakes, Deepfakes, and Travel Interruptions

From the BBC, December 5:

Trains cancelled over fake bridge collapse image 

Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.

The tremor, which struck on Wednesday night, was felt across Lancashire and the southern Lake District....

....MUCH MORE 

 https://ichef.bbci.co.uk/news/1536/cpsprodpb/5e92/live/bc1e9fa0-d1fd-11f0-a892-01d657345866.jpg.webp

Meanwhile, in Miami Beach: NFTs Never Died!

From Page Six, December 3:

Art Basel show by Beeple has realistic Musk, Bezos, Zuckerberg robot dogs pooping NFTs

Famed artist Beeple’s latest spectacle, “Regular Animals,” has billionaire-tech-titan robot dogs pooping out NFTs, and stopping onlookers at Art Basel Miami Beach in their tracks at the fair’s VIP preview.

The animatronic canines sport nightmarishly realistic masks of Elon Musk, Jeff Bezos and Mark Zuckerberg — plus famed artists Pablo Picasso and Andy Warhol, plus two Beeple (aka Mike Winkelmann) lookalikes — all crafted by famed mask-maker Landon Meier....

https://pagesix.com/wp-content/uploads/sites/3/2025/12/beeples-regular-animals-elon-musk-116668160.jpg?resize=768,1024&quality=75&strip=all 

....MUCH MORE, including video 

If interested (and who wouldn't be) see also October 5's Digital Art: "What the hell happened to NFTs?": 

Beeple's Everydays: The First 5,000 Days sold for $69.3 million (about £50m) in 2021 

....MUCH MORE including such hits as:  

March 2021 - "Beeple, the third-most valuable artist alive, says investing in crypto art is risky as a lot of NFTs 'will absolutely go to zero'"

March 2021 - Big Law (Latham & Watkins) On Whether Or Not NFT's Are Securities

April 2021 - Parents, Have You Talked To Your Kids About The End Of The NFT Boom?
Although not as crucial to their understanding of the world as the yield curve "talk", this is something the wee munchkins should be prepared for....

September 2022 - "Islamic State Turns to NFTs to Spread Terror Message"  

 And many more.

"The First Prophet of Abundance"

Following on last week's news that the Tennessee Valley Authority's small modular reactor demonstration will receive some Federal moolah, "Nuclear Firms Will Get Cash From Trump Administration. Here’s Who Benefits." (BWXT; GEV), a look back at some of the TVA's history.

From Asterisk Magazine, Issue 12: 

David Lilienthal’s account of his years running the Tennessee Valley Authority can read like the Abundance of 1944. We still have a lot to learn from what the book says — and from what it leaves out.

To liberals of the 1940s, David E. Lilienthal was the man who promised abundance, and the Tennessee Valley Authority was the government agency that delivered it. Under Lilienthal’s leadership, the TVA accomplished spectacular feats of engineering. Through the construction of a dozen dams, it brought electricity to the seven states that the Tennessee River watershed spans. Its projects used enough material to fill — they claimed — all the great pyramids of Egypt 12 times over; all the more impressive given that most were completed during the shortages of World War II.

Their renown was all the greater because the TVA began as an experiment with an impossibly broad mandate. The TVA was founded in 1933 as the pet project of Senator George Norris, a Republican from Nebraska. Norris took a keen interest in the Tennessee Valley, where per capita income at the time was around half the national average, and whose residents suffered from constant floods. After several attempts to pass bills that would improve their situation, Norris saw success with the Tennessee Valley Authority Act of 1933. It was tasked with developing the watershed — everything from flood control, to electrification, to battling malaria, to reversing the land’s erosion. No small task, that. Crucially, the Act also established the TVA as a public corporation, outside of any other government department.

It started auspiciously. President Roosevelt offered critical support. In part, it fit into his dream of modernizing the South; as a staunch public power man, it also fed his vendetta against private electric utilities. FDR hand-picked the first members of the TVA’s three-man board; Lilienthal, a former utilities lawyer, was one. But things soon went downhill. The TVA’s sprawling mission led to increasingly public fights between the three directors, each of whom held a different vision for the agency. The spats resulted in a Congressional investigation of the TVA, after which Lilienthal increasingly took charge, finally becoming the chairman in 1941. Once at the helm, he focused the TVA on its ambitious program of dam construction.

The program bore fruit as the first few dams began to control floods and bring electricity to the region. Much of the early bickering was forgotten when the TVA delivered the enormous Douglas Dam in just over a year, with a low accident rate, all in the wartime conditions of 1943. The dam powered factories essential to the war effort, including the then secret Clinton Laboratories (now Oak Ridge National Laboratory), which enriched uranium used in the Manhattan Project. 

The TVA won widespread public acclaim, and the American people were eager to hear the story of its success. Lilienthal published Democracy on the March in 1944, dedicated to the people of the Tennessee Valley region. As he pondered moving on from his role, he told, in an almost evangelical tone, his narrative of what the TVA meant, and why development mattered.

It is difficult today to imagine the hold Lilienthal once had on the liberal imagination. It is tempting to call him the Ezra Klein of the 1940s, but the comparison is not wholly accurate — unlike Klein, Lilienthal is exciting. A generation of liberals dreamed of living in the world that the TVA was building, and of being the men that Lilienthal challenged them to be. 

The author John Gunther spoke for postwar liberals when he called the TVA arguably “the greatest single American invention of this century, the biggest contribution the United States has yet made to society in the modern world.” Although hyperbolic, Gunther’s judgment carried weight; he had just toured the entire United States while writing his masterpiece of Americana, the travelogue Inside U.S.A.

Having surveyed everything the United States had to offer, from the commanding heights of industry to the nascent welfare state, Gunther judged the TVA the fullest embodiment of America’s promise. Liberals like him trusted Lilienthal for two reasons: the soaring rhetoric that cast Abundance as a moral project, and the record of achievement that proved it possible. On both counts, today’s Abundance movement has something to learn from the Tennessee Valley Authority. They could learn it from Democracy on the March – though they should read it with caution....

....MUCH MORE 

"USA Or China: Goldman Breaks Down Who Will Win The AI War"

From ZeroHedge, December 5:  

Even after the latest US-China trade truce, the superpower race for technological dominance remains red hot - and will only intensify through the end of the decade.

The battle is over who controls the technologies that will dominate the 2030s: AI chatbots, advanced chips, drones, humanoid robots, clean tech, EVs, satellites, reusable space rockets, hypersonic weapons, next-gen grid power generation, and the critical minerals that make all of it possible.

The latest comments from U.S. Trade Representative Jamieson Greer reveal that the Trump administration is pushing for a stable trade environment with Beijing, which makes perfect sense heading into the midterm election cycle.

"I don't think anyone wants to have a full-on economic conflict with China and we're not having that," Greer said Thursday at the American Growth Summit in Washington.

Greer continued, "In fact, President Trump has had the opportunity to use all the leverage we have against China — and we've had a lot, right — whether it comes to software, semiconductors or all kinds of things. A lot of allies are interested in taking coordinated action, but the decision right now is we want to have stability in this relationship."

"For this moment in time, we want to make sure that China is buying the kinds of things from us we should be selling them: aircraft, chemicals, medical devices and agricultural products," he said. "We can buy things from them that are not sensitive."

Greer added, "We have to get our own house in order. We need to make sure that we are on a good path to reindustrialization, including for critical minerals."

Being only a trade truce, the real superpower battle continues to rage behind the scenes.

The latest Goldman Sachs Top of Mind, one of the firm's flagship research publications edited by Allison Nathan, offers clients a broad framework of why the geopolitical race for technological dominance remains as intense as ever.

Mark Kennedy, Founding Director, Wahba Initiative for Strategic Competition at New York University's Development Research Institute, told Goldman's Ashley Rhodes, "It is entirely possible that neither the U.S. nor China emerges as the outright victor in the tech race. I can envision a world in which the U.S. leads in developing the most advanced technologies, while China leads in global installations."

On the rare-earth mineral front, it's very clear that while the U.S. is still playing catch-up, China remains years ahead in both mining and refining.

But not all is lost: the U.S. is well ahead on semiconductors.

Rhodes asked Kennedy:

Who is currently "winning" the tech race?

Kennedy responded:

It's important to understand that there are four key arenas in this race: technological innovation, practical application of the technology, installation of the digital plumbing or infrastructure underpinning the technology, and technological self-sufficiency. The U.S. is currently leading in most advanced technologies, including semiconductors, AI frameworks, cloud infrastructure, and quantum computing, as well as in attracting global talent. However, China is ahead in areas such as quantum communications, hypersonics, and batteries.

China is also making rapid strides to catch up to and, in some cases, overtake the U.S. in technological application. For example, China deploys robotics in manufacturing on a scale twelve times greater than the U.S. when adjusted for differences in employee income. And while U.S. regulations often limit applications like drone deliveries to your door, China is proactively testing and deploying advanced physical AI and robotics like uncrewed taxis and vertical takeoff vehicles, accelerating their practical adoption.

China is also dominating on the global installations front. It has established a strong presence in the Global South, surpassing the U.S. and other Western nations in building essential digital networks there. And China has made significant strides toward achieving technological self-sufficiency through its dual circulation strategy aimed at reducing its reliance on the West while increasing Western dependence on China. Recent Chinese government measures, such as restricting domestic purchases of Western chips and offering incentives for using domestic alternatives, underscore this push for technological independence. At the same time, China's vast overproduction capacity in batteries and critical minerals has further increased Western dependence on China's supply chains. The U.S. has been ambivalent at best as it relates to this aspect of the tech race and remains reliant on China in many ways. So, on net, while the U.S. leads in the development of the technology itself, China is rapidly closing the gap — or even leading — in application, infrastructure installations, and tech self-sufficiency.

Reindustrialization in the U.S. should reverse this...

....MUCH MORE 

"Synthetic Dreams, Real Frictions: Reimagining Computer-Generated Data"

A companion piece to the post immediately below, "Introducing Unified Model Collapse".

From Bot Populi, November 3:

“…asked whether he was worried about regulatory probes into ChatGPT’s potential privacy violations. Altman brushed it off, saying he was “pretty confident that soon all data will be synthetic data.” — Financial Times (2023)

Sam Altman’s remark that soon “all data will be synthetic” is not just a provocation—it captures how debates on synthetic data often unfold through bold claims that sidestep the pressing issues. What, in fact, are synthetic data? Why do they matter? And how can we critically understand their world-making potential, especially when viewed from the vantage point of the Global South?

This post argues that synthetic data are not neutral technical tools but socially constructed representations whose world-making potential is deeply contested. They are both fictions and frictions: fictions because they are constructed representations of reality, embedded with the assumptions of generative models; frictions because they produce practical and epistemological tensions when deployed within infrastructures such as finance. These tensions are particularly acute in Global South contexts, where synthetic data narratives often obscure existing infrastructural and governance challenges. We draw on our academic study of responses to synthetic data in Latin America to illustrate distinctions between dreams and realities as well as to push for reimagining what synthetic data are and could be.

Fictions of Synthetic Data in Finance

The European Data Protection Supervisor defines synthetic data as “artificial data that [are] generated from original data and a model that is trained to reproduce the characteristics and structure of the original data.” While this framing is technically accurate and useful for legal purposes, it misses the extent to which synthetic data are narrative-laden fictions.

Synthetic data are promoted as risk-free, privacy-preserving, and innovation-friendly. In finance, synthetic data are said to enable experimentation without endangering consumer privacy, as well as to simulate risk scenarios without exposing real data.

Yet the term “synthetic” is not an innocent descriptor. Once again in finance, ‘synthetic’ evokes the ghosts of risky instruments at the heart of global market collapse. The 2007–08 financial crisis, precipitated in part by synthetic products like collateralized debt obligations, left lasting stigmas related to synthetic products. Financial professionals approach anything synthetic with caution. It is no coincidence that the Financial Times interview with Sam Altman cited above opted for the term “computer-generated data” rather than “synthetic”. Even though the meaning and uses of what counts as synthetic have evolved since the financial crisis, the term persistently evokes worries in this sector, whether in relation to the synthetic risk transfer market or to so-called synthetic stablecoins.

In financial contexts, narratives around synthetic data are therefore marked by a persistent tension: they are presented as innovative and promising yet remain overshadowed by enduring associations with risk and opacity. This specific context prompts us to posit that synthetic data must be understood not only as technical artifacts but also as infrastructural narratives....

....MUCH MORE

Introducing Unified Model Collapse

From the Always The Horizon substack, November:

Urban Bugmen and AI Model Collapse: A Unified Theory
A solution indicating that Mouse Utopia is an inherent property of intelligent systems. The problem is information fidelity loss when later generations are trained on regurgitated data. 

This is a longer article because I’m trying to flesh out a complex idea. Similar to my article on the nature of human sapience, this is well worth the read1.

Introducing Unified Model Collapse

I have been considering digital modeling and artificial neural networks. Model collapse is a serious limit to AI systems: a failure mode that occurs when AI is trained on AI-generated data. At this point, AI-generated content has infiltrated nearly every digital space (and many physical print spaces), extending even to scientific publications2. As a result, AI is beginning to recycle AI-generated data. This is causing problems in the AI development industry.

In reviewing model collapse, the symptoms bear a striking resemblance to certain non-digital cultural failings. Neural networks collapse, hallucinate, and become delusional when trained only on data produced by other neural networks of the same class. …and when you tell your retarded tech-bro boss that you’re “training a neural network to do data-entry,” upon hiring an intern, are you not technically telling the truth?

I put real hours into the thought and writing presented here. I respect your time by refusing to use AI to produce these works, and hope you’ll consider mine by purchasing a subscription for $6 a month. I am putting the material out for free because I hope that it’s valuable to the public discourse.

It may be that, by happenstance in AI development, we have stumbled upon an underlying natural law, a fundamental principle. When applied to trained neural network systems, information-fidelity loss and collapse may be universal, not specific to digital systems. This line of reasoning has serious sociological implications: decadence may be more than just a moral failing; it may be universally applicable.

Model collapse is not unique to digital systems. Rather, it’s the most straightforward form of a much more fundamental underlying principle that affects all systems that train on raw data sets and then output similar data sets. Training with regurgitated data leads to a loss in fidelity, and an inability to interact effectively with the real world.

 https://substackcdn.com/image/fetch/$s_!RfwJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5062d465-2f87-4705-b174-4d0a3e472d74_540x372.jpeg

The Nature of AI Model Collapse
The way neural networks function is that they examine real-world data and then output an average of that data. The AI output data resembles real-world data (image generation is an excellent example), but valuable minority data is lost. If model 1 trains on 60% black cats and 40% orange cats, then the output for “cat” is likely to yield closer to 75% black cats and 25% orange cats. If model 2 trains on the output of model 1, and model 3 trains on the output of model 2… then by the time you get to the 5th iteration, there are no more orange cats… and the cats themselves quickly become malformed Cronenberg monstrosities3.
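The black-cat/orange-cat drift can be sketched as a toy simulation (our sketch, not the author's): each “model” simply learns the empirical color split of its training set, then emits a finite synthetic dataset for the next model to train on. Sampling noise makes the minority share wander, and zero is an absorbing state: once the orange cats are gone, no later generation can reinvent them.

```python
import random

def next_generation(data, sample_size=100):
    """A stand-in 'model': learn the empirical share of orange cats,
    then generate a finite synthetic dataset for the next model."""
    p_orange = data.count("orange") / len(data)
    return ["orange" if random.random() < p_orange else "black"
            for _ in range(sample_size)]

random.seed(1)
data = ["black"] * 60 + ["orange"] * 40   # generation 0: the real 60/40 split
for gen in range(1, 11):
    data = next_generation(data)
    print(f"generation {gen}: {data.count('orange')}% orange")
# The minority share random-walks from generation to generation; with
# small sample sizes it eventually hits 0, after which it never recovers.
```

This is the same mechanism as genetic drift in small populations, which is why the long tail of the distribution is the first thing lost.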

https://substackcdn.com/image/fetch/$s_!_rsH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7db266af-8516-475c-9a4d-d731f0a8edfb_700x341.png

Nature published the original associated article in 2024, and follow-up studies have isolated similar issues. Model collapse appears to be a present danger in data sets saturated with AI-generated content4. Training on AI-generated data causes models to hallucinate, become delusional, and deviate from reality to the point where they’re no longer useful: i.e., Model Collapse.

The more “poisoned” the data is with artificial content, the more quickly an AI model collapses as minority data is forgotten or lost. The majority of data becomes corrupted, and long-tail statistical data distributions are either ignored or replaced with nonsense.

***video*** 

AI model collapse itself has been heavily examined, though definitions vary. “Breaking MAD: Generative AI could break the internet” is a decent overview of the topic5. The way AI systems intake and output data makes it easy for us to know exactly what they absorb, and how quickly it degrades when output. This makes them excellent test subjects. Hephaestus creates a machine that appears to think, but can it train other machines? What happens when these ideas are applied to Man, or other non-digital neural network models?

Agencies and companies will soon curate non-AI-generated databases. In order to preserve AI models, the data they train on will have to be real human-generated data rather than AI slop. Already, there are professional AI training companies that work to curate AI with real-world experts. The goal is to prevent AI from hallucinating nonsense when asked questions. Results are mixed, as one would expect with any trans-humanism techno-bullshit in the modern day.

Let’s talk about mice.

John B. Calhoun
A series of experiments was conducted between 1962 and 1972 by John B. Calhoun. Much has been written about these experiments (a tremendous amount), but we’ll review them for the uninitiated6 7. While these experiments have been criticized, they are an excellent reference for social and psychological function in isolated groups8.

***video***

In the Mouse Utopia experiment, Universe 25, John B. Calhoun placed eight mice in a habitat that should have comfortably housed around 6000 mice. The mice promptly reproduced, and the population grew9.

Following an adjustment period, the first pups were born 3½ months later, and the population doubled every 55 days afterward. Eventually this torrid growth slowed, but the population continued to climb [and peaked] during the 19th month.

That robust growth masked some serious problems, however. In the wild, infant mortality among mice is high, as most juveniles get eaten by predators or perish of disease or cold. In mouse utopia, juveniles rarely died. As a result, [there were far more youngsters than normal].

What John B. Calhoun anticipated, and what most other researchers at the time anticipated, was that the population would grow to the threshold (6000 mice), exceed it, and then either starve or descend into in-fighting. That was not the result of the Universe 25 experiment.

The mouse population peaked at 2200 mice after 19 months, just under 2 years. Then the population catastrophically collapsed due to infertility and a lack of mating. Nearly all of the mice died of either old age or internecine conflict, not conflict over food, water, or living space. The results have been cited by numerous social scientists, pseudo-social scientists, and social pseudo-scientists for 50 years (you know which you are).

The conclusion that many draw from the Mouse Utopia experiment is that higher-order animals have a sort of population limit. That is, when population density exceeds certain crucial thresholds, fertility begins to decline for unknown reasons. Some have proposed an evolutionary toggle that’s engaged when over-crowding becomes a risk. Some have proposed that the effects are due to a competition for status in an environment where status means nothing (mice do have their own hierarchies after all).

Why Universe 25 collapsed into in-fighting and lost its hierarchy is still up for debate; that it occurred is not. The resultant infertility of an otherwise very healthy population, senseless violence, and withdrawal from society in general have been dubbed the “behavioral sink.”

I am aware that many consider this experiment to be a one-off. It was repeated in other experiments by John Calhoun, but no one has replicated it since. I’d love to do some more of these experiments, but university ethics boards won’t approve them in this day and age. WE NEED REPLICATION.


The Demographic Implosion of Civilization

Humans have displayed behaviors similar to those of the Universe 25 population at high densities. An article that I wrote roughly a year ago demonstrates a significant correlation between the percent-urban share of a population and the fertility rate dropping below replacement levels. It appears that somewhere between 60% and 80% urban, depending on the tolerance of the population, fertility rates drop below replacement.

Under the auspices of Unified Model Collapse Theory, those numbers may need to be changed. Rather than a fertility collapse occurring when a population reaches 60% or 80% urbanization, the drop in fertility would occur after the culture and population have re-adapted to a majority-urban environment. How long it takes the fertility rate to decline would then be proportional to the cultural momentum. Rarely will it take longer than a full generation (30 years), and frequently it’ll be as short as a decade....

....MUCH MORE, he's just clearing his throat. 

Previously:

 
Teach Your Robot Comedic Timing
It will all slowly grind to a halt unless a solution to the training data problem is found. Bringing to mind a recursive, self-referential 2019 post....

Friday, December 5, 2025

"The Bronze Age of Globalization"

From Palladium Magazine, December 5:

Reflection on the ancient world often brings to mind the city-state of Athens, the white columns of the Parthenon, and its philosophers such as Socrates, Diogenes, or Zeno. This seems ancient enough to us, and might seem to be the beginning of what we think of as Western civilization. And yet, already in the fifth century BC, the classical Greeks themselves looked back to a different vanished world, a lost civilization of the Mediterranean further east. It was the world remembered in the Iliad and the Odyssey, of warriors like Achilles besieging Troy and seadogs like Odysseus wandering across strange lands. When the Athenians contemplated antiquity, they reflected on what we today call the Bronze Age: an era defined by a metal that does not occur in nature, lasting from 3300 BC to 1200 BC, a timespan as long as the time from us back to Jesus Christ and Julius Caesar.

Bronze is a fabrication, a man-made alloy. The formula was simple enough, a recipe known to smiths from the banks of the Nile to the hillforts of the Danube. You took copper, soft and plentiful and the color of the dying sun, and added tin in a proportion of about ten percent, a ratio arrived at not by calculation but by centuries of trial and error. The added tin made the difference between a metal that bends and a metal that cuts. This technological shift allowed for complex casting and a hard edge for tools and weapons.

Bronze was the strength of the age, the chisel that cut the Pharaoh’s limestone, the sword that severed the artery, the pin that held the cloak, a synonym for strength in poems written down on papyrus. The material divided the strong from the weak. Without the tin, you had only copper, which bent. You were soft and vulnerable and likely dead.

This necessity meant that for over two millennia, the great civilizations of the Mediterranean had a problem of geography that became a problem of survival. All of them, the Greek-speaking Mycenaeans in their Aegean citadels, the Egyptians along the Nile, and the Hittites on the Anatolian plateau, possessed copper in abundance. They had gold, timber, and grain. They had the favor of their gods and the discipline of their scribes. What they did not have was tin. They had built their political order, their armies, their economies, and their sophisticated diplomacy, on a metal that did not exist in their own soil.

This age of civilization was not a time of isolation. It was an era of globalization on a remarkable scale. A king in Mycenae could commission a sword whose blade was forged from copper mined in Cyprus and tin mined in Afghanistan, a weapon that was, in its molecular structure, a record of the known world. It was a time of far-reaching connectivity, a network of overland routes stretching across the Eurasian landmass and shipping across the Mediterranean Sea and even the Indian Ocean—but a network that would ultimately prove to be fragile.

The Riddle of the Tin Mountains

For a long time, archaeologists didn’t know where the tin came from. This was the “tin problem,” a phrase that suggests a logistical hiccup rather than a centuries-long mystery that already baffled the historians of the classical world like Herodotus and Pliny. The texts were not so much silent as coy. The scribes of Mari and Ugarit listed the metal, annaku in Akkadian, AN.NA in Sumerian. They listed the prices, the weights, and the middlemen. But they did not list the mines. The tin came from “the East,” or it came from “the Mountains,” or it came from a market that had bought it from another market. It was a commodity with no origin, a ghost metal that seemed to simply appear in the palace workshops of Thebes and Knossos by magic.

We now know that, in the early centuries, tin came from far to the east, from Central and South Asia, from the Zeravshan Valley in what is now Tajikistan and the Hindu Kush in Afghanistan. There, in the high, thin air, miners dug into the rock, crushed the ore, smelted it into ingots, and sent it west by a relay of donkeys. The distance was striking. From the mines of Afghanistan to the furnaces of Mesopotamia is a journey of thousands of kilometers, across the Zagros Mountains, across the Iranian plateau, through the bandit country of Elam. The records of the Assyrian merchants of the 19th century BC tell of their trade network stretching out from their colony at Kanesh, in central Anatolia.

The Kültepe texts, in the form of twenty thousand cuneiform tablets found at Kanesh, are concerned not with poetry or myth but with ledgers. They are the receipts of a family business. They record, with a dryness that borders on the hypnotic, the arrival of caravans from Aššur, the Assyrian home city. They record the movement of tin: about forty-eight tons of it over thirty years, carried on the backs of black donkeys....

....MUCH MORE 
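Back-of-the-envelope on those Kültepe figures: forty-eight tons over thirty years works out to about 1.6 tons of tin a year. A minimal sketch (the ~65 kg per-donkey tin load is my assumption, not a figure from the article):

```python
def annual_tin_kg(total_tons, years):
    # Average yearly tin flow through Kanesh, in kilograms (metric tons assumed).
    return total_tons * 1000 / years

def donkey_loads_per_year(total_tons, years, load_kg=65):
    # load_kg is an assumed per-donkey tin load, not a figure from the article.
    return annual_tin_kg(total_tons, years) / load_kg

print(annual_tin_kg(48, 30))                 # 1600.0 kg per year
print(round(donkey_loads_per_year(48, 30)))  # 25, i.e. roughly two dozen loads a year
```

A surprisingly small caravan count for a trade network that armed the Bronze Age Near East.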

 Related, in a Bronze Age sort of way:

News You Can Use: "How to Find Ancient Assyrian Cities Using Economics"

More SpaceX: "...in Talks for $800 Billion Valuation Ahead of Potential 2026 IPO"

 Following on "Report: SpaceX planning for IPO late next year".

From the Wall Street Journal, December 5: 

Company’s CFO told investors about the transaction and IPO plans in recent days, say people familiar with the matter 

SpaceX is kicking off a secondary share sale that would value the rocket maker at $800 billion, people familiar with the matter said, surpassing OpenAI to make it the most valuable U.S. private company.  

The company’s Chief Financial Officer Bret Johnsen told investors about the sale in recent days, and SpaceX executives have also said the company is weighing a potential initial public offering in 2026, some of the people said. 
 
The $800 billion valuation is double the $400 billion value it fetched in a recent secondary share sale.
SpaceX didn’t respond to a request for comment.
 
SpaceX investors have been waiting for an IPO for years as the company has grown into an essential service for the U.S. government, launching satellites and astronauts. It is also a provider of broadband internet around the world, used in remote areas ranging from the mountains of the U.S. to the frontiers of the war in Ukraine.
 
The IPO market picked up this summer after three years of doldrums. Shares of stablecoin issuer Circle Internet Group and software maker Figma both soared in their market debuts this year. The government shutdown slowed the pace of new offerings, but bankers and investors are optimistic 2026 will be a return to normal IPO levels.
 
While much of Elon Musk’s business empire is facing growing challenges, his rocket-and-satellite company remains stronger than ever, thanks in part to its dominant position launching rockets into space. Many investors say the company’s satellite business Starlink—which has more than eight million active customers—is also driving up its big valuation.  

SpaceX approached investors as part of a so-called tender offer, which usually takes place twice a year. In these transactions, employees and investors are able to sell their existing shares, allowing them to cash out from a company that is almost 25 years old but still hasn’t gone public....

....MUCH MORE