Sunday, July 25, 2021

"Scientists aim to build a detailed seafloor map by 2030 to reveal the secrets of the deep"

Norway is all up in this, as the kids used to say:

August 2018
"Norway to Map Deep Sea Mineral Deposits"
September 2019
"Norway's Petroleum Directorate Completes Second Seabed Minerals Expedition"
August 2020
"The rush to claim an undersea mountain range"

As is the World Economic Forum, host to this piece from The Conversation, July 20:

  • A single survey ship would take about 350 years to adequately map most of the seabed deeper than 200 metres.
  • However, ocean mapping is now central to two major international initiatives.
  • The Nippon Foundation-GEBCO Seabed 2030 project aims to see all of the ocean floor mapped by 2030 through voluntary data contributions.

Marine scientists often feel like they’re fumbling in the dark. The global ocean covers about 71 per cent of our planet and is central to life as it exists on Earth. But only about 20 per cent of the seafloor has been directly mapped so far.

Survey ships equipped with sonars called multibeam echo sounders are being used to measure the depth of the seafloor to better understand it. But the size of the job is enormous. A single survey ship would take about 350 years to adequately map most of the seabed deeper than 200 metres, and it would take another 620 years to map the shallower areas.

We must map the ocean faster. Today, marine surveying, or hydrography, is central to major international initiatives, including one that aims to see all of the ocean floor mapped in unprecedented detail by 2030.

A more detailed and accurate global model of water depth would reveal the seafloor’s shape, and the data can be used to understand seabed composition. This will increase the safety of marine navigation, inform security and defence operations, improve oceanographic and climate studies, support various sectors of the sustainable ocean economy and guide decisions on habitat conservation. But it could also come with risks and costs.

Unknown sea

In 2007, as an undergraduate co-op student working at the Geological Survey of Canada’s Pacific Geoscience Centre near Victoria, B.C., I helped map seabed habitats and hazards off the West Coast....

Are You a Conformist? Y Combinator's Paul Graham On The Four Quadrants of Conformism

From Mr. Graham's personal website:

July 2020

One of the most revealing ways to classify people is by the degree and aggressiveness of their conformism. Imagine a Cartesian coordinate system whose horizontal axis runs from conventional-minded on the left to independent-minded on the right, and whose vertical axis runs from passive at the bottom to aggressive at the top. The resulting four quadrants define four types of people. Starting in the upper left and going counter-clockwise: aggressively conventional-minded, passively conventional-minded, passively independent-minded, and aggressively independent-minded.

I think that you'll find all four types in most societies, and that which quadrant people fall into depends more on their own personality than the beliefs prevalent in their society. [1]

Young children offer some of the best evidence for both points. Anyone who's been to primary school has seen the four types, and the fact that school rules are so arbitrary is strong evidence that the quadrant people fall into depends more on them than the rules.

The kids in the upper left quadrant, the aggressively conventional-minded ones, are the tattletales. They believe not only that rules must be obeyed, but that those who disobey them must be punished.

The kids in the lower left quadrant, the passively conventional-minded, are the sheep. They're careful to obey the rules, but when other kids break them, their impulse is to worry that those kids will be punished, not to ensure that they will.

The kids in the lower right quadrant, the passively independent-minded, are the dreamy ones. They don't care much about rules and probably aren't 100% sure what the rules even are.

And the kids in the upper right quadrant, the aggressively independent-minded, are the naughty ones. When they see a rule, their first impulse is to question it. Merely being told what to do makes them inclined to do the opposite.

When measuring conformism, of course, you have to say with respect to what, and this changes as kids get older. For younger kids it's the rules set by adults. But as kids get older, the source of rules becomes their peers. So a pack of teenagers who all flout school rules in the same way are not independent-minded; rather the opposite.

In adulthood we can recognize the four types by their distinctive calls, much as you could recognize four species of birds. The call of the aggressively conventional-minded is "Crush <outgroup>!" (It's rather alarming to see an exclamation point after a variable, but that's the whole problem with the aggressively conventional-minded.) The call of the passively conventional-minded is "What will the neighbors think?" The call of the passively independent-minded is "To each his own." And the call of the aggressively independent-minded is "Eppur si muove."

The four types are not equally common. There are more passive people than aggressive ones, and far more conventional-minded people than independent-minded ones. So the passively conventional-minded are the largest group, and the aggressively independent-minded the smallest....
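Graham's scheme is just two binary axes crossed into four labels; a toy sketch (mine, not from the essay) makes the mapping explicit:

```python
def quadrant(independent: bool, aggressive: bool) -> str:
    """Map Graham's two axes (conventional/independent, passive/aggressive)
    to one of his four quadrant labels."""
    minded = "independent-minded" if independent else "conventional-minded"
    stance = "aggressively" if aggressive else "passively"
    return f"{stance} {minded}"

# The four combinations reproduce Graham's four types:
print(quadrant(independent=False, aggressive=True))   # the tattletales
print(quadrant(independent=False, aggressive=False))  # the sheep
print(quadrant(independent=True,  aggressive=False))  # the dreamy ones
print(quadrant(independent=True,  aggressive=True))   # the naughty ones
```

(The function name and boolean parameters are my own framing of the quadrants, not anything Graham wrote.)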


The Extent of the Worldwide Protests Has To Be Seen To Be Believed

The fact that corporate media is not covering this stuff tells you all you need to know about corporate media.

From Investment Hulk, an assortment of videos from this weekend. Important notes: because he is posting in near-real time, he has made some errors on locations, mixing up one city for another. In addition, the attributions are sparse to nonexistent, but taken as a whole, what is going on in the world right now may be unique in history:


Our Friend, The Mangrove

From Knowable Magazine, July 22:

Many mangrove restorations fail. Is there a better way?
These carbon-hoarding, coastline-protecting forests are sponges for greenhouse gases. Doing plantings right and involving local communities are key to saving them.

If any single event was a watershed for conservation of the world's mangrove forests, it was the Indian Ocean tsunami of 2004. The day after Christmas that year, a magnitude 9.1 earthquake thundered along a fault line on the ocean floor with a force that sent waves — some a hundred feet high — surging toward the densely populated coasts encircling the Indian Ocean. The disaster took more than 225,000 lives.

In the aftermath of the tsunami, some scientists reported that settlements behind swampy, shoreline mangrove forests often suffered less damage, and fewer casualties, than areas where the forests had been cleared for aquaculture or coastal developments. Although the mangroves provided only modest protection against such a devastating tsunami, the ordeal was nevertheless a powerful reminder that mangroves can be vital buffers against storm surges, floods and the normal hazards of coastal life.

Many took the lesson to heart: Mangroves had to return.

In several affected countries, nonprofits and government agencies swiftly began planting mangrove seedlings; in Sri Lanka, plantings were made at more than 20 sites around the island’s rim. But when University of Ruhuna botanist Sunanda Kodikara visited those sites between 2012 and 2014, he was shocked to find mangroves regrowing on only about 20 percent of the area planted. Elsewhere, just a few saplings persevered, or none at all. “I saw so many dead plants,” Kodikara recalls. Especially disheartening, he says, was the fact that some $13 million had been spent on the efforts. 

Such results are particularly frustrating to experts, as the need for protecting and restoring the world’s “blue forests” is greater than ever. Mangroves are mighty sponges for climate-warming gases — which makes large companies increasingly eager to pay for mangrove conservation to offset their own emissions. Mangroves are also havens for biodiversity, and living dikes that help shield against storms and waves that are growing ever stronger in a warming climate. And yet, they remain one of the world’s most threatened tropical ecosystems; we’ve lost over 35 percent of the world’s total in two recent decades, largely due to clearing of mangroves for aquaculture, agriculture, urban development and timber....


Perhaps it is time to get reacquainted with our friends, the halophytes

Rising Sea Levels? Get to Know the Halophyte Crops

The Reason We Do Science Is To Be Able To Predict

Sure, you can conceptualize science as a philosophy, a way to think.
And you can think of science as a discipline, the classic "scientific method":

High school science course

In the last twenty years or so the pesky issue of 'Falsifiability' has been subjected to a full-frontal attack, partly because the elimination of the concept is required to make something like psychology a science, and partly to address a major criticism of climate study: that much of what is called science can't be proven using the high-school template and thus, be it right or wrong in its conjectures, is closer to religion (a belief in things unseen) than to the scientific method.

This has given rise to what's called "Post-normal science"*, an approach favored by futurists, among others, who can't prove what they are saying is true but still want to be called 'scientists.' This framework has also been adopted by folks looking at the most complicated and arcane bits of complexity 'science.'

Be all that as it may, here is a snip from a 2013 post:

The Next Time Someone Tells You Economics is a Science Remind Them of Mendeleev

From the Royal Society of Chemistry:

What is a mark of a great scientist? Good scientists discover new information and make sense of it, linking it to other data. They may go further by giving an explanation of this linked data which, maybe not immediately, other scientists accept as a correct explanation. However the outstanding scientist goes further in predicting consequences of his ideas which can be tested. This boldness identifies the great scientist if the predictions are later found to be accurate. One such person was Russian chemist Dmitri Mendeleev.....
If an economist can't make falsifiable predictions then what he's doing is something between religion and mental masturbation. From the ridiculous to the sublime, back to the RSC:

....Correct predictions

The greatness of Mendeleev was that not only did he leave spaces for elements that were not yet discovered but he predicted properties of five of these elements and their compounds. How foolish he would have seemed if these predictions had been incorrect but fortunately for him three of these missing elements were discovered by others within 15 years (ie within his lifetime). The first of these Mendeleev had called eka-aluminium because it was the one after aluminium (eka = 1 in Sanskrit) and was identified in Paris (1875) by Paul Emile Lecoq de Boisbaudran who named it gallium after the Latin name for France.....MORE

And today's link, from Shtetl Optimized, July 24, another of the great predictors: 

Steven Weinberg (1933-2021): a personal view

Steven Weinberg was, perhaps, the last truly towering figure of 20th-century physics. In 1967, he wrote a 3-page paper saying in effect that as far as he could see, two of the four fundamental forces of the universe—namely, electromagnetism and the weak nuclear force—had actually been the same force until a tiny fraction of a second after the Big Bang, when a broken symmetry caused them to decouple. Strangely, he had developed the math underlying this idea for the strong nuclear force, and it didn’t work there, but it did seem to work for the weak force and electromagnetism. Steve noted that, if true, this would require the existence of a new particle that hadn’t yet been seen — the Z boson — and would also require the existence of the previously-proposed Higgs boson.

By 1979, enough of this picture (in particular, the Z boson) had been found that Steve shared the Nobel Prize in Physics with Sheldon Glashow—Steve’s former high-school classmate—as well as with Abdus Salam, both of whom had separately developed pieces of the same puzzle. As arguably the central architect of what we now call the Standard Model of elementary particles, Steve was in the ultra-rarefied class where, had he not won the Nobel Prize, it would’ve been a stain on the prize rather than on him.

Steve once recounted in my hearing that Richard Feynman initially heaped scorn on the electroweak proposal. Late one night, however, Steve was woken up by a phone call. It was Feynman. “I believe your theory now,” Feynman announced. “Why?” Steve asked. Feynman, being Feynman, gave some idiosyncratic reason that he’d worked out for himself.....

*Here's post-normal science showing up in medicine, March 2020:

Post-normal pandemics: Why COVID-19 requires a new approach to science

I'm not sure it helped.

Governance:"De Gaulle’s State of Tomorrow"

From Palladium Magazine:

Technocratic power has become the core backbone of industrial civilization. Incapable of managing complex modern forms of social organization, legacy political structures have outsourced their responsibilities to an army of credentialed bureaucrats. The abdication of the statesman marks the rise of the expert, a political player who wields a different kind of authority. Where the democratic leader derives his mandate from the supposed will of the people, the expert’s legitimacy comes from his supposedly superior knowledge of technical matters. A product of the meritocratic machine, the expert translates epistemic credibility into influence: He rules because he knows more.

Modern states created expert-led administrations to serve specific, subordinate functions. Certain questions required technical assessments, for which rulers had to employ competent advisers. But the Industrial Revolution expanded the need for this kind of structure far beyond its initial conception. The centralization of private power demanded the centralization of public power.  Faced with ever-expanding firms with clear objectives and coordinated hierarchies, the state found itself dealing with unprecedented legal, administrative, and industrial complexity at scale. Institution-builders restructured state power to deal with these challenges, one step at a time. Once a subordinate extension of conventional authority, technocracy metastasized into a large, independent authority of its own. The statesman-expert hierarchy has become a partnership upon which the functionality of the state depends. Just as the preservation of order in medieval kingdoms required symbiosis between priest and knight, so the preservation of state capacity in industrial societies requires symbiosis between expert and statesman.

Political and technocratic modes of power can no longer operate on their own. Without direction, bureaucracies degenerate into cold-hearted machines that reduce the human experience to a set of metrics devoid of higher purpose. Without structure and expertise, politicians simply cannot manage the social systems of modern life to carry out any particular vision. Both modes, therefore, need each other to thrive and survive. Working in unison, political leaders provide the end to the technocracy’s means and the moral purpose of the machine’s inner workings. Conversely, experts can enlighten the decisions of statesmen, temper their dynamism, and bring detachment to the chaotic whims of the moment. Ultimately, modern, industrial states are faced with this central challenge: to re-establish the primacy of the political over the technocratic without destroying the necessary symbiosis between the two.

Few regimes, if any, have found the right balance. In the West and beyond, political representatives delight in delegating their power to administrators. Two incentives explain this love of abdication: First, outsourcing represents a handy way to avoid policy-making, responsibility, and personal accountability. Second, outsourcing frees up time to fundraise, campaign, and build a media-savvy persona. On the other end of the trade, technocrats welcome their ever-expanding authority by consolidating their status and influence without worrying about elections. Over time, the unchanging administration captures the influence that elected offices once possessed; every time, power flows from the temporary and fragile to the permanent and secure.

This trend results in an unbalanced structure wherein political leaders turn themselves into actors in a televised pantomime while experts, internalizing the hubristic idea that technical skill and statesmanship are one and the same, rule behind the scenes. In theory, this model applies the division of labor to politics. Theatrical players, selected for their charisma, dominate surface-level institutions while technical players, selected for their brute-force competence, control the superstructure. In practice, however, this imbalanced order fuses the worst of both technocracy and mob rule. Unelected, directionless bureaucrats reign supreme while demagogues distract the masses. Devoid of discipline, coherence, and moral purpose, technocrats cannot even deal with strictly technical issues like pandemic management, even as they undermine and absorb the whole political structure.

Industrial societies thus face a dilemma. On the one hand, modern forms of social organization demand the symbiotic alignment of technocratic and political modes of authority. On the other hand, bureaucracies as we know them tend to accumulate power while the political center becomes the concierge of its own abdication....


"Concierge of its own abdication". I like that.

Plus, it brings to mind a line from one of the great actors, re-posted June 2, 2020:

"He worships at the temple of his own narcissism."
No, not Cuomo on de Blasio, although the mayor's irresponsibility is approaching that of Jacob Frey of Minneapolis.
Rather the comment is from Marlon Brando when Burt Reynolds was being considered for the role of Michael Corleone in The Godfather.
The line of thought was something like: Governor Cuomo > Fredo > Michael > Bill de Blasio (né Warren Wilhelm Jr.) > Brando.

The full quote on Burt Reynolds was:

"He is the epitome of something that makes me want to throw up. 
He is the epitome of everything that is disgusting about the thespian, 
he worships at the temple of his own narcissism."

If the reader has stuck with me this far I feel I owe you something. Here's our last use of the quote:
People, People "That Time The National Security Agency Invented Bitcoin"

Interesting story.

Governor Cuomo Threatens To Remove Mayor de Blasio, Send Him To Nursing Home

A couple more Brandoisms:
"I don't want to spread the peanut butter of my personality on the mouldy bread of the commercial press."

"If you're successful, acting is about as soft a job as anybody could ever wish for. But if you're unsuccessful, it's worse than having a skin disease."
Source for all: IMDb

Saturday, July 24, 2021

Coinbase: Here Come the Securities Law Class Action Attorneys (COIN)

One of the ads currently being run, this one via Yahoo Finance:

LAWSUIT FILED: Coinbase Global Sued for Securities Fraud; Investors Should Contact Block & Leviton for More Information

BOSTON, July 23, 2021 (GLOBE NEWSWIRE) -- Block & Leviton announces that a class action lawsuit has been filed against Coinbase Global Inc. (NASDAQ: COIN) and certain of its officers for securities fraud. Investors who purchased shares on or after April 14, 2021 and lost money are encouraged to contact the firm to learn more about how they might recover those losses. For more details, visit

What is this all about?

Coinbase “powers the cryptoeconomy” through its “trusted platform” used to send and receive Bitcoin and other digital assets built using blockchain technology. The platform is used throughout the world, with approximately 43 million retail users, 7,000 institutional users, and 115,000 ecosystem partners in over 100 countries.

On April 14, 2021, Coinbase filed its Registration Statement and related prospectus with the SEC in connection with its direct offering of over 114 million shares of class A common stock. In its Registration Statement, the Company represented that its operations would continue to be financed with operating cash flow and the sale of convertible preferred stock – i.e. it did not need to raise capital through the direct offering to fund operations.

Little more than a month later, Coinbase conceded the need to raise capital and revealed performance issues that prevented users’ ability to trade cryptocurrencies. On May 17, 2021, the Company announced plans to raise about $1.25 billion via a convertible bond sale. And on May 19, 2021, the Company revealed technical problems, including delays “due to network congestion” affecting those who want to get their money out.

On this news, Coinbase’s share price fell $23.44 per share, or nearly 10%, closing at $224.80 per share on May 19, 2021. Shares today trade as low as $208.00 per share, far below the April 14, 2021 opening price of $381.00....
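The reported figures hang together; a quick sanity check using the numbers quoted above (the implied prior close is my back-of-the-envelope computation, not a figure from the release):

```python
# Figures quoted in the Block & Leviton release:
close_may19 = 224.80   # closing price on May 19, 2021
drop = 23.44           # reported one-day decline, in dollars

# Implied close on the prior trading day, and the percentage decline:
prior_close = close_may19 + drop
pct_drop = drop / prior_close * 100

print(f"Implied prior close: ${prior_close:.2f} ({pct_drop:.1f}% one-day decline)")
# prints: Implied prior close: $248.24 (9.4% one-day decline)
```

So the "nearly 10%" characterization checks out.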


Thorium: "China is gearing up to activate the world's first 'clean' commercial nuclear reactor"

This is a very big deal.

The promise of thorium has been a siren song for the last couple of decades. Back in 2008 I dropped this comment at the WSJ's Environmental Capital blog:

8:37 am July 22, 2008 
Climateer wrote: 

C’mon guys, get with it!
Global warming is so last year.
Everybody, from Al Gore to the blogs you link to are reinventing themselves and talking energy.
Energy production
Energy cost.
Energy security.
It’s all about framing and re-framing.
Low impact man’s time has come and gone. The eco-soirée has moved on to erudite discussions of thorium between nibbles at the canapés.
By this winter the only references to carbon among the salon crowd might be Carbonic acid (H2CO3).
You watch. 

Okay, so I was early.

From LiveScience, July 23:

Chinese government scientists have unveiled plans for a first-of-its-kind, experimental nuclear reactor that does not need water for cooling.

The molten-salt nuclear reactor, which runs on liquid thorium rather than uranium, is expected to be safer than traditional reactors because the molten salt cools and solidifies quickly when exposed to the air, insulating the thorium, so that any potential leak would spill much less radiation into the surrounding environment compared with leaks from traditional reactors. 

The prototype reactor is expected to be completed next month, with the first tests beginning as early as September. This will pave the way for the building of the first commercial reactor, slated for construction by 2030.

As this type of reactor doesn't require water, it will be able to operate in desert regions. The location of the first commercial reactor will be in the desert city of Wuwei, and the Chinese government has plans to build more across the sparsely populated deserts and plains of western China, as well as up to 30 in countries involved in China's "Belt and Road" initiative — a global investment program that will see China invest in the infrastructure of 70 countries.

Chinese government officials view nuclear energy exports to be a key part of the Belt and Road program.

"'Going out' with nuclear power has already become a state strategy, and nuclear exports will help optimize our export trade and free up domestic high-end manufacturing capacity," Wang Shoujun, a standing committee member of the China People's Political Consultative Conference (CPPCC) — a political advisory body which acts as a link between the Chinese government and business interests, said in a report on the CPPCC's website.

Thorium — a silvery, radioactive metal named after the Norse god of thunder — is much cheaper and more abundant than uranium, and cannot easily be used to create nuclear weapons. The new reactor is a part of Chinese President Xi Jinping's drive to make China carbon-neutral by 2060, according to the team at the Shanghai Institute of Applied Physics that developed the prototype. China currently contributes 27% towards total global carbon emissions, the largest amount from any individual country and more than the entire developed world combined, according to a 2019 report by the US-based Rhodium Group. 

"Small-scale reactors have significant advantages in terms of efficiency, flexibility and economy," Yan Rui, a physics professor at the Shanghai Institute of Applied Physics, and colleagues wrote in a paper about the project published July 15 in the journal Nuclear Techniques. "They can play a key role in the future transition to clean energy. It is expected that small-scale reactors will be widely deployed in the next few years."....


Not just big, potentially huge.

And India should have done it first.

Here's a poem about thorium by Roald Hoffmann:

In the beach sands of Kerala,
abraded from the gneiss,
in the stream sands of North Carolina
one finds monazite, the solitary mineral.
In its crystalline beginning there was order,
there was a lattice. 
And the atoms - cerium, lanthanum, 
thorium, yttrium, phosphate - danced 
round their predestined sites, 
tethered by the massless springs of electrostatics 
and by their neighbors' bulk. 
They vibrated, and sang
in quantized harmony,
to absent listeners, to me.
But the enemy is within. 
The radioactive thorium's 
nervous nuclei explode 
in the random thrum 
of a hammer of no Norse god. 
The invisible searchlights 
of hell, gamma rays, 
flash down the lattice. 
Alpha particles, crazed nuclear 
debris, are thrust on megavolt 
missions of chance destruction. 
The remnant atom, transmuted, recoils,
freeing itself from its lattice point,
cannonballs awry through 
a crowded dance floor. 
There are no exits to run to. 
In chain collisions of disruption 
neighbors are knocked from their sites. 
The crystal swells from once limpid 
long-range, short-range order 
to yellow-brown amorphousness,
the metamict state.

The author used to give readings at the Cornelia Street Cafe in New York City.

Sometimes, if badgered, he would talk about some of the tchotchkes he's gathered and some awards he's collected:

* Nobel Prize
* Priestley Medal
* Arthur C. Cope Award in Organic Chemistry
* Inorganic Chemistry Award (American Chemical Society)
* Pimentel Award in Chemical Education
* Award in Pure Chemistry
* Monsanto Award
* National Medal of Science
* National Academy of Sciences
* American Academy of Arts and Sciences Fellow
* American Philosophical Society Fellow
* Foreign Member, Royal Society

A lot of people have been thinking about thorium for a while now.

Simplify Your Life, Ron Perelman Style

From the New York Post, April 27:

Billionaire Ron Perelman lists $60M NYC home 

Revlon billionaire Ronald Perelman has publicly listed his Lenox Hill townhouse at 36 E. 63rd St. for $60 million.

Last fall, Perelman unofficially shopped the property with a few “quiet” showings for around $65 million, along with a smaller, connected townhouse for a total of around $75 million.

That was part of an extraordinary sell-off that included art, one of his Gulfstream jets and a yacht — part of a strategy to “simplify” his life, Perelman said at the time, all while his business had to react to the pandemic-stricken economy....


Also at the Post:
Inside Louis Bacon’s mysterious $500M private island in the Hamptons

And one of our previous links on Mr. Bacon:
The Billionaire Battle in the Bahamas

"The Polyopticon: Data Gathering and State Technopower"

From the Georgetown Security Studies Review, Georgetown University, August 28, 2019:


The world is on the cusp of a fourth industrial revolution.[i] This revolution is driven by the increased connectivity, reactivity, and converging nature of modern technologies. It is enabled by constant data-streams analyzed through ultramodern techniques, which allow vendors to immediately integrate analysis into product and service development with minimal delays and to maximum effect. This process results in iterative development and design, updating every successive product or service to better meet evolving market demands. The relationships among producers, consumers, the market, and ‘final’ products will fundamentally change, creating mutually-influencing networks that will benefit integrated commercial enterprise to a greater degree than ever before. 5G’s massively-broadened bandwidth, reliability in data collection, and resolution of real-time data will enable faster, more reactive innovation and design.

Governments, too, might adapt and synergize the coexistence of decentralized connected data streams and analytical technologies to increase their own reactivity. The fourth industrial revolution is a model for the future of massive and individualized surveillance, analysis, and reactivity. The market has already normalized transponders, cameras, and microphones in billions of pockets and on billions of desks across the world.[ii] The major problem facing authorities interested in real-time surveillance is no longer the propagation of hardware but the centralization and analysis of surveillance data.[iii] Big data requires big analysis, and this is where the AI-revolution can support authorities’ efforts in ways that human-only analysis never could. This process—the marriage of massively-disseminated surveillance hardware streaming through modernizing super-high bandwidth infrastructure to AI-assisted analytics for the purpose of creating actionable knowledge, and all the policies required to make that happen—is the Polyopticon.

How Statistics Leverage State Power and the Tension Between Discipline and Citizenship

The Polyopticon is a play on the panopticon—a cylindrical prison in which a beehive of cells opens inward to a central guard tower, creating an architecture in which no single prisoner is ever assured of their privacy regardless of how lax surveillance over them might be.[iv] For many reasons, the panopticon has an incredibly limited application for governance over large populations, not least because it requires the literal reconstruction of physical reality. The panopticon is expensive, unscalable, and therefore impractical for governance. It underscores the classic constraints to states’ intelligence activities, namely: hardware, human capital, and scalability. But the Polyopticon, as a technopolitical tool, leverages various existing technologies to provide states with surveillance data streams and the tools required to convert them into actionable statistics that could not have existed even a decade ago.

Innovation in technology will not change the core elements of sovereignty: exercising a monopoly of violence among a population within a territory.[v] Much as hardware innovations alone have never been enough to create more actionable intelligence, innovations in violence have never been sufficient to secure sovereignty. States are always pressured to use the violence—and all the resources of their population and territory—available to them strategically, requiring them to match violence with knowledge of their problems and possible solutions. In their quest to build better strategies and leverage the resources available to them, states invented one crucial science: statistics.[vi] As Michel Foucault argued, the majority of statistics were aimed at leveraging biopower—the biologically-bound resources of the state such as human bodies, crops, and the energy extractable from various fuels available within a state—to expand the domestic and geopolitical reach of ‘state forces’ over more people, territory, and time.[vii] Whether they create positive externalities or not, biopolitics (policies and politics introduced to leverage biopower) like the census, public health programs, and personalized identification always serve to create or multiply the capabilities and statistics states see as key to increasing the projection of their power domestically and abroad for both high- and low-political aims.[viii]

Statistics, as Foucault explains, were married with the scientific process to create institutions and systems that simultaneously amplified the capabilities of the state and controlled human populations by subjecting them to discipline.[ix] Discipline was disassociated from biopolitics as populations wrested control—through what Foucault terms ‘counter-conduct’—from states by demanding rights, expressing freedoms, and normalizing citizenship.[x] State-discipline no longer forced resource productivity (human labor) as citizenship and free enterprise incentivized production and the creation of wealth, which themselves multiplied state-power. Discipline, instead, was directed at 1) leveraging internal resources in times of crisis (i.e., the draft, rationing, and nationalization) and 2) deviant behavior (i.e., criminality, revolutionary politics, non-conformity, treason).[xi] This tension, between the disciplining tendencies of the state and the liberalizing demands of citizenship, exists to this day. The Polyopticon requires coordination—the art of state-strategy—and abets state-directed discipline by providing it with the actionable statistics it needs to surveil, analyze behavior, and identify deviant individuals, movements, and populations.

The Polyopticon

The individual pieces of the Polyopticon were not conspiratorially created, disseminated, or activated. The devices that can stream data in real-time, the ultramodern networks that connect them, and the analytical techniques that synthesize knowledge from data were not devised for the purpose of surveilling, analyzing, or controlling populations and individuals. Smartphones, fiberoptic cables, web servers, big-data crunching, and artificial intelligence were all created to solve various commercial, private, and public problems. The fourth industrial revolution depends on the synergy of these technologies to create better services and products, faster. The Polyopticon will likewise leverage the same technologies to more rapidly create better actionable statistics.

This sounds innocuous enough, but to put it sarcastically: What have states ever done with more knowledge? Let it lie and gather dust? No. States utilize knowledge. As Foucault points out, knowledge—especially knowledge about a subject that the subject itself does not know—is an especially potent weapon against enemies both foreign and domestic.[xii]

A description—detailed though not technical—of how the Polyopticon turns surveilled data into actionable statistics clarifies the above argument. It is especially useful to add the additional frame of how the Polyopticon solves the three core problems of state-intelligence gathering: hardware, human capital, and scalability.

First, in search of actionable knowledge, the state must surveil. Here the Polyopticon taps into personalized devices and technologies, voluntarily employed across a broad spectrum of private, corporate, and public sectors. Smartphones with microphones, cameras, and transponders; electronic communications; web and cloud-based information storage; social and commercial media—all of these are sources of surveilable data once connected. States leverage mundane technologies that are nonetheless integral to modern life. Liberal and authoritarian governments alike might access these data-sources. In this way, the Polyopticon records citizens’ behavior within reality rather than reconstructing reality to influence behavior....


A Review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins

A repost from 2017.

Mr. Kasparov was a pretty good chess player:

"From 1986 until his retirement in 2005, Kasparov was ranked world No. 1 for 225 out of 228 months. His peak rating of 2851, achieved in 1999, was the highest recorded until being surpassed by Magnus Carlsen in 2013." -Wikipedia
But he should probably also be known for his Twitter feed, last seen in our post:

The Challenges And Triumphs Of Expanding A Family-Owned Winery
Tell me about it.*
*Just kidding, despite making installment payments that could have bought France, I don't own a vineyard.

I wanted a chance to reprise one of the all-time greatest retweet comments: former chess World Champion Garry Kasparov's world-weary response to The Onion.

And from the Los Angeles Review of Books, June 29, 2017 the headline story:

A Brutal Intelligence: AI, Chess, and the Human Mind

CHESS IS THE GAME not just of kings but of geniuses. For hundreds of years, it has served as standard and symbol for the pinnacles of human intelligence. Staring at the pieces, lost to the world, the chess master seems a figure of pure thought: brain without body. It’s hardly a surprise, then, that when computer scientists began to contemplate the creation of an artificial intelligence in the middle years of the last century, they adopted the chessboard as their proving ground. To build a machine able to beat a skilled human player would be to fabricate a mind. It was a compelling theory, and to this day it shapes public perceptions of artificial intelligence. But, as the former world chess champion Garry Kasparov argues in his illuminating new memoir Deep Thinking, the theory was flawed from the start. It reflected a series of misperceptions — about chess, about computers, and about the mind.

At the dawn of the computer age, in 1950, the influential Bell Labs engineer Claude Shannon published a paper in Philosophical Magazine called “Programming a Computer for Playing Chess.” The creation of a “tolerably good” computerized chess player, he argued, was not only possible but would also have metaphysical consequences. It would force the human race “either to admit the possibility of a mechanized thinking or to further restrict [its] concept of ‘thinking.’” He went on to offer an insight that would prove essential both to the development of chess software and to the pursuit of artificial intelligence in general. A chess program, he wrote, would need to incorporate a search function able to identify possible moves and rank them according to how they influenced the course of the game. He laid out two very different approaches to programming the function. “Type A” would rely on brute force, calculating the relative value of all possible moves as far ahead in the game as the speed of the computer allowed. “Type B” would use intelligence rather than raw power, imbuing the computer with an understanding of the game that would allow it to focus on a small number of attractive moves while ignoring the rest. In essence, a Type B computer would demonstrate the intuition of an experienced human player.

When Shannon wrote his paper, he and everyone else assumed that the Type A method was a dead end. It seemed obvious that, under the time restrictions of a competitive chess game, a computer would never be fast enough to extend its analysis more than a few turns ahead. As Kasparov points out, there are “over 300 billion possible ways to play just the first four moves in a game of chess, and even if 95 percent of these variations are terrible, a Type A program would still have to check them all.” In 1950, and for many years afterward, no one could imagine a computer able to execute a successful brute-force strategy against a good player. “Unfortunately,” Shannon concluded, “a machine operating according to the Type A strategy would be both slow and a weak player.”  
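Shannon's Type A strategy is simple enough to sketch in a few lines. Chess's game tree is far too large for a toy script, so the sketch below (an illustration, not any real engine) exhaustively searches a take-1-2-or-3-stones game instead; the structure, enumerate every legal move, recurse to the end of the game, and back the values up the tree, is exactly the brute-force approach Shannon described.

```python
# A minimal sketch of Shannon's "Type A" strategy: exhaustive minimax search.
# The game: players alternately take 1, 2, or 3 stones; whoever takes the
# last stone wins.

def minimax(stones, maximizing):
    """Return +1 if the maximizing side can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    values = [minimax(stones - m, not maximizing)
              for m in (1, 2, 3) if m <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Type A: check every move, keep the one with the best backed-up value."""
    return max((m for m in (1, 2, 3) if m <= stones),
               key=lambda m: minimax(stones - m, False))
```

From 5 stones the winning move is to take 1, leaving the opponent the losing count of 4; a Type A searcher finds this only by checking all the alternatives first.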

Type B, the intelligence strategy, seemed far more feasible, not least because it fit the scientific zeitgeist. Fascination with digital computers intensified during the 1950s, and the so-called “thinking machines” began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a digital computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you’d be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do. In a 1958 paper, the prominent AI researchers Herbert Simon and Allen Newell declared that computers are “machines that think” and, in the near future, “the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” With the right programming, a computer would turn sapient.

¤

It took only a few decades after Shannon wrote his paper for engineers to build a computer that could play chess brilliantly. Its most famous victim: Garry Kasparov.

One of the greatest and most intimidating players in the history of the game, Kasparov was defeated in a six-game bout by the IBM supercomputer Deep Blue in 1997. Even though it was the first time a machine had beaten a world champion in a formal match, to computer scientists and chess masters alike the outcome wasn’t much of a surprise. Chess-playing computers had been making strong and steady gains for years, advancing inexorably up the ranks of the best human players. Kasparov just happened to be in the right place at the wrong time.

But the story of the computer’s victory comes with a twist. Shannon and his contemporaries, it turns out, had been wrong. It was the Type B approach — the intelligence strategy — that ended up being the dead end. Despite their early optimism, AI researchers utterly failed in getting computers to think as people do. Deep Blue beat Kasparov not by matching his insight and intuition but by overwhelming him with blind calculation. Thanks to years of exponential gains in processing speed, combined with steady improvements in the efficiency of search algorithms, the computer was able to comb through enough possible moves in a short enough time to outduel the champion. Brute force triumphed. “It turned out that making a great chess-playing computer was not the same as making a thinking machine on par with the human mind,” Kasparov reflects. “Deep Blue was intelligent the way your programmable alarm clock is intelligent.”
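The "improvements in the efficiency of search algorithms" mentioned above can be illustrated with alpha-beta pruning, which returns exactly the full minimax value while skipping branches that cannot change the answer. A toy sketch on a hypothetical take-1-2-or-3-stones game (last stone wins), counting visited nodes to show the saving:

```python
# Plain brute-force minimax, counting every node visited.
def brute(stones, maximizing, count):
    count[0] += 1
    if stones == 0:
        return -1 if maximizing else 1
    values = [brute(stones - m, not maximizing, count)
              for m in (1, 2, 3) if m <= stones]
    return max(values) if maximizing else min(values)

# Alpha-beta: same answer, fewer nodes. alpha/beta bound what each side
# can already guarantee; once they cross, the remaining siblings are skipped.
def alphabeta(stones, maximizing, alpha, beta, count):
    count[0] += 1
    if stones == 0:
        return -1 if maximizing else 1
    value = -2 if maximizing else 2
    for m in (1, 2, 3):
        if m > stones:
            break
        child = alphabeta(stones - m, not maximizing, alpha, beta, count)
        if maximizing:
            value = max(value, child)
            alpha = max(alpha, value)
        else:
            value = min(value, child)
            beta = min(beta, value)
        if alpha >= beta:
            break  # prune: the opponent will never allow this line
    return value
```

For 12 starting stones both searches agree on the value, but alpha-beta visits a fraction of the nodes; at chess depths that gap, compounded move after move, is what let raw calculation catch up with the champions.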

The history of computer chess is the history of artificial intelligence....


HT: Rough Type, whom we will be visiting again tomorrow.

Previously on Mr. Kasparov:
How human traders will beat the machines

And on Claude Shannon the Bell Labs polymath genius:
"Claude Shannon, the Las Vegas Shark"
"How Information Got Re-Invented"
The Bit Bomb: The True Nature of Information
"How did Ed Thorp Win in Blackjack and the Stock Market?"
How Big Data and Poker Playing Bots Are Taking the Luck Out of Gambling

There was also a shout out to Shannon from the quants at Ruffer in July 17's Ruffer Review: "Navigating information" 

Friday, July 23, 2021

The FT's Izabella Kaminska Looks at Decentralized Finance

 From FT Alphaville, July 21:

DeFi paradoxes
Why using crypto smart-contracts to remove financial intermediaries comes with its own trade-offs.
You might have heard the term “DeFi” making the rounds in the cryptosphere. Like us, you may even have tried to understand what it was all about but given up due to the complexity, opacity and jargon surrounding its key aspects. 

Others have tried to shed light on its mysterious operations. The FT’s Miles Kruppa and Hannah Murphy, for example, noted as far back as the heady pre-pandemic days of December 2019 that DeFi operates as a broad umbrella term for blockchain projects that aim to eject human involvement in financial services by using smart contracts. To accomplish this, DeFi depends on the creation of “liquidity pools” and overcollateralised repo-style arrangements.
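The "overcollateralised repo-style arrangements" mentioned above reduce to a deterministic rule a smart contract can enforce. A hypothetical sketch (illustrative parameters and function names, not any real protocol's terms): a borrower locks crypto collateral worth more than the loan, and anyone may liquidate the position if its collateral ratio falls below a threshold.

```python
# Hypothetical overcollateralised-lending rules of the kind DeFi protocols
# encode in smart contracts. The ratios below are assumptions for
# illustration, not any real protocol's parameters.

MIN_RATIO = 1.5          # collateral must be worth 150% of the loan to borrow
LIQUIDATION_RATIO = 1.2  # below 120%, the position may be liquidated

def max_borrow(collateral_units, price):
    """Largest loan the pool will extend against the collateral."""
    return collateral_units * price / MIN_RATIO

def is_liquidatable(collateral_units, price, debt):
    """The rule runs mechanically on the current price: no banker discretion,
    which is precisely the human involvement DeFi aims to eject."""
    return collateral_units * price / debt < LIQUIDATION_RATIO
```

Ten units of collateral at $300 support a $2,000 loan; if the price slides to $230 the ratio drops to 1.15 and the position becomes liquidatable, which is where the "trade-offs" in the headline start to bite.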

Even so, DeFi, which stands for “decentralised finance”, remains a murky if increasingly hyped up corner of the crypto world. 
To be blunt, FT Alphaville’s initial knee-jerk reaction — based on what we should confess was not that much research at all — was that the very idea of decentralised finance being something new or exciting was hugely naive.

The story of financial markets, after all, is one of continuous decentralisation, innovation and engineering to synthesise seemingly risk-reduced returns and free lunches, which usually turn out to be anything but. Crypto is just another part of that story.

From repo markets to eurodollars and commodities, the story always starts with decentralised frameworks arising quite organically, being heralded as amazing wealth generators, then turning risky, then blowing up, then having regulatory forces battle to contain them — usually by centralising their processes and slowing down their growth by forcing expensive checks and balances on the systems.

Were the alchemic solutions being offered by DeFi really immune to that age-old pattern?

We somehow doubted it....

....MUCH MORE, first rate.

And here with a contrary opinion, Bloomberg journo, aggrieved counterparty and former Alphavillain, Tracy Alloway:

I mean contrary to received guac-squawk wisdom, not contrary to Izabella...

What are underwater farms? And how do they work?

From the World Economic Forum, July 15:

  • Industrialized farming was once seen as a solution for a rapidly-growing global population, but it is taking its toll on the environment.
  • The UN estimates that the world could easily be fed if just 2% of oceans were used for sustainable farming.
  • Underwater agriculture has the potential to eliminate the need for pesticides, reduce water use and cut carbon emissions.

Could underwater strawberries and deep-sea herbs provide a more sustainable alternative to land-based farming?

Industrial agriculture is struggling to meet the needs of a rapidly growing population. And decades of intensive farming have taken a heavy toll on the environment.

An over-reliance on pesticides, displacement of wildlife, the wasting of gallons of water and the generation of harmful emissions are damaging our world.

So, scientists and entrepreneurs are hoping underwater farming could address these issues by growing crops under the ocean, eliminating the need for pesticides, while also reducing water use and carbon emissions.

Indeed, the UN estimates the world could easily be fed if we used just 2% of the oceans for sustainable farming.


Boosting sustainability with underwater crops

Aquaculture has long been used to grow and harvest foodstuffs such as seafood, but several companies are now looking at ways of farming traditional crops such as strawberries and herbs under the sea.

Nemo’s Garden is an underwater farming project consisting of six air-filled plastic pods, or biospheres, anchored at the bottom of the sea off the coast of Noli, Italy.

[Image: the plastic pods used in Nemo’s Garden, suspended between 4.5 and 11 metres below the surface. Image: Nemo’s Garden]

The plastic pods are suspended at different depths – from 4.5 to 11 metres – below the water’s surface, and each is equipped with sensors to measure carbon dioxide and oxygen levels, humidity, air temperature and illumination.

Created by diving company Ocean Reef Group in 2012, the project has already yielded everything from tomatoes to courgettes, beans, mushrooms, lettuce, orchids and aloe vera plants using hydroponic techniques.

This means that plants are grown, without soil, in a nutrient-rich solution to deliver water and minerals to their roots, in a controlled environment....

"The Day the Good Internet Died"

 From The Ringer, July 21:

For a small slice of time, being online was a thrilling mix of discovery, collaboration, creativity, and chaotic potential. Then Google Reader disappeared.

....From my desk in a high-rise office building at the southern end of Manhattan, I click and scroll and scroll and click. Sometimes idly, with one eye on the clock; sometimes desperately, in lieu of the work I know I ought to be doing. I skim the sweaty Getty images on the celebrity fashion blog Go Fug Yourself and peruse the latest tidy musings from Felix Salmon, an arch Reuters blogger who covers high and low finance alike. I read everything published on The Awl (tagline: “Be less stupid”) and most things published on Consumerist. (Emboldened by that site’s recurring pieces of advice, I decide to push back one day, out there in the real world, against one of those little “$10 card minimum” signs at a grocer in SoHo. It does not go well and I will never attempt it again.) It’s the year 2011, and I can’t get enough of the internet.

I stare at The Big Picture’s gripping photos of deadly catastrophes around the globe. I parse cryptic, confusingly formatted bursts of internecine drama between tiny yet mighty Tumblr accounts helmed by people whose various blog iterations I have parasocially followed since I was in college. I read posts about ConLaw and SantaCon. I mostly keep a poker face, but when I do slip up and accidentally snicker or whisper “huh!” out loud, I play it off as though I’m reacting to something Jim Cramer or Maria Bartiromo just said on CNBC. With a critical eye I scan my own sub-rosa Tumblr as if it belongs to another, trying to imagine how my squirrely curio of online fascinations—Jason Kottke reblogs; slideshows of Martha Stewart getting stitches; links to my own unhinged and unpaid rants about concepts like “preemptive irritation”—must come across in the eyes of another person.

All of this is facilitated by Google Reader, a slim workhorse of a site launched in 2005 that uses pre-existing RSS feed protocols to turn the chaos of the web into a pleasant lazy river of content. Google Reader is not the world’s first RSS newsreader, nor will it be the last, and over the years plenty of internet power-users will sniff that it’s not even the best. But it’s the one that caught on. And using it requires little effort to yield satisfying, orderly rewards, kind of like tossing spare coins and crumpled bills into an old ceramic piggy bank and finding out, in return, that you have been granted access to a sleek, organized, and free Swiss bank account.
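Google Reader's lazy river rested on the openness of RSS: any site exposes a simple XML feed, and a reader just parses and merges them newest-first. A minimal sketch with Python's standard library (a toy feed is inlined so the script is self-contained; a real reader would fetch the XML over HTTP):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# A tiny, made-up RSS 2.0 feed; titles, links, and dates are illustrative.
FEED = """<rss version="2.0"><channel><title>Example Blog</title>
<item><title>Older post</title><link>https://example.com/1</link>
<pubDate>Mon, 18 Jul 2011 09:00:00 GMT</pubDate></item>
<item><title>Newer post</title><link>https://example.com/2</link>
<pubDate>Tue, 19 Jul 2011 09:00:00 GMT</pubDate></item>
</channel></rss>"""

def read_feed(xml_text):
    """Parse RSS items and return them newest first: the reader's 'river'."""
    root = ET.fromstring(xml_text)
    items = [{"title": i.findtext("title"),
              "link": i.findtext("link"),
              "date": parsedate_to_datetime(i.findtext("pubDate"))}
             for i in root.iter("item")]
    return sorted(items, key=lambda it: it["date"], reverse=True)
```

Run `read_feed` over every subscribed feed, concatenate, and sort again, and you have the chronological firehose Reader tamed; "sort by magic" was whatever ranking Google layered on top of exactly this structure.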

Google Reader never judges, nor does it showboat. It swans under the radar, with a URL that isn’t blocked by my office computer system the way louder social networks like Twitter and Facebook are. It has a look that is intentionally left blank. There are ads here and there, but way fewer. Even its black box functionality, introduced in 2009, is labeled with wry charm: A user can sift through feeds in chronological order or can choose to, in Google’s words, “sort by magic.”

I decide to see what happens if I sort my life by magic, too, and I leave that career for a totally new one. None of my loved ones are surprised, even if they can’t completely relate: the majority of them use the internet almost exclusively for work emails, online shopping, fantasy football, and/or keeping tabs on exes. They say well-meaning things to me like, “Ah, you and your blogs!” In my new job, Google Reader becomes less of a diversion and more of a vital resource. I add a sub-category just for hockey blogs; I use the search function almost daily to resurface things I know I’ve read somewhere and want to quote in my work; I comment on the links shared by my colleagues. I can’t possibly know it yet, but life online is about as good as it will ever be.

It’s the year 2021, and I can’t get enough of the internet. This is an admission of defeat. It’s an acknowledgment of my worst tic, the one where I lie in bed until 3 or 4 a.m. and pull-to-refresh, pull-to-refresh, pull-to-refresh, until Facebook or Twitter or Instagram or my email—or, in my lowest moments, Nextdoor—brings me something, anything new....


Meanwhile, in London...

At least the yobs weren't doing the "Singin' in the Rain" Stompy McStompface

"Inflation Pushes Consumer-Goods Giant Unilever to Accelerate Price Increases"

Ummm.... companies raising prices in anticipation of rising prices is something we haven't seen in a while.

If you start hearing learned discussions of LIFO vs. FIFO accounting treatment, grab yourself some of that quirky little instrument, the Series I bond, and get familiar with ag futures and depreciation schedules for asset-heavy enterprises.

And bar soap. For some reason soap (a dandy way to store fat) has shown the tendency to retain value.

But maybe that was just Weimar, 1923 and Chicago, 1979 and Hungary, 1946 (prices rising 350% per day in July, 75 years ago this month). Still though, think storables, and price elasticity and black market value and....
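For a sense of the Hungarian scale, the compounding behind "prices rising 350% per day" is simple arithmetic: each day's price level is 4.5 times the prior day's.

```python
# Hyperinflation compounding: a daily rise of r percent multiplies the price
# level by (1 + r/100) each day, so after n days it is (1 + r/100) ** n
# times the starting level.

def price_multiple(daily_rise_pct, days):
    return (1 + daily_rise_pct / 100) ** days
```

At 350% per day prices multiply 4.5-fold daily, which works out to roughly 37,000-fold in a single week. Hence the bar soap.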

Anyhoo, on to the Wall Street Journal, July 22:

Warning from maker of Dove soap and Ben & Jerry’s ice cream follows similar moves by Procter & Gamble and General Mills
The maker of Dove soap and Hellmann’s mayonnaise warned of accelerating price increases across a range of products, as it seeks to counter cost inflation across its business.

Unilever PLC said Thursday that it was grappling with higher costs for ingredients, packaging and transportation, which would likely lower its full-year profitability—a warning that sent shares down 5% in early trading.

The London-listed consumer-goods giant said it would step up price increases across the world, having already raised prices 1.6% in the second quarter.

“We are going to have to take a little higher levels of price increase,” Chief Financial Officer Graeme Pitkethly told reporters.

Inflation has continued to pick up pace, rising at the fastest rate in 13 years in the U.S. last month as the recovery from the pandemic gained steam and consumer demand drove up prices of everything from autos to clothes and restaurant meals. Other packaged-food manufacturers, including Procter & Gamble Co. and General Mills Inc., have also warned of rising prices this year.

Mr. Pitkethly said Unilever’s large scale and strong inventories would help to mitigate the price rises but that several costs were out of the company’s control and rising more than expected. The price of ingredients such as palm oil, crude oil and soybean oil all rose sharply in the quarter.

Rising costs of commodities, increased marketing spend compared with last year and expenses linked to the Covid-19 pandemic have reduced Unilever’s profitability, with the company saying its underlying operating margin in the first six months of the year fell 1 percentage point to 18.8% from a year earlier. The company now expects a slightly lower profit margin for the year.

“We’re very focused on our pricing actions, which we think are landing well but inflation has been even higher than we anticipated,” Mr. Pitkethly said.

He added that the company had already been able to quickly increase prices in places such as Brazil and Argentina, but that doing so in Europe, for example, can take more time because the sales contracts it signs are often for longer periods.

The comments came as Unilever reported a 5.4% rise in first-half underlying sales growth to 25.8 billion euros, equivalent to $30.4 billion, boosted by strong sales of its food and refreshment products. It attributed 4% of that growth to higher sales volumes, with 1.3% coming from higher prices.

Net profit for the first six months of the year fell 5% to €3.12 billion because of a negative impact from currency fluctuations....


 And remember: "The Inflation Is Transitory, The Loss of Purchasing Power Is Permanent"

"Futures Exchanges Gear Up For EV Boom"

Two via OilPrice. First up, the headline story from Ag Metal Miner:

The CME’s Comex and London Metal Exchange (LME) are squaring up for the industrial revolution that is electrification, according to recent posts by Bloomberg and the Financial Times.

Both exchanges are busy developing and, more importantly, marketing products that cater to industry’s need to hedge exposure to forward prices for key battery ingredients. Whether for car batteries, electronic goods or power grid storage, the key metals are demanded by a common technology: lithium-ion batteries. 

Futures exchanges launch lithium hydroxide contracts

Both exchanges have launched identical lithium hydroxide cash settled contracts based on the Fastmarkets prices for China, Japan and South Korea – the key battery-producing regions.

So far, volumes are light. But with lithium hydroxide prices up some 86% this year, the market is arguably crying out for a hedging mechanism.
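The mechanics of such a hedge are worth spelling out in round numbers (the figures below are illustrative, not actual LME or CME contract terms): a battery maker needing lithium hydroxide in six months buys futures today; at expiry a cash-settled contract pays the difference between the settlement index, here the Fastmarkets assessment, and the agreed price. No metal changes hands, yet the net cost is locked in.

```python
# Illustrative cash-settled futures hedge for a lithium hydroxide buyer.
# All prices are hypothetical round numbers in $/tonne.

def hedged_cost(tonnes, futures_price, spot_at_expiry):
    physical_cost = tonnes * spot_at_expiry                  # buy metal at spot
    futures_pnl = tonnes * (spot_at_expiry - futures_price)  # cash settlement
    return physical_cost - futures_pnl                       # net: the locked-in price
```

Whether spot finishes at $18,000 or $12,000, a buyer who locked in $15,000 on 100 tonnes pays $1.5 million net either way; that certainty is exactly what an 86% year-to-date price move makes the market cry out for.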

Initially, miners were said to be reluctant to support such a product, preferring long-term mine-to-consumer contracts. The same is the case for aluminum.

Eventually, the industry came round.

The LME is reported by the FT to have included industry leaders like Albemarle and Tesla in the design and development of the product to ensure its acceptability. 

LME cobalt retrospective

The LME certainly has unhappy memories in this respect.

It launched a physically delivered cobalt contract way back in 2010. The contract initially gathered supporters, but worries about the ethical credentials of some of the deliverable brands hit uptake. The exchange then had to relaunch a cash-settled contract in 2019, yet uptake remained poor.

On this one, the CME could be said to be eating the LME’s lunch.

Since its December launch, the COMEX cobalt contract has enjoyed a steady rise in uptake, in part due to aggressive marketing....


And a bit further up the value chain, the rentiers, Wall Street product packagers and the subsidy seekers are mobilizing: 

VC Firms Are Pouring Billions Of Dollars Into Green Tech

ESG investing: it's in every media outlet and on every bank's business plan. A rush to what many call alignment of values with investment goals has led to a flourishing new industry with funds popping up like mushrooms after the rain. Green-tech startups are the new dotcoms, it seems, and the danger of a bubble seems distant—for now.

Interestingly enough, things were very different just a few years ago, as the Wall Street Journal's Scott Patterson noted in a recent article. The past decade, he wrote, saw a pullout of investors from the green energy technology field after a couple of notable demises—one of solar company Solyndra back in 2011 and one of battery maker A123 Systems a year later.

From today's standpoint, this is ancient history. Now, hardly a week goes by without a breakthrough of some sort in batteries, solar power tech, or, say, hydrogen. Most of these breakthroughs have to do with cost and efficiency, which are the two things that can guarantee a product a long life. Yet, most of these breakthroughs never make it to the consumer. They never make the leap across the so-called valley of death between the lab and the market. Especially if funding is scarce and hard to come by.

Venture capital funds are changing this, the WSJ's Patterson writes, citing data from PitchBook, a private capital market research provider. According to PitchBook, venture capital funds are seen completing $7.7 billion worth of green tech deals this year, which would be up from $1 billion ten years ago.

It's not just venture capitalists, either. JP Morgan earlier this month launched not one but three new sustainability investment funds. This was only the latest move in a rush to set up clean energy investment funds to take advantage of growing investor appetite for environmental, social, and governance, commonly known as ESG, investing....


One thing to be aware of, as Mr. Patterson knows, having covered it for the Journal, is that the V.C.s, in aggregate, lost money on the last green go-round. They are going to leave the utility-type returns and moonshots to Bill Gates, and in a few years the private equity vultures will pick over the bones.

Creighton University: "July Rural Mainstreet Index Shows Strength: Bankers Expect Farmland Price Growth to Fall by Half in Next 12 Months"

 From Creighton's Heider College of Business, July 15:

July Survey Results at a Glance:

  • Overall index remains at a high level indicating strong growth for the month.
  • Despite recent solid job gains, U.S. Bureau of Labor Statistics data indicate that Rural Mainstreet nonfarm employment remains 1.3% below its pre-COVID-19 level.
  • In three states, Minnesota, Nebraska, and South Dakota, current nonfarm employment exceeds pre-pandemic levels.
  • On average, bank CEOs estimated farmland price growth for the previous 12 months at 5.8%, but project growth at only 2.4% for the next 12 months.
  • Almost half of bankers reported damaging drought conditions for farmers in their area.

OMAHA, Neb. (July 15, 2021) – For the eighth straight month, the Creighton University Rural Mainstreet Index (RMI) remained above growth neutral, according to the monthly survey of bank CEOs in rural areas of a 10-state region dependent on agriculture and/or energy.

Overall: The overall index for July fell to a healthy 65.6 from June’s strong 70.0. The index ranges between 0 and 100 with a reading of 50.0 representing growth neutral.

Approximately 31.3% of bank CEOs reported that their local economy expanded between June and July.
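Survey readings like the RMI are typically diffusion indexes: the share of respondents reporting expansion plus half the share reporting no change, so 50 marks growth neutral and the scale runs 0 to 100. A sketch of that standard construction (an assumption about the method; Creighton's exact weighting or adjustments may differ):

```python
# Standard diffusion-index construction (assumed, not Creighton's published
# formula): percent reporting expansion plus half the percent reporting no
# change. 50 is growth neutral on the 0-100 scale.

def diffusion_index(pct_higher, pct_same):
    return pct_higher + 0.5 * pct_same
```

If 31.3% of bankers reported expansion and every other respondent reported no change, the index would read 31.3 + 0.5 × 68.7 = 65.65, in line with July's 65.6 overall reading.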

“Solid, but somewhat weaker, grain prices, along with the Federal Reserve’s record-low interest rates, and growing exports have underpinned the Rural Mainstreet Economy. Even so, current rural employment remains below pre-pandemic levels,” said Ernie Goss, PhD, Jack A. MacAllister Chair in Regional Economics at Creighton University’s Heider College of Business.

Farming and ranching: For a 10th straight month, the farmland price index advanced significantly above growth neutral. The July reading fell to a strong 71.0 from June’s 75.9. This is the first time since 2012-2013 that Creighton’s survey has recorded 10 straight months of farmland prices above growth neutral.

This month bankers were asked to estimate farmland price growth for the previous 12 months and for the next 12 months. On average, bank CEOs estimated farmland price growth for the previous 12 months at 5.8%, but projected growth at only 2.4% for the next 12 months.

Approximately 46.9% of bankers reported damaging drought conditions for farmers in their area. However, there was significant variation among reports. For example, Steve Simon, CEO of South Story Bank and Trust in Huxley, Iowa, reported, “Although still under drought conditions, central Iowa has received some timely, much needed rain.”

The July farm equipment-sales index declined to 67.2 from 71.6 in June. Readings over the last several months represent the strongest consistent growth since 2012....


Tesla Strikes Deal With Top Miner BHP Over Nickel Supplies (TSLA; BHP)

This is the second major move to secure nickel supplies that Musk has made this year. March 9: "New Caledonia agrees to Vale nickel mine sale, Tesla to be partner" (TSLA; VALE)

And from Bloomberg:

Tesla Inc. has struck a nickel-supply deal with BHP Group, as the electric-car maker seeks to protect itself from a future supply crunch.

BHP will provide the automaker with the metal from its Nickel West operation in Western Australia, the world’s biggest miner said in a statement. BHP gave few further details, but said the companies would work together to make the battery supply chain more sustainable.

Tesla’s billionaire boss, Elon Musk, has repeatedly expressed concern about future supplies of nickel due to challenges in sustainable sourcing. Musk has pleaded with miners to produce more nickel, with demand set to skyrocket as the world increasingly moves toward electric vehicles.

Nickel is a key component in lithium-ion batteries, used in electric vehicles. It packs more energy into batteries and allows producers to reduce use of cobalt, which is more expensive and has a less transparent supply chain....


Related, March 1:

"10 charts show China’s grip on battery supply chain to last decades"

Which discusses nickel and also has a link to a Reuters Explainer: "How the New Caledonia government collapse may affect the nickel market".

And on China's efforts in the same direction:
ICYMI: "The king of nickel is betting big on a green future in batteries"

Thursday, July 22, 2021

Société Générale's Albert Edwards "Notices 'Something Shocking' In The 'Transitory vs Persistent Inflation' Debate"

From ZeroHedge:

Earlier this week, following the news from the NBER that the covid recession lasted all of two months, making it the shortest recession in history, we showed that opinions on Wall Street diverged as to whether the economy was in early cycle (as JPMorgan claims), mid cycle (which has been Morgan Stanley's view) or late cycle, as Deutsche Bank suggested. Picking up on this debate, in his latest note the ever-skeptical Albert Edwards writes that one question that bothers him "is whether this recovery should really be considered a new economic cycle or merely a continuation of the old one, briefly interrupted by the pandemic", a view with which we wholeheartedly agree.

As Edwards explains, normally in a recession, especially at the end of a record-long 130-month cycle like the one we saw up to February last year, excesses like too much leverage are purged and there is a sort of reset. Of course, this time that did not happen and instead the system doubled down on leverage. So "to the extent that did not happen due to super-sized monetary and fiscal largesse, are those vulnerabilities still lurking with the potential to trigger another recession much earlier than anyone suspects," Edwards cautions, adding that such a recession "would definitely be a major market shock." There is more on this in the full note, which pro subs can find in the usual place.

But while we doubt this debate will be resolved any time soon, what caught our attention was another observation made by Edwards relating to the other key debate raging right now - whether inflation is transitory or persistent.

Edwards writes that while discussing his business cycle observations with his colleague Jitesh Kumar, he noticed "something shocking": in its latest forecast, the Congressional Budget Office (CBO) has massively revised its calculations for the US output gap, which, as Edwards explains, "suggests US inflationary pressures are far hotter than previously thought."

For those unfamiliar, the 'output gap' is important because many economists, especially Keynesian economists, tend to believe inflation only shows up when the economy is growing beyond potential capacity constraints. Normally in a recession, as demand collapses, it takes many years for the output gap to return from disinflation/deflation-inducing negative territory back to zero, and then beyond that into positive (inflationary) territory. Not this time, though. Indeed, this is an observation also made by Morgan Stanley over the weekend, when the bank's chief cross-asset strategist, Andrew Sheets, noted that this has been a remarkable cycle in that we went from Downturn to Repair and then directly to Expansion without spending the requisite 35 or so months in the Recovery phase.
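The output-gap arithmetic behind that argument is simple enough to sketch in a few lines. The figures below are purely illustrative, not actual CBO estimates:

```python
# Output gap = (actual GDP - potential GDP) / potential GDP, in percent.
# A negative gap (slack) is disinflationary; a positive gap (demand
# beyond capacity) is inflationary. Numbers here are hypothetical.
def output_gap(actual_gdp, potential_gdp):
    """Output gap as a percentage of potential GDP."""
    return (actual_gdp - potential_gdp) / potential_gdp * 100

# Slack economy: actual output well below potential.
gap_slack = output_gap(19.0, 20.0)   # ≈ -5.0, disinflationary

# Overheating economy: actual output above potential, on the order of
# the ~2% beyond full capacity the CBO now projects.
gap_hot = output_gap(20.4, 20.0)     # ≈ +2.0, inflationary
```

The surprise Edwards flags is not the formula but the sign and timing: the gap usually takes years to climb back to zero after a recession, whereas the CBO now has it turning positive within months.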

So going back to the Edwards note, he points out that similarly, "the CBO have had a total rethink and believe the economy will be operating 2% beyond full capacity as early as Q1 next year!" Edwards explains that "if unemployment follows its traditional close correlation with the CBO output gap, we could reach record low levels of unemployment far sooner than anyone expects."

There's more: Jitesh also flagged a recent article from the San Francisco Fed which splits core PCE inflation into a cyclical component and an acyclical (uncorrelated to the domestic output gap) inflation time series. As Jitesh writes, "to get directly to the point – the following chart shows the cumulative change in cyclical core PCE inflation over the three years following a recession in the US. It has 'This time is different' writ large – a combination of timely monetary/fiscal support has managed to interrupt the 'cycle' in the US. Normally, 15 months after the beginning of the previous three recessions in the US, cyclical core PCE was down between 0.5% and 1.5% (see chart below), whereas it is above pre-recession levels this time around."....