Sunday, October 16, 2022

Risk/Prop Bets: How unlikely is a doomsday catastrophe?

Before we get into the meat of the matter, some things to know upfront. From the introduction to 2018's "Tips And Tricks For Investing In 'End of the World Funds'":

As unauthorized representatives for Long or Short Capital's End of the World Puts, we find this an area of profound interest, one from which we have gleaned some insight:

1) Should the world end, collecting on your bet can be a challenge. Know your counterparty!
     And possibly more important, demand collateral!
2) The swings in end of the world product prices can be dramatic.
3) Prognosticators have predicted 100,000 of the last 0 termination events....

And from arXiv (astrophysics) at Cornell:

How unlikely is a doomsday catastrophe?

Max Tegmark (MIT), Nick Bostrom (Oxford)
Numerous Earth-destroying doomsday scenarios have recently been analyzed, including breakdown of a metastable vacuum state and planetary destruction triggered by a "strangelet" or microscopic black hole. We point out that many previous bounds on their frequency give a false sense of security: one cannot infer that such events are rare from the fact that Earth has survived for so long, because observers are by definition in places lucky enough to have avoided destruction. We derive a new upper bound of one per 10^9 years (99.9% c.l.) on the exogenous terminal catastrophe rate that is free of such selection bias, using planetary age distributions and the relatively late formation time of Earth....
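
The key move in that abstract, that mere survival tells observers nothing because every observer has, by definition, survived, is easy to demonstrate numerically. Below is a toy Monte Carlo sketch of the idea (our own illustration, not the paper's actual model; the 10 Gyr planet-formation window and the 4.5 Gyr needed to evolve observers are numbers made up for the demo):

import random

# Toy illustration of the observer-selection effect in the
# Tegmark-Bostrom argument. One "universe" per trial: an exogenous
# catastrophe sterilizes everything at an exponentially distributed
# time after the Big Bang; a planet forms at a uniform random time
# in 0-10 Gyr and needs 4.5 more Gyr before observers appear.
# (Both time scales are assumptions for this demo only.)

random.seed(0)  # reproducible demo runs

def observer_formation_times(rate_per_gyr, trials=200_000):
    """Return the formation times (Gyr after the Big Bang) of the
    planets on which observers actually get to exist."""
    seen = []
    for _ in range(trials):
        doom = random.expovariate(rate_per_gyr)  # catastrophe time, Gyr
        formed = random.uniform(0.0, 10.0)       # planet formation time
        if formed + 4.5 < doom:                  # observers evolve in time
            seen.append(formed)
    return seen

for rate in (0.001, 1.0):  # catastrophes per Gyr
    times = observer_formation_times(rate)
    avg = sum(times) / len(times)
    # Every observer in 'times' has dodged the catastrophe by
    # construction, so "we're still here" carries no information
    # about the rate. The formation-time distribution does.
    print(f"rate {rate}/Gyr: {len(times)} observers, "
          f"mean formation time {avg:.2f} Gyr after the Big Bang")

At a low catastrophe rate, observers find themselves on planets with typical formation times; at a high rate, the surviving observers cluster on early-formed planets. Earth's relatively late formation is what lets Tegmark and Bostrom bound the rate without the selection bias.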


If interested, we mentioned Bostrom in last week's "The Roubini Cascade: Are we heading for a Greater Depression?". His Future of Humanity Institute at Oxford seems to have moved on from the mundane (climate cataclysm or cosmic fireball ending everything) to a very real, very serious examination of whether or not Artificial Intelligence may be the nail in humanity's coffin, so to speak.

Although the writers of these pieces aren't from the FHI, they are writing about Bostrom and seem to represent the state of the science.

First up, from MIT's Technology Review, September 20, 2016:

No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity

And two months later, also from MIT's Technology Review:

Yes, We Are Worried About the Existential Risk of Artificial Intelligence

In the intervening six years the positions on either side have only hardened, to the point that at one of the conferences the phrase "big poopy-head" was used.

Some of our previous stuff referencing Bostrom:

March 2022
How much value can our decisions create? (there's an upper limit)

October 2021
So, What Are They Thinking About At Oxford's Future of Humanity Institute?

August 2015
Artificial Intelligence Can Be Scary, Artificial Stupidity Can Kill Us All

November 2012
7 Best-Case Scenarios for the Future of Humanity

Here are some of the FHI's publications, with links to the rest of the site. 

And with that possibly premature optimism, and with a nod to one of our progenitors, we will move on to next week's busy earnings calendar.

Evolution Going Great, Reports Trilobite 
http://www.fossilmall.com/Pangaea/patrilos/tr22/pft757b.JPG
Slowly inching his segmented exoskeleton across the sea floor, a local marine arthropod, class Trilobita, reported that Earth's natural evolution was "progressing quite nicely."

"Things are looking mighty fine," announced the prehistoric invertebrate, taking measure of his surroundings through a series of small, hexagonal eyelets located at the tip of his thorax. "Sulfurous gas seems to be bubbling up to the surface pretty good, and several single-cell organisms appear to be mutating at a rather steady pace. Also, just today, I developed the ability to roll into a small protective shell in order to avoid predators."

Added the trilobite, "Yup, this evolution thing is going great."...
Little did the trilobite know...