Friday, December 27, 2024

"Looking Back at the Future of Humanity Institute"

From Asterisk Magazine:

The rise and fall of the influential, embattled Oxford research center that brought us the concept of existential risk.

On April 16, 2024, the website of the Future of Humanity Institute was replaced by a simple landing page and a four-paragraph statement. The institute had closed down, the statement explained, after 19 years. It briefly sketched the institute’s history, appraised its record, and referred to “increasing administrative headwinds” blowing from the University of Oxford’s Faculty of Philosophy, in which it had been housed.

Thus died one of the quirkiest and most ambitious academic institutes in the world. FHI’s mission had been to study humanity’s big-picture questions: our direst perils, our range of potential destinies, our unknown unknowns. Its researchers were among the first to usher concepts such as superintelligent AI into academic journals and bestseller lists alike, and to speak about them before such groups as the United Nations. 

To its many fans, the closure of FHI was startling. This group of polymaths and eccentrics, led by the visionary philosopher Nick Bostrom, had seeded entire fields of study, alerted the world to grave dangers, and made academia’s boldest attempts to see into the far future. But not everyone agreed with its prognostications. And, among insiders, the institute was regarded as needlessly difficult to deal with — perhaps to its own ruin. In fact, the one thing FHI had not foreseen, its detractors quipped, was its own demise.

Why would the university shutter such an influential institute? And, to invert the organization’s forward-looking mission: how should we — and our descendants — look back on FHI? For an institute that set out to find answers, FHI left the curious with a lot of questions. 

***

In 1989, a seventeen-year-old Swede named Niklas Boström (he would later anglicize it to Nick Bostrom) borrowed a library book of 19th-century German philosophy, took it to a favorite forest clearing, and experienced what The New Yorker would later describe as “a euphoric insight into the possibilities of learning and achievement.” Damascene moments aren’t generally how people decide to become academics, but from that day forward, Bostrom dedicated his life to intensive study. He retreated from school in order to take his exams at home, and he read widely and manically. At the University of Gothenburg, he received a BA in philosophy, mathematics, mathematical logic, and artificial intelligence. After that, he pursued postgraduate degrees in philosophy, physics, and computational neuroscience. In what little spare time he had, Bostrom emailed and met up with fellow transhumanists: people enthusiastic about radically improving human biology and lifespans.

As early as 2001, he was studying little-known phenomena called “existential risks,” writing that nanotechnology and machine intelligence could one day interfere with our species’ ascent to a transhuman future. He also, in 2001, formulated the “simulation hypothesis,” advancing in Philosophical Quarterly the theory that we might be living in a computer simulation run by humanity’s hyper-intelligent descendants. By this point, Bostrom had arrived at Oxford as a postdoctoral fellow at the Faculty of Philosophy.

Some years later, the faculty would become his bête noire. But it was Bostrom’s membership in it that enabled a stroke of luck that would change his life. At some point in the early aughts, Bostrom had met James Martin, an IT entrepreneur. Martin had become a prescient futurist, and in 2006 produced a documentary featuring Bostrom. But Martin was also becoming a deep-pocketed philanthropist. Through Julian Savulescu, another young philosopher interested in human enhancement, Bostrom learnt that Martin was planning to fund future-minded research at Oxford. Hoping that this could encompass work on his interests, Bostrom made his case to the university’s development office.

Twenty years later, the details are hazy. FHI lore has it that, at one of the dinners hosted by Oxford for its biggest donors, Bostrom was seated next to Martin, creating the perfect conditions for what we now call a nerdsnipe. Some time later, in 2005, Martin made what was then the biggest benefaction to the University of Oxford in its nine-century history, totaling over £70 million. A small portion of it funded what Bostrom decided to call the Future of Humanity Institute. “It was a little space,” Bostrom told me, “where one could focus full-time on these big-picture questions.”

That seed grant was enough to fund a few people for three years. Because his team would be small, and because it had such an unconventional brief, Bostrom needed to find multidisciplinarians. He was looking, he told me, for “brainpower especially, and then also a willingness and ability to work in areas where there is not yet a very clear methodology or a clear paradigm.” And it would help to be a polymath. 

One of his earliest hires was Anders Sandberg. As well as being a fellow Swede, Sandberg was a fellow member of the Extropians, an online transhumanist community that Bostrom had joined in the Nineties. Where Bostrom is generally ultra-serious, Sandberg is ebullient and whimsical. (He once authored a paper outlining what would happen if the Earth turned into a giant pile of blueberries.) But the two men’s differences in personality belied their similarity in outlook. Sandberg, too, was an unorthodox thinker interested in transhumanism and artificial intelligence. (Sandberg was particularly interested in the theoretical practice of whole-brain emulation, i.e. the uploading of a human mind to a digital substrate.)

Sandberg was interviewed in the Faculty of Philosophy’s Ryle room, named for the philosopher Gilbert Ryle. He explained some neuroscience to the faculty staff who were assessing him and communicated his aptitude in another little-known area of human endeavor: web design. He was hired, and he returned in January 2006 to take up a desk at FHI and a “silly little room in Derek’s house.”

Sandberg was lodging, with Bostrom, in the home of Derek Parfit, a wild-haired recluse who was also one of the most influential moral philosophers of the modern era. Bostrom had the master bedroom and collected rent from the rotating cast of lodgers. Parfit, Sandberg recalled, slept in “a little cubby hole” of a bedroom, and would scuttle at odd hours between it and his office at All Souls, the highly selective graduate college seen as elite even relative to the rest of Oxford.

Including Sandberg, Bostrom hired three researchers, and began to sculpt a research agenda that, in these early years, was primarily concerned with the ethics of human enhancement. An EU-funded project on cognitive enhancement was one of FHI’s main focuses in this period. The institute also organized a workshop that resulted in Sandberg and Bostrom’s influential roadmap for making whole-brain emulation feasible. 

At the same time, FHI staff were beginning to publish work on the gravest perils facing humanity, a topic that was not yet an established academic discipline. An FHI workshop brought together hitherto disparate thinkers such as Eliezer Yudkowsky, who went on to become one of the most prominent theorists concerned by superintelligent AI. Bostrom co-edited the 2008 book Global Catastrophic Risks, a collection of essays on threats such as asteroid impacts, nuclear war, and advanced nanotechnology....

....MUCH MORE

Previously:

Oxford's Future of Humanity Institute Has Shut Down

Well. I guess that's that. It's over. Fin. I wonder if they saw it coming?....

We've visited Bostrom and the Future of Humanity Institute at Oxford off and on for a decade. Although he sometimes comes off as a bit whack-a-doodle, I did make this comment on one of our posts:

...If interested, we mentioned Bostrom in last week's "The Roubini Cascade: Are we heading for a Greater Depression?". His Future of Humanity Institute at Oxford seems to have moved on from the mundane climate cataclysm or cosmic fireball ending everything, to a very real, very serious examination of whether or not Artificial Intelligence may be the nail in humanity's coffin, so to speak....

That was in "Risk/Prop Bets: How unlikely is a doomsday catastrophe?", which also included this bonus advice:

Before we get into the meat of the matter, some things to know upfront. From the introduction to 2018's "Tips And Tricks For Investing In 'End of the World Funds'":

As unauthorized representatives for Long or Short Capital's End of the World Puts, this is an area of profound interest from which we have gleaned some insight:

1) Should the world end, collecting on your bet can be a challenge. Know your counterparty!
     And possibly more important, demand collateral!
2) The swings in end of the world product prices can be dramatic.
3) Prognosticators have predicted 100,000 of the last 0 termination events....

And from arXiv (astrophysics) at Cornell:

How unlikely is a doomsday catastrophe?

Max Tegmark (MIT), Nick Bostrom (Oxford)
Numerous Earth-destroying doomsday scenarios have recently been analyzed, including breakdown of a metastable vacuum state and planetary destruction triggered by a "strangelet" or microscopic black hole. We point out that many previous bounds on their frequency give a false sense of security: one cannot infer that such events are rare from the fact that Earth has survived for so long, because observers are by definition in places lucky enough to have avoided destruction. We derive a new upper bound of one per 10^9 years (99.9% c.l.) on the exogenous terminal catastrophe rate that is free of such selection bias, using planetary age distributions and the relatively late formation time of Earth....
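The abstract's argument lends itself to a quick numerical illustration. The sketch below is a toy Monte Carlo in Python, not the paper's actual calculation: it assumes a uniform window of planet-formation times, a fixed incubation delay before observers appear, and an exponentially distributed catastrophe time, with all numbers invented for illustration. It reproduces the qualitative point that the higher the catastrophe rate, the earlier the typical observer's planet must have formed, so Earth's comparatively late formation tells against a high rate.

# Toy Monte Carlo illustration of the observer-selection argument in
# Tegmark & Bostrom -- a sketch, not their actual derivation. The
# formation-time window, incubation delay, and rates are all invented.
import random
import statistics

def median_observer_formation_time(rate_per_gyr, n_planets=100_000,
                                   incubation_gyr=4.5,
                                   formation_window_gyr=10.0):
    """Median formation time (Gyr from the start of the window) among
    planets whose observers appear before a sterilizing catastrophe,
    modeled as an exponential waiting time, independently per planet."""
    survivors = []
    for _ in range(n_planets):
        t_form = random.uniform(0.0, formation_window_gyr)  # when the planet forms
        t_cat = random.expovariate(rate_per_gyr)            # when the catastrophe strikes
        if t_form + incubation_gyr < t_cat:                 # observers must beat the catastrophe
            survivors.append(t_form)
    return statistics.median(survivors) if survivors else None

for rate in (0.001, 0.1, 1.0):  # catastrophes per gigayear
    print(f"rate {rate}/Gyr -> median formation time "
          f"{median_observer_formation_time(rate)} Gyr")

Running it, the median formation time among observer-bearing planets sits near the middle of the window when the rate is negligible, but drops to well under a gigayear at one catastrophe per gigayear.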

All in all it is good to see he's flipped 180 degrees and has gone far beyond even the upbeat message of Keynes' "Economic Possibilities for our Grandchildren" and Churchill's 'broad sunlit uplands'. I believe I shall have this bit tattooed in an appropriately discreet location:

"If humans remain the owners of this capital, the total income received by the human population would grow astronomically, despite the fact that in this scenario humans would no longer receive any wage income. The human species as a whole could thus become rich beyond the dreams of Avarice."

Related:

Puny Human, I Scoff At Your AI, Soon You Will Know The Power Of Artificial 'Super' Intelligence. Tremble and Weep
[insert 'Bwa Ha Ha' here, if desired]