Two from Latecomer Magazine. First up, Weird Truths:
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, and a prolific blogger. In this interview we discuss spatially distributed consciousness, science fiction, and whether we have too many professional philosophers. See Eric's Latecomer article, where he criticizes moral reasoning about the far future.
— Editor
The Future
What are the existential or catastrophic risks you are most worried about?
ES:
I’m most worried about the risks we aren’t aware of. Right now, the odds that superintelligent AI will rise up and destroy us in the next few decades, or that humanity will completely destroy itself in a war or something like that, seem low, since we are pretty robust to the kinds of technologies we have right now. But technological power tends to increase over time. My assessment of risk is a little more abstract: if we assume a continued increase in technological power, we're not very good at foreseeing what technological possibilities will exist in the future. If we combine that with the assumption that we will become increasingly vulnerable as technological power increases, then at some point something is reasonably likely to happen that will make Earth uninhabitable. I don’t think we know which thing that will be. It might be as unforeseeable to us as superintelligent AI was to a medieval farmer.
What risks do you think are the most overblown?
ES:
I don't think there's a longtermist consensus that's overblown. I'm on the pessimistic side on our ability to continue to exist as a species. So I kind of accept the pessimistic conclusions, but not the optimistic ones. And in particular, I reject the claim that we're in an unusual period of high risk, and that if we get safely through it we'll enter a period of extended very low risk. I think both MacAskill and Ord say stuff like that. To me, that seems like wishful thinking.
Do you consider yourself a transhumanist? Do you feel any attachment toward the current evolutionary state of humanity (i.e. not genetically edited, not fused with machines, etc.)?
ES:
Yes and no, with a big “but”. I don’t think of “transhumanism,” the movement, as something I belong to. I think there's something pretty awesome and special about humanity's current biological form. I also think there's something awesome and special about dogs and garden snails. But I also think that our descendants, whether they're biological, cyborg, or AI, could be as awesome as we are, or maybe even more awesome. And if people want to envision that future and try to bring it about, I don’t have a problem with that.
So you feel an attachment to our current biological state, but not enough that you'd ever try to put the brakes on technological advancement?
ES:
I think there are ethical and risk-related reasons to put brakes on technological advancement. But I don’t think it's super important to preserve the human form as it currently exists....
....MUCH MORE
And Fifty Years of Chaos:
James Yorke is a mathematician at the University of Maryland.
Almost fifty years ago, when my student T. Y. Li and I wrote a math paper titled "Period 3 Implies Chaos", I could not have predicted the effect that title would have. Chaos: the word would go on to have a life of its own, far beyond the mathematical proof contained in our short paper. Since then, it has risen and fallen in popularity, while other words, like complexity, have emerged. But the principles are the same: there are limits to what we can accurately predict. Many systems are sensitive to initial conditions. And above all: a fully deterministic system can still be unpredictable.
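Li and Yorke's theorem concerns iterated maps of an interval; the logistic map, x_{n+1} = r·x_n·(1 − x_n), is the standard classroom stand-in (my choice here, not something the excerpt specifies). A minimal Python sketch of those two principles under that assumption: the rule is completely deterministic, yet two starting points that agree to ten decimal places end up on entirely different trajectories.

```python
# Sensitivity to initial conditions in the logistic map x_{n+1} = r*x_n*(1 - x_n).
# The rule is fully deterministic, yet two trajectories that start a mere
# 1e-10 apart become macroscopically different within a few dozen steps.

def logistic(x, r=4.0):
    """One step of the logistic map; r = 4 is the fully chaotic case."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10          # two starting points agreeing to ten decimal places
for step in range(61):
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.1e}")
    x, y = logistic(x), logistic(y)
```

By around step 40 the two orbits are effectively unrelated, even though every step was computed exactly by the same rule; that is the whole content of "deterministic but unpredictable."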
This reflection could have easily been titled “50,000 Years of Chaos.” Humans have always known that slight differences can have dramatic consequences: a boulder lands a meter away from a man’s sleeping head, a meter that contains an entire world.1 Benjamin Franklin knew about sensitivity to initial conditions when he popularized the old rhyme For Want of a Nail. Mathematicians merely put numbers to a principle that needed no introduction.
When novelists use Heisenberg’s uncertainty principle as an explanation for divorce, they are speaking by analogy.2 Chaos, on the other hand, is both a formal mathematical discipline and a fact about the contingency of our daily lives: we buy health and life insurance in order to manage it. A world shorn of chaos doesn’t look anything like ours. But chaos is also a phenomenon that applies to precise numerical quantities, whether those quantities are water molecules or animal populations.3 It took so long to formalize mathematically because people wanted to find clean linear solutions to differential equations, and the vast majority of such equations are not easily solvable, being nonlinear and chaotic. As Stanislaw Ulam famously quipped, “to call the study of chaos ‘nonlinear science’ was like calling zoology ‘the study of non-elephant animals’.”4
Most of the world that we see around us is chaotic, with linear solutions being the rare exceptions. A lot of great math has been done with such ideal linear constructions, but the real world is hairy with impurities and noise. To clarify, there is a difference between random noise and chaos, although it can often be difficult to distinguish one from the other.5 Furthermore, not all systems are chaotic; sometimes it is possible to trim down a system so that chaos does not emerge. Water is chaotic as a gas, much less chaotic as a liquid, and not chaotic at all as ice, all depending on a single parameter: temperature.
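The water example is about a physical control parameter; the same switch between order and chaos can be seen, purely illustratively, by turning the single knob r of the logistic map (again my stand-in, not a system discussed in the essay): one value of r gives a repeating cycle, another sits inside the period-3 window the Li–Yorke title refers to, and a third gives chaos.

```python
# One deterministic rule, three qualitatively different behaviors,
# controlled by the single parameter r of the logistic map.

def orbit_tail(r, x0=0.2, transient=1000, keep=9):
    """Iterate x -> r*x*(1-x), discard the transient, return the last few values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

for r in (3.2, 3.83, 4.0):
    print(f"r = {r:4}: {orbit_tail(r)}")

# r = 3.2  -> two values alternate forever (a period-2 cycle)
# r = 3.83 -> three values repeat (inside the period-3 window)
# r = 4.0  -> no repetition at all (chaos)
```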
Chaos persists. Of course, the field already existed before it was named, a loose spiderweb stretching across disciplines. When we wrote our paper, Edward Lorenz had already written about his “strange attractors”: generalized patterns of movement lacking any precise repetition.6 But the antecedents of chaos stretch further back in the scientific literature. In my opinion, James Clerk Maxwell (1831-1879) was likely the first person to understand chaos as sensitivity to initial conditions. Take, for instance, his writings on the behavior of gas molecules, where a single collision sends a molecule rebounding in a direction that depends unpredictably on its initial conditions. Maxwell wrote that “small differences in the initial conditions produce very great ones in the final phenomena” and urged scientists to pursue “singularities and instabilities, rather than the continuities and stabilities of things.”7 In 1890, Henri Poincaré (1854-1912) wrote about the three-body problem, demonstrating that the orbits of three celestial bodies often behave chaotically and are difficult to predict.
Chaos says that we cannot predict the future with precision or certainty, especially when the future is distant. Even in an entirely deterministic universe, we still can’t predict the temperature a year from now.8 This is because we can’t fully quantify our current state! Even if we measure an object’s location to three hundred decimal places of accuracy, its behavior will eventually diverge from our predictions. Each additional digit of precision buys only a fixed increment of predictive ability, so that twenty decimal places of precision, rather than ten, merely doubles the length of time over which we can make accurate predictions. And precision is immensely difficult to increase; whatever inaccuracy remains is quickly magnified over time. The reason we can’t predict the future precisely is that we can’t measure the present precisely.9 Laplace’s demon can’t even tie his own shoes....
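The arithmetic behind the ten-versus-twenty-digits remark: if a small measurement error grows roughly exponentially, like δ₀·e^(λt), a forecast stays useful only until the error reaches some tolerance, so the usable horizon is about ln(tolerance/δ₀)/λ, which is logarithmic in precision. A sketch under that assumption (the growth rate λ below is an arbitrary illustrative number, not anything measured):

```python
import math

# If a measurement error delta0 grows like delta0 * exp(lam * t), the forecast
# breaks down once the error reaches some tolerance.  The usable horizon is
# therefore t* = ln(tol / delta0) / lam: logarithmic in the initial precision.

lam = 1.0     # illustrative error-growth rate (per unit time), not a measured value
tol = 1.0     # error size at which the forecast is considered useless

for digits in (10, 20, 300):
    delta0 = 10.0 ** (-digits)            # initial error from measuring to this many decimals
    horizon = math.log(tol / delta0) / lam
    print(f"{digits:3d} decimal places -> horizon ~ {horizon:6.1f} time units")

# 10 digits  -> ~23 time units
# 20 digits  -> ~46 time units (twice as long, for 10^10 times the precision)
# 300 digits -> ~691 time units (still finite: divergence is only postponed)
```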
....MUCH MORE