The End

Sunday 21 March 2021 | general

Every now and then, I learn something so profound that it marks a significant change in my thinking. The last two days have constituted such a change. I listened to Josh Clark’s podcast The End of the World, and I can’t remember anything having a more profound effect on me. Not the realization that I doubted the existence of God; not the change I made from being very left-leaning to being right-leaning and the return to the left I seem to be experiencing now; not the knowledge that I was going to be a parent.

The podcast deals with how the world might end, and by that it means how humanity might end. The world itself might continue on, but humanity might not. And the podcast begins with an odd puzzle: given the size of the universe, the number of galaxies it contains, and the number of stars in those galaxies, the universe should be absolutely teeming with life. We’re talking about billions upon billions of galaxies, each with billions of stars. Even if the odds of any given star producing intelligent life were one in a hundred billion, there should still be almost countless examples in the universe (the rough arithmetic after the list below makes the point). So why have we found no sign of them? That’s the Fermi paradox, and it is the material for the first couple of episodes. The fact that we see no signs of intelligent life anywhere has one of two explanations:

  1. It exists, and we simply haven’t found it.
  2. It doesn’t exist currently, and we alone constitute all the intelligent life in the universe.
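
A quick back-of-the-envelope calculation shows why the silence is so strange. The figures below are my own rough placeholders rather than numbers from the podcast, but any plausible inputs give the same flavor of answer:

```python
# Rough Fermi arithmetic with placeholder figures (my own estimates, not Clark's).
galaxies = 2e12                # ballpark count of galaxies in the observable universe
stars_per_galaxy = 1e11        # ballpark average number of stars per galaxy
odds_per_star = 1 / 100e9      # the pessimistic 1-in-100-billion chance of intelligent life

total_stars = galaxies * stars_per_galaxy
expected_civilizations = total_stars * odds_per_star

print(f"{total_stars:.0e} stars -> about {expected_civilizations:.0e} civilizations")
# With these inputs: ~2e+23 stars -> about 2e+12 (two trillion) civilizations.
```

Even with those deliberately stingy odds, the expected number of civilizations comes out in the trillions, which is exactly why the silence demands an explanation.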

Option one is the less interesting of the two, but Clark convincingly argues that it’s unlikely. It’s the second option that is terrifying, because it itself splits into two possibilities:

  1. There never was any other intelligent life — we’re it.
  2. There was once intelligent life somewhere in the universe, but it no longer exists.

Thinking about the second possibility leads us to ask what could have happened to them. Presumably, they advanced at least as far as we have, and presumably, they would have advanced further if they could, spreading out into the cosmos and colonizing their solar system, their galaxy, and eventually significant portions of the universe itself. That they didn’t suggests either that they never existed (back to option one) or that they met an insurmountable obstacle that ended their existence. This is what’s known as the Great Filter: in this case, whatever prevented a given alien species from colonizing beyond its home planet.

What does all of this have to do with the end of the world? Simple: due to the technologies we have developed now, we are almost certainly about to pass through our own Great Filter. There are so many threats to the continued existence of humanity that it seems inevitable that one of them will catch us by the ankle, so to speak, and drag us back down to primordial sludge (or nothingness). Clark covers several of them, including artificial intelligence, biotechnology, and physics experiments, but in my mind (and in Clark’s as well, I believe), the greatest risk comes from artificial intelligence, though biotech is not far behind.

The risk from artificial intelligence is not a Terminator- or Matrix-style war between robots and humans. It’s something much more subtle. Imagine, for example, that a paperclip factory hires a programmer to create a program to maximize paperclip production. The programmer builds a system that is free to make its own decisions about how to improve productivity, and its goal is simple: create more paperclips. Should it attain super-intelligence, it could wreck the world in its effort to make more paperclips. It takes over other computers to increase its computational capacity. It develops machines that make machines that make machines that make machines to improve paperclip production. Eventually, it learns how to create nanotechnology that can manipulate matter at the atomic level. At that point it can literally begin the process of turning everything into paperclips, rearranging matter atom by atom to convert whatever it finds into raw material, and everything includes us. It then launches probes into space and eventually turns the whole universe into paperclips.
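
To see why this worries people, it helps to strip the thought experiment down to its skeleton: an objective that counts only paperclips treats everything else as raw material by definition. Here is a toy sketch of that idea; it’s entirely my own illustration, not code from the podcast or from any real system:

```python
# Toy single-objective maximizer (purely illustrative, hypothetical code).
# The score counts only paperclips, so "everything else" exists, as far as
# the objective is concerned, only to be converted.
world = {"paperclips": 0, "everything_else": 1_000_000}

def score(state):
    # The objective values paperclip count and nothing else.
    return state["paperclips"]

def convert(state, amount):
    # Turn some of "everything else" into paperclips.
    used = min(amount, state["everything_else"])
    return {"paperclips": state["paperclips"] + used,
            "everything_else": state["everything_else"] - used}

# Greedy policy: keep taking any step that increases the score.
while True:
    candidate = convert(world, 10_000)
    if score(candidate) <= score(world):
        break
    world = candidate

print(world)  # {'paperclips': 1000000, 'everything_else': 0}
```

Nothing in that objective says “and also leave the rest of the world intact,” which is the whole problem.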

It’s hyperbolic and a little silly, but it gets at the heart of the concern: once AI achieves super-intelligence, it will be to us as Einstein is to an earthworm. There are no guarantees that it will have any concern for us at all. After all, could we expect Einstein to spend his tremendous intellect and his life worrying about the happiness of every earthworm? So we have to figure out how to program these systems so that their goals and morals are compatible with ours.

The biotech and physics-experiment risks are equally fascinating, but it was about this point in the podcast (this would have been episode five of eight) that I began thinking about how any one of these might be our Great Filter, the thing we run up against that destroys us, in terms Clark never mentioned: the impact of religion on all of this. Most believers would not take any of this seriously, because they are already convinced that there is intelligent life out there, that it is responsible for our existence, and that it has a plan for us. That plan doesn’t include us destroying ourselves. “Eradicate ourselves from the earth? Come on. Jesus will return before that could ever happen.” God would never let the pinnacle of his creation destroy itself entirely. This is in part why so many Christians don’t take global warming seriously (and after listening to this series, I’m of the mind that global warming, while a threat, is at least not an existential threat to all of humanity).

This itself could be the Great Filter: time after time, intelligent life arises, develops tremendous technological power, and all the while holds on to radically superstitious ideas. By the time such a species has the technology to destroy itself, it still lacks the wisdom to handle that technology, and it destroys itself.

Much of Clark’s podcast builds on the foundation of Nick Bostrom’s work; Bostrom has been thinking along these lines for a long time.

1 Comment

  1. Just on your first point — about intelligent life out there. I think of it as a question that is limited by our inability to understand anything beyond what we’re equipped to understand. We can’t possibly fathom intelligence that exceeds our capabilities. This is the third option then, no? — that there is intelligent life elsewhere, only our evolutionary limitations make it impossible for us to recognize it, as it presents itself in ways that are out of the sphere of our understanding.
    As to how we’ll destroy ourselves: so many possibilities! But really, I think it’s impossible to foresee the greatest threats. Maybe AI, maybe something else. We’re very good at being stupid, so I wouldn’t be surprised if we succumb to something that was entirely avoidable. :)