6 Comments

>So, no progress in medicine or AI until the remotest threat of misaligned superintelligence or engineered superviruses is gone. Actually, no, we need to go further. All technology should be positively dismantled — think of the small but nonzero risk of runaway climate change causing extinction.

I’ve heard that the way climate change might cause extinction is by causing global instability, which leads to people being more reckless and violent. People being poor and sick is not good for stability, and ditching technology would be much worse for stability than some more warming.

True! But you could argue that moral leaders coordinating a gradual regression to pre-Industrial Revolution levels of technology, so that there is no civil unrest, is the necessary conclusion of taking naive longtermism seriously. Since that is obviously not feasible, you could argue that making an extraordinary effort to have the degrowth movement go well, and at least dent the growth rate if not push it negative, is a necessary conclusion of naive longtermism. Both are unappealing conclusions that Aschenbrenner's time-of-perils model avoids.

While studying longtermism, I never fully came to terms with the idea of placing equal value on a life right now and a life in 10,000 CE. One reason was that while we can predict the outcomes of our actions with reasonable certainty in the near term, it is impossible to predict the outcomes of our actions some 8,000 years from now (thanks to chaos). This suggests that the more distant in the future a life is, the less likely we are to be able to positively affect it. Hence, our efforts are much better spent trying to positively affect people in the present. Though I couldn't find much written on this issue, the idea of two competing infinities in this post, I think, does a marvelous job of getting to the core of the problem.

Thanks, Parmest!

X-risk reduction is a clear way present people can benefit the future (unless you place a high probability on the future being an awful place to exist).

Another way to robustly affect the far future for the better is to build fair and stable institutions and to popularize moral-circle-expanding ideas like concern for nonhuman animals.
