
Nature's zero-sum thinking about AI risk


Here is the title and deck[1] from a Nature editorial published on 2023-06-27:

Stop talking about tomorrow’s AI doomsday[2] when AI poses risks today

Talk of artificial intelligence destroying humanity plays into the tech companies’ agenda, and hinders effective regulation of the societal harms AI is causing right now.

This writing puts zero-sum thinking on full display. That is unacceptable, especially coming from Nature[3]. While I can be charitable and assume their intentions are good[4], I can’t excuse the poor reasoning along the way.

Don’t assume zero-sum

Let’s step back and not assume a zero-sum game. Let’s pose the question before drawing a conclusion!

What is the relationship between discussions about (1) long-term versus (2) short-term AI risks? Does having more of one imply that we must have less of the other?

Did the Nature editors consider this question? I can’t read their minds, but their writing is quite lopsided. For example, I see no mention of the many AI safety organizations that do research on both short- and long-term impacts. Worse, the article isn’t just lopsided; it suffers broadly from poor reasoning. I would say more, but I don’t have the time to pick it apart right now.

Endnotes

[1] Deck means “a small headline running below the main headline; also called a drop head” (from Newspaper Terms to Know).

[2] I strive to be charitable, but when I read an article using the term “doomers”, I get suspicious. It is easy for one party to claim that another party is engaging in “doomsday talk”. Often, the idea is to discredit the “doomsayers” by lumping them together with history’s failed doom prognosticators.

[3] Nature is owned by Springer Nature Limited.

[4] What are the intentions of the Nature editorial board? Perhaps something along the lines of reducing harm to humanity? But such a phrase leaves much unspecified, including one’s (i) risk profile and (ii) time horizon. I mention these two factors because they are central to balancing short- and long-term risks. Perhaps if the editorial board had a longer-term lens, they would take long-term risks much more seriously.