14/05/2021

Humanity's Future: The Right Way to Worry, by Daron Acemoglu

After a year in which COVID-19 has suspended normal economic life around the world, humanity has acquired a new appreciation for risk. But simply acknowledging potential threats is merely the beginning of the process; the real challenge comes in deciding which problems warrant our attention, and in what order.

The reign of the dinosaurs was brought to an end 65 million years ago by an asteroid that crashed into the Earth near what is now the town of Chicxulub in Mexico. Although this lump of rock and metal was not particularly large – probably about ten kilometers (six miles) across – it struck the Earth at more than 60,000 kilometers per hour (37,000 miles per hour), generating an explosion billions of times greater than that of the atomic bomb dropped on Hiroshima, and killing all life within 1,000 kilometers.

More ominously, the explosion sent a massive cloud of dust and ash into the upper atmosphere, blocking out the sun for years. This halted photosynthesis and sharply reduced temperatures, which is why scientists reckon that it was this atmospheric dust, along with sulfate aerosols, that ultimately killed the dinosaurs and many other species.

If a similar asteroid or comet were to crash into Earth today, it would cause another mass-extinction event, wiping out most species and human civilization as we know it. This distant possibility is an example of a natural existential risk: an event not caused by humans that leads to the extinction or near-extinction of our species.

But there are also anthropogenic – human-created – existential risks. As the University of Oxford philosopher Toby Ord argues in his thought-provoking new book, The Precipice: Existential Risk and the Future of Humanity, it is these risks that should most concern us now and in the coming century.

Ord recognizes that science and technology are humankind’s most potent tools for solving problems and achieving prosperity. But he reminds us that there are always dangers associated with such power, particularly when it is placed in the wrong hands or wielded without concern for long-term and unintended consequences.

More to the point, Ord argues that anthropogenic existential risk has reached an alarmingly high level, because we have developed tools capable of destroying humanity without the accompanying wisdom needed to recognize the danger we are in. He notes that the eminent twentieth-century astronomer Carl Sagan issued a similar warning in his 1994 book, Pale Blue Dot, writing:

“Many of the dangers we face indeed arise from science and technology – but, more fundamentally, because we have become powerful without becoming commensurately wise. The world-altering powers that technology has delivered into our hands now require a degree of consideration and foresight that has never before been asked of us.”

For Ord, this gap between power and wisdom could decide humanity’s future. On one hand, we could disappear entirely or suffer a collapse that wipes out most of the hallmarks of civilization (from vaccines and antibiotics to art and writing). But, on the other hand, Ord sees in humankind the potential for long-term flourishing on a cosmic scale: with both wisdom and technological ingenuity, humans could well outlive this planet and launch new civilizations across space.

Based on this reasoning, Ord arrives at what mathematicians and economists would call a “lexicographic preference ordering.” When we care about multiple criteria, a lexicographic order assigns overwhelming importance to a single criterion: options are ranked by that criterion first, and the remaining criteria serve only to break ties. For example, in a lexicographic order over food and shelter, one would always prefer whichever option offers more food, regardless of how much more shelter the other option offers; shelter would matter only in choosing between options that offer the same amount of food.
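To make the mechanics concrete, here is a minimal sketch in Python (the (food, shelter) bundles are invented for illustration). Python happens to compare tuples lexicographically, so listing the dominant criterion first in each tuple implements exactly this kind of ordering:

```python
# Illustrative lexicographic preference ordering. Each option is a
# (food, shelter) bundle; food is the dominant criterion, and shelter
# only breaks ties. Python compares tuples element by element, left to
# right, so tuple comparison is itself lexicographic.

options = {
    "A": (3, 10),  # less food, far more shelter
    "B": (4, 0),   # more food, no shelter
    "C": (4, 2),   # same food as B, slightly more shelter
}

# B and C both beat A despite A's much greater shelter, because food is
# compared first; shelter then decides between B and C.
ranked = sorted(options, key=options.get, reverse=True)
print(ranked)  # -> ['C', 'B', 'A']
```

No amount of shelter can compensate for even a tiny shortfall in food; that absolutism is what makes the ordering lexicographic.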

Ord’s philosophical stance is equivalent to a lexicographic order because it implies that we should minimize existential risk, whatever the costs. A future in which existential risk has been minimized trumps any future in which it has not been minimized, regardless of any other considerations. After establishing this basic hierarchy, Ord then proceeds with an expert overview of different types of anthropogenic existential risk, concluding that the greatest threat comes from an artificial superintelligence that has evolved beyond our control.

When Progress Isn’t Progress

One can date science-driven existential risk at least to the controlled nuclear chain reactions that enabled atomic weapons. Ord is probably right that our (social) wisdom has not increased since this fateful development, which culminated soon afterward in the bombings of Hiroshima and Nagasaki. Though we have established some institutions, regulatory tools, norms, and other mechanisms for internalizing such risks to ensure that we do not misuse science, nobody would argue that these are sufficient.

Ord suggests that today’s inadequate institutional framework may be a temporary phenomenon that could be addressed in due time, so long as we survive the next century or so. “For we stand at a crucial moment in the history of our species,” he writes. “Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves…” And, in fact, in writing his book, Ord “aspires to start closing the gap between our wisdom and power, allowing humanity a clear view of what is at stake, so that we will make the choices necessary to safeguard our future.”

However, I see no evidence that this is really feasible. Nor is there any sign that our society and leaders have shown any wisdom when it comes to reining in the destructive power of technology.

To be sure, one could argue in favor of Ord’s optimism on the basis of what the German sociologist Norbert Elias famously called the “civilizing process.” According to Elias, the process of economic development and the emergence of state institutions for resolving conflicts and controlling violence since the Middle Ages have led to the adoption of manners and behaviors conducive to coexistence in mass societies. Elias’s nuanced case for why people in advanced economies have become less violent and more tolerant was popularized recently by the Harvard University cognitive psychologist and linguist Steven Pinker in his bestselling book The Better Angels of Our Nature: The Decline of Violence in History and Its Causes. Both authors offer arguments for why we should continue to expect a strengthening of the norms and institutions needed to control the misuses of science and technology.

But even if such a civilizing process is acting on individual behavioral norms and social intercourse more broadly, it doesn’t seem to have affected many political leaders or scientists and technologists. The civilizing process should have been in full swing by the first half of the twentieth century; and yet the Nobel Prize-winning chemist Fritz Haber enthusiastically used his scientific knowledge to invent and then peddle chemical weapons to the German Army in World War I.

Nor was the impact of the civilizing process much in evidence in the thinking of the American leaders who ordered the attacks on Hiroshima and Nagasaki, or in the attitudes of other political leaders who eagerly embraced nuclear weapons after World War II. Some may find hope in the fact that we haven’t had a repeat of WWI or WWII over the past 75 years. But this sanguine view ignores many near misses, not least the Cuban Missile Crisis in 1962 (the episode with which Ord opens his book).

One can identify many more examples contradicting the idea that we are becoming more “civilized,” let alone better at controlling anthropogenic risks or cultivating collective wisdom. If anything, controlling our bad behavior and adapting to the constant changes wrought by scientific discovery and technological innovation will remain a perpetual struggle.

This raises problems for the rest of Ord’s argument. Why should trying to eliminate future existential risks be given a superordinate priority over all other efforts to ameliorate the ills and suffering that our current choices are generating now and in the near term?

For the sake of argument, suppose we could significantly reduce the probability of our own extinction by enslaving the majority of humankind for the next several centuries. Under Ord’s lexicographic ordering, we would have to choose this option, because it minimizes existential risk while still preserving humanity’s potential to flourish fully at some point in the distant future.

Not everybody will be convinced by this argument. Count me among the unpersuaded.

The Age of Demonic Machines?

To clarify the choice further, consider the main existential risk that Ord focuses on: the potential misuse of artificial intelligence. Ord estimates that there is a one in six chance that humanity will fall prey to an evil superintelligence (which he calls, euphemistically, “unaligned AI”) in the next 100 years. By contrast, his estimated existential risk to humanity from climate change is one in 1,000, and one in a million in the case of collisions with asteroids or comets.
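To put these magnitudes side by side, here is a small back-of-the-envelope sketch (the probabilities are simply Ord's headline estimates as quoted above):

```python
# Ord's rough estimates of existential risk over the next 100 years, as
# reported in the text above. The ratios show how heavily his ranking
# tilts toward unaligned AI.
risks = {
    "unaligned AI": 1 / 6,
    "climate change": 1 / 1_000,
    "asteroid or comet impact": 1 / 1_000_000,
}

baseline = risks["asteroid or comet impact"]
for source, p in risks.items():
    print(f"{source}: p = {p:.1e} (about {p / baseline:,.0f}x the asteroid risk)")
```

On Ord's own numbers, the AI threat is more than 150 times his climate estimate and over 150,000 times his asteroid estimate – which is why it dominates his ranking.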

Even if many other experts would not assign quite so high a probability to the threat of superintelligence, Ord is not alone in worrying about the long-term implications of AI research. In fact, such concerns have become commonplace among many technology luminaries, from Stuart Russell of the University of California, Berkeley, to Microsoft co-founder Bill Gates and Tesla CEO Elon Musk.

These figures all believe that, notwithstanding the existential risks, AI will bring large net benefits. But while Ord is well enough informed about these debates to know that even this last proposition is actually rather shaky, his lexicographic stance leads him to ignore most of the non-existential risks associated with AI.

But if one accepts that our scope of attention is finite, this weighing of priorities is problematic. My own assessment is that the likelihood of superintelligence emerging anytime soon is low, and that the risk of an evil superintelligence destroying our civilization is lower still. As such, I would prefer that the public debate focus much more on the problems that AI is already creating for humanity, rather than on intriguing but improbable tail risks.

Back to Now

As I have argued here and elsewhere, the current trajectory of AI design and deployment is leading us astray, causing a wide range of immediate (albeit prosaic) problems. Far from being inevitable or reflecting some inherent logic of the technology, these problems reflect choices being made (and imposed on us) by large tech companies – and specifically by a small group of executives, scientists, and technologists within these companies (or within their orbit).

One of the most visible problems that AI is causing is incessant automation, which is displacing workers, boosting inequality, and raising the specter of future joblessness for large swaths of the labor force. Worse, the obsession with automation has come at the expense of productivity growth, because it has led executives and scientists to overlook more fruitful, human-complementing uses of innovative technology.

AI is also being designed and used in other problematic ways, none of which inspire hope for humanity’s moral progress. Democratic politics has been defiled not just by an explosion of algorithmically amplified misinformation, but also by new AI technologies that have empowered governments and companies to monitor and manipulate the behaviors of billions of people.

This development represents a double whammy. Democratic politics is the primary means by which a society can rein in misbehavior by political and economic elites, yet it is precisely this process that is being undermined. If we cannot hold elites accountable for the damage they are causing because democracy itself has been impaired, how can we possibly escape our current predicament?

We are not helpless. The costs that AI is inflicting can be addressed, because, unlike the existential risks that Ord focuses on, they are tangible and easy to recognize. But first, we must call more attention to the problem in order to generate pressure on governments and companies to acknowledge the risks that are materializing now. Besides, a tech sector that is bent on automation and anti-democratic manipulation and surveillance is hardly a good foundation upon which to address longer-term risks.

Though we should not dismiss more speculative risks to humanity out of hand, we cannot afford to ignore the threats that are right in front of us. We may reach a precipice at some point, but we are already sliding down a slippery slope.

Source: project-syndicate.org

1 comment:

  1. To translate a post from English into French, here is the method. First translate it into another language on the list. Then the translation into French becomes possible at the top of the list, i.e., by translating from that language into French.

