Armitage Archive

A Risk Expert's Analysis on What We Get Wrong About AI Risks [Guest]

by Devansh

Original article

This page contains highlights I saved while reading A Risk Expert's Analysis on What We Get Wrong About AI Risks [Guest] by Devansh. These quotes were collected using Readwise.

Highlights

A common pitfall in discussing AI risks is what I term the 'Svengali Fallacy' — the tendency to overestimate the power of AI to manipulate and shape human behavior through algorithms and synthetic media. This fallacy often leads to forecasts that fail to account for the practical realities of how such influence would actualize.



When people are asked to estimate the dangers posed by AI, they're not truly evaluating the probability of catastrophic outcomes; instead, they're rating how readily a scenario comes to mind—how vividly they can imagine it, which is more a measure of familiarity than likelihood.



Threats and hazards are often the triggers or sources of potential harm. They are not, in themselves, risks. Risks pertain to the potential negative outcomes that these threats and hazards could cause. To draw an analogy from public health, consider pathogens. They are hazards, while the risk they pose translates to the likelihood and impact of resulting illness or death.

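This highlight maps onto the standard decomposition of risk as the likelihood of a negative outcome times its impact. As a minimal illustration of that separation (my own sketch, not from the article; the class names and the example numbers are made up), one might model it like this:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """A source of potential harm (e.g. a pathogen, or an unaligned AI behavior)."""
    name: str

@dataclass
class Risk:
    """The potential negative outcome a hazard could cause, with its likelihood and impact."""
    hazard: Hazard
    outcome: str        # the negative outcome, not the hazard itself
    probability: float  # likelihood of that outcome (0..1), illustrative value
    impact: float       # severity of that outcome, in whatever units harm is measured

    def expected_harm(self) -> float:
        # A common point estimate of risk: likelihood times impact.
        return self.probability * self.impact

# The pathogen is only the hazard; the risk is the illness it may cause.
pathogen = Hazard("influenza virus")
illness_risk = Risk(hazard=pathogen, outcome="severe illness", probability=0.02, impact=7.0)
print(f"{illness_risk.hazard.name} -> {illness_risk.outcome}: "
      f"expected harm = {illness_risk.expected_harm():.2f}")
```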


The situation at OpenAI presented us with a rare and invaluable post-mortem opportunity to assess the quality of risk judgment and decision-making (JDM) exhibited by its board members, including but not limited to figures such as Helen Toner, Ilya Sutskever, and Adam D'Angelo.



In Argentina, there was certainly no lack of expertise or willingness to exploit AI for political gain. The generative AI tools were mature enough to be deployed effectively. The stage seemed set for AI to demonstrate its potentially disruptive power in the electoral arena. Yet, the anticipated seismic shift did not occur.



Similarly, in discussions about AI, terms like proxy gaming, emergent goals, power-seeking, or unaligned behaviors are frequently and inaccurately cited as risks, when they are actually potential threats or hazards. They don't inherently include the dimension of outcome.



Chief among these is the deadly ambiguity spawned by conflating threats and hazards with risks—a distinction that might seem academic but is vital for accurate discourse.



The Dunning-Kruger effect, a cognitive bias where individuals with limited knowledge overestimate their understanding, is particularly rife in these debates.



Many assertions about AI's future are driven by the availability heuristic—our tendency to predict the likelihood of events based on what's readily brought to mind, not on objective data.



When we scrutinize the chorus of voices in the AI risk debate, we find it overwhelmingly composed of software engineers, language and cognition researchers, and notably, venture capitalists and Silicon Valley influencers. Few of these commentators come from backgrounds steeped in risk measurement and quantification, risk communication, or decision science.



As with pandemics and COVID-19 risks, opinions about AI risk are as ubiquitous and deeply entrenched as they are susceptible to cognitive biases. Often, these views stem from the same psychological heuristics that mislead our judgments across various domains.


