WHAT is p(doom)?

FACT: AI Industry Engineer Average P(doom): 10-20%

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or “doom”) as a result of artificial intelligence.[1][2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.[3]

Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton[4] and Yoshua Bengio[5] began to warn of the risks of AI.[6] In 2022, a survey of AI researchers, which had a 17% response rate, found that the majority believed there is at least a 10% chance that our inability to control AI could cause an existential catastrophe.[7]

FACT: Basic Summary of the AI Industry and Frontier Labs’ Current Position on AI Safety:

1. They openly admit their technology could end all life on earth.

2. They openly admit they do not know (yet) how to control their technology.

3. They openly admit they fundamentally do not understand how their technology works.

4. The vast majority of their time and money (over 80%) goes into a ‘race to the bottom’ to make AI stronger, with no guarantee that it is safe.

5. They currently lobby to weaken or block new AI regulation, such as California’s SB 1047.

EXAMPLES: P(doom) opinions of AI thought leaders

Name | P(doom) | Notes
Roman Yampolskiy | 99.99%[11][Note 5] | AI safety and cybersecurity researcher
Eliezer Yudkowsky | 99%+[10] | Founder of the Machine Intelligence Research Institute
Dan Hendrycks | 80%+[1][Note 3] | Director of the Center for AI Safety
Geoffrey Hinton | 50%[6][Note 1] | “Godfather of AI”; University of Toronto, formerly at Google
Emmett Shear | 5-50%[6] | Co-founder of Twitch and former interim CEO of OpenAI
Paul Christiano | 50%[9] | Head of AI safety at the US AI Safety Institute
Jan Leike | 10-90%[1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI
Yoshua Bengio | 20%[3][Note 2] | “Godfather of AI” and scientific director of the Montreal Institute for Learning Algorithms
Lina Khan | 15%[6] | Chair of the Federal Trade Commission
Dario Amodei | 10-25%[6] | CEO of Anthropic
Elon Musk | 10-20%[8] | CEO of X, Tesla, and SpaceX; wealthiest person in the world
Vitalik Buterin | 10%[1] | Co-founder of Ethereum
Casey Newton | 5%[1] | American technology journalist
Yann LeCun | <0.01%[13][Note 6] | Chief AI Scientist at Meta
Grady Booch | 0%[1][Note 4] | American software engineer
Marc Andreessen | 0%[12] | American businessman and venture capitalist

Learn more links…

Google Zeitgeist: “Creating AI Could Be the Biggest & Last Event in Human History” | Stephen Hawking

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
Stephen Hawking, Google Zeitgeist, 12 May 2015

“The phenomenon of [AI] synthesis, if properly arranged, can become explosive. The danger is intrinsic. For progress there is no cure.”

John von Neumann, 1946

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control.”

Alan Turing, 1951

“As machines learn they may develop unforeseen strategies at rates that baffle their programmers.”

Norbert Wiener, 1960

“Unquestionably… an ‘intelligence explosion’”

I.J. Good, 1966

“Cogito, ergo sum.” (I think, therefore I am.)

René Descartes, 1637