  • P(doom)
  • Engineering!
    • AISIC by NIST
    • BiocommAI
    • NormaxAI
    • SoftServe
    • SSI
    • Supermicro
    • WSP
    • Science
      • Science. Managing extreme AI risks amid rapid progress πŸ”—
      • Science. Regulating advanced artificial agents πŸ”—
      • Provably safe systems: the only path to controllable AGI πŸ”—
      • AI: Unexplainable, Unpredictable, Uncontrollable. | Roman Yampolskiy πŸ”—
      • Evidence for AI being Uncontrollable | Roman Yampolskiy | Future of Life Institute πŸ”—
      • The Case for Narrow AI | Roman Yampolskiy | Foresight Institute πŸ”—
  • Standards
    • AISIC. Artificial Intelligence Safety Institute Consortium by NIST πŸ”—
• Dioptra by NIST πŸ”—
      • Managing Misuse Risk for Dual-Use Foundation Models πŸ”—
      • AI Risk Management Framework | NIST πŸ”—
      • NIST AIRC – Playbook πŸ”—
      • NIST AIRC – Glossary πŸ”—
      • Towards a Standard for Identifying and Managing Bias in Artificial Intelligence πŸ”—
    • Amazon Web Services
      • AWS. Transform responsible AI from theory into practice πŸ”—
      • AWS. Secure approach to generative AI πŸ”—
    • Anthropic
      • Anthropic Research πŸ”—
      • Anthropic. Core Views on AI Safety: When, Why, What, and How πŸ”—
• Google DeepMind
      • AI at Google: our principles πŸ”—
      • Google. Frontier Safety Framework πŸ”—
    • Meta
      • Meta. Responsible AI πŸ”—
      • Meta. Our responsible approach to Meta AI and Meta Llama 3 πŸ”—
    • Microsoft
• Microsoft AI. Responsible AI Principles and approach πŸ”—
      • Microsoft. AI Safety Policies πŸ”—
    • OpenAI
      • OpenAI. Product safety standards πŸ”—
      • OpenAI. Rule-Based Rewards (RBRs) πŸ”—
      • OpenAI. Rule Based Rewards for Language Model Safety πŸ”—
      • OpenAI. Using GPT-4 for content moderation πŸ”—
  • News
  • Story
  • Biology
    • Human Brain vs. AI Brain (20 trillion times faster)
    • Homo sapiens vs. Machine intelligence. Ten Simple Facts.
    • How Rogue AIs may Arise. Published 22 May 2023 by Yoshua Bengio.
    • Reasoning through arguments against taking AI safety seriously. Published 9 July 2024 by Yoshua Bengio.
    • Mutualistic Symbiosis is an extremely successful strategy in our Natural World. πŸ”—
• When corals met algae: Symbiotic relationship crucial to reef survival dates to the Triassic. Nov. 2, 2016. Princeton πŸ”—
    • BiocommAI. On Homo sapiens… and P(doom) by Peter A. Jensen, 20 August 2024 πŸ”—
    • Discover SafeAI Forever πŸ”—
  • Laws
    • California SB 1047 is a transformative law at a pivotal moment in human history. πŸ”—
• CA SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. (2023-2024) πŸ”—
    • EU AI Act. Shaping Europe’s digital future πŸ”—
• International Scientific Report on the Safety of Advanced AI. Published 19 June 2024 by Yoshua Bengio πŸ”—
    • S.4178 – Future of Artificial Intelligence Innovation Act of 2024 πŸ”—
    • A Roadmap for Artificial Intelligence Policy in the U.S. Senate πŸ”—
    • U.S. Congress. AI Bills (1,212) πŸ”—
  • Letters
    • Statement on AI Risk πŸ”—
    • Pause Giant AI Experiments: An Open Letter πŸ”—
    • Letter to CA state leadership from Professors Bengio, Hinton, Lessig, & Russell πŸ”—
    • Letter from OpenAI Whistleblowers πŸ”—
• Senator Scott Wiener. May 20, 2024. RE: Senate Bill 1047, An Open Letter to the AI Community πŸ”—
• Senator Scott Wiener. RE: Response to inaccurate, inflammatory statements by Y Combinator & a16z regarding Senate Bill 1047 πŸ”—
• SUPPORT by BiocommAI for SB 1047. Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
  • For Safe AI
    • American People Overwhelmingly Support Safe AI (AIPI Media) πŸ”—
• Thousands of AI Scientists and Thought Leaders Here πŸ”—
    • Yoshua Bengio, PhD. at UM and Mila πŸ”—
• Geoffrey Hinton, PhD. at UT (Retired) πŸ”—
    • Peter A. Jensen, MFA BSc, at BiocommAI πŸ”—
    • Lawrence Lessig, at Harvard Law πŸ”—
    • Stuart Russell, PhD. at UC Berkeley πŸ”—
    • John Sherman, For Our Humanity πŸ”—
    • Jaan Tallinn, at Centre for the Study of Existential Risk πŸ”—
    • Max Tegmark, PhD. at MIT πŸ”—
    • Roman Yampolskiy, PhD. at UL πŸ”—
  • THINK.
    • THINK. Open & Safe Google Slides Link. Sharing welcomed! πŸ”—
• THINK (flip book, same content). A humble public service message. Sharing welcomed! πŸ”—
    • THINK. Would you drive across a BRIDGE if the Bridge Engineers estimated 10-20% CHANCE of total collapse? (DEATH) πŸ”—
    • THINK. Would you get in an ELEVATOR if the Elevator Engineers estimated a 10-20% CHANCE of catastrophic failure? (DEATH) πŸ”—
    • THINK. Would you drive a CAR if the Automotive Engineers estimated a 10-20% CHANCE of autonomous total disaster? (DEATH) πŸ”—
    • THINK. Would you ride a CABLE CAR if the Cable Car Engineers estimated a 10-20% CHANCE of total disaster? (DEATH) πŸ”—
    • THINK. Would you board a TRAIN if the Train Engineers estimated a 10-20% CHANCE of total disaster? (DEATH) πŸ”—
    • THINK. Would you board a SHIP if the Ship Engineers estimated a 10-20% CHANCE of sinking? (DEATH) πŸ”—
    • THINK. Would you board a PLANE if the Aviation Engineers estimated a 10-20% CHANCE of catastrophic failure? (DEATH) πŸ”—
  • SafeAI Blog πŸ”—
Ten Simple Facts. Homo sapiens vs. Machine intelligence

Copyright 2024 | BiocommAI Inc. | All Rights Reserved
