Artificial Intelligence Safety Institute Consortium (AISIC)

Development and deployment of safe and trustworthy artificial intelligence (AI)

About the Artificial Intelligence Safety Institute Consortium (AISIC), established by the U.S. National Institute of Standards and Technology (NIST)

In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST has established the U.S. Artificial Intelligence Safety Institute (USAISI). To support this Institute, NIST has created the U.S. AI Safety Institute Consortium. The Consortium brings together more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.

Consortium members will be expected to contribute technical expertise in one or more of the following areas:

  • Data and data documentation
  • AI metrology
  • AI governance
  • AI safety
  • Trustworthy AI
  • Responsible AI
  • AI system design and development
  • AI system deployment
  • AI red teaming
  • Human-AI teaming and interaction
  • Test, evaluation, validation, and verification methodologies
  • Socio-technical methodologies
  • AI fairness
  • AI explainability and interpretability
  • Workforce skills
  • Psychometrics
  • Economic analysis
  • Models, data, and/or products to support and demonstrate pathways to enable safe and trustworthy AI systems through the NIST AI Risk Management Framework
  • Infrastructure support for consortium projects
  • Facility space and hosting of consortium researchers, webinars, workshops, conferences, and online meetings