Artificial Intelligence Safety Institute Consortium (AISIC)
Development and deployment of safe and trustworthy artificial intelligence (AI)
About the Artificial Intelligence Safety Institute Consortium (AISIC), established by the U.S. National Institute of Standards and Technology (NIST)
In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST has established the U.S. Artificial Intelligence Safety Institute (USAISI). To support this Institute, NIST has created the U.S. AI Safety Institute Consortium. The Consortium brings together more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.
Consortium members will be expected to contribute technical expertise in one or more of the following areas:
- Data and data documentation
- AI Metrology
- AI Governance
- AI Safety
- Trustworthy AI
- Responsible AI
- AI system design and development
- AI system deployment
- AI Red Teaming
- Human-AI Teaming and Interaction
- Test, Evaluation, Validation and Verification methodologies
- Socio-technical methodologies
- AI Fairness
- AI Explainability and Interpretability
- Workforce skills
- Psychometrics
- Economic analysis
- Models, data, and/or products to support and demonstrate pathways to enable safe and trustworthy AI systems through the NIST AI Risk Management Framework
- Infrastructure support for consortium projects
- Facility space and hosting for consortium researchers, webinars, workshops and conferences, and online meetings