Our members are building the future of AI

These core projects represent some of the most significant efforts in the movement to create safe, responsible AI rooted in open innovation.

Core Projects

A statement in opposition to California SB 1047


Our perspectives and recommendations in opposition to California SB 1047, the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

Responding to the U.S. NTIA request for comment on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights


The request seeks public input on the potential risks, benefits, and policy approaches for AI foundation models whose weights are broadly accessible.

Trusted evals request for proposals

Trust & Safety

The AI Alliance Trusted Evals request for proposals seeks new perspectives in the AI evaluation domain. We are excited to work with people in academia, industry, and startups, and with anyone eager to collaborate in the open and build an ecosystem around their work.

Understanding AI Trust and Safety: A Living Guide

Trust & Safety

A major challenge for the successful use of AI is understanding potential trust and safety issues and their mitigation strategies. Failure to consider these issues can harm an organization's operations and the experience of its users. Safety concerns are also a driver of current regulatory initiatives. Hence, applications built with AI must be designed and implemented with trust and safety in mind. This guide introduces trust and safety concerns and offers guidance for AI projects.