Getting started with AI trust and safety

Technical Report

Introducing The AI Alliance Trust and Safety User Guide, now available.

This “living” document introduces current trends in research and development for ensuring that AI models and applications produce trustworthy results, in particular results that satisfy various safety criteria. Aimed at developers and leaders who are relatively new to this topic, the guide defines common terms, surveys several leading trust and safety education and technology projects, and offers recommendations for how to build trust and safety into your AI-based applications.

The leading trust and safety projects discussed include the AI Risk Management Framework from the National Institute of Standards and Technology (NIST), Trust and Safety at Meta, the Mozilla Foundation’s guidance on Trustworthy AI, the MLCommons Taxonomy of Hazards, and others.

We welcome your contributions!

We intend to evolve this living document, in collaboration with the broader AI community, to reflect emerging trends in trust and safety and to provide more in-depth guidance and usable examples. The guide is published using GitHub Pages, so anyone can contribute improvements as pull requests to the guide’s source repository.