Getting started with AI trust and safety
The AI trust and safety guide is a “living” document that introduces current trends in research and development for ensuring that AI models and applications produce trustworthy results, and in particular, results that satisfy various safety criteria. Aimed at developers and leaders who are relatively new to this topic, the guide defines common terms, surveys several leading trust and safety education and technology projects, and offers recommendations for building trust and safety into your AI-based applications.