Trust and Safety
Creating benchmarks, tools, and methodologies to evaluate and ensure safe, trusted generative AI.
Our members are working together to understand the landscape of AI trust and safety risks and build tools, methods, and benchmarks for detecting and mitigating those risks. We also help educate the public about responsible AI and the developer community about responsible model and application development.
Our work
Trusted evals request for proposals
The AI Alliance Trusted Evals request for proposals seeks new perspectives in the AI evaluation domain. We welcome collaborators from academia, industry, and startups, and anyone eager to work in the open and build an ecosystem around their work.
Understanding AI Trust and Safety: A Living Guide
A major challenge for the successful use of AI is understanding potential trust and safety issues and their mitigation strategies. Failure to consider these issues can affect an organization's operations and the experience of its users. Concerns about safety are also a driver of current regulatory initiatives. Hence, applications built with AI must be designed and implemented with trust and safety in mind. This guide introduces trust and safety concerns and offers guidance for AI projects.