Our members are building the future of AI

These core projects represent some of the most significant efforts behind the movement to create safe, responsible AI rooted in open innovation.

Core Projects


A statement in opposition to California SB 1047

Advocacy

Our perspectives and recommendations in opposition to California SB 1047, the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.


AI Accelerator Software Ecosystem Guide

Hardware Enablement

The AI Accelerator Software Ecosystem Guide provides an introduction to the broad topic of software tools that support seamless execution of AI workloads on different hardware accelerators. Industry experts provide guidance on the state of the art and how to ensure success.


Guide to Essential Competencies for AI

Skills & Education

This new guide has been developed to help address the AI skills gap that is increasing across geographies and industries.


Industry Open FMs Initiative

Foundation Models and Datasets

We have seen rapid progress in building and releasing highly capable, open foundation models for general language, coding, scientific discovery, and multi-modal scenarios.

A key development in model strategies is a focus on building smaller, more specialized models.

More details are coming soon, but we would love for you to join us. We need both model-building and domain experts, including those outside the target domains listed above.


Open Agent Lab

Trust & Safety

The Open Agent Lab is a collaborative community of open source AI projects and domain-specific work groups that seeks to make AI agents successful in the real world through fast experimentation, distillation of lessons learned into usable reference architectures and implementations, and development of new tools for building and deploying agents.

Open Trusted Data Initiative

Foundation Models and Datasets

Cataloging and managing trustworthy datasets.

Ranking AI Safety Priorities by Domain

Trust & Safety

A challenge for software development teams adopting generative AI is making sense of the safety issues that their applications must address. The AI safety ecosystem is broad and growing quickly, making it difficult for these development teams to know where they should focus their efforts. What safety concerns are most important for them to work on first?


Responding to the U.S. NTIA request for comment on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights

Advocacy

The request seeks public input on the potential risks, benefits, and policy approaches for AI foundation models whose weights are broadly accessible.

The Living Guide to Applying AI

Applications and Tools

A "living guide" for building AI-enabled applications, this guide introduces several established design patterns for building AI systems and products, with contributions from experts across the AI Alliance. Common patterns, like RAG (retrieval-augmented generation), are explored from different angles, along with emerging patterns, like GraphRAG (using a graph network as a source of RAG content) and agents.
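To make the RAG pattern mentioned above concrete, here is a minimal sketch of its two steps: retrieve documents relevant to a query, then assemble them into a prompt for a generative model. The toy corpus, word-overlap scoring, and prompt template are illustrative assumptions, not code from the guide.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word-overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine the retrieved context and the user query into one prompt."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Hypothetical corpus; a real system would use a vector store and an LLM call.
corpus = [
    "RAG grounds model answers in retrieved documents.",
    "Time series models forecast future values.",
    "GraphRAG retrieves context from a graph network.",
]
query = "How does RAG ground its answers?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In production the overlap scorer is typically replaced by embedding similarity search, and the prompt is sent to a language model; the structure of the pattern stays the same.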


Time Series Data and Model Initiative

Foundation Models and Datasets

Time series applications are an important target for AI. In addition to gathering high-quality, fully governed time series datasets as part of the Open Trusted Data Initiative, Alliance members are collaborating on new and improved time series models (as part of the Industry Open FMs Initiative) and benchmarks, both general-purpose and application-specific.

Please join us. We need time series and domain experts, particularly subject matter experts and use case and product owners who would like to apply emerging time series foundation models to new applications. There is an acute shortage of good, open time series datasets and, especially, of benchmarks and evaluation methods for various use cases. Contributions are especially welcome here.

Trust and Safety Evaluation Projects

Trust & Safety

Much like other software, generative AI ("GenAI") models and the AI systems that use them need to be trusted and useful to their users. Our set of Trust and Safety Evaluation Projects fills gaps in the current landscape: the taxonomy of different kinds of evaluation, the tools for creating and running evaluations, and leaderboards that address particular categories of user needs.


Trusted Evals request for proposals

Trust & Safety

The AI Alliance Trusted Evals request for proposals seeks new perspectives in the AI evaluation domain. We are excited to work with those in academia, industry, and startups, and with anyone eager to collaborate in the open and build an ecosystem around their work.