Accelerating responsible innovation in AI
Open and transparent innovation is essential for equipping AI researchers, developers, and users with the knowledge and tools to leverage AI advancements safely and inclusively, prioritizing diversity and economic opportunity for all.
Through member-driven working groups, we bring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits.
Contribute to an AI Alliance project
Core projects
Core projects address substantial cross-community challenges and are an opportunity for individual contributors and members to collaborate, build, and make an impact on the future of AI. Core Projects are managed directly by the AI Alliance and governed as described below.
A statement in opposition to California SB 1047
Advocacy
Our perspectives and recommendations in opposition to California SB 1047, the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Trusted Evals request for proposals
Trust & Safety
The AI Alliance Trusted Evals request for proposals seeks new perspectives in the AI evaluation domain. We are excited to work with those in academia, industry, and startups, and with anyone eager to collaborate in the open and build an ecosystem around their work.
Responding to the U.S. NTIA request for comment on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights
Advocacy
The request seeks public input on the potential risks, benefits, and policy approaches for AI foundation models whose weights are broadly accessible.
Ranking AI Safety Priorities by Domain
Trust & Safety
A challenge for software development teams adopting generative AI is making sense of the safety issues that their applications must address. The AI safety ecosystem is broad and growing quickly, making it difficult for these development teams to know where they should focus their efforts. What safety concerns are most important for them to work on first?
Open Trusted Data Initiative
Foundation Models and Datasets
Cataloging and managing trustworthy datasets.
Time Series Data and Model Initiative
Foundation Models and Datasets
Time-series applications are an important target for AI. In addition to gathering high-quality and fully-governed time series datasets as part of the Open Trusted Data Initiative, Alliance members are collaborating on new and improved time series models (as part of the Industry Open FMs Initiative) and on benchmarks, both general-purpose and application-specific.
Please join us. We need time series and domain experts, especially subject matter experts and use case and product owners who would like to apply emerging time series foundation models to new applications. There is an acute shortage of good, open time series datasets, and especially of benchmarks and evaluation methods for various use cases. Contributions are especially welcome here.
AI Accelerator Software Ecosystem Guide
Hardware Enablement
The AI Accelerator Software Ecosystem Guide provides an introduction to the broad topic of software tools that support seamless execution of AI workloads on different hardware accelerators. Industry experts provide guidance on the state of the art and how to ensure success.
Industry Open FMs Initiative
Foundation Models and Datasets
We have seen rapid progress in building and releasing highly-capable and open foundation models for general language, coding, scientific discovery, and multi-modal scenarios.
A key development in model strategies is a focus on building smaller, more specialized models.
More details are coming soon, but we would love for you to join us. We need both model-building and domain experts, including those outside the target domains listed above.
Understanding AI Trust and Safety: A Living Guide
A major challenge for the successful use of AI is understanding potential trust and safety issues, along with their mitigation strategies. Failure to consider these issues could impact an organization's operations and the experience of its users. Concerns about safety are also a driver for current regulatory initiatives. Hence, applications built with AI must be designed and implemented with AI trust and safety in mind. This guide provides an introduction to trust and safety concerns, and offers guidance for AI projects.
The Living Guide to Applying AI
Applications and Tools
A "living guide" for building AI-enabled applications, this guide provides an introduction to several established design patterns for building AI systems and products, with contributions from different experts in the AI Alliance. Several common patterns, like RAG (retrieval-augmented generation), are explored from different angles, and emerging patterns, like GraphRAG (using a graph network as a source of RAG content) and agents.
Trust and Safety Evaluations
Trust & Safety
Much like other software, generative AI (“GenAI”) models and the AI systems that use them need to be trusted and useful to their users. The Trust and Safety Evaluations project fills gaps in the current landscape of the taxonomy of different kinds of evaluation, the tools for creating and running evaluations, and leaderboards to address particular categories of user needs.
Affiliated projects
Affiliated projects are led and managed by members and identified as being aligned to the AI Alliance mission. Although Affiliated Projects are not managed by the AI Alliance, we highlight them as contributing to open, safe, and responsible AI.
IBM Granite Code Models
A series of code models trained by IBM and released under the Apache 2.0 license. Includes base pretrained and instruct models.
AIConfig & Editor
AIConfig is a config-based framework for building generative AI applications.
GenAI in Education: Usage Guidance
A report evaluating the feasibility, benefits, and limitations of using generative AI technologies in educational settings and their impact on learning outcomes.
LLM 360: Amber
The Amber project includes a 7B English language model with the LLaMA architecture, an instruction-following model finetuned from LLM360-Amber, and a safety-finetuned instruction model using LLM360-AmberChat as the base.
TrustyAI
TrustyAI is an open-source toolkit designed for responsible AI.
Ragna
Ragna is an open-source RAG-based (Retrieval-Augmented Generation) AI orchestration framework designed to bridge the gap between AI research and production deployment.