One year ago, IBM, Meta and more than 50 leading organizations in AI announced the creation of the AI Alliance, a global collaboration committed to building, enabling, and advocating for open innovation in AI across the technology and societal landscape to ensure that AI technology is accessible, trusted, and beneficial to everyone.
One year in, we are well on our way to making this goal a reality with a vibrant and fast-growing global community collaborating to drive progress in open and trusted data; new AI foundation models for industrial and scientific applications essential to the world economy and to the health and prosperity of people; open technology tooling to evaluate and enable safe and trusted AI models and systems in deployment; education and advocacy with students and governments globally; and much more.
AI Alliance Today: A Snapshot
Let’s begin with some key achievements. Starting from zero, we have created a community that has:
- More than 140 organizational members in 23 countries, with a steering committee and 12 major working groups addressing the biggest challenges in AI today, from data provenance and quality, to ensuring trust and safety in AI systems, to applying AI to new problems and opportunities.
- Ninety-three active core and affiliated projects with more than 1200 working group collaborators.
- Participated in or led thirty global events in 10 countries involving more than 20,000 people.
- Engaged and educated government policymakers on essential topics impacting open innovation in AI.
- Published five AI Alliance Guides to essential topics in AI, embodying the experience of our members, cataloging the most capable open tools, and guiding researchers, developers, and organizational decision makers on how to build and apply AI.
These collective efforts represent important and impactful progress in our first year, when we were just getting our program up and running. However, much more is coming. We intend to grow our global community of individuals, organizations, projects, events and engagement to make AI open, beneficial, trusted, and accessible to everyone. Today we are committing to scale up the breadth of our reach and impact by 10x in 2025.
As a next step toward that goal, we are excited to announce two new major initiatives of the AI Alliance leading us into 2025: our Open Trusted Data Initiative (OTDI) and Trust and Safety Evaluation Initiative (TSEI). These new initiatives are major priorities for the AI Alliance and its members and seek to address some of the biggest challenges in AI today: the provenance, transparency and quality of data we use in AI models and systems, and the tooling, methods, and guidance on how to build and deploy AI systems that people and organizations can trust.
In addition, we have just released the latest AI Alliance guide, the “Guide to Essential Competencies for AI,” the product of a years-long AI Alliance effort to engage and build consensus among academic, industry, and civil society stakeholders about the future curriculum for AI education. Today we are also announcing the launch of a new Climate and Sustainability Working Group, which will develop and apply multi-modal foundation models and open source software tooling to major challenges in understanding and mitigating climate change.
Here's more on these announcements and many other accomplishments of 2024:
Ensuring Safety and Trust in AI
One of the biggest challenges of the GenAI revolution has been ensuring that AI systems behave predictably and produce outputs that are helpful and accurate, so they can serve as trusted capabilities in partnership with humans and with the information technology systems that power the world. That’s why we started here.
Launched earlier this year, our Trust and Safety working group has grown to more than 230 individual participants from over 40 organizations, including IBM, Meta, Intel, Red Hat, HydroX AI, ServiceNow, Sony, Columbia, EleutherAI, Stanford, MLCommons, Notre Dame, Imperial College, University of Toronto, and others.
One of the first outputs of this group was The AI Alliance Living Guide to Trust and Safety: a growing catalog of tools, capabilities, and guidance on how to evaluate AI systems. It includes many AI Alliance Affiliated Projects from our members, such as MLCommons’ AILuminate, Meta’s Llama Guard, IBM’s unitxt, AI Risk Atlas, and Granite Guardian, the NIST Artificial Intelligence Risk Management Framework, and others.
To expand our work in AI Trust and Safety, we are announcing today the AI Alliance Trust and Safety Evaluation Initiative to engage the global community far beyond just our Alliance members with a three-pronged approach for AI system evaluation:
- AI Evaluation Taxonomy and Domain-Specific Risk Assessments: we released the start of a comprehensive taxonomy of evaluation concerns, from risks, to alignment, to performance, with general-purpose and domain-specific (e.g., finance, healthcare, retail, scientific discovery, etc.) criteria. As part of this, we launched a project to engage the open community of subject matter experts to identify domain-specific risks that existing and new evaluators must address.
- New Evaluators for the Taxonomy and a Benchmark Leaderboard: a major goal of the initiative is to ensure that every “node” of the evaluation taxonomy has implementations suitable for any deployment scenario, and to support the build-out of new evaluators and benchmarks. One enabler is Safety BAT, an interactive benchmark leaderboard deployed on Hugging Face that lets users compare benchmarks and decide which ones provide the most coverage with the least duplication, and hence the least overhead. It is the first of several leaderboards under development that will enable users to filter benchmarks for their specific safety or use-case interests and download configurations for straightforward deployment. For example, if you are building a chat assistant for healthcare, which must meet both general and domain-specific criteria, which benchmarks and individual evaluators are best used to ensure your safety and alignment goals?
- AI Trust and Safety Reference Stack: we released the first version of an open software platform for comprehensive evaluation of AI models and systems to enable developers to choose the right system components and architecture for their use case. This new stack is released today as a V0.1 for early community use and feedback. It includes several of the most popular, de facto standard libraries and tools in the ecosystem.
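The benchmark-comparison idea behind the leaderboard, choosing the set of benchmarks that covers the most evaluation criteria with the least duplication, can be sketched as a simple greedy coverage algorithm. The following is a minimal illustration, not the actual Safety BAT implementation; the benchmark names and taxonomy labels are invented for this example.

```python
# Hypothetical sketch: greedily select benchmarks that maximize coverage of
# taxonomy "nodes" while minimizing overlap. Names are illustrative only.

def select_benchmarks(benchmarks, required):
    """Greedily pick benchmarks until all required taxonomy nodes are covered.

    benchmarks: dict mapping benchmark name -> set of taxonomy nodes it tests
    required:   set of taxonomy nodes the deployment scenario must cover
    Returns (chosen benchmark names in order, nodes left uncovered).
    """
    uncovered = set(required)
    chosen = []
    while uncovered:
        # Pick the benchmark that adds the most not-yet-covered nodes.
        best = max(benchmarks, key=lambda b: len(benchmarks[b] & uncovered))
        gained = benchmarks[best] & uncovered
        if not gained:  # remaining nodes are covered by no benchmark
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Illustrative catalog for the healthcare chat assistant example above.
catalog = {
    "general_safety_v1": {"toxicity", "bias", "jailbreak"},
    "medical_qa_eval":   {"clinical_accuracy", "bias"},
    "robustness_suite":  {"jailbreak", "prompt_injection"},
}
needs = {"toxicity", "bias", "clinical_accuracy", "jailbreak"}
picked, missing = select_benchmarks(catalog, needs)
print(picked, missing)
```

Here two benchmarks suffice: the general safety suite covers three required nodes, and the medical QA benchmark adds the remaining clinical-accuracy node without redundant selections.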
In 2025, we will integrate and expand on these projects and collaborations by: creating an industry-standard community hub for testing, developing, and choosing benchmarks and evaluators from a growing ecosystem and applying them to AI models and systems; building out the Trust and Safety Reference Stack, along with recipes for use, integration, and deployment; and creating new benchmarks and evaluation techniques to address safety, risk, and quality requirements across specific domains, including education, finance, health, retail, manufacturing, climate, materials, and drug discovery.
Open Trusted Data: Creating a Better Foundation for AI
Data is the most important constituent of AI models and systems, yet today data for AI too often has murky provenance, unclear licensing, and large gaps in quality and diversity of languages, modalities, and expert domains represented.
That’s why we are announcing today the AI Alliance’s Open Trusted Data Initiative (OTDI), with the goal of releasing large-scale, open, permissively licensed data sets with clear provenance and lineage across all domains and modalities essential for AI.
The newly formed Open Trusted Data Initiative brings together more than 150 individual participants from more than 20 organizations including Pleias, BrightQuery, Common Crawl, ServiceNow, Hugging Face, IBM, Allen Institute for AI, Cornell, Aitomatic, Tokyo Electron, EPFL, and beyond, all working to develop better provenance and governance expectations, and better tools and processes to curate data sets that are more transparent, trusted, accurate, and applicable broadly.
Today we release a three-part starting point for OTDI:
- Provenance and Governance: we are releasing our first proposal, a baseline provenance and governance specification v0.1 for data that will be part of the initiative. We are also announcing the formation of a steering committee to guide the development of this specification and its implementation for the intake, curation, and management of data in the OTDI.
- Tools for Data Preparation and Provenance: the AI Alliance has been curating a set of tools from its members for preparing and processing data, including Data Prep Kit, Docling, Ray Data, Arize Phoenix, and others. The special characteristics and needs of AI data pipelines, including multi-modal data and massive scalability requirements, are demanding and driving innovation in data processing technologies. The AI Alliance is supporting and directing this innovation with OTDI.
- AI Alliance Open Trusted Data Catalog: we are announcing the first data sets in the AI Alliance’s Open Trusted Data Catalog. These data sets have been sourced from AI Alliance’s members who are experts in data engineering, model development, and AI applications in various domains. They represent the start of what we hope will grow to be a large resource for the global AI community.
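To make the provenance-and-governance idea concrete, here is a minimal sketch of what a dataset provenance record and an intake completeness check might look like. The field names and license list below are invented for illustration and do not reflect the actual OTDI v0.1 specification.

```python
# Hypothetical illustration of a minimal dataset provenance record and a
# basic intake check. Field names are invented for this sketch and do NOT
# reflect the actual OTDI v0.1 specification.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    dataset_name: str
    license: str                 # e.g., an SPDX license identifier
    source_url: str              # where the raw data was obtained
    derived_from: list = field(default_factory=list)  # lineage: parent datasets
    modalities: list = field(default_factory=list)    # e.g., ["text", "image"]

# Illustrative allow-list; the real spec would define its own criteria.
PERMISSIVE_LICENSES = {"CC-BY-4.0", "CDLA-Permissive-2.0", "Apache-2.0"}

def passes_intake_check(rec: ProvenanceRecord) -> bool:
    """A dataset must declare a permissive license, a source, and a modality."""
    return (rec.license in PERMISSIVE_LICENSES
            and bool(rec.source_url)
            and bool(rec.modalities))

rec = ProvenanceRecord(
    dataset_name="example-web-text",
    license="CDLA-Permissive-2.0",
    source_url="https://example.org/crawl",
    derived_from=["example-raw-crawl"],
    modalities=["text"],
)
print(passes_intake_check(rec))  # True for this record
```

Recording lineage explicitly (the `derived_from` field here) is what makes end-to-end provenance tracking possible as datasets are filtered, deduplicated, and merged downstream.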
This is just the start. In 2025, we plan to refine our provenance and governance expectations and our processes for accepting and verifying open trusted data. We also plan to build out tooling and publish recipes (pipelines) for trusted data processing, including end-to-end lineage tracking. We plan to significantly expand the data catalog, aiming to include data for most of the world’s languages; large repositories of high-quality multi-modal data, including images, audio, and video; and time series and scientific modalities. We also plan to catalog expert domain-specific data relevant for education, finance, health and biomedical science, retail, manufacturing, climate and geospatial, and materials and chemistry. As part of this effort, we will launch a series of global workshops and events to promote the collection of open trusted data and to improve the tooling required to enable this work.
Making AI Useful to Everyone
Generative AI has produced many impressive demonstrations and applications, yet large gaps remain in making it truly useful across the many domains and use cases relevant for individuals, businesses, governments and educational institutions. From trusted personal assistants to aiding doctors with diagnoses to helping climate scientists understand large scale trends in weather and land use and much more, the many promises of AI today can only be achieved if broad communities of scientists, engineers, subject matter experts, and organizational leaders work together.
This is why the AI Alliance has created a set of working groups that aim to provide better tools, models, application architectures, and guidance to developers building applications in many domains. We have brought together more than 500 individual participants from more than 50 organizations including IBM, Meta, Neo4j, Intel, AMD, ML Commons, Aitomatic, Red Hat, Citadel, Cornell, Senzing, EPFL, and many others to do this. Some of the other highlights of our work in 2024 include:
- AI Alliance Guide to Applying AI: a catalog and guide to the most capable tools, libraries and frameworks for building AI applications from the work and collected experience of AI Alliance members.
- AI Alliance AI Accelerator Software Ecosystem Guide: a comprehensive catalog and guide to the software needed to enable deployment of AI models and systems to various GPU and AI-specific accelerator hardware.
- Cooking with AI: an open “test kitchen” for experimenting with and developing new ways to apply AI using various model families, such as IBM’s Granite models and Meta’s Llama models, with a growing collection of recipes that use the most capable open libraries and models for popular application patterns, including RAG, RAFT, Graphs, Neurosymbolic, and Agentic Planning and Reasoning architectures.
- Industry Foundation Models: including foundation model development projects in semiconductors, robotics, geospatial, legal, health, finance, materials and chemistry, agentic interaction and reasoning, and time series. This includes the release of our first domain-specific open model, called SemiKong, a model trained with expert data from the semiconductor processing industry.
- Climate and Sustainability: today we are launching a new working group on Climate and Sustainability, which seeks to develop data, models, and tools that use AI to better understand and mitigate climate challenges. The working group is anchored by commitments from major AI Alliance members, including Imperial College, MBZUAI, ServiceNow, IBM, and NASA. Its first two major projects are Geobench, for benchmarking geospatial foundation models, and Geochat, an application that enables processing and understanding of geospatial data and phenomena through a natural language interface powered by IBM’s Granite open foundation models.
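Of the application patterns mentioned above, RAG (retrieval-augmented generation) is a good example of how these recipes fit together: retrieve relevant documents, then ground the model’s answer in them. The sketch below shows only the retrieval-and-prompt-assembly step, using simple word-overlap scoring with the standard library; real recipes would use embedding models and a vector store, and all documents here are illustrative, not taken from the actual Cooking with AI recipes.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. Uses naive
# word-overlap scoring (stdlib only); production recipes would use
# embeddings and a vector store. Documents are illustrative.
from collections import Counter

def score(query: str, doc: str) -> int:
    """Count shared word occurrences between query and document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "Granite models are open foundation models released by IBM",
    "Llama Guard helps classify unsafe content in model inputs and outputs",
    "Geospatial data describes features tied to locations on Earth",
]
context = retrieve("which open foundation models did IBM release", docs)

# The retrieved passage is then placed into the prompt so the model's
# generation is grounded in it (generation step omitted here).
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
print(context[0])
```

The same retrieve-then-ground structure underlies the more advanced patterns in the list: RAFT fine-tunes the model on retrieved context, and graph-based variants replace the document store with a knowledge graph.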
Policy and Advocacy in Support of AI Open Innovation
The AI Alliance has executed a vibrant global campaign focused on educating stakeholders and advocating for policies and governance of AI that embrace the importance of openness, open AI models, and an ecosystem that enables open innovation in AI. Governments worldwide are working to ensure that AI is used for the good of humanity and to minimize potential harms related to specific challenges, such as the quality and provenance of training data (e.g., sensitive data and privacy) and concerns about the safety and trustworthiness of AI model outputs (e.g., bias, hallucinations, and lack of robustness). These harms can result from the nuances of the AI technology itself or from adversarial actors that exploit specific AI vulnerabilities. The AI Alliance is committed to advocating not only for open technology but also for policies that help address these challenges. In 2024, the Policy and Advocacy Focus Area responded to government consultations and proposed legislation worldwide.
Each of these responses represents a broad consensus of leaders across AI Alliance member organizations. Based on these engagements, a few key observations on policy and advocacy emerge:
- Legislation that focuses on large proprietary models needs to consider open-source AI models explicitly, because they differ in benefits, underlying incentives, and risk-management roles.
- Legislators need adequate familiarity with AI development practices (particularly open source) to create effective and appropriate legislation.
- Legislation that places undue responsibility on foundation model developers to manage the risk of deployed applications ignores the downstream activities that define the specific application or use case and, therefore, the actual risk.
As we look to 2025, the AI Alliance will continue its work worldwide to inform policymakers about AI and the advantages of an open innovation ecosystem. For example, ETH Zurich will lead an AI Alliance policy roundtable to address the roadblocks hindering the democratization of AI on a global scale and to examine the specific work associated with implementing the EU AI Act. Furthermore, our recent expansion in membership can make a significant impact on evolving policies in India, promoting responsible AI development that drives economic growth.
Educating and Enabling the Global Community
As AI capabilities advance at lightning speed, it is essential that students and the existing workforce develop skills they can use to harness the capability of AI. The AI Alliance Skills and Education working group is working on this grand challenge, with more than 150 participants from more than 25 organizations including Cornell University, IBM, Abdus Salam International Centre for Theoretical Physics, Intel, Lightning AI, Meta, Montreal AI Ethics Institute, Neo4j, New York University, Red Hat, Rensselaer Polytechnic Institute, Scuola Internazionale Superiore di Studi Avanzati, Technical University of Munich, University of Notre Dame, SEE Change Initiative, Johns Hopkins University, Jobs for the Future, Senzing, and others.
The group, in collaboration with the Aspen Institute, has just released its “Guide to Essential Competencies for AI,” a comprehensive resource on the future of AI education that represents input and alignment across many stakeholders in the academic, industry, government, and non-profit sectors. The guide focuses on:
- Promoting understanding of AI: Defining essential competencies to help individuals and organizations grasp necessary knowledge and skills.
- Supporting curriculum development: Providing a framework for aligning education and training programs around core AI competencies.
- Facilitating workforce development: Assisting organizations in upskilling employees and developing AI-focused training programs.
- Bridging the AI divide: Promoting equitable access to AI education and learning opportunities for marginalized communities.
The AI Alliance has also worked to better support and enable academic research. We have initiated a collaboration with IBM, Red Hat, the Mass Open Cloud Consortium, and the National Science Foundation to create an open AI cloud environment that supports the research community with data preparation, model enhancements, and an application lab for experimenting with and improving the ways we apply AI to scientific and societal challenges. We plan to significantly expand on this in 2025, including by creating an academic researcher hub to facilitate matchmaking among Alliance projects and with various sources of funding and support, enabling broader participation in leading-edge AI research by the academic community.
Onward to 2025!
2024 has been a strong first year for the AI Alliance, and we are happy to take a moment to acknowledge and thank everyone who made it possible.
But with more than 140 of the world’s leading organizations in AI coming together, and more joining every month, we want to do much more. To grow our impact by 10x, scale our major initiatives in Open Trusted Data and Trust and Safety, and ensure the success of our many other projects, we must reach far beyond our members and engage developers, researchers, business and government leaders, and civil society as contributors and collaborators. This scale of global collaboration is our overarching mission for 2025.
Stay tuned!