Our members are building the future of AI

These affiliated projects represent some of the most significant efforts behind the movement to create safe, responsible AI rooted in open innovation.

Affiliated Projects

PatchTSMixer

IBM

A lightweight time-series modeling approach based on the MLP-Mixer architecture.
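
As an illustration, here is a minimal forecasting sketch assuming the PatchTSMixer classes shipped in the Hugging Face transformers library; the class and parameter names come from that implementation, not from this page.

import torch
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction

# configure a small forecaster: 32 past time steps, 3 input channels, 16-step horizon
config = PatchTSMixerConfig(context_length=32, prediction_length=16, num_input_channels=3)
model = PatchTSMixerForPrediction(config)

# past_values has shape (batch, context_length, num_input_channels)
past_values = torch.randn(2, 32, 3)
outputs = model(past_values=past_values)
print(outputs.prediction_outputs.shape)  # expected: (2, 16, 3)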

Phoenix

Arize AI

Arize Phoenix is an open-source LLM tracing and evaluation tool, designed to help AI developers evaluate, experiment with, and optimize AI applications in real time.
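
A minimal sketch of starting Phoenix locally, assuming the arize-phoenix package and its launch_app entry point:

import phoenix as px

# start the local Phoenix server and UI used to collect and inspect traces
session = px.launch_app()
print(session.url)  # open this URL in a browser to view traces and evaluations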

Project Kepler

IBM

Kepler (Kubernetes-based Efficient Power Level Exporter) uses eBPF to probe performance counters and other system stats, applies ML models to estimate workload energy consumption from these stats, and exports the estimates as Prometheus metrics.
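
Because Kepler exports its estimates as Prometheus metrics, they can be read through the standard Prometheus HTTP API. The sketch below assumes a Prometheus endpoint at localhost:9090 plus the kepler_container_joules_total metric and container_name label; exact names can vary by deployment.

import requests

# query the Prometheus HTTP API for Kepler's per-container energy estimates;
# the endpoint, metric name, and label below are deployment-specific assumptions
PROMETHEUS = "http://localhost:9090"
query = "sum(rate(kepler_container_joules_total[5m])) by (container_name)"

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query})
for series in resp.json()["data"]["result"]:
    labels, (_, value) = series["metric"], series["value"]
    print(labels.get("container_name"), f"{float(value):.2f} W")  # joules/second = watts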

Ragna

Ragna is an open-source Retrieval-Augmented Generation (RAG) orchestration framework designed to bridge the gap between AI research and production deployment.

Ray.io

Ray is an open-source, unified compute framework that makes it easy to scale AI and Python workloads.
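
As a rough sketch of the core API (assuming a local Ray installation), the example below turns an ordinary Python function into a parallel task:

import ray

ray.init()  # start a local Ray runtime; connects to a cluster if one is configured

@ray.remote
def square(x):
    # an ordinary Python function declared as a distributable Ray task
    return x * x

# launch 100 tasks in parallel and gather the results
futures = [square.remote(i) for i in range(100)]
print(ray.get(futures)[:5])  # [0, 1, 4, 9, 16]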

Remote sensing foundation models for public good

A project to further the reach of foundation models by providing a set of open, online, and evolving materials on how to use remote sensing foundation models for public good.

Responsible prompting

IBM

A recommender system for responsible prompting of LLMs.

Safe and Interactive Robot Learning

Leveraging internet-scale data to help robots execute complex tasks in human-centered environments, and enabling non-experts to program robots.

Safeguarding LLMs

Protecting LLMs from acquiring unsafe knowledge and from being exploited for malicious purposes.

Small Specialist Agents for Industries

OpenSSA is an open-source agentic AI framework that provides precise, reliable AI agents for industries such as semiconductors, manufacturing, and finance, where high accuracy and deterministic outcomes are crucial.

TrustyAI

Red Hat

TrustyAI is an open-source toolkit for responsible AI, providing explainability, bias detection, and drift monitoring for machine learning models.

Unitxt

IBM

A Python library for customizable textual data preparation and evaluation tailored to generative language models.
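
A minimal sketch of the Unitxt flow; the load_dataset and evaluate entry points and the recipe string below are assumptions based on common Unitxt usage rather than details from this page.

# load a task as a ready-to-use dataset via a declarative recipe string;
# the card and template names here are illustrative assumptions
from unitxt import load_dataset, evaluate

dataset = load_dataset("card=cards.wnli,template=templates.classification.multi_class.relation.default")
test_set = dataset["test"]

# any model's outputs go here; constant predictions keep the sketch self-contained
predictions = ["entailment"] * len(test_set)
results = evaluate(predictions=predictions, data=test_set)
print(results)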