- Ollama – helps developers run language models such as Llama 3.1, Mistral, Gemma 2, and others locally on a laptop or on a server cluster. Watch Michael Yang’s demo here: Tool calling with Ollama - How an LLM accesses external information.
- Continue – a leading open-source AI code assistant that connects any model and any context to build custom autocomplete and chat experiences inside the IDE. Watch Ty Dunn’s demo here: Using Continue to understand a brand new code library.
- AgentOps – an industry-leading developer platform to test and debug AI agents. Watch Alex Reibman and Ajay Poshak demo LlamaFS here: LlamaFS: A self-organizing agentic filesystem
- CrewAI – a framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. Watch João Moura’s demo here: Build multi-agent automations with Crew.ai.
- Based Hardware – a fully open-source AI note taker that provides you with reminders, suggestions, and more, all in one simple app. Watch Nik Shevchenko’s demo here: Friend: An AI necklace you wear which records your day.
- Datafog – an open-source AI/ML platform that scans unstructured files for PII and annotates, anonymizes, or redacts sensitive information. Watch Sid Mohan’s demo here: Using Open Source LLMs for PII data detection with DataFog.
- SemiKong – the world’s first semiconductor industry-specific large language model. Watch Nanda Kishore’s demo here: SemiKong: The Open Source Semiconductor LLM powered by Llama.
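The Ollama entry above refers to tool calling, where the model emits a structured request to call a function, the application executes it, and the result is fed back to the model. A minimal sketch of that dispatch loop, with the model response stubbed out rather than fetched from a running Ollama server; the `get_weather` tool and the exact message shapes are illustrative assumptions, not Ollama's verbatim API:

```python
import json

# Hypothetical tool the application exposes to the model (assumption for
# illustration; not part of Ollama itself).
def get_weather(city: str) -> str:
    # Stubbed lookup; a real app would query a weather service here.
    return json.dumps({"city": city, "temp_c": 21})

TOOLS = {"get_weather": get_weather}

def handle(model_message: dict) -> list:
    """Execute any tool calls in a model message and collect their outputs."""
    results = []
    for call in model_message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        args = call["function"]["arguments"]
        results.append(fn(**args))
    return results

# Stubbed assistant message in the general shape chat APIs return when the
# model decides to call a tool (illustrative, not a verbatim Ollama response).
response = {
    "role": "assistant",
    "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Paris"}}}
    ],
}

outputs = handle(response)
print(outputs[0])  # the tool result the app would send back to the model
```

In a real application, each tool output would be appended to the conversation as a tool-role message and the model queried again so it can compose its final answer from the external information.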
Related Articles
Advancing Domain-Specific Q&A: The AI Alliance's Guide to Best Practices
The AI Alliance application and tools working group has conducted a comprehensive study on best practices for advancing domain-specific Q&A using retrieval-augmented generation (RAG) techniques. The findings of this research provide insights and recommendations for maximizing the capabilities of Q&A AI in specialized domains.
Getting started with AI trust and safety
The AI trust and safety guide is a “living” document that provides an introduction to current trends in research and development for ensuring AI models and applications meet requirements for trustworthy results, and in particular, results that satisfy various safety criteria. Aimed at developers and leaders who are relatively new to this topic, the guide defines some common terms, provides an overview of several leading trust and safety education and technology projects, and offers recommendations for how to build trust and safety into your AI-based applications.
AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI
IBM and Meta launch the AI Alliance in collaboration with over 50 founding members and collaborators globally, including AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, Yale University, and others.