- Ollama – helps developers run language models such as Llama 3.1, Mistral, Gemma 2, and others locally on a computer or on a server cluster. Watch Michael Yang’s demo here: Tool calling with Ollama - How an LLM accesses external information. A minimal tool-calling sketch follows this list.
- Continue – a leading open-source AI code assistant that connects any model and any context to build custom autocomplete and chat experiences inside the IDE. Watch Ty Dunn’s demo here: Using Continue to understand a brand new code library.
- AgentOps – an industry-leading developer platform to test and debug AI agents. Watch Alex Reibman and Ajay Poshak demo LlamaFS here: LlamaFS: A self-organizing agentic filesystem.
- CrewAI – a framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly to tackle complex tasks. Watch João Moura’s demo here: Build multi-agent automations with Crew.ai. A multi-agent sketch also follows this list.
- Based Hardware – a fully open-source AI note taker that provides you with reminders, suggestions, and more, all in one simple app. Watch Nik Shevchenko’s demo here: Friend: An AI necklace you wear that records your day.
- Datafog – an open-source AI/ML platform that scans unstructured content in files for PII and annotates, anonymizes, or redacts the sensitive information it finds. Watch Sid Mohan’s demo here: Using Open Source LLMs for PII data detection with DataFog.
- SemiKong – the world’s first semiconductor-industry-specific large language model. Watch Nanda Kishore’s demo here: SemiKong: The Open Source Semiconductor LLM powered by Llama.
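
The tool-calling flow in Michael Yang’s Ollama demo follows the pattern of Ollama’s chat API: you describe functions as JSON-schema tools, the model replies with structured tool calls, and your code executes them and feeds the results back. Below is a minimal sketch, assuming a local Ollama server with llama3.1 pulled and the ollama Python package installed; get_current_weather is a hypothetical example tool, not part of Ollama.

```python
# Minimal tool-calling sketch with the Ollama Python client.
# Assumes a local Ollama server with `llama3.1` pulled (`ollama pull llama3.1`)
# and the client installed (`pip install ollama`).
import ollama

# Hypothetical example tool the model may choose to call.
def get_current_weather(city: str) -> str:
    return f"It is 22°C and sunny in {city}."

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Toronto?"}]
response = ollama.chat(model="llama3.1", messages=messages, tools=tools)

tool_calls = response["message"].get("tool_calls") or []
if tool_calls:
    # Keep the model's tool-call turn, run each requested tool, and append
    # the results as "tool" messages so the model can compose a final answer.
    messages.append(response["message"])
    for call in tool_calls:
        if call["function"]["name"] == "get_current_weather":
            result = get_current_weather(**call["function"]["arguments"])
            messages.append({"role": "tool", "content": result})
    final = ollama.chat(model="llama3.1", messages=messages)
    print(final["message"]["content"])
else:
    print(response["message"]["content"])
```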
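
CrewAI’s orchestration centers on three abstractions: an Agent with a role and goal, a Task assigned to an agent, and a Crew that runs the tasks in sequence. Below is a minimal sketch, assuming `pip install crewai` and an LLM configured through environment variables (e.g., OPENAI_API_KEY); the roles, goals, and task text are illustrative, not from the demo.

```python
# Minimal sketch of a two-agent CrewAI pipeline.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Gather key facts about a topic",
    backstory="You dig up accurate, well-sourced information.",
)

writer = Agent(
    role="Technical Writer",
    goal="Turn research notes into a clear summary",
    backstory="You write concise, readable technical summaries.",
)

research = Task(
    description="Collect the main facts about open-source LLM tooling.",
    expected_output="A bullet list of key facts.",
    agent=researcher,
)

summarize = Task(
    description="Write a three-paragraph summary from the research notes.",
    expected_output="A three-paragraph summary.",
    agent=writer,
)

# Tasks run in order; the writer receives the researcher's output as context.
crew = Crew(agents=[researcher, writer], tasks=[research, summarize])
result = crew.kickoff()
print(result)
```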