
Bringing AI Agents to Real-World Use Cases

Andrew Musselman
Agata Ferretti

Powered by Gofannon and AgentLabUI

At The AI Alliance, we’re advancing open-source projects that make AI more practical, collaborative, and impactful across industries. Two of these projects are Gofannon, a set of agent tools, and AgentLabUI, a practitioner-focused environment where technical teams can create, tune, and deploy AI agents.

As a member of the Alliance, ATA Systems, a generative AI consultancy, has been actively contributing to this work. ATA's role is to help translate open-source innovation into production-ready solutions for customers. Building on AgentLabUI as the foundation, enabled by Gofannon's toolset, ATA has also developed a workflow for building front-end user interfaces, giving organizations a clean, intuitive way for non-technical staff to directly consume the agents created in the Lab.

This combined approach makes it possible to spin up production-quality, user-facing applications powered by flexible, agile AI agents in a matter of days rather than weeks. It’s an example of how the Alliance’s open-source ecosystem, paired with ATA’s customer-focused development expertise, brings cutting-edge AI from experimentation to real-world adoption.

Together, they form a full stack for AI agent workflows: a practitioner’s workbench on the back end, and user-facing apps on the front end.

Case Study: Grant Matching Agent

Within the AI Alliance, we recognized that many developers and researchers face a common challenge: they have promising ideas to push forward, but struggle to find the right grants and funding opportunities to support their work. Navigating the vast landscape of open government grants is often a time-consuming and manual process.

To address this, we launched a collaborative effort, led by IBM with contributions from NumFOCUS and several valued individual members of the AI Alliance community, to design an AI-powered Grant Matching Agent.

The goal was to provide end users with a simple web interface where they could upload a resume, CV, or capabilities statement, and have an agentic AI system match them with the most relevant funding opportunities.

Together, we developed a two-layer solution: a workbench for crafting AI agents, paired with a lightweight, user-friendly front end for end users to iterate in real time.

Using AgentLabUI and Gofannon tools, our team built a pipeline of agents and models to analyze researcher resumes, extract relevant skills and capabilities, and generate structured queries. Then we deployed those agents to a simple front-end grant-matching app.

Here’s how it works from the user side:

  1. Upload your resume (PDF): The system extracts relevant details about your skills, research areas, and focus.
  2. Generate queries automatically: The agent uses either a default or custom prompt to create targeted search queries.
  3. Search grant databases: Another agent runs those queries against available opportunities (e.g., NSF, NIH, the Bureau of European & Eurasian Affairs).
  4. Review results: Users can view details, save opportunities, and even contact the funding agency directly.
  5. Iterate quickly: Users can edit their resume, adjust prompts, or refine queries for better matches.
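The flow above can be sketched as three small stages chained together. This is a minimal, self-contained illustration, not the actual implementation: the real system delegates extraction, query generation, and search to separate LLM-backed agents, and the function names and in-memory "grant database" below are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    agency: str
    title: str
    keywords: set

# Hypothetical in-memory stand-in for real grant databases.
GRANTS = [
    Grant("NSF", "AI for Scientific Discovery", {"machine learning", "ai"}),
    Grant("NIH", "Computational Genomics", {"genomics", "bioinformatics"}),
]

def extract_skills(resume_text: str) -> set:
    """Stand-in for the resume-parsing agent: naive keyword matching."""
    vocabulary = {"machine learning", "genomics", "bioinformatics"}
    text = resume_text.lower()
    return {term for term in vocabulary if term in text}

def generate_queries(skills: set) -> list:
    """Stand-in for the query-generation agent: one query per skill."""
    return sorted(skills)

def search_grants(queries: list) -> list:
    """Stand-in for the grant-search agent: match queries against keywords."""
    return [g for g in GRANTS if any(q in g.keywords for q in queries)]

resume = "PhD in machine learning with applications to genomics."
matches = search_grants(generate_queries(extract_skills(resume)))
for grant in matches:
    print(f"{grant.agency}: {grant.title}")
```

The value of structuring the pipeline this way is that each stage can be swapped independently: a better resume parser or a new grant source slots in without touching the rest of the chain.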

In practice, this means an academic can upload their CV, and within minutes see a curated list of grant opportunities relevant to their expertise, saving hours of manual searching.

This example also shows the division of labor:

  • Practitioners used AgentLabUI and Gofannon to build the underlying agents (resume parser, query generator, grant searcher).
  • End-users interacted with those agents through the Agent UI, without needing to know anything about agents, tools, models, frameworks, or pipelines.


Why It Matters

Organizations need AI tools that are:

  • Flexible for practitioners (supporting many models and tools)

  • Secure (tenanted, SSO-ready, on-prem or cloud)

  • Accessible to non-technical staff

By pairing AgentLabUI and Gofannon with the Agent UI, we bridge the gap between AI R&D and real-world adoption. Practitioners can innovate quickly, while end-users gain access to production-ready agents in workflows tailored to their needs.

In the case of grant matching, this means researchers spend less time hunting for funding and more time advancing their work. In other domains, the same stack can be used for compliance review, sales enablement, knowledge management, and beyond.

AgentLabUI: The AI Practitioner’s Workbench

AgentLabUI (https://github.com/The-AI-Alliance/agent-lab-ui) functions as an AI switchboard, bringing together models, tools, MCP server connections, and a workbench: essentially an IDE for AI agent development. Practitioners can:

  • Swap and tune LLMs across providers (OpenAI, Anthropic, Meta, etc.)
  • Build and assemble tools in a modular way
  • Export and deploy agents to multiple frameworks (LangChain, Bedrock, etc.)
  • Load tools from GitHub or any MCP server, including A2A-compliant integrations
  • Collaborate across teams, with SSO and secure tenancy
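To make the model-swapping point concrete, here is a hypothetical sketch of what a provider-agnostic agent definition might look like. The field names and schema below are illustrative only and do not reflect AgentLabUI's actual configuration format.

```python
# Illustrative agent spec: the model is just one replaceable field,
# so the same agent can be retargeted across providers.
AGENT_SPEC = {
    "name": "grant-searcher",
    "model": {"provider": "openai", "id": "gpt-4o"},
    "tools": ["resume_parser", "query_generator"],
    "deploy_targets": ["langchain", "bedrock"],
}

def swap_model(spec: dict, provider: str, model_id: str) -> dict:
    """Return a copy of the spec pointed at a different provider/model."""
    updated = dict(spec)
    updated["model"] = {"provider": provider, "id": model_id}
    return updated

# Retarget the same agent at a different provider without touching
# its tools or deployment targets.
anthropic_spec = swap_model(AGENT_SPEC, "anthropic", "claude-sonnet")
```

Keeping the model a declarative field rather than hard-wiring it into the agent is what makes side-by-side comparison across providers cheap.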

The workbench is project-based. Teams can define agents, connect them to models and tools, and interact with them through a chat interface, uploading files, pulling in repositories, or querying projects directly in natural language.

The result is a secure, flexible environment where AI work isn’t siloed. Teams can work with different models or frameworks, but share results in a consistent interface, multiplying the impact of their collective efforts.

Agent UI: Making Agents Consumable

While AgentLabUI empowers practitioners, the Agent UI makes those agents usable by non-technical users. It provides a simple, structured interface where agents created in the lab can be deployed for direct use.

Think of it as the delivery layer: once an agent is ready, it can be surfaced to the right audience with guardrails intact. Users don't have to worry about prompts, frameworks, or model details; they just use the tool. Our UI development methodology allows us to consume agents piecewise and assemble them into a processing chain that can combine automatic sequential stages with manual checkpoints, where users can review results and iterate on them before taking the next steps.
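The piecewise chain described above can be sketched as follows. The stage and checkpoint abstractions here are illustrative, not Agent UI's API: automatic stages run in sequence, while a checkpoint pauses so the user can review (and optionally edit) the intermediate result before the chain continues.

```python
def run_chain(stages, data, review=None):
    """Run stages in order; at a checkpoint, hand the result to `review`.

    `stages` is a list of (function, is_checkpoint) pairs. In a real UI,
    `review` would surface the intermediate result to the user; here it
    is any callable that returns the (possibly edited) data.
    """
    for fn, is_checkpoint in stages:
        data = fn(data)
        if is_checkpoint and review is not None:
            data = review(data)  # user may accept or edit the result
    return data

# Toy example: a cleanup stage, then a checkpointed normalization stage.
stages = [
    (str.strip, False),   # automatic stage
    (str.lower, True),    # checkpoint: user reviews before continuing
]
result = run_chain(stages, "  Machine Learning  ", review=lambda d: d)
```

Marking a stage as a checkpoint rather than splitting the chain into separate tools keeps the whole workflow in one place while still giving users a human-in-the-loop review step.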

What’s Next and How to Get Involved

We’re continuing to expand both projects:

  • Adding new integrations and models to AgentLabUI
  • Adding new tools to Gofannon
  • Enhancing Agent UI with richer ways to interact with agents
  • Sharing open-source examples so others can build their own custom agent + UI stacks

This is just the start. By combining a practitioner’s workbench with a user-facing delivery layer, we’re making AI practical, secure, and usable, from lab to deployment.

To get involved, whether writing code or exploring other use cases, you can visit the AgentLabUI (https://github.com/The-AI-Alliance/agent-lab-ui) and Gofannon (https://github.com/The-AI-Alliance/gofannon) GitHub sites and try the software out yourself, contribute issues and pull requests, and ask for help from the teams.

If you would like to explore new use cases to collaborate on, please visit the AI Alliance's intake forms at:  

We look forward to hearing from you.
