Building AI Agents for Real-World Use Cases

Andrew Musselman
Agata Ferretti

Powered by Gofannon and AgentLabUI

At The AI Alliance, we’re advancing open-source projects that make AI more practical, collaborative, and impactful across industries. Two of these projects are Gofannon, a set of agent tools, and AgentLabUI, a practitioner-focused environment where technical teams can create, tune, and deploy AI agents.

As a member of the Alliance, ATA Systems, a generative AI consultancy, has been actively contributing to this work. ATA's role is to help translate open-source innovation into production-ready solutions for customers. Building on AgentLabUI as the foundation, enabled by Gofannon's toolset, ATA has also developed a workflow for building front-end user interfaces, enabling organizations to give non-technical staff a clean, intuitive way to consume the agents created in the Lab directly.

This combined approach makes it possible to spin up production-quality, user-facing applications powered by flexible, agile AI agents in a matter of days rather than weeks. It’s an example of how the Alliance’s open-source ecosystem, paired with ATA’s customer-focused development expertise, brings cutting-edge AI from experimentation to real-world adoption.

Together, these projects form a full stack for AI agent workflows: a practitioner’s workbench on the back end, and user-facing apps on the front end.

Case Study: Grant Matching Agent

Within the AI Alliance, we recognized that many developers and researchers face a common challenge: they have promising ideas to push forward, but struggle to find the right grants and funding opportunities to support their work. Navigating the vast landscape of open government grants is often a time-consuming and manual process.

To address this, we launched a collaborative effort, led by IBM with contributions from NumFOCUS and several valued individual members of the AI Alliance community, to design an AI-powered Grant Matching Agent.

The goal was to provide end users with a simple web interface where they could upload a resume, CV, or capabilities statement and have an agentic AI system match them with the most relevant funding opportunities.

Together, we developed a two-layer solution: a workbench for crafting AI agents, paired with a lightweight, user-friendly front end for end users to iterate in real time.

Using AgentLabUI and Gofannon tools, our team built a pipeline of agents and models to analyze researcher resumes, extract relevant skills and capabilities, and generate structured queries. Then we deployed those agents to a simple front-end grant-matching app.
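To make the shape of that pipeline concrete, here is a minimal sketch in Python. Every name in it (ResumeProfile, parse_resume, generate_queries, search_grants) is an illustrative assumption rather than the actual Gofannon or AgentLabUI API; the point is only how the three stages hand structured data to one another.

    # Hypothetical three-stage pipeline. Function and field names are
    # illustrative assumptions, not the real Gofannon/AgentLabUI interfaces.
    from dataclasses import dataclass, field

    @dataclass
    class ResumeProfile:
        skills: list[str] = field(default_factory=list)
        research_areas: list[str] = field(default_factory=list)

    def parse_resume(pdf_text: str) -> ResumeProfile:
        """Stage 1: extract skills and research areas from resume text.
        In the real system an LLM agent does the extraction; this stub
        just matches a few keywords so the sketch runs end to end."""
        keywords = ["machine learning", "genomics", "natural language processing"]
        found = [k for k in keywords if k in pdf_text.lower()]
        return ResumeProfile(skills=found, research_areas=found)

    def generate_queries(profile: ResumeProfile, prompt: str | None = None) -> list[str]:
        """Stage 2: turn the profile into targeted search queries, using
        a default template unless the user supplies a custom prompt."""
        template = prompt or "open funding opportunities in {area}"
        return [template.format(area=area) for area in profile.research_areas]

    def search_grants(queries: list[str]) -> list[dict]:
        """Stage 3: run each query against grant databases (stubbed here)."""
        return [{"query": q, "matches": []} for q in queries]

    profile = parse_resume("PhD thesis on machine learning for genomics ...")
    print(search_grants(generate_queries(profile)))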

Here’s how it works from the user side:

  1. Upload your resume (PDF): The system extracts relevant details about your skills, research areas, and focus.
  2. Generate queries automatically: The agent uses either a default or custom prompt to create targeted search queries.
  3. Search grant databases: Another agent runs those queries against available opportunities (e.g., NSF, NIH, the Bureau of European & Eurasian Affairs); a sketch of this step appears after the list.
  4. Review results: Users can view details, save opportunities, and even contact the funding agency directly.
  5. Iterate quickly: Users can edit their resume, adjust prompts, or refine queries for better matches.
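Step 3 is the only stage that calls out to external services. The sketch below shows roughly what that search call could look like; the endpoint, payload, and response shape are hypothetical placeholders, not a real grants API.

    # Sketch of the grant-search call. The endpoint, payload, and response
    # shape are hypothetical placeholders, not a real grants API.
    import json
    import urllib.request

    SEARCH_URL = "https://example.org/grants/search"  # placeholder endpoint

    def search_opportunities(query: str, rows: int = 10) -> list[dict]:
        payload = json.dumps({"keyword": query, "rows": rows}).encode()
        request = urllib.request.Request(
            SEARCH_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        # Assume the service returns {"opportunities": [ ... ]}.
        return body.get("opportunities", [])

The search agent would run a call like this once per generated query, then merge and rank the results before presenting them for review.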

In practice, this means an academic can upload their CV, and within minutes see a curated list of grant opportunities relevant to their expertise, saving hours of manual searching.

This example also shows the division of labor:

  • Practitioners used AgentLabUI and Gofannon to build the underlying agents (resume parser, query generator, grant searcher).
  • End users interacted with those agents through the Agent UI, without needing to know anything about agents, tools, models, frameworks, or pipelines.


Why It Matters

Organizations need AI tools that are:

  • Flexible for practitioners (supporting many models and tools)
  • Secure (multi-tenant, SSO-ready, on-prem or cloud)
  • Accessible to non-technical staff

By pairing AgentLabUI and Gofannon with the Agent UI, we bridge the gap between AI R&D and real-world adoption. Practitioners can innovate quickly, while end users gain access to production-ready agents in workflows tailored to their needs.

In the case of grant matching, this means researchers spend less time hunting for funding and more time advancing their work. In other domains, the same stack can be used for compliance review, sales enablement, knowledge management, and beyond.

AgentLabUI: The AI Practitioner’s Workbench

AgentLabUI (https://github.com/The-AI-Alliance/agent-lab-ui) functions as an AI switchboard that brings together models, tools, MCP server connections, and a workbench: essentially an IDE for AI development. Practitioners can:

  • Swap and tune LLMs across providers (OpenAI, Anthropic, Meta, etc.)
  • Build and assemble tools in a modular way
  • Export and deploy agents to multiple frameworks (LangChain, Bedrock, etc.)
  • Load tools from GitHub or any MCP server, including A2A-compliant integrations
  • Collaborate across teams, with SSO and secure tenancy
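As a rough illustration of how those pieces fit together, the snippet below sketches what an agent definition might look like. The field names and values are invented for this article, not the actual AgentLabUI schema; see the repository for the real format.

    # Illustrative agent definition. Field names are invented for this
    # article, not the actual AgentLabUI schema.
    grant_searcher = {
        "name": "grant-searcher",
        "model": {
            "provider": "anthropic",   # swappable across providers
            "id": "claude-sonnet",     # placeholder model id
            "temperature": 0.2,
        },
        "tools": [
            {"source": "github", "repo": "The-AI-Alliance/gofannon"},
            {"source": "mcp", "url": "https://example.org/mcp"},  # placeholder
        ],
        "deploy_targets": ["langchain"],  # frameworks to export to
    }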

The workbench is project-based. Teams can define agents, connect them to models and tools, and interact with them through a chat interface, uploading files, pulling in repositories, or querying projects directly in natural language.

The result is a secure, flexible environment where AI work isn’t siloed. Teams can work with different models or frameworks, but share results in a consistent interface, multiplying the impact of their collective efforts.

Agent UI: Making Agents Consumable

While AgentLabUI empowers practitioners, the Agent UI makes those agents usable by non-technical users. It provides a simple, structured interface where agents created in the lab can be deployed for direct use.

Think of it as the delivery layer: once an agent is ready, it can be surfaced to the right audience with guardrails intact. Users don’t have to worry about prompts, frameworks, or model details; they just use the tool. Our UI development methodology allows us to consume agents piecewise and assemble them into a processing chain that can run stages automatically in sequence or pause at manual checkpoints, where users can review results and iterate on them before taking the next step.
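Here is a minimal sketch of that chaining idea, assuming each stage is just a callable. The Stage and run_chain names are ours, not the Agent UI’s API, and a real deployment would render the checkpoint as a review screen rather than a console prompt.

    # Minimal chain runner mixing automatic stages and manual checkpoints.
    # Stage and run_chain are our illustrative names, not the Agent UI API.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Stage:
        name: str
        run: Callable[[Any], Any]
        checkpoint: bool = False  # pause for user review after this stage?

    def run_chain(stages: list[Stage], data: Any) -> Any:
        for stage in stages:
            data = stage.run(data)
            if stage.checkpoint:
                # The real UI shows a review screen; a console prompt stands in.
                if input(f"[{stage.name}] accept result? (y/n) ").strip().lower() != "y":
                    break  # stop so the user can iterate on this stage
        return data

    chain = [
        Stage("parse resume", lambda d: d),                       # automatic
        Stage("generate queries", lambda d: d, checkpoint=True),  # manual review
        Stage("search grants", lambda d: d),                      # automatic
    ]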

What’s Next and How to Get Involved

We’re continuing to expand both projects:

  • Adding new integrations and models to AgentLabUI
  • Adding new tools to Gofannon
  • Enhancing Agent UI with richer ways to interact with agents
  • Sharing open-source examples so others can build their own custom agent + UI stacks

This is just the start. By combining a practitioner’s workbench with a user-facing delivery layer, we’re making AI practical, secure, and usable, from lab to deployment.

To get involved, whether by writing code or exploring other use cases, visit the AgentLabUI (https://github.com/The-AI-Alliance/agent-lab-ui) and Gofannon (https://github.com/The-AI-Alliance/gofannon) GitHub repositories: try the software yourself, contribute issues and pull requests, and ask the teams for help.

If you would like to explore new use cases to collaborate on, please reach out through the AI Alliance's intake forms.

We look forward to hearing from you.
