
GoFannon: Stop Rewriting AI Tools for Every Framework

GoFannon, an AI Alliance project

The Problem: Framework Lock-In Wastes Time

If you’ve built AI agents and then migrated them between frameworks, you’ve done this before:

  1. Write a tool for LangChain (e.g., a weather lookup).
  2. Switch to AWS Bedrock—now rewrite it.
  3. Try SmolAgents? Yep, rewrite it again.

This is wasted effort. Most AI frameworks build on similar underlying specs (like OpenAI’s function calling), but each has its own quirks. GoFannon fixes this by letting you write a tool once and export it to any supported framework.


How GoFannon Works

1. Standardized Tool Definitions

Every GoFannon tool inherits from BaseTool and defines:

  • A definition (following OpenAI’s function-calling schema) that describes its inputs and outputs.
  • A fn() method containing your core logic.

Here’s an example adapted from one of our example notebooks.
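A minimal sketch of such a tool, assuming BaseTool exposes the definition property and fn() method described above; the WeatherLookup class, its parameters, and the hard-coded result are illustrative, not taken from the notebook:

```python
from gofannon.base import BaseTool

class WeatherLookup(BaseTool):
    """Illustrative tool that reports the temperature for a city."""

    def __init__(self, name="weather_lookup"):
        super().__init__()
        self.name = name

    @property
    def definition(self):
        # OpenAI function-calling schema describing the tool's inputs
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": "Get the current temperature for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {
                            "type": "string",
                            "description": "City name, e.g. 'Paris'",
                        }
                    },
                    "required": ["city"],
                },
            },
        }

    def fn(self, city):
        # Core logic goes here; a real tool would call a weather API.
        return f"It is 21°C in {city}."
```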


2. One-Line Framework Exports

Convert any tool to your framework of choice:
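For example, a sketch reusing the WeatherLookup tool from the previous section; the export method names below follow the project’s per-framework mixins, but verify them against the docs for your version:

```python
tool = WeatherLookup()

# One line per target framework (method names assumed; check the docs)
langchain_tool = tool.export_to_langchain()
smolagents_tool = tool.export_to_smolagents()
```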


No rewrites. No boilerplate.

3. Pre-Built Tools

GoFannon includes production-ready tools, such as:

  • Wikipedia search
  • Basic math operations
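For instance, a bundled math tool works out of the box. The import path and parameter names below are assumptions based on the project layout, so check the package for the exact ones:

```python
from gofannon.basic_math.addition import Addition

add = Addition()
print(add.fn(num1=2, num2=3))  # -> 5
```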


Why Developers Are Switching

No More Duplicate Work
Write each tool once and reuse it across every framework you target.

Modular Install
Only install what you need:
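A sketch of the install commands; the extras names (like langchain) are assumptions, so see the install docs for the real list:

```bash
# Core library only
pip install gofannon

# With an optional framework integration (extras name assumed)
pip install "gofannon[langchain]"
```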


Easy to Extend
Adding a new tool or framework? We have curated guides.


How to Contribute (We Made It Simple)

GoFannon is built for open-source collaboration:

  1. Curated Developer Pathways
     • A maintained list of ‘Good First Issue’ tickets (beginner)
     • A documented path for creating and contributing tools (intermediate)
     • A documented path for adding support for new agentic frameworks (advanced)
  2. Gamified Leaderboard
     • PRs earn points (e.g., +500 for a new tool, +25 for a framework).
  3. Transparent Process
     • Licensed under Apache 2.0 (ASFv2).

Try It Now

Explore the docs, example notebooks, and contribution guides in the GoFannon repository.

Final Thought

AI frameworks will keep changing. Your tools shouldn’t have to.

GoFannon is by developers, for developers: no hype, just less redundant code.

Star the repo → Try it → Send a PR. Let’s fix this problem together. 🚀

