CoW DAO's First Retro Funding Round

The Grants Council is proposing CoW DAO’s first retroactive funding round, a new approach to rewarding ecosystem contributions that focuses on demonstrated impact rather than upfront promises.

Why Retroactive Funding?

The concept of retroactive funding was pioneered by Optimism. Many of you may already be familiar with it; if you would like a refresher, Optimism's original writing on retroactive public goods funding is a good starting point.

The core insight is simple: it’s easier to agree on what was useful than on what will be useful.

Traditional grants require us to predict which projects will create value. Retroactive funding flips this model. Instead of funding promises, we fund results. Builders create first, then we reward based on actual impact. This approach:

  • Reduces risk - We only fund work that demonstrably benefits the ecosystem

  • Rewards execution - Success is measured by what you built, not what you promised

  • Attracts serious builders - Those confident in their ability to create value will build knowing quality work gets rewarded

How We’re Adapting This for CoW DAO

While Optimism’s model has evolved over the years (great information on it is available here from Carl Cervone and the team at OSO), we’re tailoring our approach specifically for CoW Protocol’s needs.

Our retroactive round will run more like a focused hackathon: we announce priority areas, provide a 4-month build period, and then evaluate and reward based on the measurable impact over that time frame.

Just as our batch auctions find the best prices through competition, retroactive funding finds the best contributions through demonstrated value.

Proposed Program Details

Timeline

  • Launch: August 2025 (pending feedback from the community)

  • Build Period: August - November 2025 (4 months for development)

  • Evaluation: December 2025

  • Distribution: End of 2025

What We’re Looking to Fund

We’ve identified five key areas where the ecosystem needs development:

  • Solver Infrastructure - The solver ecosystem needs updated templates and better tooling. Many existing resources haven’t been touched in years. We aim to fund infrastructure that facilitates the entry of new solvers into the ecosystem, rather than specific solver implementations.

  • Developer Tools - SDKs, debugging tools, monitoring dashboards, and anything that reduces integration time or improves the developer experience when building on CoW Protocol.

  • MEV Protection Research - Academic research and novel protection mechanisms with clear implementation paths. The emphasis is on practical solutions that can be integrated into the protocol.

  • AI Agent Infrastructure - With AI agents becoming more prevalent, we need MCP servers, trading bot frameworks, and APIs designed for programmatic access that leverage CoW’s batch auction benefits.

  • User Experience & Education - Practical educational content that explains intent-based trading, interface improvements, and localization efforts. Not marketing materials, but resources that help users understand and use the protocol effectively.

How It Works

Builders work on their contributions during the 4-month period. At the end, they submit applications documenting their work, its impact, and how to measure that impact. The Grants Council, supported by external technical reviewers, evaluates submissions based on implementation quality, ecosystem impact, and alignment with CoW values.

Awards start at a minimum of 5,000 xDAI plus 5,000 vested COW tokens for approved contributions, scaling up based on impact and quality. The total budget is allocated from the existing Grants Council budget and will be distributed based on merit, rather than predetermined limits.

Key Dependencies

The program relies on the completion of the CoW SDK to provide the necessary infrastructure foundation. We’re also coordinating with the marketing team for developer outreach.

What We Need From You

Before we finalize this program, we want community input on:

  1. Are these the right focus areas? What’s missing?

  2. Is the timeline realistic for meaningful contributions?

  3. What success metrics should we prioritize?

  4. How can we best support builders during the development period?

Please share your thoughts, questions, and suggestions below. We’ll incorporate community feedback before the final proposal.


Thank you for putting forward this proposal — it’s great to see CoW DAO exploring a retroactive funding model tailored to its ecosystem needs. The outlined focus areas and structure seem well considered, and this could meaningfully encourage impactful contributions.

Regarding the points mentioned:

  1. In areas like MEV protection and AI agent infrastructure, impact may be more difficult to quantify within four months. How will the Council weigh early-stage work with long-term potential versus immediately deployable outputs?

  2. Have you considered offering technical support during the build period — such as access to protocol engineers for code reviews, feedback sessions, or security audits? Structured technical support could both raise the overall quality of submissions and reduce the evaluation burden later.

  3. On a practical note, would applicants need to demonstrate their interest at the beginning of the period, or only at the end, when submitting their work?


Thanks @kpk, we very much appreciate you checking in! To answer your questions:

For early-stage work in areas like MEV protection and AI agents, we’d weight progress indicators differently than finished products. Applicants working on longer-term projects should provide their own impact tracking methodology, including research milestones, performance metrics, or community engagement with their work. We’d like to ask for publicly verifiable data sources where possible (GitHub commits, research papers, deployments, etc.).
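To illustrate (with purely hypothetical dates and a stubbed payload) what checking "publicly verifiable" impact data could look like, here is a rough Python sketch that counts commits landing inside the build window, using the payload shape of GitHub's public list-commits endpoint:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: count commits in a GitHub API response that fall
# inside the 4-month build window. The payload shape matches GitHub's
# "list commits" endpoint (GET /repos/{owner}/{repo}/commits); in a real
# check the JSON would be fetched from api.github.com.

BUILD_START = datetime(2025, 8, 1, tzinfo=timezone.utc)
BUILD_END = datetime(2025, 12, 1, tzinfo=timezone.utc)

def commits_in_window(payload: str) -> int:
    """Count commits whose author date falls within the build period."""
    count = 0
    for commit in json.loads(payload):
        stamp = commit["commit"]["author"]["date"]  # e.g. "2025-09-14T10:22:05Z"
        when = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
        if BUILD_START <= when < BUILD_END:
            count += 1
    return count

# Stubbed two-commit payload (real data would come from the API):
sample = json.dumps([
    {"commit": {"author": {"date": "2025-09-14T10:22:05Z"}}},
    {"commit": {"author": {"date": "2025-07-02T08:00:00Z"}}},  # before the window
])
print(commits_in_window(sample))  # prints 1
```

The same pattern extends to other verifiable sources (deployment logs, published papers); the point is that the raw data is public and the counting rule is stated up front by the applicant.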

Since this is our first retro funding round, we expect to learn what evaluation methods work best for different project types and will iterate based on community feedback like this.

This is a great suggestion we hadn’t fully fleshed out. Our current thinking is to leverage the growing developer community around the JavaScript and Python SDKs (built by non-core contributors).

We could formalize office hours or feedback sessions with developers, but we also have to be cognizant of time commitments and balance. We have also discussed internally tapping other members of the community with specific expertise in these domains, so that may be another way to achieve what you are suggesting.

We’re open to experimenting with different support models during this round. The goal is finding what actually helps builders ship quality work without creating bottlenecks.

We recognize some builders might want early feedback or to ensure their work aligns with CoW’s needs, so we’re considering an “intent to build” step (similar to the standard forum proposal process we have today) where builders could share what they’re working on without formal commitment and get feedback from us on whether it is something we could see funding.

This would help us prepare appropriate technical support and avoid duplicate efforts (along with aligning expectations on all sides).

Again, this being our first round, we’ll adapt based on what builders tell us works best for them.

Thanks again for your time!


Hi, this is Lucas from Drips.network 💧

Great to see CoW DAO piloting a retro funding model with clear focus areas and an “intent to build” phase. If you’re open to exploring RetroPGF for the CoW ecosystem, Drips can provide the full infrastructure stack, already used by Filecoin for their $2M RetroPGF round, with another round launching next week. The stack includes:

  • Public round dashboard & application intake

  • Automated metric tracking from verifiable sources (e.g., GitHub via Open-Source Observer)

  • Admin review & badgeholder/committee voting

  • Public results and automated prize distribution

1. Centralized Data and Effortless Impact Measurement:

You’ve emphasized the need for verifiable metrics. Drips integrates with Open-Source Observer to pull GitHub and other open-source data automatically, displaying contributions on each applicant’s profile. This removes manual self-reporting, cuts admin overhead, and ensures transparency.

2. A Transparent, Public Hub for CoW’s RPGF Round:

Collected data powers a public, shareable round profile that serves as the central hub for the CoW RPGF round. It shows real-time applicant updates, voting outcomes, and fund distribution, giving the council a single view for evaluation and the community a clear window into progress.

To learn more about Drips and Drips RetroPGF, please reach out.

Happy to share more details or demo the solution if there’s interest. Please share your feedback and questions 🙂

Hi, I also have some questions about this proposal.

First of all, I want to support this direction, but we need some more specifics:

  • The process for selecting external technical reviewers is not described.
    Who will they be, and how will objectivity be ensured?
  • There is no clarity on the maximum total budget.
    This could cause tension if there are many high-quality submissions.

I also believe it would be optimal not to switch entirely to retroactive funding, but instead to split the available budget into two parts (and adjust the allocation over time based on results).

Some of the developers this program is meant to reach won’t be able to fully commit to a project if they are not confident they’ll receive at least some compensation for their work over the 4-month period.

Sure, thanks for the questions. I can drop some answers here:

We have identified a person who previously applied to be part of the grants committee earlier this year. They were not selected, but we were impressed by their technical ability. This person and other members of the core team will help if and when needed.

There is not a separate budget for this program. We will assess applications as they come in and draw from the already approved Grants Council budget. We will keep a careful eye on budget consumption and ensure that we do not oversubscribe (though right now the bigger worry is attracting quality applications).

Understood. The volume of applications has been relatively low, so our hope is that this program can bring in more quality applications. We did not allocate a separate budget to the retro funding since we have no way of anticipating the volume we will see and did not want to commit to an amount for that reason.

We are open to paying on successful delivery and verification of impact. If developers are in the position you describe they should denote their preferred terms of payment on their applications along with how they intend to show impact so we can assess.


Hey Sov!

  1. What happens if a builder finishes before the four months are up? The user journey during and after the four months is also worth exploring, along with how to keep these builders engaged and headed in the right direction. (I agree with kpk’s suggestion of office hours and connection points.)
  2. What KPIs have been considered for each of these categories? One of RetroPGF’s strengths was the expectations it set during rounds like the third one, where to participate you not only needed to meet a minimum bar but also had to clearly understand what the protocol expected from your category. The minimum-requirements part may not be useful here, but for the grant program to be transparent and accountable, a guideline of what each category is expected to achieve should be added too.
  3. I do agree that there should be an open option for people who can’t or won’t do retro funding but still provide value by other means.

Thanks for the questions … dropping some responses here inline!

We are open to paying out faster than the four-month window if the value is apparent and the builder completes their committed work. Depending on the applications that come in, we will be happy to see how best we can support grantees, but given the relatively low volume of applications the program has seen historically, I’m not sure office hours or similar spaces make sense just yet.

If you look at the categories, most of them are more qualitative than quantitative (i.e. developing tools, research, initial builds of agentic infra, etc.). Ultimately, we hope that all of these contribute to the overall growth of CoW Protocol in terms of transactions and usage.

We intentionally decided to keep the criteria open because (as mentioned above) our challenge traditionally has not been an abundance of applications but a lack thereof. This is in part why we framed this as an elongated hackathon in the original post.

Early retro funding programs were more experimental, and that is what we hope to achieve here: to see 1) if it makes sense for CoW and 2) what ideas builders might come to us with.

Sure, as mentioned we will keep our existing program and run it side by side, sharing the same budget, for this very reason.
