CoW DAO's First Retro Funding Round

The Grants Council is proposing CoW DAO’s first retroactive funding round, a new approach to rewarding ecosystem contributions that focuses on demonstrated impact rather than upfront promises.

Why Retroactive Funding?

The concept of retroactive funding was pioneered by Optimism. Many of you may already be familiar with it; if you would like a refresher, it’s described in this article:

The core insight is simple: it’s easier to agree on what was useful than on what will be useful.

Traditional grants require us to predict which projects will create value. Retroactive funding flips this model. Instead of funding promises, we fund results. Builders create first, then we reward based on actual impact. This approach:

  • Reduces risk - We only fund work that demonstrably benefits the ecosystem

  • Rewards execution - Success is measured by what you built, not what you promised

  • Attracts serious builders - Those confident in their ability to create value will build knowing quality work gets rewarded

How We’re Adapting This for CoW DAO

While Optimism’s model has evolved over the years (great information on it is available here from Carl Cervone and the team at OSO), we’re tailoring our approach specifically for CoW Protocol’s needs.

Our retroactive round will run more like a focused hackathon: we announce priority areas, provide a 4-month build period, and then evaluate and reward based on the measurable impact over that time frame.

Just as our batch auctions find the best prices through competition, retroactive funding finds the best contributions through demonstrated value.

Proposed Program Details

Timeline

  • Launch: August 2025 (pending feedback from the community)

  • Build Period: August - November 2025 (4 months for development)

  • Evaluation: December 2025

  • Distribution: End of 2025

What We’re Looking to Fund

We’ve identified five key areas where the ecosystem needs development:

  • Solver Infrastructure - The solver ecosystem needs updated templates and better tooling. Many existing resources haven’t been touched in years. We aim to fund infrastructure that facilitates the entry of new solvers into the ecosystem, rather than specific solver implementations.

  • Developer Tools - SDKs, debugging tools, monitoring dashboards, and anything that reduces integration time or improves the developer experience when building on CoW Protocol.

  • MEV Protection Research - Academic research and novel protection mechanisms with clear implementation paths. The emphasis is on practical solutions that can be integrated into the protocol.

  • AI Agent Infrastructure - With AI agents becoming more prevalent, we need MCP servers, trading bot frameworks, and APIs designed for programmatic access that leverage CoW’s batch auction benefits.

  • User Experience & Education - Practical educational content that explains intent-based trading, interface improvements, and localization efforts. Not marketing materials, but resources that help users understand and use the protocol effectively.

How It Works

Builders work on their contributions during the 4-month period. At the end, they submit applications documenting their work, its impact, and how to measure that impact. The Grants Council, supported by external technical reviewers, evaluates submissions based on implementation quality, ecosystem impact, and alignment with CoW values.

Awards start at a minimum of 5,000 xDAI plus 5,000 vested COW tokens for approved contributions, scaling up based on impact and quality. The total is allocated from the existing Grants Council budget and will be distributed based on merit rather than predetermined limits.

Key Dependencies

The program relies on the completion of the CoW SDK to provide the necessary infrastructure foundation. We’re also coordinating with the marketing team for developer outreach.

What We Need From You

Before we finalize this program, we want community input on:

  1. Are these the right focus areas? What’s missing?

  2. Is the timeline realistic for meaningful contributions?

  3. What success metrics should we prioritize?

  4. How can we best support builders during the development period?

Please share your thoughts, questions, and suggestions below. We’ll incorporate community feedback before the final proposal.

Thank you for putting forward this proposal — it’s great to see CoW DAO exploring a retroactive funding model tailored to its ecosystem needs. The outlined focus areas and structure seem well considered, and this could meaningfully encourage impactful contributions.

Regarding the points mentioned:

  1. In areas like MEV protection and AI agent infrastructure, impact may be more difficult to quantify within four months. How will the Council weigh early-stage work with long-term potential against immediately deployable outputs?

  2. Have you considered offering technical support during the build period — such as access to protocol engineers for code reviews, feedback sessions, or security audits? Structured technical support could both raise the overall quality of submissions and reduce the evaluation burden later.

  3. On a practical note, would applicants need to demonstrate their interest at the beginning of the period, or only at the end when submitting their work?

Thanks @kpk, we very much appreciate you checking in! To answer your questions:

For early-stage work in areas like MEV protection and AI agents, we’d weigh progress indicators differently from finished products. Applicants working on longer-term projects should provide their own impact-tracking methodology, including research milestones, performance metrics, or community engagement with their work. We’d like to ask for publicly verifiable data sources where possible (GitHub commits, research papers, deployments, etc.).

Since this is our first retro funding round, we expect to learn what evaluation methods work best for different project types and will iterate based on community feedback like this.

This is a great suggestion we hadn’t fully fleshed out. Our current thinking is to leverage the growing developer community around the JavaScript and Python SDKs (built by non-core contributors).

We could formalize office hours or feedback sessions with developers, but we also have to be cognizant of time commitments and keep the workload balanced. We have also discussed internally tapping other members of the community with specific expertise in these domains, so that may be another way to achieve what you are suggesting.

We’re open to experimenting with different support models during this round. The goal is finding what actually helps builders ship quality work without creating bottlenecks.

We recognize some builders might want early feedback or to ensure their work aligns with CoW’s needs, so we’re considering an optional “intent to build” submission (similar to the standard forum proposal process we have today) where builders could share what they’re working on without formal commitment and hear from us whether it is something we could see funding.

This would help us prepare appropriate technical support and avoid duplicate efforts (along with aligning expectations on all sides).

Again, this being our first round, we’ll adapt based on what builders tell us works best for them.

Thanks again for your time!