Grant Application: CoW Skills


Authors: @bleu @yvesfracari @ribeirojose @mendesfabio


About You:

bleu collaborates with companies and DAOs as a web3 technology and user experience partner. We’re passionate about bridging the experience gap we see in blockchain and web3. We have completed 10+ grants for CoW Protocol, including Hook dApps, Framework-agnostic SDK, Python SDK, and various Safe apps.


Additional Links:

Our work for CoW Protocol includes:

  • Framework Agnostic SDK: Restructured the CoW SDK into the modular package architecture (sdk-config, sdk-common, sdk-trading, adapters for viem/ethers) that CoW Skills wraps directly.

  • CoW Hooks dApps: Built the CoW Shed and Weiroll hook integrations for CoW Swap, the same infrastructure the cow-hooks skill encodes.

  • Programmatic Orders API: Unified indexing API for Composable CoW orders (TWAP, stop-loss), giving deep knowledge of ConditionalOrderParams and order lifecycle covered by the cow-trading skill.

  • CoW Swap Frontend Migration to Viem & Wagmi: Migrated the full CoW Swap frontend to viem/wagmi, directly informing the adapter patterns in cow-common.

  • CoW Playground Offline Development Mode: Built developer tooling and local testing infrastructure for CoW Protocol, relevant to the eval suite approach.

  • Mintlify Docs: We recently built the new CoW Protocol documentation site on Mintlify, which offers per-page “View as Markdown”, “Open in Claude”, and “Connect to Cursor” buttons for AI agent consumption.


Grant Category:

Core Infrastructure & Developer Tooling


Grant Description:

AI coding agents — Claude Code, Cursor, Copilot — are now the primary way many developers write software. Integrating CoW Protocol through one of these agents today requires the developer to manually feed the agent documentation, find the right SDK packages, and hope the generated code is correct. There is no structured layer that lets an agent confidently and correctly build CoW Protocol integrations out of the box.

The Mintlify docs we recently built offer per-page “View as Markdown”, “Open in Claude”, and “Connect to Cursor” buttons. The older docs site at docs.cow.fi publishes llms.txt and llms-full.txt following the llmstxt.org standard.

The difference is concrete. When a developer asks an agent to “create a TWAP order for 100 USDC into ETH across 10 intervals on Gnosis Chain”, an agent reading the Mintlify docs has to find the right page, parse the content, infer which SDK packages to combine, construct the correct ConditionalOrderParams struct, and handle EIP-1271 signing. An agent with a CoW Skill loaded produces verified, runnable code, because the skill encodes not just what the API looks like, but what patterns work, what goes wrong, and what correct output looks like through an eval suite.
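To make the TWAP case concrete, here is a hedged sketch of the struct the agent has to get right. The field names follow the on-chain Composable CoW `ConditionalOrderParams` struct (`handler`, `salt`, `staticInput`); the handler address below is a placeholder for illustration, not a real TWAP handler deployment.

```typescript
// Sketch of the struct an agent must construct for a Composable CoW order.
// Field names mirror the on-chain ConditionalOrderParams struct; the handler
// address is a PLACEHOLDER, not a real deployment.
interface ConditionalOrderParams {
  handler: string;     // address of the order-type handler contract (e.g. the TWAP handler)
  salt: string;        // bytes32 salt making the conditional order unique
  staticInput: string; // ABI-encoded parameters specific to the order type
}

const twapParams: ConditionalOrderParams = {
  handler: "0x0000000000000000000000000000000000000001", // placeholder handler
  salt: "0x" + "00".repeat(32),
  staticInput: "0x", // would carry ABI-encoded TWAP data (parts, interval, amounts)
};
```

A skill encodes exactly this kind of structural knowledge, so the agent does not have to infer it from prose documentation.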

Uniswap AI and the 0x MCP shipped exactly this for their protocols. CoW Skills does the same for CoW Protocol.

The intended end state: a developer runs npx skills add cowprotocol/cow-skills and their agent can immediately write correct CoW Protocol code and execute live protocol actions, without browsing docs.

Skills Architecture

Each skill is a structured SKILL.md file consumed directly by AI agents. Unlike a documentation page, each skill is workflow-first — end-to-end flow before API surface. Skills are versioned and pinned to cow-sdk version ranges so agents know when one is stale.

cow-common

SDK packages: sdk-config, sdk-common, sdk-viem-adapter, sdk-ethers-v5-adapter, sdk-ethers-v6-adapter

Not a standalone skill — the shared foundation that every other skill imports. Chain IDs, contract addresses, SupportedChainId, OrderKind, provider setup, SDK initialization patterns. Each skill references cow-common for initialization and moves straight to the task.
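As a hedged illustration of the kind of foundation cow-common would encode — treat the exact SDK exports as an assumption to verify — the shared constants look roughly like this. Chain IDs are public; the settlement address is the GPv2Settlement deployment, which is the same on every supported chain, and the chain list here is a subset.

```typescript
// Illustrative sketch of cow-common's shared foundation. The chain list is a
// subset of supported chains; the settlement address is GPv2Settlement, which
// is deployed at the same address on all supported chains.
const SUPPORTED_CHAINS = {
  MAINNET: 1,
  GNOSIS_CHAIN: 100,
  ARBITRUM_ONE: 42161,
  BASE: 8453,
  SEPOLIA: 11155111,
} as const;

type SupportedChainId = (typeof SUPPORTED_CHAINS)[keyof typeof SUPPORTED_CHAINS];

const SETTLEMENT_CONTRACT = "0x9008D19f58AAbD9eD0D60971565AA8510560ab41";

function isSupportedChain(chainId: number): chainId is SupportedChainId {
  return Object.values(SUPPORTED_CHAINS).includes(chainId as SupportedChainId);
}
```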

cow-trading

SDK packages: sdk-trading, sdk-order-book, sdk-order-signing, sdk-app-data, sdk-composable-cow

The core integration path: get a quote, sign an order, post it, track it, cancel it. Includes TWAP orders using sdk-composable-cow. sdk-order-signing is an implementation detail of trading; sdk-app-data (referral codes, hook metadata) is always attached to orders.
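A minimal sketch of the order shape this skill teaches, following the GPv2 order fields posted to the orderbook. The token addresses passed in are placeholders in the usage below, and the helper function is hypothetical, not an SDK export.

```typescript
// Sketch of the order an agent constructs before signing and posting.
// Field names follow the GPv2 order schema; buildSellOrder is a hypothetical
// helper for illustration, not part of the SDK.
interface OrderParameters {
  sellToken: string;
  buyToken: string;
  receiver: string;
  sellAmount: string;  // atomic units, as a decimal string
  buyAmount: string;
  validTo: number;     // unix timestamp
  appData: string;     // 32-byte hash of the app-data document
  feeAmount: string;
  kind: "sell" | "buy";
  partiallyFillable: boolean;
}

function buildSellOrder(
  sellToken: string,
  buyToken: string,
  sellAmount: string,
  buyAmount: string,
): OrderParameters {
  return {
    sellToken,
    buyToken,
    receiver: "0x0000000000000000000000000000000000000000", // zero address = order owner
    sellAmount,
    buyAmount,
    validTo: Math.floor(Date.now() / 1000) + 30 * 60, // valid for 30 minutes
    appData: "0x" + "00".repeat(32),
    feeAmount: "0",
    kind: "sell",
    partiallyFillable: false,
  };
}
```

The skill's job is to make sure the agent fills these fields correctly (amounts in atomic units, `validTo` in the future, the right app-data hash) before moving to signing.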

cow-hooks

SDK packages: sdk-cow-shed, sdk-weiroll, hooks patterns via sdk-app-data

CoW Shed is necessary for permissioned hooks. An agent building hooks will hit the CoW Shed requirement during the same task. Weiroll (bytecode scripting for advanced hook composition) lives here too — it only applies in a hooks context.
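As a hedged sketch, the hooks metadata carried inside a CoW app-data document has roughly this shape — pre/post arrays of `{target, callData, gasLimit}` entries. The target address below is a placeholder; only the ERC-20 `approve` selector is real.

```typescript
// Sketch of the hooks metadata inside an app-data document. The target
// address is a PLACEHOLDER; 0x095ea7b3 is the real approve(address,uint256)
// selector, with its arguments omitted for brevity.
interface CowHook {
  target: string;   // contract the settlement contract calls
  callData: string; // ABI-encoded call
  gasLimit: string;
}

interface HooksMetadata {
  pre?: CowHook[];  // run before the swap (e.g. permit, approval, unwrap)
  post?: CowHook[]; // run after the swap (e.g. deposit proceeds elsewhere)
}

const approvalPreHook: HooksMetadata = {
  pre: [
    {
      target: "0x0000000000000000000000000000000000000001", // placeholder token
      callData: "0x095ea7b3", // approve(address,uint256) selector, args omitted
      gasLimit: "60000",
    },
  ],
};
```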

cow-bridging

SDK packages: sdk-bridging

Cross-chain token transfers. Clean standalone scope — no coupling to other skills.

cow-widget

Package: @cowprotocol/widget-lib

A different surface area entirely: frontend developers embedding a ready-made trading interface rather than building against the protocol SDK. createCowSwapWidget, CowSwapWidgetParams, theme configuration, partner fee setup. The lowest-friction CoW integration path and likely the most commonly requested by developers who just need to add trading functionality to an app.
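As a rough illustration — field names mirror the documented `CowSwapWidgetParams` surface, but treat the exact shape as an assumption to verify against `@cowprotocol/widget-lib` — a widget configuration looks something like:

```typescript
// Hedged sketch of a widget configuration; verify field names against the
// current @cowprotocol/widget-lib typings before use.
const widgetParams = {
  appCode: "My App",  // identifies the integrator in order app-data
  chainId: 100,       // Gnosis Chain
  width: "450px",
  height: "640px",
  theme: "dark",
  tradeType: "swap",
};

// In a real page: createCowSwapWidget(containerElement, { params: widgetParams })
```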

MCP Server

Beyond skills, we will build a CoW Protocol MCP server that exposes protocol actions as live agent tools. This includes market, limit, and TWAP orders:

  • get_quote — fetch a price quote for any token pair on any supported chain

  • post_order — submit a signed order to the CoW Protocol orderbook

  • get_order_status — query order status by UID

  • get_user_orders — list all orders for a given address

  • cancel_order — cancel an order that is already posted

This moves agents from documentation readers to protocol participants.
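The tool surface above can be sketched as plain descriptors. In the real server these would be registered through the MCP SDK with JSON-schema inputs; the required-parameter lists here are our working assumptions, not a finalized API.

```typescript
// Illustrative tool descriptors for the proposed MCP server. Plain objects
// standing in for real MCP tool registrations; required-parameter lists are
// assumptions about the eventual API.
interface ToolSpec {
  name: string;
  description: string;
  required: string[]; // required input parameters
}

const COW_MCP_TOOLS: ToolSpec[] = [
  { name: "get_quote", description: "Fetch a price quote for a token pair", required: ["chainId", "sellToken", "buyToken", "amount"] },
  { name: "post_order", description: "Submit a signed order to the orderbook", required: ["chainId", "order", "signature"] },
  { name: "get_order_status", description: "Query order status by UID", required: ["chainId", "orderUid"] },
  { name: "get_user_orders", description: "List orders for an address", required: ["chainId", "address"] },
  { name: "cancel_order", description: "Cancel a posted order", required: ["chainId", "orderUid"] },
];
```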

Eval Suite

Each skill ships with an evaluation suite that answers one question: if an AI agent is given this skill, does it produce code that actually works?

Each eval case is a file containing:

  1. A prompt — a realistic developer request (e.g., “Create a TWAP order for 100 USDC into ETH across 10 intervals on Gnosis Chain”)

  2. The skill — injected as agent context

  3. Validation criteria — checks applied to the generated code

The eval harness sends each prompt + skill to an AI model, collects the generated code, and runs validation:

  • Import correctness — does the code import the right @cowprotocol/sdk-* packages?

  • Type checking — does the generated TypeScript compile without errors?

  • Parameter validity — are order structs constructed with correct fields, valid chain IDs, proper token addresses?

  • Execution — does the code run successfully? Was the order posted and executed?
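As a minimal sketch of the first validation layer (import correctness), the harness can scan generated code for `@cowprotocol/*` imports and diff them against the set a skill declares. The package names used here come from this proposal; the helper names are ours.

```typescript
// Sketch of the import-correctness check: extract @cowprotocol/* imports from
// generated code and compare against the packages a skill declares.
function extractCowImports(code: string): string[] {
  const re = /from\s+["'](@cowprotocol\/[\w-]+)["']/g;
  const found = new Set<string>();
  let m: RegExpExecArray | null;
  while ((m = re.exec(code)) !== null) found.add(m[1]);
  return [...found];
}

function checkImports(
  code: string,
  expected: string[],
): { missing: string[]; unexpected: string[] } {
  const actual = extractCowImports(code);
  return {
    missing: expected.filter((p) => !actual.includes(p)),
    unexpected: actual.filter((p) => !expected.includes(p)),
  };
}
```

The later layers (compilation, parameter validation, forked-chain execution) build on the same pattern: a pure check over the generated artifact, reported per eval case.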

Example structure:


evals/
  cow-trading/
    create-market-order.eval.ts
    create-twap-order.eval.ts
    cancel-order.eval.ts
  cow-hooks/
    pre-hook-approval.eval.ts
    post-hook-cowshed.eval.ts
  cow-widget/
    embed-widget-react.eval.ts
  run-evals.ts

Evals run in CI (GitHub Actions) on every PR to the skills repo. When the SDK publishes a new version or a skill is updated, the evals catch regressions before developers are affected.


Grant Goals and Impact:

  • Give agents a single installable entry point to CoW Protocol, equivalent to what Uniswap shipped with uniswap-ai

  • Reduce broken integrations by grounding agent output in eval-verified, SDK-accurate skill definitions

  • Catch SDK regressions early: when a skill breaks due to an SDK update, CI catches it before developers are affected

  • Open source from day 0 — community contributors can add skills for new order types via PRs


Architecture:

Each skill is a SKILL.md file structured as:

  1. Workflow — end-to-end steps for the task (e.g., “create TWAP order”)

  2. SDK packages — which @cowprotocol/sdk-* packages are needed

  3. Patterns — correct code patterns with working examples

  4. Common errors — what goes wrong and how to fix it

  5. Version pinning — compatible cow-sdk version range

The MCP server wraps the CoW Protocol orderbook API and SDK into callable tools that agents can invoke directly during a conversation.

The eval suite sends realistic prompts to an AI agent with the skill loaded, then validates the generated code: correct imports, TypeScript compilation, valid order parameters, and execution against a forked chain.


Milestones:

| Milestone | Duration | Payment |
| --- | --- | --- |
| Repository setup + cow-common, cow-trading skills + evals | 2 weeks | 6,000 xDAI |
| cow-hooks, cow-bridging, cow-widget skills + evals | 2 weeks | 6,000 xDAI |
| MCP server | 2 weeks | 6,000 xDAI |
| Agent plugin packaging + documentation + review | 2 weeks | 6,000 xDAI |

Milestone 1: Core skills + evals (2 weeks)

  • Create the cow-skills public repository with contribution guidelines and SKILL.md template

  • Build cow-common: chain setup, provider initialization, adapter patterns for viem / ethers-v5 / ethers-v6

  • Build cow-trading: quotes, order signing, posting orders (market and limit), order management, TWAP via sdk-composable-cow, slippage handling, supported chains

  • Build the eval harness: prompt runner that sends skill + prompt to an AI model, collects generated code, runs validation pipeline

  • Write eval cases for cow-common and cow-trading: market orders, TWAP construction, order cancellation, chain-specific behavior

  • Validation layers: import checking, TypeScript compilation, parameter validation, execution against Anvil/Tenderly forks

  • CI integration via GitHub Actions — evals run on every PR and on SDK version bumps

Milestone 2: Advanced skills + evals (2 weeks)

  • Build cow-hooks: pre-hooks, post-hooks, CoW Shed integration for permissioned actions, Weiroll advanced scripting patterns

  • Build cow-bridging: cross-chain token transfers, @cowprotocol/sdk-bridging, supported chain pairs

  • Build cow-widget: createCowSwapWidget, CowSwapWidgetParams, theme configuration, partner fee setup, React and vanilla JS integration

  • Write eval cases for each skill: hook composition, cross-chain transfers, widget embedding

  • Publish benchmark results (pass rates per skill, per model) in the repository README

Milestone 3: MCP server (2 weeks)

  • Build and deploy the CoW Protocol MCP server with tools: get_quote, post_order, get_order_status, get_user_orders, cancel_order

  • Support for market, limit, and TWAP orders

  • Test against Claude Code, Cursor, Copilot, Windsurf, and other compatible agent environments

Milestone 4: Agent plugin packaging + documentation + review (2 weeks)

  • Package skills for all major AI coding agents: Claude Code, Cursor, Copilot, Windsurf

  • Single-command installation: npx skills add cowprotocol/cow-skills

  • Skills are SKILL.md files — model-agnostic by design, consumed by any agent that reads markdown context

  • Documentation site: skills reference, installation, quick start, MCP server setup, contribution guide

  • Link from the Mintlify docs under an “AI / Agent Integration” section

  • Address feedback from CoW core team review

  • Final README and changelog


Funding Request:

We propose that milestone payments be released upon each milestone’s approval.


Budget Breakdown:

  • 24,000 xDAI split across milestones as listed above (~3,000 xDAI per week per contributor)

  • 42,000 COW tokens vested over 12 months

The COW vesting covers ongoing maintenance: keeping skills up to date, adding support for new chains, and investigating user-reported issues.


Gnosis Chain Address (to receive the grant):

0x5D40015034DA6cD75411c54dd826135f725c2498 (bleubuilders.eth)


Other Information:

  • All code and documentation open-source from day 0 (MIT License)

  • We will coordinate with the CoW Protocol SDK and docs teams to ensure skills accurately reflect the current API and are updated promptly for breaking changes

  • We welcome community PRs to extend the skill library as new order types and integrations land in the protocol


Terms and Conditions:

By submitting this grant application, I acknowledge and agree to be bound by the CoW DAO Participation Agreement and the CoW Grant Terms and Conditions.

Hey @bleu team

Thanks for putting this together. We’ve reviewed the proposal internally and after discussion the committee has decided not to move forward with this as a standalone grant.

To be clear, we think the skills themselves are useful and we’d encourage you to open source them for the community. We just don’t think a separate funding track is the right approach here.

We appreciate the continued work you all do for the protocol. This doesn’t change our view of bleu as a builder in the ecosystem.

Sov

Thanks for the transparency @Sov . Appreciate it.

One clarification on the “open source them” point — these skills and the MCP server don’t exist yet. That’s what the grant was for. We used AI agents during previous grant work, but as general dev tools — not as CoW-specific skills that help developers build protocol integrations through agents.

So there isn’t anything to open source here today. If the committee’s view is that this work is useful but shouldn’t be a standalone grant, we’re open to discussing other structures — folding it into an existing workstream, scoping it differently, whatever makes sense. Happy to jump on a call if that’s easier.

Thanks for the clarification. I think, for now, we’d like to see the current grants completed and closed out then we can look at potential follow on opportunities after that.

Overall, I don’t think skills like this would be something we would fund unless it was clear we’d see enough adoption to justify.