Grant Application: CoW Skills
Authors: @bleu @yvesfracari @ribeirojose @mendesfabio
About You:
bleu collaborates with companies and DAOs as a web3 technology and user experience partner. We’re passionate about bridging the experience gap we see in blockchain and web3. We have completed 10+ grants for CoW Protocol, including Hook dApps, Framework-agnostic SDK, Python SDK, and various Safe apps.
Additional Links:
Our work for CoW Protocol includes:
- Framework Agnostic SDK: Restructured the CoW SDK into the modular package architecture (sdk-config, sdk-common, sdk-trading, adapters for viem/ethers) that CoW Skills wraps directly.
- CoW Hooks dApps: Built the CoW Shed and Weiroll hook integrations for CoW Swap, the same infrastructure the cow-hooks skill encodes.
- Programmatic Orders API: Unified indexing API for Composable CoW orders (TWAP, stop-loss), giving deep knowledge of ConditionalOrderParams and the order lifecycle covered by the cow-trading skill.
- CoW Swap Frontend Migration to Viem & Wagmi: Migrated the full CoW Swap frontend to viem/wagmi, directly informing the adapter patterns in cow-common.
- CoW Playground Offline Development Mode: Built developer tooling and local testing infrastructure for CoW Protocol, relevant to the eval suite approach.
- Mintlify Docs: We recently built the new CoW Protocol documentation site on Mintlify, which offers per-page “View as Markdown”, “Open in Claude”, and “Connect to Cursor” buttons for AI agent consumption.
Grant Category:
Core Infrastructure & Developer Tooling
Grant Description:
AI coding agents — Claude Code, Cursor, Copilot — are now the primary way many developers write software. Integrating CoW Protocol through one of these agents today requires the developer to manually feed the agent documentation, find the right SDK packages, and hope the generated code is correct. There is no structured layer that lets an agent confidently and correctly build CoW Protocol integrations out of the box.
The Mintlify docs we recently built offer per-page “View as Markdown”, “Open in Claude”, and “Connect to Cursor” buttons. The older docs site at docs.cow.fi publishes llms.txt and llms-full.txt following the llmstxt.org standard.
The difference is concrete. When a developer asks an agent to “create a TWAP order for 100 USDC into ETH across 10 intervals on Gnosis Chain”, an agent reading the Mintlify docs has to find the right page, parse the content, infer which SDK packages to combine, construct the correct ConditionalOrderParams struct, and handle EIP-1271 signing. An agent with a CoW Skill loaded produces verified, runnable code, because the skill encodes not just what the API looks like, but what patterns work, what goes wrong, and what correct output looks like through an eval suite.
Uniswap AI and the 0x MCP shipped exactly this for their protocols. CoW Skills does the same for CoW Protocol.
The intended end state: a developer runs npx skills add cowprotocol/cow-skills and their agent can immediately write correct CoW Protocol code and execute live protocol actions, without browsing docs.
Skills Architecture
Each skill is a structured SKILL.md file consumed directly by AI agents. Unlike a documentation page, each skill is workflow-first — end-to-end flow before API surface. Skills are versioned and pinned to cow-sdk version ranges so agents know when one is stale.
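Version pinning can be checked mechanically. As a rough sketch (the inclusive-min / exclusive-max range convention here is an assumption of this example, not the final skill manifest format), an agent or the eval harness could compare the installed cow-sdk version against a skill's pinned range:

```typescript
// Illustrative staleness check for a skill pinned to a cow-sdk version range.
// The range convention (min inclusive, max exclusive) is an assumption of
// this sketch, not the real skill manifest schema.

type Version = [number, number, number];

function parse(v: string): Version {
  const [maj, min, pat] = v.split(".").map(Number);
  return [maj, min, pat];
}

// True if a <= b, comparing major, minor, patch in order.
function lte(a: Version, b: Version): boolean {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return true;
}

/** True if the installed SDK falls inside [min, maxExclusive). */
function skillIsFresh(installed: string, min: string, maxExclusive: string): boolean {
  const v = parse(installed);
  return lte(parse(min), v) && !lte(parse(maxExclusive), v);
}
```

A skill pinned to ">=5.0.0 <6.0.0" would be flagged as stale as soon as the developer's project upgrades to a 6.x SDK, prompting the agent to fetch an updated skill instead of emitting outdated code.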
cow-common
SDK packages: sdk-config, sdk-common, sdk-viem-adapter, sdk-ethers-v5-adapter, sdk-ethers-v6-adapter
Not a standalone skill — the shared foundation that every other skill imports. Chain IDs, contract addresses, SupportedChainId, OrderKind, provider setup, SDK initialization patterns. Each skill references cow-common for initialization and moves straight to the task.
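The kind of constants cow-common surfaces can be sketched as follows. The chain IDs are standard (1 for Ethereum mainnet, 100 for Gnosis Chain, etc.), but treat the exact set of supported chains in this snippet as illustrative rather than the authoritative SDK list:

```typescript
// Sketch of cow-common-style constants. Chain IDs are standard EVM chain
// IDs; the set of chains shown here is illustrative, not the SDK's
// authoritative SupportedChainId export.

const SupportedChainId = {
  MAINNET: 1,
  GNOSIS_CHAIN: 100,
  ARBITRUM_ONE: 42161,
  BASE: 8453,
  SEPOLIA: 11155111,
} as const;

type SupportedChainId = (typeof SupportedChainId)[keyof typeof SupportedChainId];

// Narrowing guard an agent can use before constructing orders.
function isSupported(chainId: number): chainId is SupportedChainId {
  return Object.values(SupportedChainId).some((id) => id === chainId);
}
```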
cow-trading
SDK packages: sdk-trading, sdk-order-book, sdk-order-signing, sdk-app-data, sdk-composable-cow
The core integration path: get a quote, sign an order, post it, track it, cancel it. Includes TWAP orders using sdk-composable-cow. sdk-order-signing is an implementation detail of trading; sdk-app-data (referral codes, hook metadata) is always attached to orders.
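The TWAP case is a good example of what the skill must encode beyond the API surface: splitting a total sell amount into equal parts spaced by a fixed interval. The field names below are illustrative shorthand, not the exact Composable CoW struct the skill documents:

```typescript
// Sketch of the arithmetic behind a TWAP order: split a total sell amount
// into n equal parts, one per interval. Field names are illustrative; the
// real Composable CoW parameters are what the cow-trading skill encodes.

interface TwapParts {
  partSellAmount: bigint; // amount sold per interval, in token atoms
  numParts: number;
  interval: number; // seconds between parts
}

function splitTwap(totalSellAmount: bigint, numParts: number, interval: number): TwapParts {
  if (numParts <= 0 || totalSellAmount % BigInt(numParts) !== 0n) {
    throw new Error("total must divide evenly across parts");
  }
  return { partSellAmount: totalSellAmount / BigInt(numParts), numParts, interval };
}

// The example from this proposal: 100 USDC (6 decimals) across 10 intervals,
// one part per hour.
const parts = splitTwap(100_000_000n, 10, 3600);
```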
cow-hooks
SDK packages: sdk-cow-shed, sdk-weiroll, hooks patterns via sdk-app-data
CoW Shed is necessary for permissioned hooks. An agent building hooks will hit the CoW Shed requirement during the same task. Weiroll (bytecode scripting for advanced hook composition) lives here too — it only applies in a hooks context.
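The hook shape the skill teaches can be sketched as the pre/post metadata attached via app-data. The target/callData/gasLimit triple mirrors the hook format CoW Swap uses, but treat the exact field names here as an assumption of this sketch:

```typescript
// Hedged sketch of pre/post hooks attached to an order via app-data
// metadata. Field names are assumptions of this example; the cow-hooks
// skill documents the exact schema.

interface Hook {
  target: string;   // contract the settlement contract calls
  callData: string; // ABI-encoded calldata for that call
  gasLimit: string;
}

interface HooksMetadata {
  pre?: Hook[];  // runs before the swap (e.g. a token approval)
  post?: Hook[]; // runs after the swap (e.g. via a CoW Shed proxy)
}

// Build metadata, omitting empty hook lists entirely.
function withHooks(pre: Hook[], post: Hook[]): HooksMetadata {
  return {
    ...(pre.length ? { pre } : {}),
    ...(post.length ? { post } : {}),
  };
}
```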
cow-bridging
SDK packages: sdk-bridging
Cross-chain token transfers. Clean standalone scope — no coupling to other skills.
cow-widget
Package: @cowprotocol/widget-lib
A different surface area entirely: frontend developers embedding a ready-made trading interface rather than building against the protocol SDK. createCowSwapWidget, CowSwapWidgetParams, theme configuration, partner fee setup. The lowest-friction CoW integration path and likely the most commonly requested by developers who just need to add trading functionality to an app.
MCP Server
Beyond skills, we will build a CoW Protocol MCP server that exposes protocol actions as live agent tools, supporting market, limit, and TWAP orders:
- get_quote — fetch a price quote for any token pair on any supported chain
- post_order — submit a signed order to the CoW Protocol orderbook
- get_order_status — query order status by UID
- get_user_orders — list all orders for a given address
- cancel_order — cancel an order that has already been posted
This moves agents from documentation readers to protocol participants.
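MCP tools are declared with a name, a description, and a JSON Schema for their inputs. A sketch of how get_quote could be declared (the parameter names below are assumptions for illustration, not the final tool contract):

```typescript
// Sketch of an MCP tool declaration for get_quote. MCP tools carry a name,
// description, and JSON Schema inputSchema; the parameter names here are
// assumptions of this example.

const getQuoteTool = {
  name: "get_quote",
  description: "Fetch a price quote for a token pair on a supported chain",
  inputSchema: {
    type: "object",
    properties: {
      chainId: { type: "number", description: "e.g. 1 for Ethereum, 100 for Gnosis Chain" },
      sellToken: { type: "string", description: "ERC-20 address of the token to sell" },
      buyToken: { type: "string", description: "ERC-20 address of the token to buy" },
      sellAmount: { type: "string", description: "Amount in atoms, as a decimal string" },
    },
    required: ["chainId", "sellToken", "buyToken", "sellAmount"],
  },
} as const;
```

Because the schema travels with the tool, any MCP-capable agent can validate its own arguments before calling the CoW orderbook, rather than discovering a malformed request at runtime.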
Eval Suite
Each skill ships with an evaluation suite that answers one question: if an AI agent is given this skill, does it produce code that actually works?
Each eval case is a file containing:
- A prompt — a realistic developer request (e.g., “Create a TWAP order for 100 USDC into ETH across 10 intervals on Gnosis Chain”)
- The skill — injected as agent context
- Validation criteria — checks applied to the generated code
The eval harness sends each prompt + skill to an AI model, collects the generated code, and runs validation:
- Import correctness — does the code import the right @cowprotocol/sdk-* packages?
- Type checking — does the generated TypeScript compile without errors?
- Parameter validity — are order structs constructed with correct fields, valid chain IDs, and proper token addresses?
- Execution — does the code run successfully, and was the order posted and executed?
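The first validation layer is cheap to sketch: scan the generated code's import specifiers and flag any @cowprotocol package outside the expected sdk-* family. This regex-based version is illustrative; the real harness would follow it with type checking and execution:

```typescript
// Illustrative first validation layer: flag generated code that imports
// @cowprotocol packages outside the expected sdk-* family. The real harness
// would also typecheck and execute the code.

const ALLOWED = /^@cowprotocol\/sdk-[a-z0-9-]+$/;

function checkImports(source: string): { ok: boolean; bad: string[] } {
  const bad: string[] = [];
  // Match the module specifier in `import ... from "pkg"` statements.
  for (const m of source.matchAll(/from\s+["']([^"']+)["']/g)) {
    const pkg = m[1];
    if (pkg.startsWith("@cowprotocol/") && !ALLOWED.test(pkg)) bad.push(pkg);
  }
  return { ok: bad.length === 0, bad };
}
```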
Example structure:
evals/
  cow-trading/
    create-market-order.eval.ts
    create-twap-order.eval.ts
    cancel-order.eval.ts
  cow-hooks/
    pre-hook-approval.eval.ts
    post-hook-cowshed.eval.ts
  cow-widget/
    embed-widget-react.eval.ts
  run-evals.ts
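One of those eval files could export a shape like the following. The interface is an assumption of this sketch, not the final harness API; the checks shown are the cheap static ones that run before compilation and execution:

```typescript
// Sketch of a single eval case: a prompt, the skill to inject as context,
// and static checks over the generated code. The shape is an assumption of
// this example, not the final harness API.

interface EvalCase {
  prompt: string;
  skill: string; // which SKILL.md to inject as agent context
  checks: Array<(generatedCode: string) => boolean>;
}

const createTwapOrder: EvalCase = {
  prompt: "Create a TWAP order for 100 USDC into ETH across 10 intervals on Gnosis Chain",
  skill: "cow-trading",
  checks: [
    (code) => code.includes("sdk-composable-cow"), // right package pulled in
    (code) => /chainId:\s*100/.test(code),         // Gnosis Chain selected
  ],
};
```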
Evals run in CI (GitHub Actions) on every PR to the skills repo. When the SDK publishes a new version or a skill is updated, the evals catch regressions before developers are affected.
Grant Goals and Impact:
- Give agents a single installable entry point to CoW Protocol, equivalent to what Uniswap shipped with uniswap-ai
- Reduce broken integrations by grounding agent output in eval-verified, SDK-accurate skill definitions
- Catch SDK regressions early: when a skill breaks due to an SDK update, CI catches it before developers are affected
- Open source from day 0 — community contributors can add skills for new order types via PRs
Architecture:
Each skill is a SKILL.md file structured as:
- Workflow — end-to-end steps for the task (e.g., “create TWAP order”)
- SDK packages — which @cowprotocol/sdk-* packages are needed
- Patterns — correct code patterns with working examples
- Common errors — what goes wrong and how to fix it
- Version pinning — compatible cow-sdk version range
The MCP server wraps the CoW Protocol orderbook API and SDK into callable tools that agents can invoke directly during a conversation.
The eval suite sends realistic prompts to an AI agent with the skill loaded, then validates the generated code: correct imports, TypeScript compilation, valid order parameters, and execution against a forked chain.
Milestones:
| Milestone | Duration | Payment |
|---|---|---|
| Repository setup + cow-common, cow-trading skills + evals | 2 weeks | 6,000 xDAI |
| cow-hooks, cow-bridging, cow-widget skills + evals | 2 weeks | 6,000 xDAI |
| MCP server | 2 weeks | 6,000 xDAI |
| Agent plugin packaging + documentation + review | 2 weeks | 6,000 xDAI |
Milestone 1: Core skills + evals (2 weeks)
- Create the cow-skills public repository with contribution guidelines and a SKILL.md template
- Build cow-common: chain setup, provider initialization, adapter patterns for viem / ethers-v5 / ethers-v6
- Build cow-trading: quotes, order signing, posting orders (market and limit), order management, TWAP via sdk-composable-cow, slippage handling, supported chains
- Build the eval harness: a prompt runner that sends skill + prompt to an AI model, collects the generated code, and runs the validation pipeline
- Write eval cases for cow-common and cow-trading: market orders, TWAP construction, order cancellation, chain-specific behavior
- Validation layers: import checking, TypeScript compilation, parameter validation, execution against Anvil/Tenderly forks
- CI integration via GitHub Actions — evals run on every PR and on SDK version bumps
Milestone 2: Advanced skills + evals (2 weeks)
- Build cow-hooks: pre-hooks, post-hooks, CoW Shed integration for permissioned actions, Weiroll advanced scripting patterns
- Build cow-bridging: cross-chain token transfers, @cowprotocol/sdk-bridging, supported chain pairs
- Build cow-widget: createCowSwapWidget, CowSwapWidgetParams, theme configuration, partner fee setup, React and vanilla JS integration
- Write eval cases for each skill: hook composition, cross-chain transfers, widget embedding
- Publish benchmark results (pass rates per skill, per model) in the repository README
Milestone 3: MCP server (2 weeks)
- Build and deploy the CoW Protocol MCP server with tools: get_quote, post_order, get_order_status, get_user_orders, cancel_order
- Support for market, limit, and TWAP orders
- Test against Claude Code, Cursor, Copilot, Windsurf, and other compatible agent environments
Milestone 4: Agent plugin packaging + documentation + review (2 weeks)
- Package skills for all major AI coding agents: Claude Code, Cursor, Copilot, Windsurf
- Single-command installation: npx skills add cowprotocol/cow-skills
- Skills are SKILL.md files — model-agnostic by design, consumed by any agent that reads markdown context
- Documentation site: skills reference, installation, quick start, MCP server setup, contribution guide
- Link from the Mintlify docs under an “AI / Agent Integration” section
- Address feedback from CoW core team review
- Final README and changelog
Funding Request:
We propose that milestone payments be released upon each milestone’s approval.
Budget Breakdown:
- 24,000 xDAI split across milestones as listed above (~3,000 xDAI per week per contributor)
- 42,000 COW tokens vested over 12 months
The COW vesting covers ongoing maintenance: keeping current skills up to date, adding support for new chains, and investigating user-reported issues.
Gnosis Chain Address (to receive the grant):
0x5D40015034DA6cD75411c54dd826135f725c2498 (bleubuilders.eth)
Other Information:
- All code and documentation open-source from day 0 (MIT License)
- We will coordinate with the CoW Protocol SDK and docs teams to ensure skills accurately reflect the current API and are updated promptly for breaking changes
- We welcome community PRs to extend the skill library as new order types and integrations land in the protocol
Terms and Conditions:
By submitting this grant application, I acknowledge and agree to be bound by the CoW DAO Participation Agreement and the CoW Grant Terms and Conditions.