CIP-Draft: Align Solver Rewards with Protocol Revenue and Introduce a Volume-Based Fee

I vote FOR this proposal.

The main positive aspect is that the payout cap for solvers is being removed: it was previously 0.012 ETH, and rewards are now essentially limited only by the protocol’s revenue.

There is a controversial 2 bp base volume fee for the protocol, which could significantly reduce trading volume for pairs of similarly valued assets.
However, as far as I understand, this volume previously generated almost no revenue for the protocol.
Also, following the discussion, the ability to change the bp values for specific pairs was added.

2 Likes

I agree that from the perspective of long-term sustainability, maintaining revenue and avoiding losses is essential. However, after the proposed changes, the risks borne by the solver and the protocol are not at the same level.

When an auction solution is successfully executed, the protocol will not incur losses, since the solver’s reward is capped by the protocol fee. Although the solver’s maximum reward is limited by the protocol fee, it must still be reduced by the second-highest bid, which means that in most cases the remaining surplus can be retained by the protocol. This effect is especially noticeable on L2s, possibly due to the characteristics of the orders placed there or to differences in auctions across chains. On the other hand, when execution fails, the protocol can still receive the solver’s second-highest bid (fixed and capped).

Given this asymmetry in risk exposure, perhaps we could consider alternative mechanisms that give solvers higher incentives when successful execution occurs, as long as such mechanisms do not cause losses to the protocol. For example, the reference score could be weighted so that the winning solver is guaranteed a higher minimum return.

Although I’m not sure whether raising this point at the current voting stage can still make a difference, it’s worth noting that under the current proposal, the potential benefits and risks associated with the order itself will both be amplified compared to the existing system. This is an important consideration for both solvers and the swapping service provided by the protocol.

2 Likes

Indeed, I don’t think it is possible to modify the proposal at this stage. Nonetheless, I’d like to understand why you say that the risks are amplified. The proposal only concerns the positive cap (i.e., the maximum a solver can earn) and doesn’t change anything regarding how penalties for reverts are computed.

Since the positive cap is changed, as you said, the expected value of executing a transaction will be different, even though the penalties keep the same fixed value, I suppose.

True, but we expect the average value to increase. In other words, although the protocol will pay less for some orders (small orders) and more for others (large orders), our back-testing simulation shows that solver rewards should increase.

I agree that the back-testing indicates an overall increase in solver rewards. However, that increase is mainly on mainnet; L2s are in a different situation. Also, what I am trying to highlight is not only solver profitability, but also how the expected value of finding a solution for a given order changes under the new structure. The variance of the expected value becomes more sensitive to the characteristics of each order, especially its volume and trading conditions. Because different chains have different execution environments, this effect tends to be more pronounced on L2s.

Given this, I think it is worth exploring whether there could be mechanisms that preserve protocol safety (i.e., no negative auction outcomes) while still giving certain orders, particularly those that might become less attractive under the new proposal, enough competitiveness to avoid being implicitly filtered out. This would help ensure that CoW Swap maintains broad coverage as a swapping service.

Of course, if the intention of the proposal is precisely to let these types of orders phase out naturally, then it ultimately becomes a matter of solver strategy. But from a service-completeness perspective, I believe considering how to handle these boundary cases could still be valuable.

1 Like

Let me introduce a simple mathematical model to fix ideas. Suppose the probability of a revert is p, and the max penalty for a revert is c. There is an order with sell amount s and limit price l. A solver can generate a surplus of X for the user.

If at the bidding stage the solver proposes to return
X − p·c / (1 − p) + s·l
to the trader, then—on average—the solver always makes a profit. Note that this is not the optimal bidding strategy, which would require modeling rewards and penalties. But it is a strategy that guarantees “safety” as long as X > p·c / (1 − p).

This simple model also illustrates your point: this “safety” is achieved only when X is large relative to c. However, since the probability of a revert is around 10%, the condition holds as long as X is at least one-ninth of the penalty c. Even if c were set to the maximum penalty across all networks, the only orders negatively affected would be those with surplus below 4 USD. Note also that if, instead of employing this specific bidding strategy, a solver figures out the optimal bidding strategy that guarantees safety, then the threshold for which orders are executed is even lower.
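The break-even property of this bidding strategy can be checked numerically. A minimal sketch, with purely illustrative numbers (the values of p, c, s, l, and X below are hypothetical, not protocol data):

```python
# Illustrative check of the bidding model: the "safe" bid makes the
# solver break even in expectation.
p = 0.10                # probability of a revert
c = 36.0                # max penalty for a revert (hypothetical, in USD)
s, l = 10_000.0, 1.0    # sell amount and limit price
X = 50.0                # surplus the solver can generate (USD)

# Safe bid: promise the trader X - p*c/(1-p) surplus on top of s*l.
promised_surplus = X - p * c / (1 - p)
bid = s * l + promised_surplus

# Expected profit: keep X - promised_surplus on success, pay c on revert.
expected_profit = (1 - p) * (X - promised_surplus) - p * c
print(round(expected_profit, 10))  # break-even: 0.0
```

With p = 10%, the safety condition X > p·c/(1 − p) reduces to X > c/9, matching the one-ninth threshold above.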

In any case, this CIP focuses on rewards, but we also plan to revisit how penalties are calculated. When we do, we will definitely keep your remarks in mind.

EDIT: it turns out that the above is not quite correct. See the next post.

Actually, there is another strategy that guarantees safety for any X: just bid X·(1 − p) + s·l, which is equivalent to leaving p·X in the settlement contract as positive slippage. That is because the penalty for a revert is at most the surplus you promised to provide, which in this case is (1 − p)·X (and will usually be smaller because of the cap).

So I take back my earlier claim: there is always a strategy that guarantees “safety” for all X. But, again, this has nothing to do with the optimal strategy.
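This alternative strategy can be verified for a range of surplus values. A minimal sketch (numbers are illustrative; the worst-case penalty is taken as the promised surplus, the upper bound stated above):

```python
# Alternative strategy: bid X*(1-p) + s*l, i.e. promise (1-p)*X surplus
# and leave p*X in the settlement contract as positive slippage.
# The revert penalty is bounded by the promised surplus (1-p)*X,
# so expected profit is non-negative for ANY X.
p = 0.10
s, l = 10_000.0, 1.0

for X in (0.5, 4.0, 50.0, 1_000.0):   # works for any surplus X
    promised = (1 - p) * X
    worst_penalty = promised           # upper bound on the revert penalty
    ev = (1 - p) * (X - promised) - p * worst_penalty
    assert ev >= -1e-9                 # never negative, even in the worst case
```

The expectation collapses to (1 − p)·p·X − p·(1 − p)·X = 0, independent of X, which is why this strategy is safe even for very small orders.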

1 Like

Yesterday the 2 bps volume fee was activated. Results have been positive in the sense that the ratio between volume and fees has improved dramatically. (See the DefiLlama chart: green = volume, red = fees; note the last day.)

It is too early to judge what effects the fee will have on volume. However, IMO the goal of CoW Swap should be to remain, despite this new fee, the best option for users to execute their trades. Various analyses show this has been the case in the past. However, whether the market structure of CoW Swap (good solver competition, plus the possibility of CoWs) can “beat” the 2 bps in fees depends very much on the trading pair.

Simply put, the simpler the swap, the less likely it is that CoW Swap can improve the execution enough to create more extra surplus than the 2 bps fee costs.

Therefore my proposal is to selectively reduce the volume fee depending on the volatility of the trading pair. Luckily, CoW Swap already calculates volatility to set the dynamic slippage tolerance.
A few examples from mainnet:

USDC → USDT: 0.01%
WETH → wsETH: 0.09%
sDAI → stkGHO: 0.17%
sDAI → ETH: 1.83%

My proposal is to set the fee at roughly 10% of the slippage tolerance. In simpler words: if the slippage tolerance is below 0.1% (very low volatility pairs), set the volume-based fee to 0; for pairs with a slippage tolerance between 0.1% and 0.2%, set it to 0.01%; and for anything above 0.2%, set it to 0.02%.
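The tiering could be sketched as a simple lookup. A minimal illustration of the proposed schedule (the function name and thresholds are mine, taken from the tiers above; this is not an actual CoW Swap API):

```python
def volume_fee_bps(slippage_tolerance_pct: float) -> float:
    """Hypothetical fee schedule: roughly 10% of the dynamic slippage
    tolerance, bucketed into the three tiers proposed above."""
    if slippage_tolerance_pct < 0.1:
        return 0.0    # very low volatility: no volume fee
    if slippage_tolerance_pct < 0.2:
        return 1.0    # 0.01% = 1 bp
    return 2.0        # 0.02% = 2 bps

# Applied to the example mainnet pairs above:
for pair, tol in [("USDC->USDT", 0.01), ("WETH->wsETH", 0.09),
                  ("sDAI->stkGHO", 0.17), ("sDAI->ETH", 1.83)]:
    print(pair, volume_fee_bps(tol))
```

Under this schedule the two low-volatility examples (USDC → USDT, WETH → wsETH) would pay no volume fee, while sDAI → ETH would pay the full 2 bps.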

This should ensure that Cowswap remains the best option for traders.

7 Likes

I’m not fully caught up on the latest solver reward updates, but a user in Discord mentioned:

“the problem with 0 fee is that now the fee is the max solver reward, so 0 fee means 0 solver reward for that trade.”

Not sure how accurate that is, but worth bringing into the discussion, I guess. Apart from that, I fully agree with @koeppelmann: a rigid fee structure worked well in the past, but the space has evolved to be extremely competitive, and now every bp counts.

It all comes down to: is CoW able to deliver the best absolute pricing and still make money from it? Fees should be adjusted accordingly. Before implementing, IMO testing against the most relevant competitors is important, so we know where CoW sits relative to them.

Btw, love the direction this is taking; revenue (and cash flow) should be a top priority in every startup.

Addressing Community Feedback on Volume Fees & Next Steps

Hi everyone,

I’ve been closely monitoring the conversation here and in Discord since the 2 bps fee went live on Wednesday.

First, I want to acknowledge the feedback and frustration expressed by some community members regarding execution prices on specific pairs. While the immediate revenue data is positive (as @koeppelmann pointed out), it is crucial that we don’t erode our competitive edge on pairs where a 2 bps fee simply doesn’t fit the market structure.

I want to share some data on why a 2 bps fee is generally viable, before addressing the specific cases where it isn’t.

The Data Case: Why 2 bps Works (On Average)

We recently conducted a “0-Minute Markout” analysis comparing CoW Protocol against major DEX aggregators for trades between $10k-$100k on Ethereum during September 2025. This metric compares the execution price against a neutral price oracle at the exact minute of the trade.

The results show that CoW Protocol’s execution quality is sufficiently superior to absorb a fee in many cases while still leaving the user better off than they would be elsewhere.

  • Median Performance (p50): CoW Protocol achieved a median markout of -0.97 bps, significantly outperforming 1inch Fusion (-2.30 bps), Kyberswap (-1.70 bps), and Uniswap X (-30.00 bps).

  • Upside Potential (p75): CoW was the only protocol to show positive upside (+1.01 bps) at the 75th percentile, proving the value of our solver competition and surplus capture.

When our execution beats the next best alternative by ~1.3 bps (vs Fusion) or more, a small volume fee is often “paid for” by the superior price improvement the protocol generates.

The Exception: Low-Volatility Pairs

However, averages can hide specific outliers. While the data proves our model works for the majority of flow, it also highlights why a rigid fee is problematic for stable-to-stable or LST pairs.

On a USDC/DAI or WETH/wsETH swap, the “execution advantage” margin is extremely thin—often less than 1 basis point. In these specific scenarios, a 2 bps fee can exceed the surplus we generate, leading to the uncompetitive quotes some of you have flagged.

On @koeppelmann’s Proposal (Volatility-Based Fees)

Martin, I think your proposal to link fees to the volatility/slippage of the pair is directionally 100% correct. High volatility pairs can absorb higher fees; low volatility pairs cannot. However, there are two critical constraints we must consider before automating this:

  1. The “Zero Reward” Issue: As @tanglin correctly identified, under the current mechanism, the protocol fee caps the solver reward. If we set fees to 0% for low-volatility pairs, we risk removing the incentive for solvers to settle these batches entirely. We likely need a minimum viable fee (e.g., 0.5 bps) rather than 0 bps to ensure solver participation.

  2. Data Reliability: While using dynamic slippage tolerance as a proxy for volatility is a clever heuristic, we need to verify if this data stream is comprehensive and robust enough to dictate protocol revenue policy programmatically. We need to be careful not to introduce instability into the fee model based on data that was originally intended for a different purpose (slippage protection).

Next Steps

The passed proposal explicitly included the flexibility for the Core Team to manually adjust fees per trading pair (0-5 bps), so we have the mandate to address this without passing another CIP.

We are taking this feedback seriously. We are currently reviewing the examples provided by the community to understand the full scope of the impact on stable/LST pairs. Because adjustments here directly impact solver incentives, we want to ensure any changes are backed by data and don’t inadvertently break the reward mechanism.

We will continue to monitor the situation and discuss the best path forward for these specific pairs.

6 Likes