A token conservation test for preventing unfair surplus shifting

Note: this post is the result of a collaboration between team members of CoW Protocol (@marco, @tom, Alex and myself) and @voyta.eth, a very active community member.

Introduction

At CoW Protocol, multiple orders can be matched against each other, or jointly against external liquidity - a major advantage over DEX aggregators where orders are treated in isolation. However, this flexibility comes with a potential for unfair solutions in which surplus is shifted illegitimately from one order to another, i.e., an order that was necessary for generating a certain surplus might end up not “receiving” it.

In this post, we aim to raise awareness of this phenomenon by first presenting a simple example of an instance where surplus can be unfairly shifted between different user orders. To address this, we then describe a test that detects this particular issue; if added to the rules of the solver competition, such a test would help provide stronger fairness guarantees to users, and would be one more step towards our broader goal of providing users with the best and “most fair” trades. Finally, towards the end of this post, we also address limitations of the proposed test and point to further steps that need to be taken for the solver competition to be in close alignment with the protocol’s goals.

A simple example of unfair surplus shifting

Consider the following two user orders and two liquidity orders:

  • User order o1 is selling 3000 USDC for at least 1 ETH
  • User order o2 is selling 3000 DAI for at least 1 ETH
  • Liquidity order l1 is buying 3000 USDC for 1.01 ETH
  • Liquidity order l2 is buying 3000 DAI for 1 ETH

For simplicity, we assume that fees match exactly the execution costs, and thus a solution that decides to match user order o1 with liquidity order l1, and user order o2 with liquidity order l2, has an objective function value that is equal to the total surplus generated. It is easy to see that in case we choose to execute the two CoWs (Coincidence of Wants), we get a total surplus of 0.01 ETH.
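To make the arithmetic explicit, here is a minimal Python sketch of the surplus computation for this batch (the helper and variable names are hypothetical, not protocol code):

```python
# Surplus of a sell order: what it receives beyond its limit buy amount,
# measured in the buy token (here, ETH).
def sell_order_surplus(limit_buy_amount, executed_buy_amount):
    return executed_buy_amount - limit_buy_amount

# o1 (sells 3000 USDC, wants at least 1 ETH) is matched against l1, which pays 1.01 ETH.
surplus_o1 = sell_order_surplus(limit_buy_amount=1.0, executed_buy_amount=1.01)
# o2 (sells 3000 DAI, wants at least 1 ETH) is matched against l2, which pays 1 ETH.
surplus_o2 = sell_order_surplus(limit_buy_amount=1.0, executed_buy_amount=1.0)

print(surplus_o1 + surplus_o2)  # 0.01 ETH of total surplus (up to floating point)
```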

An unfair solution

We now present a solution that we believe is very clearly “unfair”. The solution is presented in the picture below, where we use the convention that an order is depicted with a directed arrow starting from the buy_token and pointing to the sell_token of the order.

The clearing exchange rates are given as ETH/USDC = 3000 and DAI/USDC = 1.01.

This solution can be objectively considered unfair as it shifts surplus between orders. User order o1 should have gotten a surplus of 0.01 ETH given there is a liquidity order (l1) trading directly with it. Instead, the solution proposes that user order o1 gets no surplus whereas user order o2 gets a surplus of 0.01 ETH.

Note that the objective function is simply equal to the total surplus generated, which is 0.01 ETH.

A key property that is violated here is what we call “token conservation per user order”. More precisely, if we were to “split” the graph into two graphs, each containing the trading cycle of each user order, then it would be easy to see that the trading cycle of o1 is “losing” 0.01 ETH. In other words, some tokens are extracted out of that trading cycle.

One way to demonstrate this is by using the exchange rates as follows (where, as a reminder, the exchange rate of an order is defined as the ratio of the sell amount over the buy amount):

  • Trading cycle of o1:

exchange_rate(o1) * exchange_rate(l1) = (3000 / 1) * (1.01 / 3000) = 1.01.

  • Trading cycle of o2:

exchange_rate(o2) * exchange_rate(l2) = (3000 / 1.01) * (1 / 3000) ≅ 0.99.
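The same check can be sketched in a few lines of Python (hypothetical names; the exchange rate of an order is its sell amount divided by its buy amount, as above):

```python
# In a fair solution, the exchange rates along every trading cycle multiply to 1.
def cycle_rate_product(orders):
    product = 1.0
    for sell_amount, buy_amount in orders:
        product *= sell_amount / buy_amount
    return product

# Executed (sell, buy) amounts in the unfair solution:
cycle_o1 = [(3000, 1.00), (1.01, 3000)]  # o1 matched with l1
cycle_o2 = [(3000, 1.01), (1.00, 3000)]  # o2 matched with l2

print(cycle_rate_product(cycle_o1))  # 1.01 -> tokens leak out of o1's cycle
print(cycle_rate_product(cycle_o2))  # ~0.99 -> extra tokens flow into o2's cycle
```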

A fair solution

The desirable solution to the above batch would be to give this 0.01 ETH to user order o1, as already mentioned, and as shown in the picture below.

In this case, the clearing exchange rates would be ETH/USDC ≅ 2970 and DAI/USDC ≅ 0.99. It is also easy to see that the product of exchange rates in each trading cycle is now equal to 1:

  • Trading cycle 1:

exchange_rate(o1) * exchange_rate(l1) = (3000 / 1.01) * (1.01 / 3000) = 1.

  • Trading cycle 2:

exchange_rate(o2) * exchange_rate(l2) = (3000 / 1) * (1 / 3000) = 1.

Note that again, as before, the objective function is simply equal to the total surplus generated, which is 0.01 ETH; this indeed reveals that the objective function itself cannot distinguish between a fair and an unfair solution, both having the same surplus.
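A tiny sketch (again with hypothetical names) that makes this point concrete by computing the total surplus of both settlements:

```python
# Executed buy amounts in ETH; both user orders have a limit of 1 ETH.
unfair = {"o1": 1.00, "o2": 1.01}
fair = {"o1": 1.01, "o2": 1.00}

def total_surplus(executed_buy_amounts, limit=1.0):
    return sum(amount - limit for amount in executed_buy_amounts.values())

# Both settlements generate 0.01 ETH of surplus, so the objective cannot tell them apart.
print(total_surplus(unfair), total_surplus(fair))
```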

Local token conservation test

To mitigate solutions such as the unfair one presented above, we need to ensure that for every user order α, no external tokens “enter” or “exit” the trading cycles that contain α (and consequently, no surplus can be shifted between orders on independent trading cycles). We refer to this condition as “token conservation per order" or “local token conservation”.

More formally, for each user order α, we require that a certain convex combination of the products of exchange rates over all trading cycles that contain user order α is equal to 1:

sum over all trading cycles C in C(α) of λ(C) * r(C) = 1,

where λ(C) and r(C) are well-defined non-negative weights and exchange rates, respectively, of a trading cycle C in the trading subgraph C(α) of user order α (see here for a proper definition of these terms, and here for a more in-depth write-up).

Roughly speaking, this weighted sum (which, more precisely, is a convex combination, i.e., the λ’s add up to 1) ensures that the resources (i.e., AMMs/liquidity orders) used to execute a given batch are “accessed” by all user orders at the “same rate”, and thus result in a fair execution.
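As a rough illustration, the test could look like the following Python sketch, assuming the cycle decomposition, the weights λ(C), and the rate products r(C) for a given user order have already been computed (all names are hypothetical):

```python
# `cycles` is a list of (weight, rate_product) pairs for one user order,
# where the weights form a convex combination (non-negative and summing to 1).
def passes_local_token_conservation(cycles, tolerance=1e-9):
    total_weight = sum(weight for weight, _ in cycles)
    assert abs(total_weight - 1.0) < tolerance, "weights must sum to 1"
    weighted_rate = sum(weight * rate for weight, rate in cycles)
    return abs(weighted_rate - 1.0) < tolerance

# In the unfair example above, o1 lies on a single cycle with rate product 1.01:
print(passes_local_token_conservation([(1.0, 1.01)]))  # False
# In the fair solution, that cycle has rate product exactly 1:
print(passes_local_token_conservation([(1.0, 1.0)]))   # True
```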

Benefits

This test would filter out a certain kind of unfair solution that may otherwise be considered valid, and would also prevent some internal buffer attacks. There are no known drawbacks compared to the status quo.

Implementation

The test could simply be implemented in the backend code. For solvers that use private liquidity, it requires trusting that the advertised traded amounts match the settled transactions, which needs to be verified retrospectively.

Conclusion & Outlook

As of now, the above test, which is already described in the documentation, is not explicitly done when validating solutions, and is only enforced on a solver-by-solver basis, since all solver teams have been asked to ensure that the solutions they currently submit do pass the test. Nevertheless, we believe that it could be a formal test that every solution is required to pass before considering it valid.

Of course, this test does not solve all known issues with unfair surplus shifting. In particular, it does not address the fundamental bargaining problem that arises when there is a perfect CoW between two user orders (or in general, with more user orders creating a single trading cycle). In such cases, there is usually an interval of candidate prices that all generate the same total surplus, and thus are considered equivalent by the current objective function. Again, currently all solver teams participating in the competition have been asked to ensure that the prices in their computed solutions follow what we call “market prices”, and the CoW Protocol team has been closely monitoring executed batches to ensure that this is indeed the case.

A fundamentally stronger and arguably more complex solution to all unfair surplus shifts can be given by enforcing some version of envy-freeness. The solver team of CoW Protocol has been working on this topic, which will be addressed in another post in the near future.

Until then, we believe that, on top of the proposed test presented above, “social consensus” rules should be in place so as to ensure that a solver executing a settlement that unfairly shifts surplus could potentially be penalized.


Thanks for the great writeup! Let me try to give an ELI5 summary for the less mathematically literate reader.

If the example that Harris shared had been executed sequentially outside of CoW Protocol, the execution would have been very clearly:

  1. order 1 gets 1.01 ETH for their 3000 USDC (0.01 ETH surplus)
  2. order 2 gets 1 ETH for their 3000 DAI (no surplus)

When batched together into a single settlement transaction, however, the total surplus (0.01 ETH) can in theory be arbitrarily assigned to order 2 (since the objective just maximizes the sum of surplus, it is indifferent to who gets it).

This is of course very undesirable, because order 1 would in this case rather trade in isolation to guarantee it gets the surplus it deserves.

What the proposed “token conservation check” does, intuitively speaking, is try to reconstruct the individual execution paths each order takes in the batched solution graph (referred to as “independent trading cycles”). The check then ensures that surplus generated on one path is distributed only between orders on that path and not shifted to orders on otherwise unrelated paths.


I believe I have found a vulnerability of the local conservation constraints to an ‘attack’ of sorts, where a user can essentially soak up all the surplus created in a coincidence of wants.

It uses the fact that if a batch consists of two orders trading the same pair in opposite directions, where one order is slightly larger than the other, then the trades will be cleared at whatever price the remaining part of the larger order is matched at on chain (it is easy to check this must be true if the local token conservation constraints hold). In other words, the attacker can ensure that they get matched at a maker price, even though they are essentially a taker.

A concrete example could look like this:

Suppose user 1 puts in an order to sell 1 unit of token A for token B. They received a quote promising at least x of token B (this is visible in the order book).

Now suppose our attacker observes that the best marginal rate at which token A can be swapped for token B on chain is y, where y > x. The attacker can force a batch where it sells token B for token A at this rate y.

It does so by submitting a sell order for an amount x − epsilon of token B for token A. This is just short of enough to fill user order 1, so the solver will need to trade a small (roughly epsilon/y) amount of token A for token B on chain, at a marginal rate of y. This in turn means that the batch must clear at a price that sets the exchange rate for token A to token B to y, giving all the surplus to the attacker.
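To make the mechanics a bit more tangible, here is a small numeric illustration; the numbers x, y, and epsilon below are made up for the sketch and are not taken from the post:

```python
# Hypothetical numbers, purely illustrative.
x = 0.999    # quote visible in the order book: user 1 wants at least x B per 1 A
y = 1.002    # best marginal on-chain rate (B per A), with y > x
eps = 0.001  # how far the attacker stays below x

attacker_sells_b = x - eps           # the attacker's sell order: x - eps of token B
shortfall_b = x - attacker_sells_b   # = eps of token B still needed to fill user 1
a_traded_on_chain = shortfall_b / y  # roughly eps / y of token A goes on chain at rate y

# Because the residual trade happens at marginal rate y, the batch clears at
# (roughly) y: user 1 receives about y of token B, and the attacker buys
# token A at the maker-side rate y, capturing the surplus.
print(a_traded_on_chain, y)
```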

This vulnerability is relatively easy to exploit, as it only requires monitoring the order book. The attacker doesn’t need to know the best marginal rate y precisely. In fact, even if a solver finds a better price than y, the solution will adjust to this better price. The only risk is that at solving time, no solver manages to find marginal liquidity at a price better than x (e.g., due to a sudden price move), in which case the attacker’s order might be executed alone (and therefore doesn’t extract any surplus). The time window in which this problem can develop is naturally very small though, and even when it happens the solvers are still optimizing for surplus, so the impact is relatively mild overall.

While this strategy doesn’t imply that user 1 gets worse prices than they would have gotten had the attacker not participated at all, I do feel it goes against the notion of fairness we are trying to enforce with the constraints in the first place.

In particular, using the capped surplus idea without the local token conservation constraints could lead to a much fairer outcome in the same situation (i.e., clearing at a price close to the spot price).

Thanks for the observation @tbosman ! Indeed, this is a good catch. Let me first reiterate what you wrote with a concrete example with numbers, just to make sure we are on the same page. So, the setting is the following:

Suppose we have a sell order #1 that sells 10k of token A for at least 9950 of token B.

Suppose there is a Uni v2 pool that has a reserve of 1B of token A and 1B of token B.

If #1 is matched against the pool, then it gets back y = 9969.90 of token B, since y must satisfy

(1B + 0.997 * 10k)(1B - y) >= 1B * 1B.

Now, suppose that there is a buy order #2 that buys 10k of token A by selling at most 10050 of B. If we ignore order #1 and just match order #2 against the above pool, we get that order #2 must sell x = 10030.19 units of token B, since x must satisfy

(1B + 0.997 * x)(1B - 10k) >= 1B * 1B.
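Both amounts follow from the standard Uniswap v2 constant-product formula with a 0.3% fee on the input; a small sketch (hypothetical helper names) that reproduces them:

```python
# Constant-product AMM with a 0.3% fee on the input amount (Uniswap v2 style).
def amount_out(amount_in, reserve_in, reserve_out):
    amount_in_with_fee = 0.997 * amount_in
    return reserve_out * amount_in_with_fee / (reserve_in + amount_in_with_fee)

def amount_in(amount_out_wanted, reserve_in, reserve_out):
    return reserve_in * amount_out_wanted / (0.997 * (reserve_out - amount_out_wanted))

reserve = 1_000_000_000  # 1B of token A and 1B of token B
print(amount_out(10_000, reserve, reserve))  # ~9969.90 B received by order #1
print(amount_in(10_000, reserve, reserve))   # ~10030.19 B paid by order #2
```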

We now observe that if a solver decides to execute the perfect CoW between orders #1 and #2, there is a range of prices that it can pick from, and arguably a reasonable one seems to be the spot price of the AMM, i.e., set p(A) = p(B). In that case, both orders “see” an exchange rate of exactly 1.

The “attack” now looks as follows. Order #2 is updated and now buys 9999 units of token A by selling at most 10050 units of B. If a solver now decides to again execute the CoW, we have the following:

  • Order #2 buys 9999 units of token A, and thus there is 1 additional unit of token A left.
  • This is sent to the pool, and we get back 0.997 units of token B. Thus, the Uniswap pool has an effective exchange rate of 0.997.

The key observation that @tbosman made now is the following. Suppose that we impose local token conservation. Considering order #1, local token conservation implies that

t * r(o1) * r(Uniswap) + (1-t) * r(o1) * r(o2) = 1,

where 0 < t < 1 is some number that we don’t really need to compute here, and r() denotes the exchange rate.

Since we always enforce Uniform Clearing Prices, we must have

r(o1) * r(o2) = 1

Plugging this into the equation above, we get that

r(Uniswap) = r(o2) = 1 / r(o1).

Thus, order #2 must have the same exchange rate as the Uniswap pool, which means that order #2 sells 0.997 * 9999 = 9969.003 units of token B.

Putting everything together, we have the following execution:

  • Order #1 sells 10k units of token A and buys 9970 units of token B
  • Order #2 buys 9999 units of token A by selling 9969.003 units of token B.

This shows that although in principle there is a range of prices that could be considered acceptable, and could give order #1 some number between 9969.90 and 10k of B, the “attacker” managed to push the price to the very left of that interval and get a better price for themselves, by reducing their buy amount by a tiny bit and “exploiting” the fact that local token conservation imposes certain exchange rates when an AMM is touched.
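For completeness, a quick numeric check of this execution (hypothetical variable names; as before, the exchange rate of an order is its sell amount divided by its buy amount):

```python
r_uniswap = 0.997          # the pool sells ~0.997 B for the 1 leftover unit of A
r_o2 = r_uniswap           # forced by local token conservation plus UCP, as derived above
r_o1 = 1 / r_o2            # Uniform Clearing Prices

o2_sells_b = r_o2 * 9_999  # ~9969.003 B paid by order #2
o1_buys_b = 10_000 / r_o1  # = 10_000 * 0.997 = 9970 B received by order #1

# Token conservation: 9999 + 1 = 10000 units of A; 9969.003 + 0.997 = 9970 units of B.
print(o2_sells_b, o1_buys_b)
```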

This is definitely a valid issue, although, as mentioned already, it is not trivial to execute such an “attack” and the extracted amount is usually small. Also, order #1 still gets price protection, as it will still get a strictly better deal than by being matched against the AMM. However, order #2 indeed gets treated favorably, as it manages, in a way, to avoid paying an amount equal to the fee that would otherwise have been paid to the pool. Nevertheless, up until recently, I personally considered such instances a feature of local token conservation, i.e., the fact that it can introduce “market prices” into our problem as a byproduct, the way we saw above. Still, I agree that such edge cases are tricky, and I do acknowledge that in the above instance the correct price should probably still be closer to 1.

Would be interested to hear more opinions about how much of a problem you think this is.


As a follow-up observation, and the credit for this observation goes to @tbosman who communicated this to me, the above “attack” actually is a result of just Uniform Clearing Prices (UCP) and global token conservation! In other words, the example described above happens exactly as is, even without enforcing local token conservation.

To see this, note that because of UCP, we have r(o1) = 1 / r(o2). Moreover, due to global token conservation, we have:

  • amount_o1_sells = amount_o2_buys + amount_Uniswap_buys

  • amount_o1_buys = amount_o2_sells + amount_Uniswap_sells

Thus, we have

amount_Uniswap_sells = amount_o1_buys - amount_o2_sells
= amount_o1_buys - r(o2) * amount_o2_buys

and

amount_Uniswap_buys = amount_o1_sells - amount_o2_buys
= r(o1) * amount_o1_buys - amount_o2_buys

Putting everything together (dividing the two expressions and using r(o1) = 1 / r(o2)), we get that r(Uniswap) = amount_Uniswap_sells / amount_Uniswap_buys = r(o2).
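This can also be sanity-checked numerically: pick an arbitrary clearing rate and arbitrary executed buy amounts (the numbers below are made up), enforce UCP and global token conservation, and the pool’s implied rate comes out equal to r(o2):

```python
r_o2 = 0.998                        # arbitrary clearing rate of order #2 (sells B, buys A)
r_o1 = 1 / r_o2                     # Uniform Clearing Prices
o1_buy, o2_buy = 10_000.0, 9_900.0  # arbitrary executed buy amounts (B for #1, A for #2)

o1_sell = r_o1 * o1_buy             # token A sold by order #1
o2_sell = r_o2 * o2_buy             # token B sold by order #2
uniswap_buy = o1_sell - o2_buy      # token A conservation
uniswap_sell = o1_buy - o2_sell     # token B conservation

print(uniswap_sell / uniswap_buy, r_o2)  # both ~0.998
```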

I still think that one can construct more involved examples where the above phenomenon is caused by local token conservation and not just global and UCPs, but I don’t have any such concrete example yet.

Since I think that global token conservation is natural to ask here, this, in my opinion, actually shows some of the limitations of UCP.


This strategy actually looks a lot like the just-in-time liquidity MEV strategy in Uniswap v3. As @bertcmiller from Flashbots pointed out, it’s not so clear if that one is actually a problem for users. Indeed, it can only improve the price for users. It’s mostly the LPs who were there earlier who suffer from it.

I think it’s a bit similar in our case. The ‘attack’ doesn’t worsen prices for users in the batch, but it does impose negative externalities on market makers that (constantly) provide liquidity to the platform.

However, at the moment liquidity is mostly provided by non-CoW-native sources, so it would be those third parties that are most affected. Native orders are also lower in gas cost and carry no execution risk, so from the platform perspective it’s completely positive.

I am now wondering if we should actually be happy if people start exploiting this?


Thanks for the detailed explanation. I have a comment/concern regarding the implementation of the proposed constraints. I believe the proposed constraints can be formally modelled using exponential cones (for a single cycle), and via a combination of other cones for orders that are part of multiple trading cycles. For those using a mathematical programming approach, these constraints would make the problem significantly harder, even just to provide a feasible solution in under 10 seconds. One could think of viable, well-known strategies for replacing exponential cones, such as piecewise approximations that avoid resorting to MISOCP solvers. However, using such an approximation might make it harder to keep the original constraints feasible.

Similar to the surplus objective, where external prices replace actual prices, I wonder if some flexibility (a ± epsilon) could be incorporated to reduce complexity.

Thanks for the great explanation of the problem and possible solution!

This is indeed a valid concern. Complexity-wise, as of now, we don’t have an efficient way (i.e., polynomial-time) to even check the constraint. Of course, the instances that we have in practice are very small so one can indeed test if the constraint is satisfied in way less than 10 seconds (for the vast majority of cases). Reasonable relaxations that introduce minimal errors while allowing for significantly faster computations are of course more than welcome, so I would definitely be interested in, say, a (1+epsilon)-approx of the test that runs very fast and scales well for larger instances.

In any case, I do agree that incorporating the constraint in a math programming model is very tricky. However, there are reasonable strategies (that, of course, will not always lead to optimal solutions) that allow for generating feasible solutions that satisfy the constraint quite easily (e.g., if your solution consists of a bunch of completely disjoint cycles, then it will satisfy the constraint, modulo the slippage of AMM interactions that might happen on-chain).


One reasonable avenue for solvers would be to go for a piecewise linear approximation and then check the exact constraints afterwards; if they fail, don’t submit that solution (or fall back to another strategy).
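In code, such a post-hoc filter could look roughly like the sketch below (hypothetical names; it assumes the cycle weights and rate products of the candidate solution are already available per user order):

```python
# `cycles_per_order` maps each user order to its (weight, rate_product) pairs.
def ltc_violation(cycles_per_order):
    return max(
        abs(sum(weight * rate for weight, rate in cycles) - 1.0)
        for cycles in cycles_per_order.values()
    )

def should_submit(cycles_per_order, tolerance=1e-9):
    # Only submit if the exact constraint holds (up to a tight numerical tolerance);
    # otherwise, fall back to another strategy.
    return ltc_violation(cycles_per_order) <= tolerance

print(should_submit({"o1": [(1.0, 1.01)], "o2": [(1.0, 0.99)]}))  # False
```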

The way I think about it: suppose we look at solutions that satisfy every other constraint and, optionally, the local token conservation constraints (LTC). Such solutions fall into natural classes with the same ‘topology’ (in the sense of filling the same orders and using the same liquidity sources). Some of those classes may have solutions with different levels of LTC violation. Let’s call a solution class epsilon-bad if all solutions in it have greater than epsilon LTC violation. Similarly, call a solution class epsilon-nice if all its solutions have at most epsilon LTC violation. More generally, call a solution class good if it contains at least one solution that doesn’t violate LTC.

Now if a batch only contains epsilon-bad and 0-nice classes, running an optimizer with epsilon-approximate LTC constraints will give you the correct answer.

The reason this is a useful idea is that the solutions found by mathematical programming solvers that violate LTC often come from epsilon-bad classes. Basically, there will be an order whose limit price is too tight for it to be filled directly, but it can be included by stealing surplus from some other user (and perhaps the additional fee would outweigh the loss of surplus). This could be the case if we change the example of @harisang so that o2 wants at least 1.01 ETH for its 3000 DAI. On the other hand, a solution with one order or several orders sitting in unconnected components never has an LTC violation.

Of course, the tricky case is if the solver returns a solution that has an epsilon violation of LTC. It could be that it comes from a delta-bad class with 0 < delta < epsilon, and we should reject the whole thing. It could also come from a good class that simply contains some bad points, i.e., it is delta-nice but not 0-nice. Maybe the class even has an optimal good solution, but the bad point is an extreme point while the good one isn’t; or they are all extreme points and the solver simply found one of the bad ones, etc.

I have actually implemented this, and I feel like in practice most solutions with LTC violations will be from a good class if you make epsilon small enough, and in fact, most of them simply have no LTC violation at all. However, exceptions happen. There is definitely a trade-off where bad solution classes become less likely as the accuracy improves, but I haven’t been able to crank it up to a point where it removes them all and still runs within the time limit. I.e., it’s spitting out solutions that are genuinely bad and shouldn’t pass.

Another side note: I suspect it’s a lot easier to find a solution that satisfies LTC within a fixed solution class than optimizing over all classes simultaneously (I haven’t thought of a clean algorithm yet though). One could argue that solvers should try to fix those almost-correct solutions post hoc if they want to compete.

So I do feel that there is value in letting solvers innovate in this area. On the other hand, it’s been an absolute pain in the ass to work with these constraints, so I would love it if someone could come up with something simpler.


Indeed, this seems like the first thing one could attempt; restrict the solution space and only consider solutions that by construction (almost) satisfy the constraint. This is definitely a very interesting topic, and indeed solvers can innovate in this area.

In the long run, one could maybe hope that we will find some more structured solution space that is almost as competitive as the whole space and satisfies local token conservation by construction.