CIP-67: Moving from batch auction to the fair combinatorial auction

CIP-Number: 67
title: Moving from batch auction to the fair combinatorial auction
author: Haris Angelidakis, Andrea Canidio, Felix Henneke, Bram van den Berg
status: Draft
created: 2025-04-17

Simple Summary

We propose to change the CoW Protocol’s auction mechanism from the current batch auction to the fair combinatorial auction (described in this research paper and forum post). The goal is to increase the protocol throughput and provide stronger fairness guarantees to traders. Changing the auction mechanism also requires changing the calculations of solvers’ rewards and penalties (see here for an in-depth discussion). At the same time, like the batch auction, the fair combinatorial auction can guarantee uniform directional clearing prices (as defined in CIP-38): if two traders trade the same tokens in the same direction in the same auction, they will receive the same price.

Motivation

By allowing trades to be executed together, batch auctions deliver additional efficiencies and price improvements relative to alternatives such as Dutch auctions. However, batch auctions also have limitations that become increasingly evident as CoW Protocol grows. By forcing a single winner per auction, a batch auction may artificially discard valid solutions that could have been executed in parallel to the winning one, thereby limiting the protocol’s throughput. Batch auctions are also susceptible to “surplus shifting”, that is, the possibility that the benefit of batching multiple trades together accrues disproportionately to some of those trades. CoW Protocol currently tackles this problem via the EBBO test, an ex-post check that each trade in the batch receives an execution of at least a minimum quality. The EBBO test is, however, not a long-term solution, as it represents both a significant operational overhead and a centralization factor.

Specifications

To solve these problems, we propose moving away from the current batch auction and implementing the fair combinatorial auction. In the fair combinatorial auction, each solver can submit multiple bids. A solver can send several bids on individual orders, with each bid corresponding to “how many tokens I can return if I only win this order.” A solver can also submit multiple batched bids (or bids on groups of orders), corresponding to “how many tokens I can return if I only win these orders”. The main restriction on the solver’s bids is uniform directional clearing prices within each bid. So, for example, if a solver’s bid contains two orders selling ETH for USDC (perhaps together with other orders), then the price received by these two orders must be the same. If the same two orders also appear in another bid (perhaps alone), then again these two orders must receive the same price, which, however, can be different from the price received when they are batched with other orders. Deviations from uniform directional prices are allowed to account for hook costs of orders. The restriction of uniform clearing prices across token pairs is removed. (More details on the definition of uniform directional clearing prices can be found in the text of CIP-38; see specifically the “Extension to protocol fee” section).
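To make the within-bid constraint concrete, here is a minimal sketch (in Python, using a hypothetical simplified order representation; executed amounts are taken net of fees and hooks are ignored, so this is not the protocol’s actual implementation) of what checking uniform directional clearing prices inside a single bid could look like:

```python
from dataclasses import dataclass

@dataclass
class Execution:
    # Hypothetical, simplified view of one order's execution inside a single bid.
    sell_token: str
    buy_token: str
    sell_amount: float  # amount of sell_token given up, net of fees
    buy_amount: float   # amount of buy_token received

def has_uniform_directional_prices(executions: list[Execution], tol: float = 1e-9) -> bool:
    """True if all orders on the same directed token pair receive the same exchange rate."""
    rate_per_pair: dict[tuple[str, str], float] = {}
    for e in executions:
        pair = (e.sell_token, e.buy_token)
        rate = e.buy_amount / e.sell_amount
        if pair in rate_per_pair and abs(rate - rate_per_pair[pair]) > tol:
            return False
        rate_per_pair.setdefault(pair, rate)
    return True
```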

After receiving all the bids, the protocol first considers bids containing only orders on the same token pair and in the same direction and computes the best bid on each directed token pair (where the best bid is the one generating the most surplus). The collection of these best bids is the reference outcome for each directed token pair, representing the best possible execution against outside liquidity.

The protocol then uses the reference outcome to filter out the remaining batched bids: a batched bid is filtered out whenever it generates a lower surplus for a given directed token pair than the reference outcome. Put differently, a batched bid survives the filtering only if, on each of its directed token pairs, it generates at least as much surplus as the reference outcome. In the final step, the protocol considers all the batched bids that survived the filtering and the best bids on individual directed token pairs and computes the surplus-maximizing collection of winning bids, under the constraint that all orders on the same directed token pair are part of the same winning bid. Because of this constraint and the fact that solvers must respect uniform directional clearing prices within each bid, the auction guarantees uniform directional clearing prices.
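For illustration, below is a minimal sketch of the reference outcome and the fairness filter, assuming a simplified bid representation in which each bid reports the surplus it generates per directed token pair (fees, hooks, and execution details are ignored; the linked experiment code is the authoritative implementation):

```python
from dataclasses import dataclass

Pair = tuple[str, str]  # directed token pair, e.g. ("WETH", "USDC") for selling WETH for USDC

@dataclass
class Bid:
    solver: str
    surplus: dict[Pair, float]  # surplus generated per directed token pair, in ETH

def reference_outcome(bids: list[Bid]) -> dict[Pair, float]:
    """Best surplus per directed token pair among bids touching only that single pair."""
    reference: dict[Pair, float] = {}
    for bid in bids:
        if len(bid.surplus) == 1:
            (pair, surplus), = bid.surplus.items()
            reference[pair] = max(reference.get(pair, 0.0), surplus)
    return reference

def filter_unfair(bids: list[Bid], reference: dict[Pair, float]) -> list[Bid]:
    """Keep a bid only if, on every directed pair it touches, it is at least as
    good as the reference outcome for that pair (simplified: applied to all bids)."""
    return [
        bid for bid in bids
        if all(s >= reference.get(pair, 0.0) for pair, s in bid.surplus.items())
    ]
```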

Note that finding the surplus-maximizing collection of winning bids is a well-known NP-hard problem, hence some approximation may be required. A particularly simple approach is the following (a code sketch is given after the discussion below):

  • Among the batched solutions surviving filtering, choose the one that generates the largest surplus. This is the first winner.
  • If there are additional batched solutions surviving the filtering, find the one generating the highest surplus among those with no directed token pairs in common with the first winner. This is the second winner.
  • Repeat as long as possible.

It is easy to see that, if only one or two batched solutions containing orders on different directed token pairs survive the filtering, then the above algorithm achieves the theoretical maximum surplus. If there are more batched bids surviving the filtering, then this may not be the case. Below, we argue that, given the current state of the competition, the simple approach highlighted above is an appropriate first step, but as the protocol grows a more sophisticated algorithm may be required in the future.
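Reusing the simplified Bid representation from the sketch above, the greedy selection described in the list could look roughly as follows (again only a sketch under the same simplifying assumptions; candidates are the filtered batched bids plus the best single-pair bids):

```python
def select_winners_greedy(candidates: list[Bid]) -> list[Bid]:
    """Repeatedly pick the highest-surplus remaining bid whose directed token
    pairs do not overlap with any already-selected winner."""
    winners: list[Bid] = []
    used_pairs: set[Pair] = set()
    for bid in sorted(candidates, key=lambda b: sum(b.surplus.values()), reverse=True):
        if used_pairs.isdisjoint(bid.surplus):  # iterating a dict yields its keys (pairs)
            winners.append(bid)
            used_pairs.update(bid.surplus)
    return winners
```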

For further details on the fair combinatorial auction, see the research paper and the corresponding forum post.

Rewards

The most delicate part of this change is to adjust the calculations for solvers’ rewards and penalties. In the current batch auction, rewards and penalties are computed via two metrics:

  • The score of a solution, which is the surplus generated by the winning bid
  • The reference_score of a solution, which is the highest surplus that the auction would have achieved if the winning solver had not participated.

The reward for the winning solver is then score - reference_score (capped), as a measure of the winning solver’s contribution to the auction. If the winning solver reverts, then the protocol pays the solver -reference_score (i.e., imposes a penalty, which is also capped), as a measure of the opportunity cost of the winning solver’s revert.

The same logic can be applied to the fair combinatorial auction. Given the set of bids that survive the filtering, we can define

  • Score as the surplus generated by the collection of winning bids
  • Reference_score(i) as the score of a counterfactual auction in which the bids from solver i are removed

To reiterate an important point: Reference_score(i) is computed using only bids that survived the filtering. Hence, whatever bid was deemed unfair at the filtering stage will continue being excluded from the calculations of the reference score.

We can then specify that winning solver i receives score - reference_score(i) (capped) in case of successful execution. In case of (possibly partial) reverts, the solver receives score - reference_score(i) - L (also capped from below), where L is the surplus loss due to the solver’s revert (i.e., the surplus that the solver promised in the auction but did not deliver).
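As an illustration of the payment rule, here is a minimal sketch; the cap values are taken from the experiment output further below (0.012 and 0.01 ETH) purely for illustration, and the actual parameters are set by governance:

```python
def solver_payment(score: float, reference_score: float, surplus_loss: float,
                   cap_upper: float = 0.012, cap_lower: float = 0.01) -> float:
    """Payment to winning solver i: score - reference_score(i), reduced by the
    surplus the solver promised but failed to deliver (surplus_loss = 0 on
    successful execution), and capped from above and below."""
    payment = score - reference_score - surplus_loss
    return max(-cap_lower, min(cap_upper, payment))
```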

However, the above approach has a subtle problem. If the algorithm used to select the winners is an approximation and does not solve the exact combinatorial surplus-maximization problem, then there is no guarantee that score - reference_score(i) is positive, nor that score - reference_score(i) - L is negative, even in case of a full revert. To illustrate, suppose that the algorithm is the one introduced earlier: choose a global winner and then, among the remaining bids with no directed token pairs in common with it, choose a second winner, and so on. It is then easy to construct cases where score - reference_score(i) is negative. For example, suppose there are two orders and two solvers: one solver bids only on the individual orders, delivering surpluses of 0.1 ETH and 0.05 ETH, while the second solver bids on both orders together with a surplus of 0.11 ETH. The second solver then wins with a score of 0.11 and a reference score of 0.15, so it receives a negative reward in case of a successful execution, and a positive reward in case of a revert.

As we discuss below, we expect this case to arise rarely (i.e., in approximately 1% of auctions), at least initially. So, for the initial phase, we propose to use the above simple algorithm for winner selection while, in such cases, defining reference_score(i) as the minimum of the winning score and the score of the counterfactual solution. This guarantees that rewards cannot be negative and penalties cannot be positive. Nonetheless, even with this fix, a winning solver may receive zero rewards. If/when these cases become frequent, they may distort solvers’ bidding behavior, which is undesirable. To prevent that, we reserve the right to introduce a more complex winner selection algorithm.
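The two-order example above, together with the proposed fix, can be summarized numerically as follows (illustrative only):

```python
score = 0.11                                  # greedy selection picks the 0.11 ETH batched bid
counterfactual_score = 0.10 + 0.05            # best outcome if the winning solver had not participated
naive_reward = score - counterfactual_score   # -0.04 ETH: negative despite successful execution

# Proposed fix: cap reference_score(i) at the winning score in such cases.
reference_score = min(score, counterfactual_score)
reward = score - reference_score              # 0.0 ETH: rewards can no longer be negative
```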

Expected Impact of the Proposal

To evaluate the impact of our proposal, we conducted experiments on historical auction data as well as on synthetic auctions.

Based on experiments with historical data (code here), we expect the following changes:

  • Order throughput will increase by around 33%.
  • Rewards in the solver competition increase by around 25%.
  • We do not expect a change in total surplus but the surplus will be distributed more fairly among traders, rendering EBBO rules almost obsolete.
  • There is little difference between choosing winners based on simple heuristics and full combinatorial maximization. This indicates that there is little potential for exploiting skewed incentives from the simplistic selection of winners.

Experiments with synthetic auction data (code here) suggest that with more order flow and more complex synergies between trades there can easily be auctions where a simple heuristic for selecting winners deviates from the full combinatorial selection. Cases in which winning solvers receive zero rewards could become more frequent, which, as already mentioned, may distort solvers’ bidding behavior. The script also considers, as a possible winner selection algorithm, an exhaustive search for the theoretically optimal combination of bids with a maximum number of iterations (after which the best combination found so far is returned). This winner selection algorithm seems to perform well even when there are many orders in the batch.
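A bounded exhaustive search of that kind could look roughly like the sketch below (reusing the simplified Bid representation from the sketches above; the iteration budget and compatibility check are assumptions of this sketch, not the script’s exact implementation):

```python
from itertools import combinations

def best_combination_bounded(bids: list[Bid], max_iterations: int = 100_000) -> list[Bid]:
    """Search over subsets of bids for the surplus-maximizing compatible set
    (no two bids sharing a directed token pair), stopping after a fixed number
    of candidate subsets and returning the best combination found so far."""
    best: list[Bid] = []
    best_surplus = 0.0
    iterations = 0
    for size in range(1, len(bids) + 1):
        for combo in combinations(bids, size):
            iterations += 1
            if iterations > max_iterations:
                return best
            pairs = [p for bid in combo for p in bid.surplus]
            if len(pairs) != len(set(pairs)):
                continue  # two bids touch the same directed token pair
            surplus = sum(sum(bid.surplus.values()) for bid in combo)
            if surplus > best_surplus:
                best, best_surplus = list(combo), surplus
    return best
```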

More information on these experiments is provided in a post below.

Timeline of Implementation

If this CIP passes we propose to switch to fair combinatorial auctions as soon as the required changes are implemented by the core team. We expect testing of the feature to start around May 6th, 2025, and the change to be effective on all chains around May 20th, 2025.

If we observe a large discrepancy between selecting winners based on our used heuristic and the full combinatorial maximization, we reserve the right to update the selection of winners later on.

[Update 2025-05-07: Two clarifying sentences on uniform prices were added.]


Experiments for combinatorial auctions

To assess the impact of switching to fair combinatorial auctions, we conducted experiments on historical data as well as on synthetic auctions.

Historical auctions

We estimated the impact of the proposed changes by running counterfactual experiments on a set of 35k auctions from Feb-28-2025 to Mar-10-2025. The code is available here.

We tested the following mechanisms:

  • Current mechanism: Selecting a single winner
  • Proposal of the CIP: Splitting solutions by solvers to simulate them submitting multiple solutions; Filtering solutions for fairness based on reference solutions; Selecting winners as above
  • Same as the previous mechanism, but selecting winners via full combinatorial maximization

For each mechanism, we reran the sequence of ~35k auctions and computed winners and rewards. To avoid double counting of rewards, an order chosen for execution is removed from all subsequent auctions. Reverts are ignored, and scores are based on surplus alone, i.e., protocol fees are ignored.
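The counterfactual replay described above can be sketched as follows (the auction, order, and outcome interfaces here are hypothetical placeholders; the linked experiment code is the authoritative implementation):

```python
def replay(auctions, mechanism):
    """Re-run a sequence of historical auctions under a given mechanism,
    removing an order from later auctions once it has been chosen for execution."""
    settled_orders: set[str] = set()
    total_score = 0.0
    total_reward = 0.0
    for auction in auctions:
        # Drop orders already chosen for execution in an earlier auction
        # to avoid double counting scores and rewards.
        remaining = [o for o in auction.orders if o.uid not in settled_orders]
        outcome = mechanism.run(remaining, auction.bids)  # hypothetical interface
        settled_orders.update(order.uid for order in outcome.executed_orders)
        total_score += outcome.score    # surplus only, protocol fees ignored
        total_reward += outcome.reward
    return total_score, total_reward
```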

From these auction outcomes we compute:

  • the sum of scores; this gives an idea of price improvement from changing the mechanism
  • the sum of rewards; this gives an idea of change of rewards budget
  • the throughput (defined as the fraction of orders executed the first time the order is proposed)
  • number of auctions where rewards are capped from above; this gives an idea of auctions with misaligned incentives
  • number of auctions where reference score is larger than winning score; this gives an idea of auctions with misaligned incentives (new to multiple winners)
The full output of the experiment.
Fetching auctions from 10272553 to 10322553...

[...]

Splitting solutions with efficiency loss 0.01 and approach "complete"...
Using reward caps of 0.012 and 0.01
Running counterfactual analysis...
Number of auctions: 35501
Statistics:

Score:
mechanism 0 generated scores of 5731.94347573129 ETH (relative change: 0.00%)
mechanism 1 generated scores of 5799.302470709414 ETH (relative change: 1.18%)
mechanism 2 generated scores of 5799.335866880556 ETH (relative change: 1.18%)

Reward:
mechanism 0 generated rewards of 64.64420265083265 ETH (relative change: 0.00%)
mechanism 1 generated rewards of 81.55322176180395 ETH (relative change: 26.16%)
mechanism 2 generated rewards of 81.59196944038145 ETH (relative change: 26.22%)

Throughput (percentage of orders executed the first time a solution is proposed):
mechanism 0 generated throughput of 68.79% (relative increase: 0.00%)
mechanism 1 generated throughput of 91.59% (relative increase: 33.15%)
mechanism 2 generated throughput of 91.59% (relative increase: 33.14%)

Capping:
mechanism 0 has capped rewards in 7.72% of auctions
mechanism 1 has capped rewards in 8.01% of auctions
mechanism 2 has capped rewards in 8.01% of auctions

Negative reward:
mechanism 0 generated negative rewards in 0.01% of auctions
mechanism 1 generated negative rewards in 0.04% of auctions
mechanism 2 generated negative rewards in 0.00% of auctions

According to these experiments, the impact of the CIP would be as follows.

  • There is a small increase in overall score. It is noteworthy, though, that this experiment would not be able to accurately capture the subtle balance between the increase in scores from combining multiple solutions and the decrease in scores from excluding unfair solutions.
  • Rewards increase by around 25%. This increase will not require allocation of additional COW from the DAO to the solver competition as it would be covered by our current protocol fees.
  • Throughput increases by 33%. In practice, we expect this value to be even larger: almost every order for which a solver proposes a solution will also have a solver winning the right to execute it. Reverts will then become the main bottleneck for throughput, which is beyond the scope of this experiment.
  • The fraction of auctions with capped rewards increases to 8.0%. This fraction is still small enough not to lead to strategic underbidding in most auctions.
  • The fraction of auctions affected by reference scores being larger than winning scores is about 0.04%. There were only a few auctions affected by this in this experiment. This indicates that in our current competition landscape the function to select winners in the fair combinatorial auction would not have a major impact. We therefore propose to use the simplest possible function for now. However, experiments with synthetic auctions suggest that this can very well change.

Fair combinatorial auction with full combinatorial maximization:

  • Choosing winners via full combinatorial maximization allows for deriving results on the truthfulness of bidding, but the underlying winner selection problem is NP-hard.
  • The results of the experiment are almost identical to the mechanism based on the simple selection of winners. This indicates that for now we can use the simpler algorithm and still rely on truthful bidding of solvers.
  • This can change in the future with a larger volume of orders and a different set of solvers, as indicated below in the experiment with synthetic auctions.

We also conducted comparable experiments on Gnosis Chain and Base. The changes in rewards and throughput seem a lot smaller there. Our assessment is therefore mainly driven by the expected changes on Ethereum mainnet.

The experiments on historical auctions give us the best indication of changes we would expect to happen with fair combinatorial auctions. There are, however, several systematic biases in the experiment:

  • Focus on the current solver landscape
  • Simplistic assumptions on efficiency gains from batching
  • Ignoring reverts
  • Ignoring protocol fees
  • Partially fillable orders are removed after the first execution

The first and second points are addressed in the following experiment with synthetic bids.

Synthetic auctions

Synthetic bids can help us understand what will happen under various possible future scenarios: for example, in case we double the number of orders or solvers. The script is available here.

The script starts with a number of orders and checks how long it takes to settle them using our current mechanism, as well as the resulting surplus. It then computes the outcome of the fair combinatorial auction as described in the CIP, and of the fair combinatorial auction with full combinatorial maximization (with a maximum number of iterations in the search loop). It also checks how many times winning solvers would have received negative rewards and how many times the iteration limit of the search is reached. Finally, it repeats everything for auctions without fairness filtering.

The experiment assumes 15 active solvers bidding on 7 orders on different directed token pairs (of course, these parameters can be freely adjusted). The average number of orders proposed by solvers is currently significantly smaller than 7, but we aim to settle more orders, and it already happens regularly that more than 10 orders are proposed for execution in an auction.

The full output of the experiment.

This is the outcome for 7 orders, 15 solvers and certain parameters to generate the bids, all repeated 20 times

Average Batch Auction Score: 143.04, Average Rewards: 11.61, Negative Rewards: 0, Average number of auctions 1.45

Average Simple Combinatorial Auction Score: 143.04, Average Rewards: 15.61, Negative Rewards: 6

Average Fair Simple Combinatorial Auction Score: 142.31, Average Rewards: 8.97, Negative Rewards: 19

Average Combinatorial Auction Score: 193.87, Average Rewards: 18.16, Negative Rewards: 0, fraction of timeouts: 1.0

Average Fair Combinatorial Auction Score: 193.87, Average Rewards: 18.35, Negative Rewards: 0, fraction of timeouts: 1.0

  • Average scores for full combinatorial maximization are significantly larger than for the simplistic algorithm for selecting winners. This indicates that the simple heuristic would leave lots of surplus for users on the table.
  • Filtering unfair solutions does not decrease scores significantly.
  • Rewards are significantly increased with combinatorial auctions, even more so with full combinatorial maximization. If the competition landscape changes to what is used in this experiment, the selection of winners as well as the capping of rewards need to be reevaluated.
  • Rewards are negative for a significant fraction of auctions. This indicates that incentives could be misaligned between solvers and the protocol.
  • The computational complexity of selecting winners in a reproducible way is significant, and reaching the maximum number of iterations could be common.

Given these results, it would be useful to reserve the right to change the function for selecting winners. Solver behavior tuned to the specifics of the implemented winner selection is not intended by the CIP.


Great to see this initiative being pushed and the data and simulations shared.
My overall comment is that the extra reward expenses are not expected to put the DAO’s financial solvency at risk, as current protocol revenue will be sufficient to ensure zero net emissions.

Overall, this move will strengthen competition and continue improving traders’ outcomes, ensuring that CoW Swap and CoW Protocol keep being the best venue for price discovery, and that users are provided the best prices and outcomes.

Highly support this initiative.


This is a big change and will significantly impact order batching in its current form. It makes sense that user fairness, timely settlement and protocol throughput must come first though.

Some thoughts/questions from a solver’s perspective:

  1. A good batch can be eliminated because for a single one of its orders there exists a very slightly better settlement (due to a minimal gas optimization for instance). Could there be some kind of tolerance threshold (say 0.0005 eth) that would allow these batches to still be valid?
  2. Solvers who charge a high priority fee (which ensures timely settlements) can get batches disqualified if a single solver submits individual orders with extremely low priority fees (and therefore lower fees). Could these be enforced at protocol level?
  3. Does this preserve UDP but remove UCP? For instance, can trades in different directions be executed at different prices?
  4. Let’s imagine there exists a COW between two orders and two solvers find that COW. One solver distributes all the surplus to one order, and the other solver distributes all the surplus to the other order (and let’s assume none of them can be settled without the other). Then both solutions will be eliminated because each of them contains a suboptimal solution for one of its orders. Shouldn’t the reference score for orders only consider the bids that only contain that order?
  5. Solvers will try to return solutions for each individual order they can route, and for every possible combination of orders that can be batched. This may create a very large number of submitted solutions in high volatility/volume environments. For instance, a batch of 8 orders makes 2**8 - 1 = 255 combinations of these orders… should there be a way to submit batches progressively to avoid timeouts close to the deadline or large payloads?

Thanks @tokenwood for providing a solver’s perspective on the change.

Regarding your questions

  1. In principle it might be possible to have a tolerance for filtering. I am not convinced that it would help to have it set as large as 0.0005 ETH. If a solver finds a nice routing for order 1 without using much gas, that should benefit order 1. If some other solver finds a solution settling order 1 and 2 and order 2 uses a lot of gas, order 1 should not pay for the execution of order 2. Distributing the larger execution cost evenly among the two orders will result in correctly filtering out that solution as unfair. A good strategy of the second solver would be to also submit solutions routing order 2 separately.
  2. We do not intend to enforce priority fees. It is true that this allows for strategic behavior, as in making a solution better in terms of score at the cost of a larger revert risk. This is already possible with our current mechanism. One pragmatic solution is, again, to submit multiple solutions so that a single order being executed a bit better by a competitor will not filter out all of a solver’s solutions.
  3. The rules regarding uniformity of prices are not changing significantly. We will enforce uniform directional prices. Uniform clearing prices were already violated in most CoWs due to the need to charge fees for execution costs. It is true that we now allow for different solvers to settle trades on the same token pairs in the same direction. This could even lead to arbitrage opportunities. It will remain to be seen if that can become an issue in practice.
  4. The wording of the CIP might not have been clear here. The intention is to use solutions only containing single directed token pairs for filtering. If there is a CoW and each order cannot be executed individually, all executions of these orders are considered fair.
  5. It is true that there will be cases where submitting a lot of solutions is required. The communication between autopilot and driver should hopefully not suffer too much, as each solution is relatively small. For the communication between driver and solver, problems might appear. Improving that interface would be a welcome improvement to the reference driver. There are other changes as well which would make the reference driver more competitive with multiple winners, e.g., being able to settle multiple solutions of a single solver. At the moment the core team has no plan to adapt the reference driver. External contributions are very welcome, though.

Thanks for your response @felixhenneke

I just want to go back to point 3 because I don’t understand what you are saying. UCP forces there to be a single price per token after deducting fees ((Y - f) / X = p_B / p_A, as defined in CIP-38), and as far as I know it is enforced, even for CoWs. However, with combinatorial auctions, there is no single token price vector across different winning solutions, so does it still make sense to keep that constraint for each individual solution? I’m asking because it makes batching circular trades very difficult (A->B, B->C, C->A while respecting P_ab * P_bc * P_ca = 1).


@tokenwood I believe Felix is saying that the UCP constraint is not going to be required anymore. I think the “it wasn’t already being enforced now because of fees” argument is a bit weak for such a foundational change in the protocol (going forward we cannot claim there is a single price vector per auction anymore), but I guess something’s got to give. It is the first time I see a change that makes the problem easier to solve, so I suppose that is also a good thing.


I’d like to ask about technicalities.

If I understand correctly, the reference driver will not be updated to keep up with this change. Currently a solver communicates a price vector to the reference driver. I think that means a solver will not be able to submit a single batched solution for which the UCP constraint is violated, right?

If that is true, then by using the reference driver, a solver will not be able to:

  • Submit multiple solutions because the driver ranks them and always picks just one to send to the autopilot
  • Submit a single competitive batch solution, because it needs to satisfy an important constraint which is no longer required at the autopilot level.

I suppose this means that the reference driver becomes pretty useless under the new CIP. Or am I missing something?

I was a bit imprecise when suggesting that uniform clearing prices are not used at the moment. We currently have uniform clearing prices and use them as follows:

  1. they are part of the on-chain settlement (i.e. the clearing price vector is part of call data),
  2. the autopilot<>driver interface (i.e. drivers report clearing prices as part of their bid),
  3. the reference driver<>solver interface (i.e. solvers report clearing prices and fees instead of executed amounts for trades), and
  4. the accounting (i.e. network fees are reconstructed using clearing prices and handled with a special price in buffer accounting)

All those places could in principle be removed without changing the rest of the mechanism.

  1. Just remove that part of the call data which is not used by the settlement contract.
  2. Remove the clearing price from the bid as it is not used in the selection of winners.
  3. Let solvers report traded amounts directly for all trades.
  4. Remove special handling of network fees and treat them the same as slippage in buffer accounting.

There are some places where uniform clearing prices do have an effect on solvers using the reference driver. It is not possible (please try to prove me wrong) to settle a direct CoW while also executing on-chain arbitrage and give that to users. It would imply setting a negative fee, which is currently not possible.

Uniform clearing prices also have not proven particularly relevant in terms of fairness or efficiency. Surplus shifting can still happen, bad executions can still happen. We can reimburse users in post processing, but it is still unclear how to efficiently prove fairness violations (see EBBO). It is also not clear what unfair means in our context, see e.g. here. The additional constraint has also pushed solvers away from settling CoWs, potentially leaving efficiency gains from batching on the table.

Some of the literature uses uniform clearing prices and shows nice properties of the resulting market equilibria and efficient algorithms to achieve them. But it is not obvious how to translate those results to our use case, which includes friction in the form of execution costs and fill-or-kill orders. And even without these complications, some price manipulation can be possible. See e.g. here for some negative results and the references therein for positive results. One property we lose with that change is protection from users doing atomic arbitrage trading using CoW Protocol. Such trading is, however, difficult to do in practice and seems to always involve risk.

One approach forward could have been to shift more responsibility to the protocol, having it compute fair prices itself and measure solvers against them, or having it compute fair prices and stitch together solutions by solvers. All such approaches seem to hinge on the protocol being a good solver itself, which will probably not be possible.

Thus, we use solvers directly in the mechanism to achieve some notion of fairness. As soon as multiple solvers execute independently, we cannot easily enforce uniform prices across all traded tokens. Hence the proposal to formally remove the restriction to uniform clearing prices from our rule set.


Thanks for clarifying, I’m very much in favor of removing it and it should be stated clearly in the proposal.

In some of the use cases you mention, like the on-chain settlement, the price vector can contain the same token multiple times with different settlement prices. So using a price vector doesn’t necessarily mean respecting UCP.


This change brings more fairness to traders as we move from a batch auction to a fair combinatorial auction. It helps close a gap in the protocol: previously, the winner was selected based on which solver could find the most surplus overall, even if part of that surplus came from routing through AMMs and did not always benefit the trader. With filtering of batched bids now applied before selecting winners, each order must receive at least as good an outcome as it would have on its own. It’s a major shift for both CoW Swap and CoW Protocol.

I’m wondering, since this makes the auction mechanism more complex, could it affect performance or be harder to maintain in the future?


The complexity of the core mechanism does increase with switching to combinatorial auctions. One issue is that the selection of winners is a difficult problem in itself. Another is that there are potentially multiple winning solutions per auction, complicating execution and accounting.

One huge reduction in complexity comes from the fairness filtering. While we currently have to check every auction for violation of social consensus rules, with fair combinatorial auctions, this would not be required in practice. For that reason, fair combinatorial auctions are a step towards making the protocol easier to maintain and more decentralized.

To clarify the handling of uniform prices, I added the following sentences to the CIP:

“Deviations from uniform directional prices are allowed to account for hook costs of orders.”

This sentence is added to allow solvers to deviate from uniform directional prices in cases where they are obviously not desirable: in case of expensive hooks. For example, if there are two orders on the same token pair in the same direction with one order having an expensive hook, the order without a hook should not pay for the hook of the other order. The intention of the exception is that the uniform price should only hold for the swap part, not for hooks. We will follow up on this in another forum post to discuss how to make this a well-specified rule.

“The restriction of uniform clearing prices across token pairs is removed.”

This sentence clarifies that uniform clearing prices are not required within individual bids of solvers. This is consistent with the rest of the mechanism, which does not enforce uniform clearing prices across different bids.

As a delegate I want to express my support of this CIP.
Migrating CoW Protocol to a combinatorial auction represents the culmination of a long research process based on real world experience from settling billions in trading volume through the existing batch auction mechanism.
It is expected to fix or improve a number of problems in the existing design, namely surplus shifting and individual order execution quality.
Looking forward to seeing it implemented!