Measuring and banning overbidding

Motivation

There have been some interesting discussions in the last few weeks about overbidding in the solver competition and its impact on solvers and users. In this post, we propose a criterion to measure overbidding. The goal is to find a criterion we can turn into a social consensus rule and a CIP.

What is overbidding?

According to CIP-20, solvers report a number denominated in ETH, called the score, and are ranked according to it. The solver with the highest score wins the auction and, in the case of a successful settlement, receives surplus + fees - second_best_score as a payment, capped at gas_in_eth + 0.01, where gas_in_eth denotes the ETH spent on gas for the settlement. Since winning requires reporting the highest score, it is easy to see that the winning solver, in case of success, receives at least

minimum_payment = min{surplus + fees - winning_score, gas_in_eth + 0.01}

as a payment. Note that the actual payment can indeed be equal to the above quantity.

We now define a score to be reasonable if it guarantees that the solver will cover the ETH spent on gas for the settlement (denoted gas_in_eth), in case of a successful settlement onchain.

Definition [Reasonable score]. A score is reasonable if the corresponding minimum_payment, in case of a successful settlement, covers the ETH spent on gas for the settlement. This means that a score is reasonable if and only if it satisfies

surplus + fees - score >= gas_in_eth,

or equivalently,

score <= surplus + fees - gas_in_eth.

We say that a solver overbids in an auction if its score, in case it wins and executes its solution onchain, is not reasonable. Since gas prices fluctuate, accidental small-scale overbidding can happen. For this reason, we relax the notion and define an average-case version of it.
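The per-settlement check is a direct translation of the inequality above; a sketch (names assumed, amounts in ETH):

```python
def is_reasonable_score(score: float, surplus: float, fees: float,
                        gas_in_eth: float) -> bool:
    """A score is reasonable iff the guaranteed minimum payment covers
    gas, i.e. score <= surplus + fees - gas_in_eth."""
    return score <= surplus + fees - gas_in_eth
```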

Definition [Reasonable bidding]. Let {A(1), …, A(n)} be a set of n auctions that a solver wins and successfully executes onchain. We say that this solver bids reasonably if

Σ score(i) <= Σ (surplus(i) + fees(i) - gas_in_eth(i)),

where score(i), surplus(i), fees(i) and gas_in_eth(i) refer to the corresponding quantities in auction A(i).
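The aggregate condition sums per-auction quantities over the solver's won and successfully settled auctions; a sketch, with the tuple layout assumed:

```python
def bids_reasonably(auctions: list[tuple[float, float, float, float]]) -> bool:
    """auctions: (score, surplus, fees, gas_in_eth) per won, settled auction.

    Reasonable bidding: the sum of scores does not exceed the sum of
    surplus + fees - gas_in_eth over the same auctions.
    """
    total_score = sum(score for score, _, _, _ in auctions)
    total_margin = sum(s + f - g for _, s, f, g in auctions)
    return total_score <= total_margin
```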

Social consensus rule on overbidding

We propose the following rule and test: All solvers have to follow a reasonable bidding strategy. If there is a statistically significant deviation from reasonable bidding, i.e.,

(1/n) Σ score(i) > (1/n) Σ (surplus(i) + fees(i) - gas_in_eth(i)) + epsilon,

for some tolerance epsilon, then the solver breaks the reasonable bidding rule. Solvers who break this rule can be penalized or slashed.
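The test is applied in retrospect over a batch of settlements; a sketch, with the tuple layout and the tolerance parameter assumed:

```python
def breaks_reasonable_bidding(auctions: list[tuple[float, float, float, float]],
                              epsilon: float) -> bool:
    """True if the average score exceeds the average of
    surplus + fees - gas_in_eth by more than the tolerance epsilon.

    auctions: (score, surplus, fees, gas_in_eth) per won, settled auction.
    """
    n = len(auctions)
    avg_score = sum(score for score, _, _, _ in auctions) / n
    avg_margin = sum(s + f - g for _, s, f, g in auctions) / n
    return avg_score > avg_margin + epsilon
```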

The differences

(1/n) (Σ (surplus(i) + fees(i) - gas_in_eth(i)) - Σ score(i))

for the different solvers are shown in gray (for the two weeks August 1–15, from this dashboard):

Negative values indicate overbidding. According to the test defined above, the solvers Barter, Baseline, Otex, PropellerSwap, Raven, and SeaSolver have been overbidding, either intentionally or accidentally (the latter can happen due to miscalculations of gas usage and/or gas prices), and would need to change their score submission strategies.

Discussion

  • Why create a social consensus rule instead of ‘fixing’ the mechanism?
    Let’s try both. Since designing a mechanism with perfect alignment of incentives for users, solvers, and the protocol has proven difficult in the past, using social consensus rules for part of the mechanism seems like a good intermediate step. The proposed rule is compatible with other proposals to improve the mechanism, e.g., this proposal.
    Also, solvers have asked in the past for guidelines for score submission. The proposed rule can be used as such a guideline.

  • Why not just use the old objective for ranking?
    Using the old objective as the score is generally not a reward-maximizing strategy for solvers, because revert risk is not taken into account and the cost term in the old objective is not an accurate estimate of actual execution costs onchain.

  • Why not just define a new and better objective?
    We might add the option to submit revert risk and execution costs and have the protocol compute a score from that. This will make it easier to participate in the auction for solvers. It does not, however, fix all problems with overbidding since solvers can still under-report revert risk and costs.
    Also, since the general direction is to simplify maintenance of the protocol, moving more responsibilities to solvers is unavoidable. This includes estimating costs.

Open questions

  • Is it always possible to follow a reasonable bidding strategy? (We think it is, but feel free to object.)
  • The outlined rule ignores reverts. Should we rather use a criterion which can take reverts into account?
  • How should the tolerance epsilon for the test be chosen? Should we use a fixed tolerance or a tolerance derived from a statistical test (e.g. a Student’s t-test)?
  • Are there better approaches to stop overbidding?
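On the t-test option mentioned above: one could apply a one-sample Student's t-test to the per-auction differences d(i) = surplus(i) + fees(i) - gas_in_eth(i) - score(i) and flag a solver when the mean difference is significantly negative. A stdlib-only sketch that computes the t-statistic (the critical value would come from the t-distribution with n - 1 degrees of freedom, not computed here):

```python
from math import sqrt
from statistics import mean, stdev

def t_statistic(diffs: list[float]) -> float:
    """One-sample t-statistic for per-auction differences
    d(i) = surplus(i) + fees(i) - gas_in_eth(i) - score(i);
    large negative values suggest systematic overbidding."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))
```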

Very nice analysis and proposal!

It might be obvious, but just highlighting that in the vast majority of cases solvers will receive more than this minimum payment. The only way the payment is bound by surplus + fees - winning_score is if the second-best score is infinitesimally close to the winning score (e.g., for solutions where the optimal route is obvious and found by many solvers).

Also obvious, but the difference in overbidding severity between PropellerSwap/SeaSolver and the rest (where issues in the bidding strategy have been identified and are being addressed) is quite significant. Maybe this can serve as guidance for where to set the threshold.

Good point. We are actively discussing things with most solver teams so I think once some remaining bugs/issues/misunderstandings have been clarified, I agree we should take into account the data to:

  • convince ourselves the aforementioned test makes sense, and
  • pinpoint a reasonable threshold that will minimize the chance of a false positive.

Representing Enso, the solver currently “underbidding” the most and thus probably the most affected by this, we would obviously like to see some changes made to disincentivize overbidding. We also think it could be valuable to incentivize correct bidding; incentivizing both directions might help us arrive at an equilibrium.

We especially like the “reasonable bidding” equation to disincentivize overbidding but perhaps we could also have some inverse equation which incentivizes the other direction? Of course, this would need some cap in order to avoid solvers gaming that side too heavily.

We would be in favor of solvers being required to return a tuple of (revert risk, execution cost in ETH). While these values could still be gamed, we think they would be much harder to game, as they are more transparent. This, coupled with the “reasonable bidding” equation applied in retrospect as both a penalty and a capped reward, would, we believe, reduce overbidding.

Happy to hear others’ thoughts on this.


Thanks for chiming in.

In my opinion, neither overbidding nor underbidding is good for the protocol. Both can lead to the best solution not winning the auction.
A solver maximizing rewards in our current mechanism should neither overbid nor underbid; both are disincentivized via rewards.
But it turned out that this incentive is not enough to prevent either. There is already a safeguard against severe overbidding in the form of the inequality score <= surplus + fees, but this still allows for overbidding on the order of execution costs. Thus the stronger inequality score <= surplus + fees - cost is proposed here.
Stopping underbidding looks to be more in the interest of solvers already. To me, the effect is comparable to a solver not participating in the auction.

Are there benefits to underbidding I am not seeing right now?

I do not think that applying penalties in retrospect is on the table.

Hi @devanoneth, welcome to the forum! In the documentation, we have an equation determining how a solver should bid based on its costs (see here). It turns out that the higher these costs are, the lower the score a solver should submit, so that when it wins, it wins more (for a given quality).

So, to determine whether a solver is underbidding, we would need to know all its costs, which we do not. To determine overbidding, on the other hand, it is enough to check whether the reported score is too high, even assuming zero revert risk and no costs other than gas.

This is to say: while, in principle, it is a good idea to incentivize solvers to bid correctly, defining what “bidding correctly” means depends on information that we do not have and that solvers are probably unwilling to share with us.

Appreciate the replies.

You’re right on this. I don’t see any benefits, except for other solvers having a higher chance of winning, which is counterproductive for the competition. We just need to start scoring better.

I phrased this poorly; I just mean that we cannot prevent overbidding in real time, so we need to penalize/slash solvers after the fact if they exceed some threshold (tolerance epsilon). I’m basically just saying that we support the “social consensus rule on overbidding” you proposed.

Understood, this and @felixhenneke’s explanation above helped explain why we shouldn’t need more incentives for “correct bidding”. Thank you.

Thanks for the clarification, I misunderstood what you said.

It is true that the proposed rule can only be checked after some number of settlements. I am personally not a fan of a specified monetary penalty. Instead, I would say it should be a hard rule: if a solver breaks it, we can still decide on a case-by-case basis what needs to be done. Hopefully, having a rule all solvers agree upon will prevent (this type of) overbidding in almost all cases.
