RFP: CoW Protocol Playground Performance Testing Suite

Preamble

Requests for proposals are not intended to be prescriptive or exhaustive. The community is encouraged to submit proposals that expand upon the ideas presented in this post. The scope of the project may change based on the proposals received. The primary intent of this document is to provide a starting point to achieve the outlined goals, and the final implementation may differ from the initial proposal.

All applications will follow the standard Grants DAO process. This request should not be interpreted as an offer.

Simple Summary

We seek proposals for a performance benchmarking and load testing suite for the CoW Protocol Playground to enable testing of performance improvements without production deployment.

Goal

Testing performance improvements currently requires deployment to real environments with actual traffic. We need tools to generate synthetic load, measure performance, and identify bottlenecks within the playground environment.

Deliverables

We are looking for solutions that provide:

  1. Load generation - Ability to simulate realistic order flow and user behavior (a minimal sketch follows this list)

  2. Performance benchmarking - Measure and compare performance across changes

  3. Metrics and visualization - Integration with existing Prometheus/Grafana setup

  4. Test scenarios - Reusable test configurations for common use cases
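
As a non-prescriptive illustration of deliverable 1, the sketch below (Python, using the requests library) submits synthetic orders to a locally running playground orderbook at a fixed rate and records submission latency. The endpoint URL, payload fields, and rate are assumptions for illustration only; proposers are free to use dedicated tools such as k6 or Locust instead.

    import time
    import uuid

    import requests

    # Assumed local playground orderbook endpoint; adjust to the actual service URL/port.
    ORDERBOOK_URL = "http://localhost:8080/api/v1/orders"
    ORDERS_PER_SECOND = 5    # target synthetic order rate (illustrative)
    DURATION_SECONDS = 60    # length of one load-test run


    def synthetic_order() -> dict:
        """Build a placeholder order payload; real field names must match the orderbook API."""
        return {
            "sellToken": "0x0000000000000000000000000000000000000001",  # placeholder addresses
            "buyToken": "0x0000000000000000000000000000000000000002",
            "sellAmount": "1000000000000000000",
            "buyAmount": "990000000000000000",
            "validTo": int(time.time()) + 3600,
            "appData": "0x" + uuid.uuid4().hex.ljust(64, "0"),
            "kind": "sell",
            "partiallyFillable": False,
            # Signing is omitted; a real generator would produce valid signatures.
        }


    def run() -> None:
        latencies, errors = [], 0
        deadline = time.time() + DURATION_SECONDS
        while time.time() < deadline:
            start = time.perf_counter()
            try:
                requests.post(ORDERBOOK_URL, json=synthetic_order(), timeout=10).raise_for_status()
            except requests.RequestException:
                errors += 1
            latencies.append(time.perf_counter() - start)
            # Space requests evenly to hold the target rate.
            time.sleep(max(0.0, 1.0 / ORDERS_PER_SECOND - (time.perf_counter() - start)))
        print(f"sent {len(latencies)} orders, {errors} errors")


    if __name__ == "__main__":
        run()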

Specification

Problems to Solve

  • Cannot test performance improvements without real traffic

  • No way to identify bottlenecks before production

  • Difficult to measure impact of optimizations

  • Cannot simulate edge cases or stress conditions

  • No standardized performance testing methodology

Desired Capabilities

  • Generate synthetic order flow at scale

  • Measure system performance under various load patterns

  • Visualize performance metrics and bottlenecks

  • Compare performance between different versions (illustrated after this list)

  • Reproduce and debug performance issues
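
For version-to-version comparison, even a simple regression check over recorded latency samples may suffice. The sketch below compares p50/p95/p99 between a baseline run and a candidate run and flags regressions above a threshold; the file format and the 10% threshold are assumptions for illustration only.

    import json
    from statistics import quantiles

    REGRESSION_THRESHOLD = 1.10  # flag a percentile that degrades by more than 10%


    def percentiles(samples: list[float]) -> dict[str, float]:
        """p50/p95/p99 of raw latency samples (seconds)."""
        cuts = quantiles(samples, n=100)
        return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}


    def load_samples(path: str) -> list[float]:
        # Assumes each file is a JSON array of latency samples from one test run.
        with open(path) as f:
            return json.load(f)


    def compare(baseline_file: str, candidate_file: str) -> None:
        baseline = percentiles(load_samples(baseline_file))
        candidate = percentiles(load_samples(candidate_file))
        for name, base in baseline.items():
            ratio = candidate[name] / base
            status = "REGRESSION" if ratio > REGRESSION_THRESHOLD else "ok"
            print(f"{name}: {base:.3f}s -> {candidate[name]:.3f}s ({ratio:.2f}x) {status}")


    if __name__ == "__main__":
        compare("baseline_latencies.json", "candidate_latencies.json")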

Integration Requirements

  • Work with existing playground services

  • Utilize current Prometheus/Grafana infrastructure (a sketch follows this list)

  • Compatible with offline mode (primary requirement)

  • Stretch goal: fork mode compatibility (note: this may be challenging due to potential Anvil limitations; proposers may need to consider Reth, which could itself be incompatible with fork mode)

  • Minimal resource impact when not in use
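
As one way to satisfy the Prometheus/Grafana requirement without new infrastructure, the sketch below (Python, prometheus_client) exposes load-test latency and error metrics on a scrape endpoint. The metric names and port are assumptions; a real suite would follow the playground's existing naming and scrape configuration.

    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Illustrative metric names; a real suite would follow the playground's conventions.
    ORDER_LATENCY = Histogram(
        "loadtest_order_submit_seconds",
        "Latency of synthetic order submissions",
    )
    ORDER_ERRORS = Counter(
        "loadtest_order_errors_total",
        "Synthetic order submissions that failed",
    )


    def record_submission(submit_fn) -> None:
        """Wrap one synthetic submission so latency and failures land in Prometheus."""
        with ORDER_LATENCY.time():
            try:
                submit_fn()
            except Exception:
                ORDER_ERRORS.inc()
                raise


    if __name__ == "__main__":
        # Expose /metrics on an assumed port for the existing Prometheus to scrape,
        # then drive a trivial stand-in workload so there is something to graph.
        start_http_server(9105)
        while True:
            record_submission(lambda: time.sleep(0.01))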

Method

We are open to different approaches for load generation and performance testing. Proposers should explain their methodology and how it addresses the stated problems. Solutions should be maintainable and well-documented.

Evaluation Criteria

Proposals will be evaluated on:

  • Approach to load generation and testing

  • Quality of metrics and insights provided

  • Ease of use for developers

  • Integration with existing tools

  • Maintainability and documentation

  • Cost and timeline

Values of Grants DAO and its Grants

These values may evolve and are listed in no particular order:

  • Open Source: Integrations should be open source.

  • Milestones: Milestones should be attainable and well-defined to ensure easy verification of completion.

  • Price Transparency: Pricing should be broken down into optional and core metrics/deliverables to allow selective implementation.

  • Sustainability: Address the sustainability of deliverables (e.g., who will manage, maintain, and for how long, including associated costs).

  • Simplicity: Aim for simplicity. Completion is often more valuable than striving for perfection—except for critical components, which must meet the highest standards.

  • Documentation: Provide solid documentation to ensure that others can build on your work smoothly, where applicable.

  • Flexibility: We recognize that some processes require flexibility (e.g., adding new features, adapting to changes in technology or infrastructure). Open communication is encouraged to adapt to these changes. Scope extensions and pricing changes typically require a committee vote.

Call for Action

  • Community: Community members are encouraged to provide feedback on testing scenarios and metrics of interest.

  • Applicants: Proposals should be submitted by November 17, 2025, using the standard Grants Program template.

Additional Resources

Selection Process

The selection will be made by the Grants DAO committee at their discretion. The committee will consider the above values, cost, timing, quality, and scope in their decision-making. Committee members may ask questions or make a decision independently. The committee may also close or extend the submission timeline, or choose none of the submitted proposals.

Currently, there are no official rejection criteria. If the forum discussion does not provide a clear outcome, an applicant can post their proposal to the Grants DAO Snapshot space and request a committee vote if needed.


FYI – we posted our proposal here! Happy to work on any feedback.

Thanks to all who applied. All applications will be reviewed over the next week, with responses anticipated by November 28.