
Commit

Update research-overview.mdx
Minor stylistic edits
elizarileyoak authored Oct 24, 2024
1 parent d41adcf commit ef4004e
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions pages/research/research-overview.mdx
@@ -6,7 +6,7 @@ description: Overview of research and experimentation at Optimism

# Why We Experiment: Building a Culture of Experimentation

-At Optimism, we’re committed to a bold vision: **Build an equitable internet, where ownership and decision-making power is decentralize**d across developers, users, and creators. We’ve realized that if we want to achieve this goal and pioneer a new model of digital democratic governance, we need to understand what works and what doesn’t. And just like clinical drug trials, or impact evaluations in developmental economics, running **controlled experiments is how we truly learn** about cause and effect**.**
+At Optimism, we’re committed to a bold vision: **Build an equitable internet, where ownership and decision-making power are decentralized** across developers, users, and creators. We’ve realized that if we want to achieve this goal and pioneer a new model of digital democratic governance, we need to understand what works and what doesn’t. And just like clinical drug trials, or impact evaluations in development economics, running **controlled experiments is how we truly learn** about cause and effect.

Designing a successful decentralized governance system is uncharted territory, so there’s no shortage of open questions about cause and effect that we need to understand. For instance: *Do delegation reward programs improve delegation? Do prediction markets make better decisions than councils? Do veto powers increase legitimacy? Do various voting mechanisms decrease collusion? Does deliberation increase consensus? Do airdrops increase engagement?* To name just a few.

@@ -20,9 +20,9 @@ Below are the key principles guiding our approach to experimental design. Our go

![Principles for Designing Experiments](/img/research/experiment-principles.png)

-*A note on the principle that **randomization = causal learning**:* Across disciplines and settings, causal learning requires randomization, because random assignment to treatment group ensures that the observed and unobserved characteristics are balanced evenly between treatment and control groups. Importantly, this removes selection bias (or other types of omitted variable bias) that otherwise influences results. Because of this, when possible, we aim to randomly select treatment and control group participants in our experiments.
+*A note on the principle that **randomization = causal learning**:* Random assignment of treatment ensures that other characteristics that might affect the outcome are balanced evenly between treatment and control groups. This removes bias that otherwise confounds results, so when possible we try to randomly select treatment and control group participants in our experiments.
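The random-assignment idea in the note above can be sketched in a few lines of Python. The participant list and `prior_engagement_score` covariate here are hypothetical, purely for illustration; the point is that a random split (rather than self-selection) tends to balance pre-treatment characteristics across groups:

```python
import random
from statistics import mean

def random_assignment(participants, seed=42):
    """Randomly split participants into treatment and control groups.

    Random assignment is what balances observed and unobserved
    characteristics across the two groups, removing selection bias.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participants: (id, prior_engagement_score in [0, 1])
participants = [(i, random.Random(i).uniform(0, 1)) for i in range(1000)]

treatment, control = random_assignment(participants)

# With enough participants, pre-treatment covariates end up similar
# across groups -- a quick balance check on the hypothetical covariate:
print(mean(score for _, score in treatment))
print(mean(score for _, score in control))
```

If instead participants had opted in to the treatment themselves, the two group means would typically differ systematically, and any difference in outcomes could not be attributed to the treatment alone.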

-In practice, however, it’s sometimes impractical or even unethical to randomly assign participants to an intervention. If this is the case and we still want to understand cause and effect, we can leverage a **quasi-experiment** to evaluate the effects of an intervention even without random assignment.
+In practice, though, it’s sometimes impractical or even unethical to randomly assign participants to an intervention. If this is the case and we still want to understand cause and effect, we can leverage a **quasi-experiment** to evaluate the effects of an intervention even without random assignment.

Examples of quasi-experimental approaches to teasing out causal learning might include:

@@ -80,6 +80,6 @@ Some of these non-experimental approaches (and specific examples) include:
| Network analysis | Social graph data analysis (Github, Twitter, and Farcaster) across the Collective; Measuring the Concentration of Power in the Collective [Mission Request](https://github.com/orgs/ethereum-optimism/projects/31/views/1?pane=issue&itemId=61734705) |
| Performance tracking | OP Labs data team’s [OP Superchain Health dashboard](https://docs.google.com/spreadsheets/d/1f-uIW_PzlGQ_XFAmsf9FYiUf0N9l_nePwDVrw0D5MXY/edit?gid=915250487#gid=915250487) |
| Recurring survey data | Badgeholder post-voting survey; Collective Feedback Commission participant survey |
-| Voting behavior analysis | Analysis of Retro Funding vote clustering; Analysis of Retro Funding [capital allocation](https://gov.optimism.io/t/new-rpgf3-distribution-disparity-data/7521) distributions and [growth grants](https://github.com/ethereum-optimism/ecosystem-contributions/issues/244) |
+| Voting behavior analysis | Analysis of Retro Funding vote clustering; Analysis of Retro Funding [capital allocation](https://docs.opensource.observer/blog/rf4-ballot-box/) distributions and [growth grants](https://github.com/ethereum-optimism/ecosystem-contributions/issues/244) |

-Does any of this sound interesting and you’d like to be involved? Please visit our [Foundation Mission Requests](https://community.optimism.io/grant/grant-overview) page with RFPs that we are looking for collaborators to help us with.
+Does any of this sound interesting? Would you like to be involved? Please visit our [Grants](https://community.optimism.io/grant/grant-overview) page for details on how to get a grant, including links to open RFPs.
