Parametrize & synthesize parameter (formerly 'trust') packages #168
From today's 7.0.0 scope-reduction exercise: trust/parameter packages can also incorporate distinct (and transparent) choices between (i) semi-decentralized, highly reliable long-term cohorts (NuCypher nodes + professionalized stakers + adopters' own nodes) and (ii) fully decentralized, permissionless cohorts, sampled from the entire node population and relatively susceptible to cohort degradation (until cohort refreshing is rolled out in v7.1.0). Noting that:
Proposal for the CBD MVP (v7.0.0):
*Initializations may be subsidized by the network initially, if this (a) makes integration less complex and (b) there's a robust way to limit the total number of cohorts on the network. The figure below illustrates how the parameter packages may look at genesis:

[Figure: illustrative parameter packages at genesis]
For v7, we've coalesced around offering a single parameter package – a template for the cohorts generated at genesis. Because it will be permissionless – i.e. open to any staker who authorizes T tokens to the TACo app – cohort degradation must be considered (although a permissioned cohort could also degrade, potentially in more catastrophic ways, e.g. regulatory shutdown). In any case, this leads to the question: what is the optimal cohort size for longevity? There are numerous factors that affect an operator's decision to deauthorize from TACo, most of which we cannot control for. However, starting from the assumption that cohort size itself does not influence this decision, we can statistically compare two candidate cohort sizes (8-of-15 & 16-of-30):
Let X be the number of deauthorizing operators sampled into a cohort; X follows a hypergeometric distribution. Assume a population of 50 operators, of whom 5 plan to deauthorize, and take an equivalent degradation threshold for both sizes – losing the same fraction of the cohort (4/30 = 2/15 ≈ 13.3%).

The probability that 4 or more of the deauthorizing operators are placed into the cohort of 30:

$$P(X \geq 4) = \frac{\binom{5}{4}\binom{45}{26}}{\binom{50}{30}} + \frac{\binom{5}{5}\binom{45}{25}}{\binom{50}{30}} = 0.32595$$

The probability that 2 or more of the deauthorizing operators are placed into the cohort of 15:

$$P(X \geq 2) = \sum_{k=2}^{5}\frac{\binom{5}{k}\binom{45}{15-k}}{\binom{50}{15}} = 0.47609$$

So, ceteris paribus, the chance of failure is greater when the cohort size is smaller. It is true that a larger fixed cohort size implies more overlap – i.e. the same operator who plans to deauthorize will be placed in more cohorts. However, (I think) this does not increase the chance that any individual cohort reaches a failure state: each cohort is still an identically distributed sample from the same population, so the per-cohort probability is unchanged – overlap only correlates failures across cohorts. It's also worth mentioning that larger cohorts offer more collusion-resistance.
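For reproducibility, here's a minimal sketch (assuming, as in the example above, a population of 50 operators of whom 5 plan to deauthorize, with the ~13.3% degradation thresholds) that computes both tail probabilities with `scipy.stats.hypergeom`; the variable names are illustrative, not from the codebase:

```python
# Hypergeometric comparison of cohort degradation risk.
# Assumptions (from the worked example above): 50 total operators, 5 of
# whom plan to deauthorize; "degradation" = losing >= ~13.3% of a cohort
# (>= 4 members of a 30-cohort, >= 2 members of a 15-cohort).
from scipy.stats import hypergeom

TOTAL_OPERATORS = 50  # M: node population
DEAUTHORIZING = 5     # n: operators planning to deauthorize

# (cohort size, degradation threshold in deauthorizing members)
scenarios = [(30, 4), (15, 2)]

for cohort_size, threshold in scenarios:
    # X ~ Hypergeom(M=population, n=deauthorizing, N=cohort draws)
    dist = hypergeom(M=TOTAL_OPERATORS, n=DEAUTHORIZING, N=cohort_size)
    p_degrade = 1 - dist.cdf(threshold - 1)  # P(X >= threshold)
    print(f"{cohort_size}-cohort: P(X >= {threshold}) = {p_degrade:.5f}")

# Output matches the figures above:
#   30-cohort: P(X >= 4) = 0.32595
#   15-cohort: P(X >= 2) = 0.47609
```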
On a call today, we resolved:
@arjunhassard can we close this issue?