Put differently, proof of knowledge (of commitment openings — or of Lurk data content more generally) must remain a strong guarantee of Lurk proofs that imply it.
As an aside, note that this has implications with regard to optimization. In that sense, we can consider open to be a second side-effecting operator (alongside emit): its side effect is to exhibit its result in the (hidden-in-zero-knowledge) execution trace of the program, and that effect must not be optimized away. Code using open therefore cannot be considered purely functional, even if emit is not used.
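For concreteness, a minimal sketch of the behavior in question (the committed value 123 is arbitrary):
(let ((c (commit 123)))  ; Commit to an arbitrary value.
  (open c))              ; Evaluates to 123; the opening is exhibited in the (zero-knowledge-hidden) execution trace.
Even though nothing is publicly output here, producing a proof requires actually performing the opening, which is why it cannot be treated as dead code under the status quo.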
We could relax the above requirement in order to facilitate some optimizations. This might be purely a matter of making a decision about language-design intent — so I mention the above in order to clarify the status quo.
The relaxation would be to require programmers who want the 'side effect' of opening to use the result of the opening in such a way that no correct compiler can optimize it away.
For clarity, take the example of a notional fail function which — if called — guarantees no proof can be produced. Under current assumptions, we could implement it as follows:
(let ((fail (lambda () (open 0)))) ; No one knows a preimage for commitment 0.
  (fail)) ; Calling fail therefore guarantees that no proof can be produced.
With the proposed change of policy, we would need instead to write something like
(let ((fail (lambda () (emit (open 0))))) ; Any proof must have known a preimage for 0 in order to emit it.
  (fail))
This still leaves the problem of wanting to ensure real knowledge without emitting it, though. We could use the opening in a computation, but that still runs the risk of dead-code elimination. We could emit the result of a blinding computation over the opening, but that still leaves the issue of a potentially unwanted side effect.
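To sketch both halves of that problem (the use of hide and the secret 42 below are purely illustrative):
(let ((x (open 0)))    ; Use the opening in a computation...
  (eq x x))            ; ...but a correct optimizer may treat this as dead code and drop the opening.
(let ((x (open 0)))
  (emit (hide 42 x)))  ; Emitting only a blinded re-commitment to x survives optimization, but it is still an emission.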
Based on this, as far as I see, there are two viable options:
1. Retain open as an inherently side-effecting operation whose only effect is to guarantee that the opening was successfully performed.
2. Introduce a new side-effecting operator to be used for this (or other equivalent) purpose(s).
The second option would allow users to express intent, and would facilitate optimizations or formal reasoning based on the assumption of purely functional code wherever that can be identified. Since open is likely to be heavily relied on for opaque but not necessarily 'credential-like' data, this may have value whenever we have to consider the issue at all.
Meanwhile, the status quo is simpler and all that is required.
The main reason to consider a change now is that the decision conditions the coding recommendations we give. If we make the change later, that will constitute a subtle but security-critical change in the semantics of Lurk code. Therefore, the safest thing to do would be to make a decision once and for all.
Introducing a new black-box operator which cannot be optimized out is probably the globally best option for these reasons.
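As a sketch of how that might look (the operator name assert-open below is purely hypothetical), the fail idiom could then avoid any emission:
(let ((fail (lambda () (assert-open 0))))  ; Hypothetical operator: requires a known opening of commitment 0, has no other effect, and may not be optimized away.
  (fail))
With such an operator, open itself could be treated as pure whenever its result is actually used.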