Add black-box so open need not be considered side-effecting for security. #848

Open · Tracked by #657
porcuquine opened this issue Nov 3, 2023 · 0 comments
Labels: enhancement (New feature or request)

porcuquine (Collaborator) commented:
From #841 (comment):

Put differently, proof of knowledge (of commitment openings — or of Lurk data content more generally) must remain a strong guarantee of Lurk proofs that imply it.

As an aside, note that this has implications for optimization. In that sense, we can consider open to be a second side-effecting operator (alongside emit), whose side effect is to exhibit its result in the (hidden-in-zero-knowledge) execution trace of the program; that exhibition must not be optimized away. Code using open therefore cannot be considered purely functional, even if emit is not used.

We could relax the above requirement in order to facilitate some optimizations. This might be purely a matter of making a decision about language-design intent — so I mention the above in order to clarify the status quo.

The relaxation would be to force programmers who want the 'side effect' of opening to use the result of the opening in such a way that no correct compiler can optimize it away.

For clarity, take the example of a notional fail function which — if called — guarantees no proof can be produced. Under current assumptions, we could implement it as follows:

(let ((fail (lambda () (open 0)))) ; No one knows a preimage for commitment 0.
  (fail))

With the proposed change of policy, we would need instead to write something like

(let ((fail (lambda () (emit (open 0))))) ; Any proof must have known a preimage for 0 in order to emit it.
  (fail))

This still leaves the problem of wanting to ensure real knowledge without emitting it. We could use the opening in a computation, but that still runs the risk of dead-code elimination. We could emit the result of such a blinding computation, but that still leaves the issue of a potentially unwanted side effect; see the sketch below.
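For concreteness, here is a minimal sketch of that blinding approach, assuming the committed value is a number; the blind helper and the salt value are invented for illustration, while commit, open, and emit are used as in the examples above:

(let ((blind (lambda (secret salt) (commit (+ secret salt)))))
  ;; If the result below were simply discarded, dead-code elimination could
  ;; remove the open entirely. Emitting it forces the opening to occur, but
  ;; reintroduces an unwanted side effect: the blinded commitment is emitted.
  (emit (blind (open 0) 12345)))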

Based on this, as far as I see, there are two viable options:

  • Retain open as an inherently side-effecting operation whose only effect is to guarantee that the opening was successfully performed.
  • Introduce a new side-effecting operator to be used for this (or an equivalent) purpose.

The second option would allow users to express intent, and would facilitate optimizations or formal reasoning based on the assumption of purely functional code where that can be identified. Since open is likely to be heavily relied on for opaque but not necessarily 'credential-like' data, this may have value whenever we must consider the issue at all.

Meanwhile, the status quo is simpler, and it is all that is strictly required.

The main reason to consider a change is that the decision conditions the coding recommendations we give. If we make the change later, it will constitute a subtle but security-critical change to the semantics of Lurk code. Therefore, the safest thing to do is to make a decision once and for all.

Introducing a new black-box operator which cannot be optimized out is probably the globally best option for these reasons.
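As a sketch of how the proposed operator might look in use — black-box is hypothetical here, and its name and exact semantics are precisely what this issue proposes to decide:

(let ((fail (lambda () (black-box (open 0))))) ; Proves knowledge of a preimage for 0 without emitting it.
  (fail))

Here black-box would evaluate its argument and discard the result, with the guarantee that no correct compiler may optimize the evaluation away.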
