Implement persistent commitments #543
Conversation
This is a nice improvement on the existing code, but I would urge you to consider whether a field label is everything you need, or whether you want to start thinking about some data you can deserialize to a Lang<F, Coproc<F>>.
src/cli/commitment.rs (Outdated)

```rust
/// Holds data for commitments.
///
/// WARNING: CONTAINS PRIVATE DATA
```
Does that warning differ qualitatively from any other instance of a store?
Yes, this one contains secrets
That I would expect. The idea that other stores don't is more surprising.
In practical terms, consider that the code base you're editing here will be reviewed by a professional cryptography auditor. How does the comment you're offering here help them contextualize how files for Commitment should be considered, as opposed to other store files?
Will address this tomorrow
src/cli/field_data.rs (Outdated)

```rust
}

#[allow(dead_code)]
impl FieldData {
```
Ah, but with this version you are moving the conversion to/from bytes to wrapping time. I suspect you could peg what you want on a

```rust
struct Labeled<T: Serialize + DeserializeOwned> {
    label: LanguageField,
    val: T,
}
```
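A minimal std-only sketch of the shape of this idea, with the field label validated before the payload is touched. The enum variants, byte format, and all names here are illustrative stand-ins, not Lurk's actual types or serde/bincode wiring:

```rust
// Sketch of the Labeled idea: the label is decoded (and validated)
// before the payload, so a field mismatch errors early.
// `LanguageField` variants and the byte format are illustrative.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum LanguageField {
    Pallas = 0,
    Vesta = 1,
}

struct Labeled {
    label: LanguageField,
    val: Vec<u8>, // still-opaque payload, deserialized in a second step
}

impl Labeled {
    fn encode(&self) -> Vec<u8> {
        let mut out = vec![self.label as u8];
        out.extend_from_slice(&self.val);
        out
    }

    /// Two steps in sequence within the same function: read the label,
    /// bail out on mismatch, and only then hand back the payload bytes.
    fn decode(bytes: &[u8], expected: LanguageField) -> Result<Vec<u8>, String> {
        let (&tag, payload) = bytes.split_first().ok_or("empty input")?;
        let label = match tag {
            0 => LanguageField::Pallas,
            1 => LanguageField::Vesta,
            _ => return Err(format!("unknown field tag {tag}")),
        };
        if label != expected {
            return Err(format!("expected {expected:?}, found {label:?}"));
        }
        Ok(payload.to_vec())
    }
}

fn main() {
    let data = Labeled { label: LanguageField::Pallas, val: vec![1, 2, 3] }.encode();
    assert_eq!(Labeled::decode(&data, LanguageField::Pallas).unwrap(), vec![1, 2, 3]);
    assert!(Labeled::decode(&data, LanguageField::Vesta).is_err());
}
```

Note the early error on mismatch: the payload bytes are never interpreted under the wrong field.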
For this case I do intend to deserialize in two steps. We first read from the filesystem to learn the field, and then we read from the byte vector to get the data with the correct field elements.
Yes. You can definitely have those two steps, but in sequence within the same function, with the above structure.
I don't get the idea. I don't want to deserialize the vector of bytes if there is some inconsistency in the field. I want to error earlier. In other words, the vector is desirable.
For the record, I did try something like you propose in my first attempt, but then I got stuck because Rust doesn't have dependent types. I wanted T<F> where F: LurkField.
Hmmm.
As a partially unrelated topic, I don't think any strategy that persists important Lurk data using an ad-hoc/bincode serialization of LanguageField is a good idea. It would be A Bad Thing if changes to that enum (which are so likely they are predictable, so predictable we should plan for them) led to supposedly durable data becoming unreadable.
How can we encode that information, then? Should we create another enum whose stability we try to ensure?
If you want to have a completely general, 'dynamic' serialization, then it's going to require more design.
But is that really what is needed here? I think you just need to know what LanguageField you are currently using. Then everything will be written and read using that LanguageField. Moreover, it follows from our general cryptographic assumptions that any value used as a commitment (or as the hash part of a Lurk expression) cannot be produced by hashing some preimage in another LanguageField.
Therefore, as long as you are looking values up by their hash/digest, then it's fine to completely segregate them by field. So, given that this work is still using a filesystem-based commitment store, you could have a directory structure like:
.lurk/commitments/pallas
.lurk/commitments/vesta
.lurk/commitments/bls12-381
.lurk/commitments/grumpkin
…
If the current LanguageField is pallas, then you can write all commitments to the .lurk/commitments/pallas directory and look all commitments up there too. There is never any possibility that a valid commitment (say you are looking it up) expressed as an element of pallas::Scalar will be stored elsewhere.
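The per-field layout above can be sketched with plain std::path code. The directory names follow the examples in this thread; the helper itself is hypothetical, not part of the PR:

```rust
use std::path::PathBuf;

// Illustrative sketch of the per-field commitment layout: every
// commitment is written to, and looked up in, the directory of the
// field it was created in. The helper name is hypothetical.

fn commitment_path(lurk_dir: &str, field: &str, digest_hex: &str) -> PathBuf {
    [lurk_dir, "commitments", field, digest_hex].iter().collect()
}

fn main() {
    let path = commitment_path(".lurk", "pallas", "a1b2c3");
    assert_eq!(path, PathBuf::from(".lurk/commitments/pallas/a1b2c3"));
}
```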
Also, since (as above) we cannot have collisions between fields (assuming indexes are always hashes, and the chosen field/hash combinations still preserve our security assumptions), you could even get the 'dynamic' behavior by searching for a given commitment in all available field directories. To prevent having to search in multiple places, you could use symlinks (for example) to provide a single index (perhaps hierarchically structured to avoid too-large directories, etc.)
Obviously, with a more powerful database management system than 'the file system', a different approach could be taken. But I think the above (especially the simplest version, which is probably all we need initially) should be fine.
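The 'dynamic' lookup fallback described above, searching every known field directory for a digest, can be sketched with std alone. The field names and function are illustrative, not the PR's code:

```rust
use std::path::{Path, PathBuf};

// Illustrative sketch of the 'dynamic' fallback: probe each known
// field directory for a digest. Cross-field collisions are ruled out
// by the assumptions discussed above, so the first hit is the answer.

fn find_commitment(root: &Path, fields: &[&str], digest: &str) -> Option<PathBuf> {
    fields
        .iter()
        .map(|field| root.join("commitments").join(field).join(digest))
        .find(|path| path.exists())
}

fn main() {
    let root = std::env::temp_dir().join("lurk-demo");
    let vesta_dir = root.join("commitments").join("vesta");
    std::fs::create_dir_all(&vesta_dir).unwrap();
    std::fs::write(vesta_dir.join("deadbeef"), b"").unwrap();

    let found = find_commitment(&root, &["pallas", "vesta"], "deadbeef");
    assert_eq!(found, Some(vesta_dir.join("deadbeef")));
}
```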
The point is: I think you may be trying to solve the wrong problem. Certainly, if the goal is a quick PR that decrees a format through code, then that is the case.
While we may want to eventually have a format that allows dynamically mixing field types, that will need to be worked out as a careful extension of z_data.
That works for commitments, but lurk verify <proof_id> would need to ask the user for the field, which is annoying. So I thought that we might as well just use the same infra that's already available to give us extra consistency, ensuring that we won't open a commitment that was generated in one field while we're in another.
Aren't proofs also always with respect to a known prover/verifier, which is itself necessarily parameterized on curve+field?
Also, why the heck are proofs (apparently) being stored with an id that is the timestamp rather than using content-addressing as previously?
Question: Are you trying to support verifying any Lurk proof, or do we assume that a given REPL session will only verify proofs of statements in the current field?
Either way, if you content-address the proofs, you should be able to use the symlink approach as above. (NOTE: in that model, you will need to check the actual location of resolved symlinks to get that meta-data — but I still think that's not what should be needed here.)
Opinion: Proof IDs should be the content address of the claim, just as they are in fcomm and clutch. This is actually important because it allows for caching of proofs. It's easy to imagine applications for which equivalent proofs are requested more than once (even many, many times). For example, that's how the current Lurk website works (or was designed to): we serve real proofs of expected claims in a way that is cost-effective but still accurate.
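The caching argument above can be made concrete with a small std-only sketch: if the proof ID is derived from the claim, a repeated claim hits the cache instead of re-running the prover. DefaultHasher stands in for a real cryptographic hash, and every name here is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Sketch of content-addressed proof IDs: the ID is a function of the
// claim, so equivalent requests share one cached proof. DefaultHasher
// is a placeholder for a real hash; this is not Lurk's actual API.

fn proof_id(claim: &str) -> u64 {
    let mut h = DefaultHasher::new();
    claim.hash(&mut h);
    h.finish()
}

struct ProofCache {
    proofs: HashMap<u64, String>, // id -> serialized proof
    prover_runs: usize,           // counts how often we actually "prove"
}

impl ProofCache {
    fn new() -> Self {
        Self { proofs: HashMap::new(), prover_runs: 0 }
    }

    fn prove(&mut self, claim: &str) -> String {
        let id = proof_id(claim);
        if !self.proofs.contains_key(&id) {
            self.prover_runs += 1; // expensive proving happens once per claim
            self.proofs.insert(id, format!("proof-of({claim})"));
        }
        self.proofs[&id].clone()
    }
}

fn main() {
    let mut cache = ProofCache::new();
    let p1 = cache.prove("(= (+ 1 1) 2)");
    let p2 = cache.prove("(= (+ 1 1) 2)");
    assert_eq!(p1, p2);
    assert_eq!(cache.prover_runs, 1); // second request was served from cache
}
```

With a timestamp ID, the two requests above would be distinct entries and the prover would run twice.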
A real outsourced-but-provable computation service could do the same.
Yeah, it will be easier to see once we generalize Lurk proofs across more backends (and fields). For now, I think it is enough.
Going back to the core point: https://gist.github.com/huitseeker/2c6bc8bbecba59cdd4e8c6cbfadebc5c
Note this is independent of what you actually use for a type label, so you could extend this solution to carry field, group, versioning information (...) to your satisfaction.
Thanks a lot! This looks 💥!
FieldData (credits to @huitseeker)
Closes #529