
Ruanpc/production #81

Merged
merged 12 commits into dev on Sep 9, 2024

Conversation

RUAN0007 (Collaborator) commented Sep 9, 2024

No description provided.

}
None => {
    println!("============Validation started============");
Collaborator

Rename validation to verification.

Collaborator Author

This is part of a requirement from the production team to keep the output identical to v1.

let equity_offset = RecursiveTargets::<RECURSION_BRANCHOUT_NUM>::pub_input_equity_offset();
let equity_sum = root_proof.proof.public_inputs[equity_offset].to_canonical_u64();

let debt_offset = RecursiveTargets::<RECURSION_BRANCHOUT_NUM>::pub_input_debt_offset();
Collaborator

Shouldn't the offsets be constants? It seems a bit odd logically to get offsets from targets.

Collaborator Author

The offsets are determined by the targets, so it is better to keep the offsets close to the targets.
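A minimal sketch of the layout being discussed, with hypothetical bodies (the real RecursiveTargets registers its public inputs in the circuit and the offsets follow from that registration order; the constant values below are assumptions, not the actual layout):

    // Hypothetical sketch only, not the project's actual definitions.
    pub struct RecursiveTargets<const RECURSION_BRANCHOUT_NUM: usize> {
        // ... circuit targets elided ...
    }

    impl<const RECURSION_BRANCHOUT_NUM: usize> RecursiveTargets<RECURSION_BRANCHOUT_NUM> {
        /// Offset of the aggregated equity sum in the proof's public inputs.
        pub fn pub_input_equity_offset() -> usize {
            0 // assumption: equity is the first registered public input
        }

        /// Offset of the aggregated debt sum in the proof's public inputs.
        pub fn pub_input_debt_offset() -> usize {
            // assumption: debt is registered immediately after equity
            Self::pub_input_equity_offset() + 1
        }
    }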

Comment on lines 69 to 86
let invalid_proof_paths = user_proof_paths
    .chunks(chunk_size)
    .map(|chunks| {
        let invalid_proof_paths = chunks
            .par_iter()
            .map(|user_proof_path| {
                let merkle_path = File::open(&user_proof_path).unwrap();
                let reader = std::io::BufReader::new(merkle_path);
                let proof: MerkleProof = from_reader(reader).unwrap();
                let result = proof.verify_merkle_proof(root_hash);
                (result, user_proof_path)
            })
            .filter(|(result, _)| result.is_err())
            .map(|(_, invalid_proof_path)| invalid_proof_path.to_str().unwrap().to_owned())
            .collect::<Vec<String>>();
        if verbose {
            bar.inc(chunks.len() as u64);
        }
Collaborator

You don't need to manually set the chunk size to the number of CPUs. The way rayon's threading works, it will automatically schedule num_cpus threads and pick up new work every time a task completes. Unless you specify the number of threads in a thread pool, rayon will use num_cpus threads at a time.

Collaborator Author

Cliff determines the chunk size as the number of CPUs, so I think it is better to keep it unchanged.

Collaborator

@EkamSinghPandher EkamSinghPandher Sep 9, 2024

Yeah, but this version is less efficient, and perhaps Cliff is unaware of this feature of rayon. It's a pretty trivial change.
Btw, I didn't notice that this was how we were doing parallelism in other sections. There is no need to concern ourselves with thread counts, especially since rayon's threading is optimised for exactly this kind of task (a work-stealing, greedy scheduler). We should change it because it is less efficient than simply using rayon's own scheduler.
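For reference, a minimal sketch of the de-chunked loop, reusing the names from the diff above (user_proof_paths, root_hash, MerkleProof, from_reader, verbose, bar); the surrounding types are assumed from context, and bar is assumed to be a thread-safe progress bar such as indicatif's. rayon's par_iter already spreads the items over a thread pool sized to the number of CPUs, so the outer chunks(chunk_size) layer can simply be dropped:

    // Sketch only: verify every proof in parallel and keep the paths that fail.
    // rayon's work-stealing scheduler balances items across its thread pool,
    // so no manual chunking is needed.
    let invalid_proof_paths = user_proof_paths
        .par_iter()
        .filter_map(|user_proof_path| {
            let file = File::open(user_proof_path).unwrap();
            let reader = std::io::BufReader::new(file);
            let proof: MerkleProof = from_reader(reader).unwrap();
            let invalid = proof.verify_merkle_proof(root_hash).is_err();
            if verbose {
                bar.inc(1); // assumed safe to call from worker threads
            }
            invalid.then(|| user_proof_path.to_str().unwrap().to_owned())
        })
        .collect::<Vec<String>>();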

Collaborator Author

Just pushed a commit to remove the chunking as you suggested. Please approve this PR.

Collaborator

@EkamSinghPandher EkamSinghPandher left a comment

LGTM

RUAN0007 merged commit 595a181 into dev on Sep 9, 2024
1 check passed