Commit 33732dd: [pie] add evaluation details for pie
madaan committed Oct 15, 2023 (parent: 568a1f4)
1 changed file with 1 addition and 1 deletion: docs/pie_eval.md
@@ -1,6 +1,6 @@
# Instructions for evaluating runtime for PIE experiments

- TLDR: From the self-refine outputs, create a flattened version of the outputs, and then use the PIE repo to evaluate the runtime and get a report. Parse the report using `src/pie/pie_eval.py`.
+ *TLDR: From the self-refine outputs, create a flattened version of the outputs, and then use the PIE repo to evaluate the runtime and get a report. Parse the report using `src/pie/pie_eval.py`.*

1. **Step 1** (construct yaml): For evaluating runtime for PIE experiments, we need a yaml file that contains information about the dataset, the model outputs, and the reference file. Note that Self-Refine generates outputs in a slightly different format than the evaluation expects: Self-Refine produces the outputs in an array (one version per refinement step), while the evaluation requires each program to be present in a single column as a standalone script. You can optionally use [prep_for_pie_eval.py](https://github.com/madaan/self-refine/tree/main/src/pie) for this. `prep_for_pie_eval.py` creates a single file where the output from the i-th step is present in the `attempt_i_code` column. The following is an example for evaluating the initial output (`y0`).
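
   A minimal sketch of what such a yaml might look like, assuming placeholder key names and file paths (the PIE repo defines the actual schema; only the `attempt_0_code` column name comes from the text above):

   ```yaml
   # Hypothetical sketch: every key name and path here is a placeholder,
   # not the PIE repo's confirmed schema.
   dataset: data/pie/test.jsonl                # dataset being evaluated
   model_outputs: outputs/pie/flattened.jsonl  # flattened file produced by prep_for_pie_eval.py
   code_column: attempt_0_code                 # column holding the program to time (y0 here)
   reference_file: data/pie/references.jsonl   # reference programs to compare runtimes against
   report_path: reports/pie/y0_report.json     # where the runtime report is written
   ```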

