Add basic smoke tests for topology branch #897
Conversation
Nope, none of those are in the files you've touched. We should all be using the pre-commit hooks; after `pre-commit install` they will then highlight all these problems before changes can be pushed.
Bunch of comments in-line, great work writing all these tests @SylviaWhittle 👍
The `tests/resources/` directory is starting to look somewhat cluttered. I think many of the `.csv` files could be removed if we used `pytest-regtest` (some comments in-line on this).

That still leaves a lot of files though; many of the filenames carry common suffixes and it's clear that many of the objects (`.pkl` and `.npy`) are related. I can think of two options to improve organisation...
- Bundle similar objects into a dictionary with the keys formed from the component that distinguishes them and save as a single pickle.
```
example_rep_int_all_images.pkl
example_rep_int_all_images_nodestats.pkl
example_rep_int_disordered_crop_data.pkl
example_rep_int_disordered_tracing_stats.csv
example_rep_int_grainstats_additions_df.csv
example_rep_int_grainstats_additions_nodestats.csv
example_rep_int_labelled_grain_mask_thresholded.npy
example_rep_int_nodestats_branch_images.pkl
example_rep_int_nodestats_data.pkl
example_rep_int_ordered_tracing_data.pkl
example_rep_int_ordered_tracing_full_images.pkl
example_rep_int_ordered_tracing_grainstats_additions.csv
example_rep_int_splining_data.pkl
example_rep_int_splining_grainstats_additions.csv
example_rep_int_splining_molstats.csv
```
Would go into a dictionary with keys of...
```python
{
    "all_images": <object>,
    "all_images_nodestats": <object>,
    "disordered_crop_data": <object>,
    "disordered_tracing_stats": <object>,
    "grainstats_additions_df": <object>,
    "grainstats_additions_nodestats": <object>,
    "labelled_grain_mask_thresholded": <object>,
    "nodestats_branch_images": <object>,
    "nodestats_data": <object>,
    "ordered_tracing_data": <object>,
    "ordered_tracing_full_images": <object>,
    "ordered_tracing_grainstats_additions": <object>,
    "splining_data": <object>,
    "splining_grainstats_additions": <object>,
    "splining_molstats": <object>,
}
```
...and that could be saved as `tests/resources/example_rep_int.pkl`.
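A one-off bundling script might look like the following sketch. The helper name is hypothetical and, for brevity, it stores each file's raw bytes; a real version would load each `.pkl`/`.npy`/`.csv` into its native object before pickling.

```python
import pickle
from pathlib import Path


def bundle_resources(resources: Path, prefix: str = "example_rep_int_") -> Path:
    """Collect prefixed resource files into one dict keyed by the distinguishing suffix."""
    bundle = {}
    for path in sorted(resources.glob(f"{prefix}*")):
        # "example_rep_int_splining_data.pkl" -> key "splining_data"
        key = path.name.removeprefix(prefix).rsplit(".", 1)[0]
        bundle[key] = path.read_bytes()  # simplification: real code would load the object
    out = resources / f"{prefix.rstrip('_')}.pkl"
    with out.open("wb") as f:
        pickle.dump(bundle, f)
    return out
```

Tests would then load `example_rep_int.pkl` once (e.g. in a fixture) and index into it by key.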
- Alternatively create a nested directory structure under `tests/resources` reflecting the common prefixes...

```
tests/resources/node/
tests/resources/catenanes/
tests/resources/example_rep_int/
```

...and drop the prefixes from the filenames.
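The move itself could be scripted in a few lines; this sketch (the function name is invented for illustration) relocates files sharing a prefix into a subdirectory and strips the prefix from each name:

```python
from pathlib import Path


def regroup_by_prefix(resources: Path, prefix: str) -> None:
    """Move e.g. example_rep_int_*.csv into resources/example_rep_int/*.csv."""
    target = resources / prefix.rstrip("_")
    target.mkdir(exist_ok=True)
    for path in resources.glob(f"{prefix}*"):
        if path.is_file():  # skip the target directory itself and any subdirectories
            path.rename(target / path.name.removeprefix(prefix))
```

In the repository this would be done with `git mv` (or followed by `git add`) so history is preserved.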
```python
np.testing.assert_equal(node_dict_result, nodestats_catenane_node_dict)
np.testing.assert_equal(image_dict_result, nodestats_catenane_image_dict)
np.testing.assert_array_equal(nodestats_catenane.all_connected_nodes, nodestats_catenane_all_connected_nodes)
# Debugging
```
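For reference, `np.testing.assert_equal` recurses into dictionaries and lists, so nested node/image dictionaries of arrays can be compared directly without flattening them first (example data invented for illustration):

```python
import numpy as np

expected = {"node_1": {"coords": np.array([[0, 1], [2, 3]]), "branches": [np.arange(3)]}}
result = {"node_1": {"coords": np.array([[0, 1], [2, 3]]), "branches": [np.arange(3)]}}

# passes silently; raises AssertionError describing the first mismatch otherwise
np.testing.assert_equal(result, expected)
```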
This is presumably to update the test files when the underlying code changes?

The syrupy package, which is compatible with pytest, might be an alternative to this. I've not used it yet, only became aware of it at RSECon2024, but it's similar to `pytest-regtest` I think.
`tests/tracing/test_nodestats.py`
```python
# Load the nodestats catenane node dict from pickle
with Path(RESOURCES / "nodestats_analyse_nodes_catenane_node_dict.pkl").open("rb") as f:
```
If the arrays and dictionaries aren't too large I'd be inclined to use the `pytest-regtest` approach to comparing these.
I think perhaps we should use `pytest-regtest` to compare the statistics that are produced, rather than having code to update the CSV files when methods change and then read them into `pd.DataFrame()` and `pd.testing.assert_frame_equal()`.

We use this approach for other CSVs that the pipeline produces so it should be ok here, unless there is a specific reason for this approach?
Very happy to have pushback on this, but my thinking was to keep it all the same style of test, with assertions and variables loaded explicitly, since when a test fails I find it easier to debug using a debugger and IDE tools. I tried debugging tests that use `pytest-regtest` and it's rather difficult with large objects. Perhaps I went about it wrong?

I think we are anticipating quite a bit of iteration, so making it as smooth as possible to view exactly what changed, check whether it's valid, and then easily update the values is useful.

IIRC `pytest-regtest` has the excellent override method of `pytest --regtest-reset`, which makes updating tests a dream, but I always have to add code and dig to see if the change is legitimate. Do you have a good way of inspecting changes in regtests?
When I've had to update tests from `pytest-regtest` it is in essence a `diff`, which is what many of the assertions show in one form or another (whether that is default Python / pytest / `np.testing.assert*` / `pd.testing.assert*`).
Diffs can be tricky to use and understand at times, particularly when it's a few numbers in the middle of a large CSV, and perhaps NumPy/Pandas help with this, but there are some alternatives and features that make it easier.

I find the specific changes within a line that Git can be configured to show really useful; one tool for this is delta, but personally I use difftastic, as it also understands the structure of different programming languages and Git can easily be configured to work with it.
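For example, assuming difftastic is installed as the `difft` binary, a minimal `~/.gitconfig` entry to route diffs through it looks like:

```
[diff]
    external = difft
```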
The `--regtest-reset` approach is in my view a lot quicker than having to uncomment a bunch of lines to write a CSV or pickle out.
Perhaps we should look at how syrupy compares to `pytest-regtest` and the manual approach?
Broadly I think it is useful to be consistent across a project though: pick one approach and stick with it. It reduces cognitive overhead and makes it easier for others to get to grips with how the repository works (this is true of more than just tests, e.g. always using `pathlib` rather than mixing it up with the `os` module).
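To illustrate the consistency point with a small invented example, here is the same path built both ways:

```python
import os.path
from pathlib import Path

# pathlib composes paths with `/` and keeps everything a Path object...
resource = Path("tests") / "resources" / "example_rep_int.pkl"

# ...whereas the os.path style juggles plain strings through free functions
legacy = os.path.join("tests", "resources", "example_rep_int.pkl")

assert str(resource) == legacy
assert resource.suffix == ".pkl" and resource.stem == "example_rep_int"
```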
I do have pre-commit; I was going to tidy it up in a follow-up PR focused on just that, keeping this PR self-contained, but am happy to fix it in this PR 👍
Force-pushed f86f074 to 58a4b99
I'm not too bothered about having fixtures only being used once since they may be of use in the future.
Couple of in-line comments. I might take a look at what can be shifted into `pytest-regtest` (or `syrupy`) as I feel we should make updating tests as simple as possible.
`.../resources/tracing/disordered_tracing/rep_int_disordered_tracing_grainstats_additions_df.csv`
Yes please, if you don't mind. I think names should cascade through and be consistent across functions; it makes it easier to follow the flow of data through processing as it doesn't incur the mental overhead of translating.
Done! There should now be a new commit to `maxgamill-sheffield/topology` which can be pulled / rebased :)
Force-pushed 9d9075a to bcd2777
Rebased successfully 👍
@ns-rse given the complexity with finding the best alternative to
I think I've addressed all your points except the ones relating to how we assert objects are equal to each other. Apologies if I've skipped anything else; it's unintentional.
Yep, fine with that, just write up an issue if there isn't one already so we have it tracked in the backlog.
Co-authored-by: Neil Shephard <n.shephard@sheffield.ac.uk>
Looks good, happy to deal with checking dictionaries in a separate issue/branch, and sorry for forgetting about #913 which already documents the issue.

I noticed when checking that we currently exclude `^nodestats` and `tracingfuncs` from being checked with `numpydoc-validation`, so we should do that in separate issues (see #919 and #920).
This PR adds basic smoke tests for disordered tracing, node stats, ordered tracing and splining.