For backup and reproducibility, we should probably have a function that downloads the files.
rdt = RdtClient ...
rdt.listArtifacts
loop to download each of them
compress the folder
Put it somewhere when you publish a paper.
Do we ignore subfolders? Or recurse into them? Or take a list of subfolders to include ...
This seems useful when we publish a paper that needs the files and want to put the data in a permanent repository (e.g., the Stanford Digital Repository). Then, if the AWS site goes away, we still have the files available.
Probably other uses, too.
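The steps above could be sketched roughly like this in MATLAB. This is just a sketch: the function name `rdtArchiveArtifacts` is hypothetical, and the `'destinationFolder'` parameter and the `artifactId` field are assumptions about the RdtClient API that would need checking against the actual RemoteDataToolbox methods.

```matlab
% Hypothetical helper: download every artifact under the client's current
% remote path into a local folder, then compress that folder into a single
% archive suitable for deposit in a permanent repository.
% NOTE: listArtifacts/readArtifact argument names below are assumptions.
function archiveFile = rdtArchiveArtifacts(rdt, destinationFolder)
    artifacts = rdt.listArtifacts();  % enumerate artifacts on the remote
    for ii = 1:numel(artifacts)
        % fetch each artifact into the local destination folder
        rdt.readArtifact(artifacts(ii).artifactId, ...
            'destinationFolder', destinationFolder);
    end
    % pack the whole folder into one gzipped tarball
    archiveFile = [destinationFolder '.tgz'];
    tar(archiveFile, destinationFolder);
end
```

Usage might look like `rdt = RdtClient('isetbio'); rdtArchiveArtifacts(rdt, 'paper-data');`, after which `paper-data.tgz` is the file to deposit alongside the paper. Handling subfolders (ignore, recurse, or take a list) would hang off the `listArtifacts` call.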
This idea has some features in common with a RenderToolbox feature called a recipe, which packages all of the instructions and data needed to reproduce a rendering into a tarball. Here we probably want something more generic, but there might be some use in thinking about the two concepts together, particularly since making RTB play nice with the Archiva server is on Ben's list.
I have been thinking about this as something we would do on the upload side, but I can see how doing it at the download side would have advantages.