Download all the artifacts (files) in a remote directory and zip them into a single file #70

Open
wandell opened this issue Apr 28, 2016 · 1 comment
wandell (Contributor) commented Apr 28, 2016

For backup and reproducibility, we should probably have a function that downloads the files.

rdt = RdtClient ...
rdt.listArtifacts
loop to download each of them
compress the folder
Put it somewhere when you publish a paper.
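
A minimal sketch of that loop, assuming the RemoteDataToolbox RdtClient API; the configuration name, remote path, and the destinationFolder argument to readArtifacts are assumptions that would need checking against the toolbox:

% Hypothetical sketch: download every artifact under one remote path, then zip the folder.
rdt = RdtClient('isetbio');                  % example configuration name
rdt.crp('/resources/data');                  % example remote path

artifacts = rdt.listArtifacts();             % artifacts under the current remote path

downloadDir = fullfile(tempdir, 'rdt-archive');
if ~exist(downloadDir, 'dir')
    mkdir(downloadDir);
end

for ii = 1:numel(artifacts)
    % Assumes readArtifacts accepts a destination folder; if it does not,
    % the cached local file for each artifact could be copied over instead.
    rdt.readArtifacts(artifacts(ii), 'destinationFolder', downloadDir);
end

% A single zip file that can be deposited in a permanent repository.
zip(fullfile(tempdir, 'rdt-archive.zip'), downloadDir);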

Do we ignore subfolders? Descend into them recursively? Or take a list of subfolders to include?

This seems useful when we publish a paper that needs the files and we want to put the data in a permanent repository (e.g., the Stanford Digital Repository). Then if the AWS site goes away, we still have the files available.

Probably other uses, too.

wandell self-assigned this Apr 28, 2016
DavidBrainard (Contributor) commented

This idea has some features in common with a concept in RenderToolbox called a recipe, which packages up all the instructions and data needed to reproduce a rendering into a tarball. Here we probably want something more generic, but there may be value in thinking about the two concepts together, particularly since making RTB play nicely with the Archiva server is already on Ben's list.

I have been thinking about this as something we would do on the upload side, but I can see how doing it on the download side would have advantages.

Best,

David

