Warn and skip on glacier objects that may fail for s3 commands #1581

Merged: 10 commits merged into aws:develop from the ignore-glacier branch on Nov 4, 2015

Conversation

kyleknap (Contributor)

This pull request does the following:

  • Skips operations that involve transferring a glacier object. This includes downloading, copying, and moving an object when the source is a glacier object. When a glacier object is skipped, a warning is displayed and the command's return code becomes 2. The most important part of this functionality is that we do not even attempt the operation, which could otherwise slow a command down considerably if you have many or large glacier objects.
  • Adds an --ignore-glacier-warnings argument that lets you hide the glacier warnings and keeps an encountered glacier object from setting the return code to 2 (see the example after this list). Note that glacier objects are still always skipped if the command involves downloading, copying, or moving an object when the source is a glacier object.
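For example, a sync that encounters glacier objects could be run like this (the bucket and local path are made up for illustration):

```
# Glacier source objects are skipped either way; the flag suppresses the
# warnings on stderr and keeps the return code from being set to 2.
aws s3 sync s3://example-bucket/archive ./restore-dir --ignore-glacier-warnings
```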

All of the integration tests pass.

Fixes #748

cc @jamesls @mtdowling @rayluo @JordonPhillips

kyleknap added the pr:needs-review label on Oct 21, 2015
@@ -324,4 +326,4 @@ def _list_single_object(self, s3_path):
        file_size = int(response['ContentLength'])
        last_update = parse(response['LastModified'])
        last_update = last_update.astimezone(tzlocal())
        return s3_path, file_size, last_update
Member

This seems like a weird API. We're returning parts of the response (such as LastModified) as well as the entire response wholesale. This can probably be cleaned up.

kyleknap (Contributor, Author)

I kept it like this to keep parity with the output for local file listings. There is no response associated with a local listing. I will probably break up the logic such that the listing of files and listing of s3 objects do not have the same interface.

kyleknap added the incorporating-feedback label and removed the pr:needs-review label on Oct 27, 2015
kyleknap force-pushed the ignore-glacier branch 3 times, most recently from 7014821 to 20a8a07, on October 28, 2015
@kyleknap (Contributor, Author)

Alright, I think I incorporated the feedback. I refactored the FileGenerator a bit. Mainly, the change allows list_files and list_objects to yield data that may differ, as opposed to forcing the two methods to return same-size tuples that may or may not be completely filled. Ready for another look.

kyleknap added the pr:needs-review label and removed the incorporating-feedback label on Oct 28, 2015
Previously we would try to process an operation (FileInfo) no matter what it was once it reached the S3Handler. Now we check whether the operation will fail given that the FileInfo object represents a Glacier object. If it will, we skip it instead of letting the error handling deal with it, which is much slower when you have large or many Glacier objects.
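A rough sketch of the kind of check described in that commit (the names, attributes, and warning text here are illustrative only, not the actual S3Handler implementation):

```python
# Illustrative sketch, not the real aws-cli code.
GLACIER_STORAGE_CLASS = 'GLACIER'
# Operations that need to read the source object and therefore fail on Glacier.
OPS_NEEDING_SOURCE_DATA = {'download', 'copy', 'move'}

def should_skip_glacier(file_info, ignore_glacier_warnings, warnings):
    """Return True if the operation would fail because the source is Glacier."""
    is_glacier = getattr(file_info, 'storage_class', None) == GLACIER_STORAGE_CLASS
    if is_glacier and file_info.operation_name in OPS_NEEDING_SOURCE_DATA:
        if not ignore_glacier_warnings:
            # An emitted warning is what drives the return code of 2.
            warnings.append('warning: Skipping %s. Object is of storage class '
                            'GLACIER.' % file_info.src)
        return True
    return False
```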
This argument still causes glacier objects to be skipped, but the skip warning is no longer printed to stderr and no longer affects the return code.
list_files and list_objects no longer yield the same data. The data required from S3 is no longer the same as the data required from the local file system, so the data yielded by both methods was pared down to the minimum each one needs. Specifically, the local file data is consolidated into an object when yielding from ``list_files``, paralleling how the S3 listing yields its response data.
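As a loose illustration of that shape of change (the function signatures and the LocalFileStats type below are invented for the example, not the real FileGenerator API), the local listing can yield a compact stats object while the S3 listing yields the path together with the raw response data:

```python
# Hypothetical sketch of listing generators that yield differently shaped data.
import os
from collections import namedtuple

LocalFileStats = namedtuple('LocalFileStats', ['size', 'last_update'])

def list_files(directory):
    # Local listings have no service response; yield a compact stats object.
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        stats = LocalFileStats(os.path.getsize(path), os.path.getmtime(path))
        yield path, stats

def list_objects(client, bucket):
    # S3 listings keep the response data so callers can read extra fields,
    # for example the StorageClass used by the Glacier skip check.
    paginator = client.get_paginator('list_objects')
    for page in paginator.paginate(Bucket=bucket):
        for content in page.get('Contents', []):
            yield 's3://%s/%s' % (bucket, content['Key']), content
```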
@kyleknap (Contributor, Author) commented Nov 4, 2015

Alright I cleaned up the code a bit more. Should be good to look at.

@jamesls (Member) commented Nov 4, 2015

:shipit:

'help_text': (
    'Turns off glacier warnings. Warnings about operations that cannot '
    'be performed because it involves copying, downloading, or moving '
    'a glacier object will no longer be printed to standard error and '
Member

Changing this to "and will no longer cause the return code of the command to be 2" might make it a bit more clear that using this option will prevent an error exit rather than cause one.

kyleknap (Contributor, Author)

Updated

@mtdowling (Member)

🚢

kyleknap added a commit that referenced this pull request Nov 4, 2015
Warn and skip on glacier objects that may fail for s3 commands
kyleknap merged commit 1bc2929 into aws:develop on Nov 4, 2015
kyleknap deleted the ignore-glacier branch on November 4, 2015
thoward-godaddy pushed a commit to thoward-godaddy/aws-cli that referenced this pull request Feb 12, 2022
* fix: add version to `samconfig.toml` file

- support a version key; any float is okay.
- if a config file is present and the version key is missing, we do not
  process it (a sketch of this check follows below).
- if a config file is missing, that's fine; this check does not get in
  the way.
- validation logic to determine whether a SAM CLI version is compatible can
  be written later.

* bugfix: do not continuously read every time samconfig.put is called
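A minimal sketch of the version-key rule described in that commit (the function name and dict-based config are hypothetical, not the SAM CLI implementation):

```python
# Hypothetical helper: only process a config whose version key is a float.
def should_process_config(config):
    # A missing config file never reaches this check; a present config
    # without a (float) version key is simply not processed.
    return isinstance(config.get('version'), float)
```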