Example for downloading full RDS log doesn't actually work #2268
Comments
Could you post the |
@JordonPhillips hey there. I get several of those
Before each one of those but the first I get this:
With that I’m getting, out of three tries, a 17MB file, when the file on the Web Console says it’s 2.3GB. Let me know if you need anything else. |
@anbotero It looks like what we're sending to the service is correct, so whatever the service is returning must be strange. Your debug log should also contain sections that have |
@JordonPhillips indeed, it indicates something like that. This is the last of those responses:
Previous iterations of |
@anbotero With that in mind, and since it seems to be happening on the console, I would recommend raising this issue on the service forums. I'll let them know as well. |
Hi, |
Having issues with this also. I'm unable to download more than 1.3-1.5 GB of a log and then I get either the following errors
or
Using the following versions: on an EC2 instance, aws-cli/1.10.1 Python/3.5.2 Linux/4.4.0-43-generic botocore/1.3.23, and on my laptop, aws-cli/1.10.56 Python/2.7.11 Darwin/16.1.0 botocore/1.4.46 |
I am also seeing this error. See the output below. Additionally, I think it's because the messages are being truncated. There is a truncation for each log file portion except for the last one.
|
This is a horrible bit of code (I did not have much time to spend on it), but it does let me get the entire log file (I hope)...
I pass in the logfile name, and it loops through until it gets no more new markers.
I end up with a file called postgresql.log.2017-02-09-15 that has the entire log (I hope). As I mentioned, this is very quick, so feel free to improve it. |
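(Not the poster's script, which was elided above, but a minimal Python/boto3 sketch of the same marker loop; the instance identifier, log file name, output file name, and NumberOfLines value are placeholders.)

```python
import boto3

# Placeholders -- substitute your own instance identifier and log file name.
DB_INSTANCE = "my-db-instance"
LOG_FILE = "error/postgresql.log.2017-02-09-15"

rds = boto3.client("rds")

marker = "0"  # "0" means start from the beginning of the file
with open("postgresql.log.2017-02-09-15", "w") as out:
    while True:
        resp = rds.download_db_log_file_portion(
            DBInstanceIdentifier=DB_INSTANCE,
            LogFileName=LOG_FILE,
            Marker=marker,
            NumberOfLines=10000,
        )
        # Each response carries one chunk of the log plus the next marker.
        out.write(resp.get("LogFileData", ""))
        if not resp.get("AdditionalDataPending"):
            break
        marker = resp["Marker"]
```

This hits the same API as the CLI, so it may still drop data in some of the cases reported here; it just makes the marker handling explicit.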
Same issue here. Using the last Marker as the start-token value allows you to grab the rest of the log. |
@stealthycoin any reason why this was closed? Was this fixed? |
@jlintz Asking myself the same question. Seeing the same/similar problem with
only gives me ~300mb of a ~950mb logfile. |
The same here. I cannot even download a 100 MB log.
|
I don't know why that issue has been closed. I said that using the proposed shell script was a workaround, but not that the code is fixed! |
Has anyone found a fix yet or tried to contact AWS? |
@chefone I use the script from @fmmatthewzeemann as a workaround. |
Yeah but it should be fixed at the API level ... |
I solved this by using the Golang SDK instead of the AWS CLI. |
The workaround didn't work for me, as I guess the output is different in my version of aws-cli, but I'm just chiming in to say that I am seeing this issue with 1.11.183.
|
I checked with AWS support about this and they said that the implementation doesn't work in the CLI or Boto3. They gave me some code that works over REST and I tweaked it into a module. I apologize if this is off-topic for this forum, but I thought it would help a lot of people on this thread. Attached is the starter code the tech gave me. It seems to work so far. |
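(The attached code was elided above; the REST-only route referred to here is the DownloadCompleteDBLogFile action, which isn't exposed through the CLI or boto3. Below is only a rough sketch of signing such a request with botocore. The URL path, region, instance identifier, and log file name are assumptions, so verify them against the RDS documentation.)

```python
import botocore.session
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Assumed values -- replace with your own region, instance, and log file name.
REGION = "us-east-1"
DB_INSTANCE = "my-db-instance"
LOG_FILE = "error/postgresql.log.2017-02-09-15"

# Path believed to match the documented DownloadCompleteDBLogFile REST action; verify before use.
url = f"https://rds.{REGION}.amazonaws.com/v13/downloadCompleteLogFile/{DB_INSTANCE}/{LOG_FILE}"

# Sign a plain GET request with SigV4 using whatever credentials botocore can find.
creds = botocore.session.get_session().get_credentials()
request = AWSRequest(method="GET", url=url)
SigV4Auth(creds, "rds", REGION).add_auth(request)

# Stream the response straight to disk so large logs don't sit in memory.
with requests.get(url, headers=dict(request.headers), stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("postgresql.log.complete", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)
```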
Another alternative to try, which worked for me: the deprecated
|
@marksher thank you very much for that code. I cleaned it up just a little bit and turned it into something I can run in a slightly more automated fashion: https://gist.github.com/joer14/4e5fc38a832b9d96ea5c3d5cb8cf1fe9 |
Never mind; that is what the option does, as said in the doc: "Downloads all or a portion of the specified log file, up to 1 MB in size." |
I modified the code of @andrewmackett's version of @joer14's gist to support retries in case instance-metadata-based credentials expire during continuous fetching of logs, along with some CLI argument magic. It also supports using profiles. https://gist.github.com/rams3sh/15ac9487f2b6860988dc5fb967e754aa |
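(Not @rams3sh's actual gist, but a rough illustration of the retry idea with boto3: catch the expired-credentials error and rebuild the client before retrying. The error codes checked here are assumptions.)

```python
import boto3
from botocore.exceptions import ClientError


def fetch_portion(instance_id, log_file, marker, client=None):
    """Fetch one log portion, rebuilding the client once if credentials expired."""
    client = client or boto3.client("rds")
    for attempt in (1, 2):
        try:
            resp = client.download_db_log_file_portion(
                DBInstanceIdentifier=instance_id,
                LogFileName=log_file,
                Marker=marker,
                NumberOfLines=10000,
            )
            return client, resp
        except ClientError as err:
            code = err.response.get("Error", {}).get("Code", "")
            # Assumed set of "credentials went stale" error codes.
            if attempt == 2 or code not in ("ExpiredToken", "ExpiredTokenException", "RequestExpired"):
                raise
            # Rebuild the client so botocore picks up fresh credentials, then retry once.
            client = boto3.client("rds")
```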
If anybody is still running into this problem, I created a small Python script that you can use to download RDS log files and save them in S3: https://github.com/ccampo133/rds-logs-to-s3 |
The instructions from #1617 do not download the entire log file as documented.
Note that the log file has a size of 1908814 bytes, but the downloaded file is only 1212017 bytes.
It's unclear if it's even possible to download a log file in a simple shell script, as the pagination tokens do not seem to be available with --output text. I'm guessing one would need to parse JSON or XML to get them.
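(On the "parse JSON or XML" point: if a single response is captured as JSON rather than text, the token is just a top-level key. A tiny sketch, assuming the response was saved to a hypothetical portion.json file.)

```python
import json

# Hypothetical file holding one JSON response from download-db-log-file-portion.
with open("portion.json") as fh:
    portion = json.load(fh)

# The pagination token and the "more data" flag are plain top-level keys.
next_marker = portion.get("Marker")
more_to_fetch = portion.get("AdditionalDataPending", False)
print(next_marker, more_to_fetch)
```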