Doc fixes and bump to v4.1.9 #204

Merged · 2 commits · Mar 25, 2019
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,7 @@
+## 4.1.9
+- Added configuration information for multiple s3 outputs to documentation [#196](https://github.com/logstash-plugins/logstash-output-s3/pull/196)
+- Fixed formatting problems and typographical errors [#194](https://github.com/logstash-plugins/logstash-output-s3/pull/194), [#201](https://github.com/logstash-plugins/logstash-output-s3/pull/201), and [#204](https://github.com/logstash-plugins/logstash-output-s3/pull/204)
+
## 4.1.8
- Add support for setting multipart upload threshold [#202](https://github.com/logstash-plugins/logstash-output-s3/pull/202)

56 changes: 26 additions & 30 deletions docs/index.asciidoc
@@ -21,43 +21,41 @@ include::{include_path}/plugin_header.asciidoc[]

==== Description

-INFORMATION:
-
This plugin batches and uploads logstash events into Amazon Simple Storage Service (Amazon S3).

-Requirements:
-
-* Amazon S3 Bucket and S3 Access Permissions (Typically access_key_id and secret_access_key)
-* S3 PutObject permission
-
-S3 outputs create temporary files into the OS' temporary directory, you can specify where to save them using the `temporary_directory` option.
+S3 outputs create temporary files into the OS' temporary directory.
+You can specify where to save them using the `temporary_directory` option.

-IMPORTANT: For configurations containing multiple s3 outputs with the restore option enabled, each output should define its own 'temporary_directory'
+IMPORTANT: For configurations containing multiple s3 outputs with the restore
+option enabled, each output should define its own 'temporary_directory'.
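
A minimal sketch of such a configuration, using hypothetical bucket names and directories, where each output gets its own `temporary_directory`:

[source,ruby]
-----
output {
  s3 {
    bucket              => "logs-bucket-a"        # hypothetical bucket
    restore             => true
    temporary_directory => "/tmp/logstash/s3-a"   # unique per output
  }
  s3 {
    bucket              => "logs-bucket-b"        # hypothetical bucket
    restore             => true
    temporary_directory => "/tmp/logstash/s3-b"   # unique per output
  }
}
-----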

-S3 output files have the following format
+===== Requirements

-ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt
+* Amazon S3 Bucket and S3 Access Permissions (Typically access_key_id and secret_access_key)
+* S3 PutObject permission

+===== S3 output file
+
+[source,txt]
+-----
+`ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt`
+-----

|=======
-| ls.s3 | indicate logstash plugin s3 |
+| ls.s3 | indicates logstash plugin s3 |
| 312bc026-2f5d-49bc-ae9f-5940cf4ad9a6 | a new, random uuid per file. |
| 2013-04-18T10.00 | represents the time whenever you specify time_file. |
-| tag_hello | this indicates the event's tag. |
-| part0 | this means if you indicate size_file then it will generate more parts if your file.size > size_file. When a file is full it will be pushed to the bucket and then deleted from the temporary directory. If a file is empty, it is simply deleted. Empty files will not be pushed |
+| tag_hello | indicates the event's tag. |
+| part0 | If you indicate size_file, it will generate more parts if your file.size > size_file.
+When a file is full, it gets pushed to the bucket and then deleted from the temporary directory.
+If a file is empty, it is simply deleted. Empty files will not be pushed. |
|=======
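
A sketch of how those name segments relate to the plugin's settings; the values below are hypothetical, and this assumes `size_file` is in bytes and `time_file` in minutes for the 4.x plugin:

[source,ruby]
-----
output {
  s3 {
    bucket    => "my-bucket"   # hypothetical bucket name
    size_file => 2048          # bytes; streams larger than this roll over to part1, part2, ...
    time_file => 5             # minutes; reflected in the timestamp section of the name
    tags      => ["hello"]     # appears as tag_hello in the file name
  }
}
-----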

-Crash Recovery:
-
-* This plugin will recover and upload temporary log files after crash/abnormal termination when using `restore` set to true
+===== Crash Recovery
+
+This plugin will recover and upload temporary log files after crash/abnormal termination when using `restore` set to true

-==== Usage:
+===== Usage
This is an example of logstash config:
[source,ruby]
output {
@@ -124,11 +122,11 @@ output plugins.

This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:

-1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config
-2. External credentials file specified by `aws_credentials_file`
-3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
-4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
-5. IAM Instance Profile (available when running inside EC2)
+. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config
+. External credentials file specified by `aws_credentials_file`
+. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
+. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
+. IAM Instance Profile (available when running inside EC2)
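
A sketch of options 1 and 2 from the list above, with placeholder credentials (the standard AWS documentation example keys) and a hypothetical file path:

[source,ruby]
-----
output {
  s3 {
    bucket            => "my-bucket"                                  # hypothetical
    access_key_id     => "AKIAIOSFODNN7EXAMPLE"                       # placeholder (option 1)
    secret_access_key => "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"   # placeholder (option 1)
    # Alternatively, point at an external credentials file (option 2):
    # aws_credentials_file => "/etc/logstash/aws_credentials.yml"     # hypothetical path
  }
}
-----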

[id="plugins-{type}s-{plugin}-additional_settings"]
===== `additional_settings`
@@ -393,8 +391,6 @@ Specify how many workers to use to upload the files to S3
The common use case is to define permissions on the root bucket and give Logstash full access to write its logs.
In some circumstances, you need finer-grained permissions on a subfolder; this option allows you to disable the check at startup.
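
Assuming this paragraph documents the `validate_credentials_on_root_bucket` option, a sketch of disabling the startup check while writing under a prefix:

[source,ruby]
-----
output {
  s3 {
    bucket => "my-bucket"        # hypothetical
    prefix => "logs/subfolder"   # write under a subfolder the credentials can access
    validate_credentials_on_root_bucket => false   # skip the write check on the bucket root at startup
  }
}
-----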



[id="plugins-{type}s-{plugin}-common-options"]
include::{include_path}/{type}.asciidoc[]

2 changes: 1 addition & 1 deletion logstash-output-s3.gemspec
@@ -1,6 +1,6 @@
Gem::Specification.new do |s|
s.name = 'logstash-output-s3'
-s.version = '4.1.8'
+s.version = '4.1.9'
s.licenses = ['Apache-2.0']
s.summary = "Sends Logstash events to the Amazon Simple Storage Service"
s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program"