(Backport to 6.x) Log rotation by time interval (#8349) #8489

Merged: 1 commit, Oct 1, 2018
1 change: 1 addition & 0 deletions CHANGELOG.asciidoc
@@ -83,6 +83,7 @@ https://github.com/elastic/beats/compare/v6.4.0...6.x[Check the HEAD diff]

*Affecting all Beats*

- Added time-based log rotation. {pull}8349[8349]
- Add backoff on error support to redis output. {pull}7781[7781]
- Allow for cloud-id to specify a custom port. This makes cloud-id work in ECE contexts. {pull}7887[7887]
- Add support to grow or shrink an existing spool file between restarts. {pull}7859[7859]
11 changes: 9 additions & 2 deletions auditbeat/auditbeat.reference.yml
@@ -530,11 +530,11 @@ output.elasticsearch:
# and retry until all events are published. Set max_retries to a value less
# than 0 to retry until all events are published. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Logstash request. The
# default is 2048.
#bulk_max_size: 2048

# The number of seconds to wait for responses from the Logstash server before
# timing out. The default is 30s.
#timeout: 30s
@@ -1089,6 +1089,13 @@ logging.files:
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600

# Enable log file rotation on time intervals in addition to size-based rotation.
# Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
# are boundary-aligned with minutes, hours, days, weeks, months, and years as
# reported by the local system clock. All other intervals are calculated from the
# unix epoch. Defaults to disabled.
#interval: 0

# Set to true to log messages in json format.
#logging.json: false

11 changes: 9 additions & 2 deletions filebeat/filebeat.reference.yml
@@ -1203,11 +1203,11 @@ output.elasticsearch:
# and retry until all events are published. Set max_retries to a value less
# than 0 to retry until all events are published. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Logstash request. The
# default is 2048.
#bulk_max_size: 2048

# The number of seconds to wait for responses from the Logstash server before
# timing out. The default is 30s.
#timeout: 30s
@@ -1762,6 +1762,13 @@ logging.files:
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600

# Enable log file rotation on time intervals in addition to size-based rotation.
# Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
# are boundary-aligned with minutes, hours, days, weeks, months, and years as
# reported by the local system clock. All other intervals are calculated from the
# unix epoch. Defaults to disabled.
#interval: 0

# Set to true to log messages in json format.
#logging.json: false

11 changes: 9 additions & 2 deletions heartbeat/heartbeat.reference.yml
@@ -650,11 +650,11 @@ output.elasticsearch:
# and retry until all events are published. Set max_retries to a value less
# than 0 to retry until all events are published. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Logstash request. The
# default is 2048.
#bulk_max_size: 2048

# The number of seconds to wait for responses from the Logstash server before
# timing out. The default is 30s.
#timeout: 30s
@@ -1209,6 +1209,13 @@ logging.files:
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600

# Enable log file rotation on time intervals in addition to size-based rotation.
# Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
# are boundary-aligned with minutes, hours, days, weeks, months, and years as
# reported by the local system clock. All other intervals are calculated from the
# unix epoch. Defaults to disabled.
#interval: 0

# Set to true to log messages in json format.
#logging.json: false

11 changes: 9 additions & 2 deletions libbeat/_meta/config.reference.yml
@@ -423,11 +423,11 @@ output.elasticsearch:
# and retry until all events are published. Set max_retries to a value less
# than 0 to retry until all events are published. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Logstash request. The
# default is 2048.
#bulk_max_size: 2048

# The number of seconds to wait for responses from the Logstash server before
# timing out. The default is 30s.
#timeout: 30s
@@ -982,6 +982,13 @@ logging.files:
# Must be a valid Unix-style file permissions mask expressed in octal notation.
#permissions: 0600

# Enable log file rotation on time intervals in addition to size-based rotation.
# Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
# are boundary-aligned with minutes, hours, days, weeks, months, and years as
# reported by the local system clock. All other intervals are calculated from the
# unix epoch. Defaults to disabled.
#interval: 0

# Set to true to log messages in json format.
#logging.json: false

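The interval setting added to each reference config above distinguishes two cases: the listed values (1m, 1h, 24h, 7*24h, 30*24h, 365*24h) rotate on calendar boundaries of the local clock, while any other interval is bucketed from the Unix epoch. A minimal, hypothetical sketch of the epoch-based case (not part of this PR; the 10-minute interval and all names are illustrative):

package main

import (
    "fmt"
    "time"
)

func main() {
    // A 10-minute interval is not one of the boundary-aligned values, so the
    // rotator buckets timestamps into interval-sized slots counted from the epoch.
    interval := 10 * time.Minute
    now := time.Now()

    seconds := int64(interval / time.Second)
    bucket := now.Unix() / seconds
    start := time.Unix(bucket*seconds, 0)

    // Rotation triggers when two timestamps fall into different buckets.
    fmt.Println("current bucket started at", start)

    // By contrast, a 24h interval rotates at local midnight, so a file opened
    // at 23:59 is rotated a minute later even though far less than 24h passed.
}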
194 changes: 194 additions & 0 deletions libbeat/common/file/interval_rotator.go
@@ -0,0 +1,194 @@
// Licensed to Elasticsearch B.V. under one or more contributor
// license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright
// ownership. Elasticsearch B.V. licenses this file to you under
// the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

package file

import (
    "errors"
    "fmt"
    "sort"
    "strconv"
    "time"
)

type intervalRotator struct {
    interval    time.Duration
    lastRotate  time.Time
    fileFormat  string
    clock       clock
    weekly      bool
    arbitrary   bool
    newInterval func(lastTime time.Time, currentTime time.Time) bool
}

type clock interface {
    Now() time.Time
}

type realClock struct{}

func (realClock) Now() time.Time {
    return time.Now()
}
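
// Illustrative sketch, not part of this file: the clock indirection above exists
// so that tests inside this package can substitute a deterministic time source,
// for example (hypothetical names):
//
//    type fakeClock struct{ t time.Time }
//
//    func (c *fakeClock) Now() time.Time { return c.t }
//
// After initialize() has run, a test could set r.clock = &fakeClock{t: fixedTime}
// and advance c.t to simulate the passage of time.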

func newIntervalRotator(interval time.Duration) (*intervalRotator, error) {
    if interval == 0 {
        return nil, nil
    }
    if interval < time.Second && interval != 0 {
        return nil, errors.New("the minimum time interval for log rotation is 1 second")
    }

    ir := &intervalRotator{interval: (interval / time.Second) * time.Second} // drop fractional seconds
    ir.initialize()
    return ir, nil
}
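
// Illustrative usage, not part of this file: a zero interval means time-based
// rotation is disabled (nil rotator, nil error), sub-second intervals are
// rejected, and everything else is truncated to whole seconds:
//
//    r, err := newIntervalRotator(0)                     // r == nil, err == nil: disabled
//    r, err = newIntervalRotator(500 * time.Millisecond) // err: minimum is 1 second
//    r, err = newIntervalRotator(24 * time.Hour)         // daily, calendar-aligned rotator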

func (r *intervalRotator) initialize() error {
    r.clock = realClock{}

    switch r.interval {
    case time.Second:
        r.fileFormat = "2006-01-02-15-04-05"
        r.newInterval = newSecond
    case time.Minute:
        r.fileFormat = "2006-01-02-15-04"
        r.newInterval = newMinute
    case time.Hour:
        r.fileFormat = "2006-01-02-15"
        r.newInterval = newHour
    case 24 * time.Hour: // calendar day
        r.fileFormat = "2006-01-02"
        r.newInterval = newDay
    case 7 * 24 * time.Hour: // calendar week
        r.fileFormat = ""
        r.newInterval = newWeek
        r.weekly = true
    case 30 * 24 * time.Hour: // calendar month
        r.fileFormat = "2006-01"
        r.newInterval = newMonth
    case 365 * 24 * time.Hour: // calendar year
        r.fileFormat = "2006"
        r.newInterval = newYear
    default:
        r.arbitrary = true
        r.fileFormat = "2006-01-02-15-04-05"
        r.newInterval = func(lastTime time.Time, currentTime time.Time) bool {
            lastInterval := lastTime.Unix() / (int64(r.interval) / int64(time.Second))
            currentInterval := currentTime.Unix() / (int64(r.interval) / int64(time.Second))
            return lastInterval != currentInterval
        }
    }
    return nil
}

func (r *intervalRotator) LogPrefix(filename string, modTime time.Time) string {
    var t time.Time
    if r.lastRotate.IsZero() {
        t = modTime
    } else {
        t = r.lastRotate
    }

    if r.weekly {
        y, w := t.ISOWeek()
        return fmt.Sprintf("%s-%04d-%02d-", filename, y, w)
    }
    if r.arbitrary {
        intervalNumber := t.Unix() / (int64(r.interval) / int64(time.Second))
        intervalStart := time.Unix(0, intervalNumber*int64(r.interval))
        return fmt.Sprintf("%s-%s-", filename, intervalStart.Format(r.fileFormat))
    }
    return fmt.Sprintf("%s-%s-", filename, t.Format(r.fileFormat))
}
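
// Worked examples, illustrative and not part of this file, assuming the base
// filename "beat.log" and a last rotation on Monday 2018-10-01 at 12:07 local
// time (whole-hour UTC offset assumed):
//
//    24h interval:   "beat.log-2018-10-01-"
//    7*24h interval: "beat.log-2018-40-"             (ISO year and week)
//    10m interval:   "beat.log-2018-10-01-12-00-00-" (start of the epoch bucket containing 12:07)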

func (r *intervalRotator) NewInterval() bool {
    now := r.clock.Now()
    newInterval := r.newInterval(r.lastRotate, now)
    return newInterval
}

func (r *intervalRotator) Rotate() {
    r.lastRotate = r.clock.Now()
}

func (r *intervalRotator) SortIntervalLogs(strings []string) {
    sort.Slice(
        strings,
        func(i, j int) bool {
            return OrderIntervalLogs(strings[i]) < OrderIntervalLogs(strings[j])
        },
    )
}

Review comment: comment on exported function OrderIntervalLogs should be of the form "OrderIntervalLogs ..."

// OrderIntervalLogs, when given a log filename in the form [prefix]-[formattedDate]-n
// returns the filename after zero-padding the trailing n so that foo-[date]-2 sorts
// before foo-[date]-10.
func OrderIntervalLogs(filename string) string {
    index, i, err := IntervalLogIndex(filename)
    if err == nil {
        return filename[:i] + fmt.Sprintf("%020d", index)
    }

    return ""
}
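
// Worked example, illustrative and not part of this file: the trailing index is
// zero-padded to 20 digits so that lexicographic order matches numeric order:
//
//    OrderIntervalLogs("foo-2018-10-01-2")  -> "foo-2018-10-01-00000000000000000002"
//    OrderIntervalLogs("foo-2018-10-01-10") -> "foo-2018-10-01-00000000000000000010"
//
// so "...-2" sorts before "...-10". A name without trailing digits fails to parse
// and maps to the empty string, which sorts first.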

// IntervalLogIndex returns n as int given a log filename in the form [prefix]-[formattedDate]-n
func IntervalLogIndex(filename string) (uint64, int, error) {
    i := len(filename) - 1
    for ; i >= 0; i-- {
        if '0' > filename[i] || filename[i] > '9' {
            break
        }
    }
    i++

    s64 := filename[i:]
    u64, err := strconv.ParseUint(s64, 10, 64)
    return u64, i, err
}

func newSecond(lastTime time.Time, currentTime time.Time) bool {
    return lastTime.Second() != currentTime.Second() || newMinute(lastTime, currentTime)
}

func newMinute(lastTime time.Time, currentTime time.Time) bool {
    return lastTime.Minute() != currentTime.Minute() || newHour(lastTime, currentTime)
}

func newHour(lastTime time.Time, currentTime time.Time) bool {
    return lastTime.Hour() != currentTime.Hour() || newDay(lastTime, currentTime)
}

func newDay(lastTime time.Time, currentTime time.Time) bool {
    return lastTime.Day() != currentTime.Day() || newMonth(lastTime, currentTime)
}

func newWeek(lastTime time.Time, currentTime time.Time) bool {
    lastYear, lastWeek := lastTime.ISOWeek()
    currentYear, currentWeek := currentTime.ISOWeek()
    return lastWeek != currentWeek ||
        lastYear != currentYear
}

func newMonth(lastTime time.Time, currentTime time.Time) bool {
    return lastTime.Month() != currentTime.Month() || newYear(lastTime, currentTime)
}

func newYear(lastTime time.Time, currentTime time.Time) bool {
    return lastTime.Year() != currentTime.Year()
}
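
// Boundary behaviour, illustrative and not part of this file: the new* helpers
// compare calendar fields rather than elapsed time, so rotation fires when a
// boundary is crossed, not when a full interval has elapsed:
//
//    last := time.Date(2018, 9, 30, 23, 59, 0, 0, time.Local)
//    now := time.Date(2018, 10, 1, 0, 1, 0, 0, time.Local)
//    newDay(last, now)  // true: new calendar day, though only two minutes passed
//    newWeek(last, now) // true: Sep 30 is in ISO week 39, Oct 1 starts week 40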