diff --git a/Alerting/Sample Watches/README.md b/Alerting/Sample Watches/README.md
index f6b7f48f..7f22625e 100644
--- a/Alerting/Sample Watches/README.md
+++ b/Alerting/Sample Watches/README.md
@@ -2,22 +2,22 @@
## Overview
-This package provides a collection of example watches. These watches have been developed for the purposes of POC's and demonstrations. Each makes independent assumptions as to the data structure, volume and mapping. For each watch a description, with assumptions is provided, in addition to a mapping file. Whilst functionally tested, these watches have not been tested for effectiveness or query performance in production environments. The reader is therefore encouraged to test and review all watches with production data volumes prior to deployment.
+This package provides a collection of example watches. These watches have been developed for the purposes of POCs and demonstrations. Each makes independent assumptions as to the data structure, volume and mapping. For each watch, a description with assumptions is provided, in addition to a mapping file. Whilst functionally tested, these watches have not been tested for effectiveness or query performance in production environments. The reader is therefore encouraged to test and review all watches with production data volumes prior to deployment.
-# Generic Assumptions
+## Generic Assumptions
* Elasticsearch 6.2 + x-pack
-* All watches use the log output for purposes of testing. Replace with output e.g. email, as required.
+* All watches use the log output for purposes of testing. Replace with an alternative output, e.g. email, as required.
-* Painless scripts, located within the "scripts" folder of each watch, must be indexed first.
+* Painless scripts, located within the "scripts" folder of each watch, must be indexed first (see the sketch below).
-* All watches assume Watcher is running in the same cluster as that in which the relevant data is hosted. They all therefore use the search input. In a production deployment this is subject to change i.e. a http input maybe required.
+* All watches assume Watcher is running in the same cluster as that in which the relevant data is hosted. They all therefore use the search input. In a production deployment this is subject to change, i.e. an http input may be required.
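+
+As an illustration, a stored script can be loaded via the `_scripts` endpoint before the watch is added. The following is a minimal sketch, assuming a local cluster with the default x-pack credentials and using one of the script files shipped in this package; it is not the packaged loading utility, and it assumes the file already contains a complete stored-script body:
+
+```python
+import json
+import requests
+
+# Minimal sketch: index one stored Painless script (ES 6.x _scripts API).
+# Assumes http://localhost:9200 with the default elastic/changeme credentials.
+with open("./filesystem_usage/scripts/transform.json") as f:
+    body = json.load(f)
+
+resp = requests.put(
+    "http://localhost:9200/_scripts/transform",
+    auth=("elastic", "changeme"),
+    headers={"Content-Type": "application/json"},
+    json=body,
+)
+resp.raise_for_status()
+```
+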
-# Structure
+## Structure
In each watch directory the following is provided:
-* README - describes the watch including any assumptions regards mapping, data structure and behaviour.
-* mapping.json - An re-usable mapping which is also appropriate for the test data provided.
+* README - describes the watch including any assumptions regarding mapping, data structure and behaviour.
+* mapping.json - A re-usable mapping which is also appropriate for the test data provided.
-* watch.json - Body of the watch. Used in the below tests.
+* watch.json - Body of the watch. Used in the below tests.
-* /tests - Directory of tests. Each test is defined as JSON file. See Below.
+* /tests - Directory of tests. Each test is defined as a JSON file. See below.
* /scripts - Directory of painless scripts utilised by the watch.
@@ -32,11 +32,11 @@ The parent directory includes the following utility scripts:
If username, password, and protocol are not specified, the above scripts assume the x-pack default of "elastic", "changeme", and "http" respectively.
-# Watches
+## Watches
* Errors in log files - A watch which alerts if errors are present in a log file. Provides example errors as output.
-* Port Scan - A watch which aims to detect and alert if a server established a high number of connections to a destination across a large number of ports.
+* Port Scan - A watch which aims to detect and alert if a server establishes a high number of connections to a destination across a large number of ports.
-* Social Media Trending - A watch which alerts if a social media topic/tag begins to show increase activity
+* Twitter Trending - A watch which alerts if a social media topic/tag begins to show increased activity.
-* Unexpected Account Activity - A watch which aims detect and to alert if a user is created in Active Directory/LDAP and subsequently deleted within N mins.
+* Unexpected Account Activity - A watch which aims to detect and alert if a user is created in Active Directory/LDAP and subsequently deleted within N mins.
* New Process Started - A watch which aims to detect if a process is started on a server for the first time.
* New User-Server Communication - A watch which aims to detect if a user logs onto a server for the first time within the current time period.
@@ -46,7 +46,7 @@ If username, password, and protocol are not specified, the above scripts assume
* Monitoring Cluster Health - A watch which monitors an ES cluster for red or yellow cluster state. Assumes use of X-Pack Monitoring.
* Monitoring Free Disk Space - A watch which monitors an ES cluster for free disk usage on hosts. Assumes use of X-Pack Monitoring.
-# Testing
+## Testing
Each watch includes a test directory containing a set of tests expressed as JSON files. Each JSON file describes a single isolated test and includes:
@@ -70,7 +70,7 @@ The run_test.py performs the following when running a test file:
1. Refreshes the index.
1. Adds the watch
1. Executes the watch
-1. Confirms the watch matches the intended outcome. matched and confirms the output of the watch (log text)
+1. Confirms whether the watch matched the intended outcome and verifies the output of the watch (log text) against the expected response (see the sketch below).
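+
+As a rough illustration of this flow, the sketch below drives the same steps through the REST API directly. It is not the actual run_test.py implementation: it assumes the x-pack defaults of "elastic", "changeme" and "http", assumes mapping.json holds a complete index-creation body, and omits the @timestamp offset handling the real runner applies to each event.
+
+```python
+import json
+import requests
+
+ES = "http://localhost:9200"    # protocol and host assumed
+AUTH = ("elastic", "changeme")  # default x-pack credentials assumed
+HDRS = {"Content-Type": "application/json"}
+
+test = json.load(open("./errors_in_logs/tests/test1.json"))
+watch = json.load(open(test["watch_file"]))
+mapping = json.load(open(test["mapping_file"]))
+
+# 1-2. Create the index with its mapping, then index the test events.
+requests.put(ES + "/" + test["index"], auth=AUTH, headers=HDRS, json=mapping)
+for event in test["events"]:
+    # The real runner shifts each event's @timestamp by event["offset"]
+    # seconds relative to now before indexing (omitted here).
+    requests.post(ES + "/" + test["index"] + "/" + test["type"],
+                  auth=AUTH, headers=HDRS, json=event)
+
+# 3. Refresh so the documents are searchable.
+requests.post(ES + "/" + test["index"] + "/_refresh", auth=AUTH)
+
+# 4-5. Add the watch, then execute it immediately.
+requests.put(ES + "/_xpack/watcher/watch/" + test["watch_name"],
+             auth=AUTH, headers=HDRS, json=watch)
+result = requests.post(ES + "/_xpack/watcher/watch/" + test["watch_name"] +
+                       "/_execute", auth=AUTH, headers=HDRS, json={}).json()
+
+# 6. Compare the logged text against the expected response.
+logged = result["watch_record"]["result"]["actions"][0]["logging"]["logged_text"]
+assert logged == test["expected_response"]
+```
+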
## Requirements
diff --git a/Alerting/Sample Watches/errors_in_logs/README.md b/Alerting/Sample Watches/errors_in_logs/README.md
index 469b209d..0b67270a 100644
--- a/Alerting/Sample Watches/errors_in_logs/README.md
+++ b/Alerting/Sample Watches/errors_in_logs/README.md
@@ -4,13 +4,13 @@
A watch which alerts if errors are present in a log file. Provides example errors as output.
-The following watch utilises a basic query_string search, as used by Kibana, to find all documents in the last N minutes which either contain the word “error” or have a value of “ERROR” for the field “loglevel” i.e. the log level under which the message was generated. The query returns the ids of upto 10 hits ordered by @timestamp in descending order.
+The following watch utilizes a basic query_string search, as used by Kibana, to find all documents in the last N minutes which either contain the word “error” or have a value of “ERROR” for the field “loglevel”, i.e. the log level under which the message was generated. The query returns the IDs of up to 10 hits ordered by @timestamp in descending order.
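+
+For reference, the body of the watch's search input has roughly the following shape. This is an illustrative sketch only - the watch.json in this directory is authoritative, and "now-5m" simply stands in for the last N minutes:
+
+```python
+# Simplified sketch of the search body; the exact query string and sizes
+# are illustrative and should be checked against watch.json.
+query_body = {
+    "size": 10,
+    "sort": [{"@timestamp": {"order": "desc"}}],
+    "query": {
+        "bool": {
+            "filter": {"range": {"@timestamp": {"gte": "now-5m"}}},
+            "must": {"query_string": {"query": "message:error OR loglevel:ERROR"}},
+        }
+    },
+}
+```
+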
## Mapping Assumptions
A mapping is provided in mapping.json. Watches require data producing the following fields:
-* @timestamp - authorative date field for each log message
+* @timestamp - authoritative date field for each log message
* message (string) - contents of the log message as generated with Logstash
* loglevel (string not_analyzed) - field with the value "ERROR", "DEBUG", "INFO" etc
@@ -24,5 +24,5 @@ The Watch assumes each log message is represented by an Elasticsearch document.
-# Configuration
+## Configuration
-* The watch is scheduled to find errors very minute. Modify through the schedule.
-* The watch will raise a maximum of 1 alert every 15 minutes, even if the condition is satsified more than once. Modify through the throttle parameter.
\ No newline at end of file
+* The watch is scheduled to find errors every minute. Modify through the schedule.
+* The watch will raise a maximum of 1 alert every 15 minutes, even if the condition is satisfied more than once. Modify through the throttle parameter.
diff --git a/Alerting/Sample Watches/errors_in_logs/tests/test1.json b/Alerting/Sample Watches/errors_in_logs/tests/test1.json
index df67ca48..a3cd14dc 100644
--- a/Alerting/Sample Watches/errors_in_logs/tests/test1.json
+++ b/Alerting/Sample Watches/errors_in_logs/tests/test1.json
@@ -1,42 +1,41 @@
{
- "watch_name":"errors_in_logs",
- "mapping_file":"./errors_in_logs/mapping.json",
- "index":"logs",
- "type":"doc",
+ "watch_name": "errors_in_logs",
+ "mapping_file": "./errors_in_logs/mapping.json",
+ "index": "logs",
+ "type": "doc",
"match": true,
- "watch_file":"./errors_in_logs/watch.json",
- "events":[
- {
- "id":"1",
- "offset":-35,
- "message":"A normal message that should match due to the loglevel",
- "loglevel":"ERROR"
- },
- {
- "id":"2",
- "offset":-30,
- "message":"A normal log message",
- "loglevel":"INFO"
- },
- {
- "id":"3",
- "offset":-40,
- "message":"Error in this log message despite being INFO",
- "loglevel":"INFO"
- },
- {
- "id":"4",
- "offset":-15,
- "message":"Error in this log message",
- "loglevel":"ERROR"
- },
- {
- "id":"5",
- "offset":-90,
- "message":"Error in this log message but outside of the time range",
- "loglevel":"ERROR"
- }
+ "watch_file": "./errors_in_logs/watch.json",
+ "events": [
+ {
+ "id": "1",
+ "offset": -35,
+ "message": "A normal message that should match due to the loglevel",
+ "loglevel": "ERROR"
+ },
+ {
+ "id": "2",
+ "offset": -30,
+ "message": "A normal log message",
+ "loglevel": "INFO"
+ },
+ {
+ "id": "3",
+ "offset": -40,
+ "message": "Error in this log message despite being INFO",
+ "loglevel": "INFO"
+ },
+ {
+ "id": "4",
+ "offset": -15,
+ "message": "Error in this log message",
+ "loglevel": "ERROR"
+ },
+ {
+ "id": "5",
+ "offset": -90,
+ "message": "Error in this log message but outside of the time range",
+ "loglevel": "ERROR"
+ }
],
-"expected_response":"3 Errors have occured in the logs:4:1:3:"
+ "expected_response": "3 Errors have occurred in the logs:4:1:3:"
}
-
diff --git a/Alerting/Sample Watches/errors_in_logs/watch.json b/Alerting/Sample Watches/errors_in_logs/watch.json
index c0da9388..2e888c1e 100644
--- a/Alerting/Sample Watches/errors_in_logs/watch.json
+++ b/Alerting/Sample Watches/errors_in_logs/watch.json
@@ -55,8 +55,8 @@
"log": {
"logging": {
"level": "info",
- "text": "{{ctx.payload.hits.total}} Errors have occured in the logs:{{#ctx.payload.hits.hits}}{{_id}}:{{/ctx.payload.hits.hits}}"
+ "text": "{{ctx.payload.hits.total}} Errors have occurred in the logs:{{#ctx.payload.hits.hits}}{{_id}}:{{/ctx.payload.hits.hits}}"
}
}
}
-}
\ No newline at end of file
+}
diff --git a/Alerting/Sample Watches/filesystem_usage/tests/test1.json b/Alerting/Sample Watches/filesystem_usage/tests/test1.json
index c63b949c..42e47d4f 100644
--- a/Alerting/Sample Watches/filesystem_usage/tests/test1.json
+++ b/Alerting/Sample Watches/filesystem_usage/tests/test1.json
@@ -1,32 +1,36 @@
{
- "watch_name":"filesystem_usage",
- "mapping_file":"./filesystem_usage/mapping.json",
- "index":"logs",
- "type":"filesystem",
- "watch_file":"./filesystem_usage/watch.json",
- "comments":"Tests filesystem being above 0.9. Server 1 & 4 should alert as within 60 seconds. Server 2 should not (10 mins). 3rd server should not alert as < 0.9.",
- "scripts":[{"name":"transform","path":"./filesystem_usage/scripts/transform.json"}],
- "events":[
- {
- "hostname": "test_server1",
- "used_p": 0.99,
- "offset":"-60"
- },
- {
- "hostname": "test_server2",
- "used_p": 0.98,
- "offset":"-600"
- },
- {
- "hostname": "test_server3",
- "used_p": 0.89,
- "offset":"-60"
- },
- {
- "hostname": "test_server4",
- "used_p": 0.95
- }
+ "watch_name": "filesystem_usage",
+ "mapping_file": "./filesystem_usage/mapping.json",
+ "index": "logs",
+ "type": "filesystem",
+ "watch_file": "./filesystem_usage/watch.json",
+ "comments": "Tests filesystem being above 0.9. Server 1 & 4 should alert as within 60 seconds. Server 2 should not (10 mins). 3rd server should not alert as < 0.9.",
+ "scripts": [
+ {
+ "name": "transform",
+ "path": "./filesystem_usage/scripts/transform.json"
+ }
],
- "expected_response":"Some hosts are over 90% utilized:99%-test_server1:95%-test_server4:"
+ "events": [
+ {
+ "hostname": "test_server1",
+ "used_p": 0.99,
+ "offset": "-60"
+ },
+ {
+ "hostname": "test_server2",
+ "used_p": 0.98,
+ "offset": "-600"
+ },
+ {
+ "hostname": "test_server3",
+ "used_p": 0.89,
+ "offset": "-60"
+ },
+ {
+ "hostname": "test_server4",
+ "used_p": 0.95
+ }
+ ],
+ "expected_response": "Some hosts are over 90% utilized:99%-test_server1:95%-test_server4:"
}
-
diff --git a/Alerting/Sample Watches/lateral_movement_in_user_comm/README.md b/Alerting/Sample Watches/lateral_movement_in_user_comm/README.md
index d4d7cdea..59aed1c7 100644
--- a/Alerting/Sample Watches/lateral_movement_in_user_comm/README.md
+++ b/Alerting/Sample Watches/lateral_movement_in_user_comm/README.md
@@ -2,19 +2,19 @@
## Description
-A watch which aims to detect and alert if users log onto a server for which they have not accessed within the same time period previously. The time period here is a configurable window either side of time the watch is executed. For example if the watch checks at 11:15 and the window size is 1hr, the watch will check if any users who have logged in within the last N seconds had logged into the same servers between 10:15 and 12:15 previously. Any user-server communication which is "new", will result in an alert.
+A watch which aims to detect and alert if users log onto a server which they have not accessed within the same time period previously. The time period here is a configurable window either side of the time the watch is executed. For example, if the watch checks at 11:15 and the window size is 1hr, the watch will check if any users who have logged in within the last N seconds had logged into the same servers between 10:15 and 12:15 previously. Any user-server communication which is "new" will result in an alert.
-The watch achieves the above by using a three stage query chain. The first identifies a time window based on the configuration. The second periodically checks for user logins in the last N secs (default 30s), using a terms aggregation on the user_server field. This list is then used to query against the index during the calculated time period, again aggregating on the user_server. Values identified the list collected durting the second stage, which do not appear in the third stage list, are highlighted as new communication.
+The watch achieves the above by using a three-stage query chain. The first identifies a time window based on the configuration. The second periodically checks for user logins in the last N secs (default 30s), using a terms aggregation on the user_server field. This list is then used to query against the index during the calculated time period, again aggregating on the user_server field. Values in the list collected during the second stage which do not appear in the third-stage list are highlighted as new communication.
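+
+The comparison in the final stage amounts to a set difference, sketched below in Python with illustrative bucket values (the watch itself performs this in a stored Painless script):
+
+```python
+# user_server values seen in the last time_period (stage two) which are
+# absent from the historical window (stage three) are new communications.
+recent_buckets = [{"key": "userA_serverX"}, {"key": "userB_serverY"}]
+window_buckets = [{"key": "userA_serverX"}]
+
+recent = {b["key"] for b in recent_buckets}
+historical = {b["key"] for b in window_buckets}
+print(recent - historical)  # {'userB_serverY'} -> new user-server logon
+```
+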
-This watch represents a complex variant of the "first process exeuction" watch, which could be easily adapted to detect just new user logons to servers, adding a time period constraint.
+This watch represents a complex variant of the "first process execution" watch, which could be easily adapted to detect just new user logons to servers, adding a time period constraint.
## Mapping Assumptions
A mapping is provided in mapping.json. Watches require data producing the following fields:
-* user_server (non-analyzed string) - Contains the user and server as a concatenated string e.g. userA_testServerB. Watch assumes the delimiter is an _ char.
+* user_server (non-analyzed string) - Contains the user and server as a concatenated string e.g. userA_testServerB. Watch assumes the delimiter is an `_` char.
* @timestamp (date field) - Date of log message.
-* time (date field) - time at which the logon occured based on a strict_time_no_millis format.
+* time (date field) - time at which the logon occurred based on a strict_time_no_millis format.
## Data Assumptions
@@ -22,12 +22,12 @@ The watch assumes each document in Elasticsearch represents a logon to a server
## Other Assumptions
-* All events are index "log" and type "doc".
+* All events are in the index "log".
* The watch assumes no more than 1000 user logons occur within the time period i.e. by default the last 30s. This value can be adjusted, with consideration for scaling, for larger environments.
-# Configuration
+## Configuration
The following watch metadata parameters influence behaviour:
-* window_width- The period in N (minutes) during which the user should have logged onto the server previously. The window is calcuated as T-N to T+N, where T is the time the watch executed. Defaults to 30mins, giving a total window width of approximiately 1hr.
-* time_period - The period for which user server logons are aggregated, and compared against the time period to check as to whether they represent new communication. Defaults to 30s. This should be equal to the schedule interval to ensure no logins are not evaluated.
\ No newline at end of file
+* window_width - The period N (minutes) during which the user should have logged onto the server previously. The window is calculated as T-N to T+N, where T is the time the watch executed (sketched below). Defaults to 30mins, giving a total window width of approximately 1hr.
+* time_period - The period for which user server logons are aggregated and compared against the historical window to check whether they represent new communication. Defaults to 30s. This should be equal to the schedule interval to ensure no logins are missed.
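+
+The window calculation itself is simple. The sketch below approximates what the upper_time/lower_time stored scripts compute, using an illustrative trigger time:
+
+```python
+# T-N to T+N around the trigger time T, with N = window_width (default 30m).
+# The bounds are compared against the time-of-day "time" field on prior days.
+from datetime import datetime, timedelta
+
+triggered = datetime(2018, 5, 1, 11, 15)  # illustrative trigger time T
+window_width = timedelta(minutes=30)      # N
+
+lower = triggered - window_width
+upper = triggered + window_width
+print(lower.time(), upper.time())  # 10:45:00 11:45:00
+```
+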
diff --git a/Alerting/Sample Watches/lateral_movement_in_user_comm/watch.json b/Alerting/Sample Watches/lateral_movement_in_user_comm/watch.json
index f0d1437b..b7946bcd 100644
--- a/Alerting/Sample Watches/lateral_movement_in_user_comm/watch.json
+++ b/Alerting/Sample Watches/lateral_movement_in_user_comm/watch.json
@@ -1,40 +1,39 @@
{
- "metadata": {
- "window_width": "30m",
- "time_period": "30s"
- },
- "trigger": {
- "schedule": {
- "interval": "30s"
- }
- },
- "input": {
- "chain": {
- "inputs": [
- {
- "get_time_period": {
- "search": {
- "request": {
- "indices": [
- "log"
- ],
- "body": {
- "size": 1,
- "script_fields": {
- "upper_time": {
- "script": {
- "id": "upper_time",
- "params": {
- "current_time": "{{ctx.trigger.triggered_time}}"
- }
+ "metadata": {
+ "window_width": "30m",
+ "time_period": "30s"
+ },
+ "trigger": {
+ "schedule": {
+ "interval": "30s"
+ }
+ },
+ "input": {
+ "chain": {
+ "inputs": [
+ {
+ "get_time_period": {
+ "search": {
+ "request": {
+ "indices": [
+ "log"
+ ],
+ "body": {
+ "size": 1,
+ "script_fields": {
+ "upper_time": {
+ "script": {
+ "id": "upper_time",
+ "params": {
+ "current_time": "{{ctx.trigger.triggered_time}}"
}
- },
- "lower_time": {
- "script": {
- "id": "lower_time",
- "params": {
- "current_time": "{{ctx.trigger.triggered_time}}"
- }
+ }
+ },
+ "lower_time": {
+ "script": {
+ "id": "lower_time",
+ "params": {
+ "current_time": "{{ctx.trigger.triggered_time}}"
}
}
}
@@ -42,105 +41,106 @@
}
}
}
- },
- {
- "user_server_logons": {
- "search": {
- "request": {
- "indices": [
- "log"
- ],
- "body": {
- "query": {
- "range": {
- "@timestamp": {
- "gte": "now-{{ctx.metadata.time_period}}"
- }
+ }
+ },
+ {
+ "user_server_logons": {
+ "search": {
+ "request": {
+ "indices": [
+ "log"
+ ],
+ "body": {
+ "query": {
+ "range": {
+ "@timestamp": {
+ "gte": "now-{{ctx.metadata.time_period}}"
}
- },
- "aggs": {
- "user_server": {
- "terms": {
- "field": "user_server",
- "size": 1000
- }
+ }
+ },
+ "aggs": {
+ "user_server": {
+ "terms": {
+ "field": "user_server",
+ "size": 1000
}
- },
- "size": 0
- }
+ }
+ },
+ "size": 0
}
}
}
- },
- {
- "new_user_server_logons": {
- "search": {
- "request": {
- "indices": [
- "log"
- ],
- "body": {
- "query": {
- "bool": {
- "filter": [
- {
- "range": {
- "time": {
- "gte": "{{ctx.payload.get_time_period.hits.hits.0.fields.lower_time.0}}",
- "lte": "{{ctx.payload.get_time_period.hits.hits.0.fields.upper_time.0}}"
- }
- }
- },
- {
- "terms": {
- "user_server": [
- "{{#ctx.payload.user_server_logons.aggregations.user_server.buckets}}{{key}}",
- "{{/ctx.payload.user_server_logons.aggregations.user_server.buckets}}"
- ]
+ }
+ },
+ {
+ "new_user_server_logons": {
+ "search": {
+ "request": {
+ "indices": [
+ "log"
+ ],
+ "body": {
+ "query": {
+ "bool": {
+ "filter": [
+ {
+ "range": {
+ "time": {
+ "gte": "{{ctx.payload.get_time_period.hits.hits.0.fields.lower_time.0}}",
+ "lte": "{{ctx.payload.get_time_period.hits.hits.0.fields.upper_time.0}}"
}
- },
- {
- "range": {
- "@timestamp": {
- "lt": "now/d"
- }
+ }
+ },
+ {
+ "terms": {
+ "user_server": [
+ "{{#ctx.payload.user_server_logons.aggregations.user_server.buckets}}{{key}}",
+ "{{/ctx.payload.user_server_logons.aggregations.user_server.buckets}}"
+ ]
+ }
+ },
+ {
+ "range": {
+ "@timestamp": {
+ "lt": "now/d"
}
}
- ]
- }
- },
- "aggs": {
- "user_server": {
- "terms": {
- "field": "user_server",
- "size": 1000
}
+ ]
+ }
+ },
+ "aggs": {
+ "user_server": {
+ "terms": {
+ "field": "user_server",
+ "size": 1000
}
- },
- "size": 0
- }
+ }
+ },
+ "size": 0
}
}
}
}
+ }
]
}
- },
- "condition": {
- "script": {
- "id":"condition"
- }
- },
- "transform": {
- "script": {
- "id":"transform"
- }
- },
- "actions": {
- "log": {
- "logging": {
- "text": "{{ctx.metadata.triggered_time}}The following users have logged onto a new server for the first time within the time period: {{#ctx.payload.new_starts}}{{.}}:{{/ctx.payload.new_starts}}"
- }
+ },
+ "condition": {
+ "script": {
+ "id": "condition"
+ }
+ },
+ "transform": {
+ "script": {
+ "id": "transform"
+ }
+ },
+ "actions": {
+ "log": {
+ "logging": {
+ "text": "{{ctx.metadata.triggered_time}}The following users have logged onto a new server for the first time within the time period: {{#ctx.payload.new_starts}}{{.}}:{{/ctx.payload.new_starts}}"
}
}
- }
\ No newline at end of file
+ }
+}
diff --git a/Alerting/Sample Watches/port_scan/README.md b/Alerting/Sample Watches/port_scan/README.md
index ed910d52..06836eae 100644
--- a/Alerting/Sample Watches/port_scan/README.md
+++ b/Alerting/Sample Watches/port_scan/README.md
@@ -6,7 +6,7 @@ A watch which aims to detect and alert if a server establishes a high number of
-A port scan occurs when a high number of connections are established between two servers across a large number of ports. This can be detected as a high number of documents, with unique port values, for the same source-destination values. This also be described as an above than normal cardinality of the port field for a distinct source ip - destination ip pair.
+A port scan occurs when a high number of connections are established between two servers across a large number of ports. This can be detected as a high number of documents, with unique port values, for the same source-destination pair. This can also be described as a higher than normal cardinality of the port field for a distinct source IP - destination IP pair.
-This alert avoids attaching an exact value to "high". Instead it aims to base the intepretation of high on available data and usual behaviour. Additionally this alert should be able to cope with a large number of devices > 100k.
+This alert avoids attaching an exact value to "high". Instead, it aims to base the interpretation of "high" on the available data and usual behaviour. Additionally, this alert should be able to cope with a large number of devices (> 100k).
## Mapping Assumptions
@@ -15,11 +15,11 @@ A mapping is provided in mapping.json. Watches require data producing the follo
* source_dest (non-analyzed string) - Contains the source and destination of the communication as a concatenated string e.g. testServerA_testServerB. Watch assumes the delimiter is an _ char.
* @timestamp (date field) - Date of log message.
* source_dest_port (non-analyzed string) - Contains the source, destination and port of the communication as a concatenated string e.g. testServerA_testServerB_5002. Watch assumes the delimiter is an _ char.
-* dest_port (integer) - port on which communication occured.
+* dest_port (integer) - port on which communication occurred.
## Data Assumptions
-The watch assumes each document in Elasticsearch represents a communication between 2 servers and conform to the above mapping.
+The watch assumes each document in Elasticsearch represents a communication between 2 servers and conforms to the above mapping.
## Other Assumptions
@@ -28,8 +28,8 @@ The watch assumes each document in Elasticsearch represents a communication betw
### How it works
-* Every time_period (default 1m) the watch executes and identifies those communications between two servers which have used the highest number of ports in the last time_window (default 30m). This is achieved using a terms agg on the source_dest field sorted by a cardinality of the dest_port. A date histogram inturn builds a profile of the dest_port cardinality over the time_window for each source_dest pair, bucketing by the time_period. The std. dev and median are inturn calculated for each source_dest profile using a extended_stats_bucket and percentiles_bucket aggregation respectively.
+* Every time_period (default 1m) the watch executes and identifies those communications between two servers which have used the highest number of ports in the last time_window (default 30m). This is achieved using a terms agg on the source_dest field sorted by a cardinality of the dest_port. A date histogram in turn builds a profile of the dest_port cardinality over the time_window for each source_dest pair, bucketing by the time_period. The std. dev and median are in turn calculated for each source_dest profile using an extended_stats_bucket and percentiles_bucket aggregation respectively.
-* A portscan is considered to be occuring between two hosts when the number of ports in the last time_period (i.e. last bucket of the profile) is 2 std. deviations above the median. To avoid alerting on host pairs with steady connection counts, and a low std. deviation, the watch requires the std. dev to also be > 0. The window size and time period will need adjusting based on the data to tune both accuracy and performance.
-* The above are also likely to require explicit blacklisting of hosts/ports - to avoid alerting where scanning behaviour is considered to be normal behaviour.
+* A portscan is considered to be occurring between two hosts when the number of ports in the last time_period (i.e. the last bucket of the profile) is 2 std. deviations above the median (see the sketch below). To avoid alerting on host pairs with steady connection counts, and a low std. deviation, the watch requires the std. dev to also be > 0. The window size and time period will need adjusting based on the data to tune both accuracy and performance.
+* The above are also likely to require explicit blacklisting of hosts/ports, to avoid alerting where scanning behaviour is considered normal.
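+
+For a single source_dest profile the condition reduces to the following check, sketched here with illustrative numbers (the watch evaluates this in a stored script):
+
+```python
+# Per-time_period dest_port cardinality profile for one source_dest pair.
+port_cardinality = [3, 4, 3, 5, 4, 3, 4, 48]
+median = 4.0       # percentiles_bucket (50th percentile) over the profile
+std_dev = 14.7     # extended_stats_bucket over the profile
+sensitivity = 2.0  # configurable, see below
+
+last = port_cardinality[-1]
+if std_dev > 0 and last > median + sensitivity * std_dev:
+    print("Port scan suspected for this source_dest pair")
+```
+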
-# Configuration
+## Configuration
@@ -37,6 +37,6 @@ The following watch metadata parameters influence behaviour:
-* time_window - The period N (mins) over which which the median and std. dev. is calculated for each source_dest pair. Defaults to 30m.
+* time_window - The period N (mins) over which the median and std. dev. is calculated for each source_dest pair. Defaults to 30m.
* time_period - The period X (mins) or size of each bucket. This defines the smallest period in which a port scan can be detected. Defaults to 1m.
-* sensitivity - The "sensitivity" of the watch to fluctuations in the number of ports used between 2 hosts. A smaller value means smaller flucutations from the median will result in an alert. This value is mulitplied by the std. dev. of the cardinality of dest_ports (per source_dest pair) and added to the median. Defaults to 2.0.
+* sensitivity - The "sensitivity" of the watch to fluctuations in the number of ports used between 2 hosts. A smaller value means smaller fluctuations from the median will result in an alert. This value is multiplied by the std. dev. of the cardinality of dest_ports (per source_dest pair) and added to the median. Defaults to 2.0.
-The number of buckets used to compute the average will significantly affect performance i.e. the window_size/time_period.
\ No newline at end of file
+The number of buckets used to compute the average will significantly affect performance, i.e. the time_window/time_period ratio.
diff --git a/Alerting/Sample Watches/port_scan/tests/test1.json b/Alerting/Sample Watches/port_scan/tests/test1.json
index 75d7ec4c..d13733e7 100644
--- a/Alerting/Sample Watches/port_scan/tests/test1.json
+++ b/Alerting/Sample Watches/port_scan/tests/test1.json
@@ -491,4 +491,3 @@
],
"expected_response":"Port scan detected:hostA to hostB:"
}
-
diff --git a/Alerting/Sample Watches/port_scan/watch.json b/Alerting/Sample Watches/port_scan/watch.json
index 254b4c43..63a4ae92 100644
--- a/Alerting/Sample Watches/port_scan/watch.json
+++ b/Alerting/Sample Watches/port_scan/watch.json
@@ -70,7 +70,6 @@
}
}
},
- "throttle_period": "1m",
"condition": {
"script": {
"id": "condition"
@@ -99,4 +98,4 @@
}
}
}
-}
\ No newline at end of file
+}
diff --git a/Alerting/Sample Watches/twitter_trends/README.md b/Alerting/Sample Watches/twitter_trends/README.md
index 35e2cffb..22ce9408 100644
--- a/Alerting/Sample Watches/twitter_trends/README.md
+++ b/Alerting/Sample Watches/twitter_trends/README.md
@@ -17,12 +17,12 @@ The watch assumes each document in Elasticsearch represents a tweet. All tweets
## Other Assumptions
-* The approach measures the 90th percentiles over the previous 8hrs of tweets, using a percentiles aggregation. If the value in the last 5 minutes is greater than 3 std. deviations above this value an alert is raised. This approach has been tested on Elasticsearch data, where volume is typially low and spikes during specific periods e.g. product releases, and may thus not be robust on other datasets. Elastic would recommend the user modify this query as required.
+* The approach measures the 90th percentile over the previous 8hrs of tweets, using a percentiles aggregation. If the value in the last 5 minutes is greater than 3 std. deviations above this value, an alert is raised (see the sketch below). This approach has been tested on Elasticsearch data, where volume is typically low and spikes during specific periods e.g. product releases, and may thus not be robust on other datasets. Elastic would recommend the user modify this query as required.
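+
+In essence the check is the following, sketched with illustrative numbers (the watch derives these values from percentiles and stats aggregations):
+
+```python
+p90 = 42.0         # 90th percentile of tweet counts over the last 8 hrs
+std_dev = 6.5      # std. dev of the same bucketed counts
+latest_count = 85  # tweets matching query_string in the last 5 minutes
+
+if latest_count > p90 + 3 * std_dev:
+    print("Topic is trending")
+```
+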
-# Configuration
+## Configuration
The following watch metadata parameters influence behaviour:
-* time_period - The period N (hrs) over which which the percentile and std. dev is calculated. Defaults to 8hrs. Increase to make the trend less sensitive to recent changes.
+* time_period - The period N (hrs) over which the percentile and std. dev is calculated. Defaults to 8hrs. Increase to make the trend less sensitive to recent changes.
-* bucket_interval - The bucket width over which the number of tweets are counted and the percentiles/std. dev. calculated. Increasing will make the trend detection less responsive to trends and mean alerts can be raised less infrequently. Should always be equal to the schedule interval.
+* bucket_interval - The bucket width over which the number of tweets is counted and the percentiles/std. dev. calculated. Increasing this will make trend detection less responsive and mean alerts are raised less frequently. Should always be equal to the schedule interval.
* query_string - Query string used to identify relevant tweets. Defaults to 'text:elasticsearch'.
diff --git a/Alerting/Sample Watches/unexpected_account_activity/README.md b/Alerting/Sample Watches/unexpected_account_activity/README.md
index 2fb760e2..a37aef6e 100644
--- a/Alerting/Sample Watches/unexpected_account_activity/README.md
+++ b/Alerting/Sample Watches/unexpected_account_activity/README.md
@@ -2,7 +2,7 @@
## Description
-A watch which aims detect and to alert if a user is created in Active Directory/LDAP and subsequently deleted within N mins.
+A watch which aims to detect and alert if a user is created in Active Directory/LDAP and subsequently deleted within N mins.
## Mapping Assumptions
diff --git a/Alerting/Sample Watches/unexpected_account_activity/watch.json b/Alerting/Sample Watches/unexpected_account_activity/watch.json
index 75065e68..6c56813a 100644
--- a/Alerting/Sample Watches/unexpected_account_activity/watch.json
+++ b/Alerting/Sample Watches/unexpected_account_activity/watch.json
@@ -26,12 +26,12 @@
],
"should": [
{
- "terms": {
- "event_type": [
- "add",
- "remove"
- ]
- }
+ "terms": {
+ "event_type": [
+ "add",
+ "remove"
+ ]
+ }
}
]
}
@@ -81,7 +81,7 @@
"log": {
"transform": {
"script": {
- "id":"transform"
+ "id": "transform"
}
},
"logging": {
@@ -89,4 +89,4 @@
}
}
}
-}
\ No newline at end of file
+}
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 6949a202..ad85d18a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -14,12 +14,12 @@ All Contributions should be contained with a folder that describes the content o
-   * Known gotchas, system requirements, compatibility issue, etc.
+   * Known gotchas, system requirements, compatibility issues, etc.
* Logstash config files to process and ingest data
* Template with index mappings (if needed)
- * Kibana config file to load a prebuilt Kibana dashboard
+ * Kibana config file to load a prebuilt Kibana dashboard
-   * Any other code that is a part of the instruction set. Scripts should be documented and dependencies available in a standard format e.g.for Python a requirements.txt and pip would be appropriate. Always detail tested language versions for scripts e.g.Python 3.5.x
+   * Any other code that is a part of the instruction set. Scripts should be documented and dependencies available in a standard format, e.g. for Python a requirements.txt and pip would be appropriate. Always detail tested language versions for scripts, e.g. Python 3.5.x
* **Data**
- You can either provide the data file with the example (for small sample datasets), or provide instructions / link to download the raw data (or Elasticsearch index snapshot) from another source (such as Amazon S3). 10mb is a reasonable threshold before moving to an external download- but this shouldn't be considered a hardline.
-
+   You can either provide the data file with the example (for small sample datasets), or provide instructions / a link to download the raw data (or Elasticsearch index snapshot) from another source (such as Amazon S3). 10 MiB is a reasonable threshold before moving to an external download, but this shouldn't be considered a hard limit.
+
* **Story**
If your example revolves around an analysis of a real-world dataset, try to include some color commentary to describe analysis in narrative form. How is data being used to solve a problem? What interesting insights were mined from this data? You can include this information in the README, or provide links to external blog / video, or perhaps document the narrative with markdown widgets in the Kibana dashboard.
@@ -31,6 +31,3 @@ All Contributions should be contained with a folder that describes the content o
## Feedback & Suggestion
Please open an issue if you find a bug, run into issues or would like to provide feedback / suggestions. We will try our best to respond in a timely manner!
-
-
-
diff --git a/README.md b/README.md
index 431e91f1..b8380db3 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@ This is a collection of examples to help you get familiar with the Elastic Stack
You have a few options to get started with the examples:
-- If you want to try them all, you can [download the entire repo ](https://github.com/elastic/examples/archive/master.zip). Or, if you are familiar with Git, you can [clone the repo](https://github.com/elastic/examples.git). Then, simply follow the instructions in the individual README of the examples you're interested in to get started.
+- If you want to try them all, you can [download the entire repo](https://github.com/elastic/examples/archive/master.zip). Or, if you are familiar with Git, you can [clone the repo](https://github.com/elastic/examples.git). Then, simply follow the instructions in the individual README of the examples you're interested in to get started.
- If you are only interested in a specific example or two, you can download the contents of just those examples - follow instructions in the individual READMEs OR you can use some of the [options mentioned here](http://stackoverflow.com/questions/7106012/download-a-single-folder-or-directory-from-a-github-repo).
@@ -35,7 +35,7 @@ Below is the list of examples available in this repo:
#### Exploring Public Datasets
-Examples using the Elastic Stack for analyzing public dataset.
+Examples using the Elastic Stack for analyzing public datasets.
- [DonorsChoose.org donations](https://github.com/elastic/examples/tree/master/Exploring%20Public%20Datasets/donorschoose)
- [NCEDC earthquakes data](https://github.com/elastic/examples/tree/master/Exploring%20Public%20Datasets/earthquakes)
@@ -52,7 +52,7 @@ Examples using the Elastic Stack for analyzing public dataset.
#### Alerting on Elastic Stack
-X-Pack lets you set up watches (or rules) to detect and alert on changes in your Elasticsearch data. Below is a list of examples watches that configured to detect and alert on a few common scenarios:
+X-Pack lets you set up watches (or rules) to detect and alert on changes in your Elasticsearch data. Below is a list of example watches that can be configured to detect and alert on a few common scenarios:
- [High I/O wait on CPU](https://github.com/elastic/examples/tree/master/Alerting/Sample%20Watches/cpu_iowait_hosts)
- [Critical error in logs](https://github.com/elastic/examples/tree/master/Alerting/Sample%20Watches/errors_in_logs)
@@ -71,21 +71,21 @@ X-Pack lets you set up watches (or rules) to detect and alert on changes in your
#### Machine learning
- [Getting started tutorials](https://github.com/elastic/examples/tree/master/Machine%20Learning/Getting%20Started%20Examples)
-- [IT operations recipes](https://github.com/elastic/examples/tree/master/Machine%20Learning/IT%20Operations%20Recipes)
+- [IT operations recipes](https://github.com/elastic/examples/tree/master/Machine%20Learning/IT%20Operations%20Recipes)
- [Security analytics recipes](https://github.com/elastic/examples/tree/master/Machine%20Learning/Security%20Analytics%20Recipes)
- [Business metrics recipes](https://github.com/elastic/examples/tree/master/Machine%20Learning/Business%20Metrics%20Recipes)
#### Search & API Examples
- [Recipe Search with Java](https://github.com/elastic/examples/tree/master/Search/recipe_search_java)
-- [Recipe Search with PHP](https://github.com/elastic/examples/tree/master/Search/recipe_search_php)
+- [Recipe Search with PHP](https://github.com/elastic/examples/tree/master/Search/recipe_search_php)
#### Security Analytics
- [Audit Analysis](https://github.com/elastic/examples/tree/master/Security%20Analytics/auditd_analysis)
-- [CEF with Kafka](https://github.com/elastic/examples/tree/master/Security%20Analytics/cef_with_kafka)
+- [CEF with Kafka](https://github.com/elastic/examples/tree/master/Security%20Analytics/cef_with_kafka)
- [DNS Tunnel Detection](https://github.com/elastic/examples/tree/master/Security%20Analytics/dns_tunnel_detection)
-- [Malware Analysis](https://github.com/elastic/examples/tree/master/Security%20Analytics/malware_analysis)
+- [Malware Analysis](https://github.com/elastic/examples/tree/master/Security%20Analytics/malware_analysis)
- [SSH Analysis](https://github.com/elastic/examples/tree/master/Security%20Analytics/ssh_analysis)
diff --git a/Search/recipe_search_java/src/main/webapp/js/bootstrap-editable.js b/Search/recipe_search_java/src/main/webapp/js/bootstrap-editable.js
index 562cabc9..296abe0b 100644
--- a/Search/recipe_search_java/src/main/webapp/js/bootstrap-editable.js
+++ b/Search/recipe_search_java/src/main/webapp/js/bootstrap-editable.js
@@ -531,7 +531,7 @@ Editableform is linked with one of input types, e.g. 'text', 'select' etc.
Success callback. Called when value successfully sent on server and **response status = 200**.
-        Usefull to work with json response. For example, if your backend response can be {success: true}
+        Useful to work with json response. For example, if your backend response can be {success: true}
or {success: false, msg: "server error"}
you can check it inside this callback.
- If it returns **string** - means error occured and string is shown as error message.
+ If it returns **string** - means error occurred and string is shown as error message.
If it returns **object like** {newValue: <something>}
- it overwrites value, submitted by user.
Otherwise newValue simply rendered into element.