
Commit

Update README files for heap histo stats
at055612 committed Feb 2, 2018
1 parent d8c5cce commit 86f0e1c
Showing 2 changed files with 70 additions and 50 deletions.
60 changes: 35 additions & 25 deletions source/internal-statistics-sql/README.md
@@ -1,10 +1,8 @@
 # _internal-statistics-sql_ Content Pack
 
-When Stroom is running it generates a number of statistics relating to the state of the application, its storage tier and its underlying hardware.
+When Stroom is running it generates a number of statistics relating to the state of the application, its storage tier and its underlying hardware. In order to be able to query the internal statistics, this content pack is required.
 
-Internal statistics can currently be recorded by two mechanisms; _SQL Statistics_ and _Stroom-Stats_. _SQL Statistics_ comes built in with _Stroom_, however _Stroom-Stats_ is an external service. An internal statistic event will only be recorded if the appropriate entity exists in stroom and the internal statistics service (_SQL Statistics_ or _Stroom-Stats_) for that entity is available. The entities for each service are available as two separate packs and you can import neither, one or both of them depending on your requirements.
-
-This pack enables the recording of _SQL_Statistics_.
+Internal statistics can currently be recorded by two mechanisms: _SQL Statistics_ and _Stroom-Stats_. _SQL Statistics_ comes built in with _Stroom_, whereas _Stroom-Stats_ is an external service. This pack enables the recording and querying of internal statistics with _SQL Statistics_.
 
 ## Contents

@@ -18,18 +16,6 @@ The following represents the folder structure and content that will be imported
 
 Fields: `Feed`, `Node`, `Type`
 
-* **Meta Data-Stream Size** `StatisticStore`
-
-Tracks the volume of data (in bytes) received by Feed.
-
-Fields: `Feed`
-
-* **Meta Data-Streams Received** `StatisticStore`
-
-Tracks counts of the number of streams received by Feed.
-
-Fields: `Feed`
-
 * **CPU** `StatisticStore`
 
 A number of different statistics relating to the CPU load on a node. The `Type` field is used to qualify the CPU metric being recorded. Valid values for `Type` are:
@@ -47,7 +33,28 @@ The following represents the folder structure and content that will be imported
 
 Fields: `Node`, `Type`
 
-* **Memory** `StatisticStore`/`StatisticStore`
+* **EPS** `StatisticStore`
+
+A number of different statistics relating to the events processed per second by a node. The `Type` field is used to qualify the metric being recorded. All values are counts of the number of events processed per second. Valid values for `Type` are:
+
+* `Read` - The number of events read per second.
+* `Write` - The number of events written per second.
+
+Fields: `Node`, `Type`
+
+* **Heap Histogram Bytes** `StatisticStore`
+
+When enabled in the _Jobs_ tab, _Stroom_ will run a `jmap` heap histogram and record each entry as a statistic. The value is the total number of bytes used by all live instances of the class.
+
+Fields: `Node`, `Class`
+
+* **Heap Histogram Instances** `StatisticStore`
+
+When enabled in the _Jobs_ tab, _Stroom_ will run a `jmap` heap histogram and record each entry as a statistic. The value is the number of live instances of the class.
+
+Fields: `Node`, `Class`
+
+* **Memory** `StatisticStore`
 
 A number of different statistics relating to the JVM memory usage on a node. The `Type` field is used to qualify the memory usage metric being recorded. All values are in bytes. Valid values for `Type` are:

@@ -62,26 +69,29 @@ The following represents the folder structure and content that will be imported
 
 Any rollup combinations for this statistic should not include the `Type` field as aggregating events of different types is meaningless.
 
-* **EPS** `StatisticStore`/`StatisticStore`
-
-A number of different statistics relating to the events processed per second by a node. The `Type` field is used to qualify the metric being recorded. All values are counts of the number of events processed per second. Valid values for `Type` are:
-
-* `Read` - The number of events read per second.
-* `Write` - The number of events written per second.
-
-Fields: `Node`, `Type`
+* **Meta Data-Stream Size** `StatisticStore`
+
+Tracks the volume of data (in bytes) received by Feed.
+
+Fields: `Feed`
+
+* **Meta Data-Streams Received** `StatisticStore`
+
+Tracks counts of the number of streams received by Feed.
+
+Fields: `Feed`
 
-* **PipelineStreamProcessor** `StatisticStore`/`StatisticStore`
+* **PipelineStreamProcessor** `StatisticStore`
 
 Fields: `Feed`, `Pipeline`
 
-* **Stream Task Queue Size** `StatisticStore`/`StatisticStore`
+* **Stream Task Queue Size** `StatisticStore`
 
 The number of items on the stream task queue.
 
 Fields: _None_
 
-* **Volumes** `StatisticStore`/`StatisticStore`
+* **Volumes** `StatisticStore`
 
 A number of different statistics relating to the state of the volumes on Stroom. The `Type` field is used to qualify the metric being recorded. All values are in bytes. Valid values for `Type` are:

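The Heap Histogram Bytes and Heap Histogram Instances stores described in this diff record, per class, the live-instance count and total byte usage reported by a `jmap` heap histogram. As a rough illustration of the data involved, here is a minimal Python sketch that parses `jmap -histo`-style output into (class, instances, bytes) entries. The sample text and the parser are illustrative only and not part of Stroom; the exact column layout of `jmap -histo` output can vary between JDK versions.

```python
# Illustrative only: parse `jmap -histo`-style output into per-class entries,
# i.e. the (instances, bytes) pairs that the heap histogram statistics record.
# The sample text is made up; real jmap output can vary between JDK versions.

SAMPLE = """\
 num     #instances         #bytes  class name
----------------------------------------------
   1:         11224        1972128  [C
   2:          2892         698736  [B
   3:         10988         263712  java.lang.String
"""

def parse_histogram(text):
    """Return a list of (class_name, instances, total_bytes) tuples."""
    entries = []
    for line in text.splitlines():
        parts = line.split()
        # Data rows look like: "<rank>: <instances> <bytes> <class name>";
        # the header and separator rows do not match this shape.
        if len(parts) >= 4 and parts[0].endswith(":"):
            entries.append((" ".join(parts[3:]), int(parts[1]), int(parts[2])))
    return entries

for cls, instances, total_bytes in parse_histogram(SAMPLE):
    print(cls, instances, total_bytes)
```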
60 changes: 35 additions & 25 deletions source/internal-statistics-stroom-stats/README.md
@@ -1,10 +1,8 @@
 # _internal-statistics-stroom-stats_ Content Pack
 
-When Stroom is running it generates a number of statistics relating to the state of the application, its storage tier and its underlying hardware.
+When Stroom is running it generates a number of statistics relating to the state of the application, its storage tier and its underlying hardware. In order to be able to query the internal statistics, this content pack is required.
 
-Internal statistics can currently be recorded by two mechanisms; _SQL Statistics_ and _Stroom-Stats_. _SQL Statistics_ comes built in with _Stroom_, however _Stroom-Stats_ is an external service. An internal statistic event will only be recorded if the appropriate entity exists in stroom and the internal statistics service (_SQL Statistics_ or _Stroom-Stats_) for that entity is available. The entities for each service are available as two separate packs and you can import neither, one or both of them depending on your requirements.
-
-This pack enables the recording of _Stroom-Stats_.
+Internal statistics can currently be recorded by two mechanisms: _SQL Statistics_ and _Stroom-Stats_. _SQL Statistics_ comes built in with _Stroom_, whereas _Stroom-Stats_ is an external service. This pack enables the recording and querying of internal statistics with _Stroom-Stats_.
 
 ## Contents

@@ -18,18 +16,6 @@ The following represents the folder structure and content that will be imported
 
 Fields: `Feed`, `Node`, `Type`
 
-* **Meta Data-Stream Size** `StroomStatsStore`
-
-Tracks the volume of data (in bytes) received by Feed.
-
-Fields: `Feed`
-
-* **Meta Data-Streams Received** `StroomStatsStore`
-
-Tracks counts of the number of streams received by Feed.
-
-Fields: `Feed`
-
 * **CPU** `StroomStatsStore`
 
 A number of different statistics relating to the CPU load on a node. The `Type` field is used to qualify the CPU metric being recorded. Valid values for `Type` are:
@@ -47,7 +33,28 @@ The following represents the folder structure and content that will be imported
 
 Fields: `Node`, `Type`
 
-* **Memory** `StatisticStore`
+* **EPS** `StroomStatsStore`
+
+A number of different statistics relating to the events processed per second by a node. The `Type` field is used to qualify the metric being recorded. All values are counts of the number of events processed per second. Valid values for `Type` are:
+
+* `Read` - The number of events read per second.
+* `Write` - The number of events written per second.
+
+Fields: `Node`, `Type`
+
+* **Heap Histogram Bytes** `StroomStatsStore`
+
+When enabled in the _Jobs_ tab, _Stroom_ will run a `jmap` heap histogram and record each entry as a statistic. The value is the total number of bytes used by all live instances of the class.
+
+Fields: `Node`, `Class`
+
+* **Heap Histogram Instances** `StroomStatsStore`
+
+When enabled in the _Jobs_ tab, _Stroom_ will run a `jmap` heap histogram and record each entry as a statistic. The value is the number of live instances of the class.
+
+Fields: `Node`, `Class`
+
+* **Memory** `StroomStatsStore`
 
 A number of different statistics relating to the JVM memory usage on a node. The `Type` field is used to qualify the memory usage metric being recorded. All values are in bytes. Valid values for `Type` are:

@@ -62,26 +69,29 @@ The following represents the folder structure and content that will be imported
 
 Any rollup combinations for this statistic should not include the `Type` field as aggregating events of different types is meaningless.
 
-* **EPS** `StatisticStore`
-
-A number of different statistics relating to the events processed per second by a node. The `Type` field is used to qualify the metric being recorded. All values are counts of the number of events processed per second. Valid values for `Type` are:
-
-* `Read` - The number of events read per second.
-* `Write` - The number of events written per second.
-
-Fields: `Node`, `Type`
+* **Meta Data-Stream Size** `StroomStatsStore`
+
+Tracks the volume of data (in bytes) received by Feed.
+
+Fields: `Feed`
+
+* **Meta Data-Streams Received** `StroomStatsStore`
+
+Tracks counts of the number of streams received by Feed.
+
+Fields: `Feed`
 
-* **PipelineStreamProcessor** `StatisticStore`
+* **PipelineStreamProcessor** `StroomStatsStore`
 
 Fields: `Feed`, `Pipeline`
 
-* **Stream Task Queue Size** `StatisticStore`
+* **Stream Task Queue Size** `StroomStatsStore`
 
 The number of items on the stream task queue.
 
 Fields: _None_
 
-* **Volumes** `StatisticStore`
+* **Volumes** `StroomStatsStore`
 
 A number of different statistics relating to the state of the volumes on Stroom. The `Type` field is used to qualify the metric being recorded. All values are in bytes. Valid values for `Type` are:

