
[MongoDB Atlas] Process data stream #9552

Merged
3 changes: 3 additions & 0 deletions packages/mongodb_atlas/_dev/build/build.yml
@@ -0,0 +1,3 @@
dependencies:
ecs:
Contributor:
Is this required if we are now using the ecs@mappings?

Contributor Author:

As we discussed here, if we remove it then it will give us an error.

reference: [email protected]
86 changes: 86 additions & 0 deletions packages/mongodb_atlas/_dev/build/docs/README.md
@@ -0,0 +1,86 @@
# MongoDB Atlas Integration

## Overview

[MongoDB Atlas](https://www.mongodb.com/atlas) is a multi-cloud developer data platform. At its core is our fully managed cloud database for modern applications. Atlas is the best way to run MongoDB, the leading non-relational database. MongoDB’s document model is the fastest way to innovate because documents map directly to the objects in your code. As a result, they are much easier and more natural to work with. You can store data of any structure and modify your schema at any time as you add new features to your applications.

Use the MongoDB Atlas integration to:

- Collect Mongod Audit logs and Process metrics.
- Create visualizations to monitor, measure and analyze the usage trend and key data, and derive business insights.
- Create alerts to reduce MTTD and MTTR by referencing relevant logs when troubleshooting an issue.

## Data streams

Contributor:

I hope when we merge both logs and metrics, we will merge the documentation appropriately, as this currently only covers metrics.

Contributor Author:

Yes, once any PR is merged we will sync the readme.

The MongoDB Atlas integration collects logs and metrics.

Logs help you keep a record of events that happen on your machine. The `Log` data stream collected by the MongoDB Atlas integration is `mongod_audit`.

Metrics give you insight into the statistics of MongoDB Atlas. The `Metric` data stream collected by the MongoDB Atlas integration is `process`, which lets users monitor and troubleshoot the performance of their MongoDB Atlas instances.

Data streams:
- `mongod_audit`: The auditing facility allows administrators and users to track system activity for deployments with multiple users and applications. Mongod Audit logs capture events related to database operations such as insertions, updates, deletions, user authentication, etc., occurring within the mongod instances.

- `process`: This data stream collects host metrics per process for all the hosts of the specified group, such as CPU usage, memory usage, and the number of I/O operations.

Note:
- Users can view the ingested documents for MongoDB Atlas logs in the `logs-*` index pattern from `Discover`.
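The note above can be sketched as a direct Elasticsearch search request; the index pattern follows the `logs-<package>.<data stream>-*` naming used above, while the request body below is purely illustrative and not the integration's own code:

```python
import json

# Illustrative only: build a search request for recently ingested
# MongoDB Atlas audit documents under the logs-* index pattern.
def build_audit_search(minutes: int = 30) -> tuple:
    """Return (path, body) for a search over recent mongod_audit docs."""
    path = "/logs-mongodb_atlas.mongod_audit-*/_search"
    body = {
        "query": {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
        "sort": [{"@timestamp": "desc"}],
        "size": 10,
    }
    return path, body

path, body = build_audit_search()
print(path)
print(json.dumps(body))
```

The 30-minute default mirrors the historical window mentioned for `mongod_audit` in the setup notes below.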

## Prerequisites

You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it.
You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.

## Setup

### To collect data from MongoDB Atlas, the following parameters from your MongoDB Atlas instance are required:

1. Public Key
2. Private Key
3. GroupId

### Steps to obtain Public Key, Private Key and GroupId:

1. Generate programmatic API Keys with project owner permissions using the instructions in the Atlas [documentation](https://www.mongodb.com/docs/atlas/configure-api-access/#create-an-api-key-for-a-project). Then, copy the public key and private key. These serve the same function as a username and API Key respectively.
2. Enable Database Auditing for the Atlas project for which you want to monitor logs, as described in this Atlas [document](https://www.mongodb.com/docs/atlas/database-auditing/#procedure).
3. You can find your GroupId (ProjectID) in the Atlas UI: go to your project, click Settings, and copy the GroupId (ProjectID). You can also find it programmatically using the Atlas Admin API or Atlas CLI, as described in this Atlas [document](https://www.mongodb.com/docs/atlas/app-services/apps/metadata/#find-a-project-id).
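The public and private keys above authenticate against the Atlas Admin API with HTTP digest authentication, the public key acting as the username and the private key as the password. A minimal sketch using only the Python standard library (the `/groups/{groupId}/processes` endpoint reflects Atlas Admin API v1; credentials shown in the usage comment are placeholders):

```python
import urllib.request

ATLAS_BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

def processes_url(group_id: str) -> str:
    """Endpoint listing the MongoDB processes of an Atlas project (group)."""
    return f"{ATLAS_BASE}/groups/{group_id}/processes"

def make_atlas_opener(public_key: str, private_key: str) -> urllib.request.OpenerDirector:
    """Build an opener that answers the Atlas Admin API's HTTP digest
    challenge, using the public key as username and the private key as
    password."""
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, ATLAS_BASE, public_key, private_key)
    return urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(password_mgr)
    )

# Usage sketch (placeholder credentials and GroupId; performs a real request):
# opener = make_atlas_opener("your-public-key", "your-private-key")
# with opener.open(processes_url("<your-group-id>"), timeout=60) as resp:
#     print(resp.read().decode())
```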

### Enabling the integration in Elastic:

1. In Kibana, go to Management > Integrations.
2. In the "Search for integrations" search bar, type MongoDB Atlas.
3. Click on the "MongoDB Atlas" integration from the search results.
4. Click on the "Add MongoDB Atlas" button to add the integration.
5. Add all the required integration configuration parameters, such as Public Key, Private Key and GroupId.
6. Save the integration.

Note:
- The `mongod_audit` data stream gathers historical data spanning the previous 30 minutes.
- Mongod: Mongod is the primary daemon process for the MongoDB system. It handles data requests, manages data access, and performs background management operations and other core database tasks.

## Troubleshooting

- If you encounter the following error during data ingestion, it is likely because the data collected through this endpoint covers a long time span, so generating a response can take a while. If the `HTTP Client Timeout` parameter is set to a small duration, the request may time out. To avoid this error, adjust the `HTTP Client Timeout` and `Interval` parameters based on the duration of data collection.
```
{
  "error": {
    "message": "failed eval: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
  }
}
```
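The timeout advice above can be sketched as a retry loop that grows the client timeout on each attempt; `fetch` here is a hypothetical callable standing in for the actual HTTP request:

```python
# Sketch only: retry a slow endpoint with progressively larger client
# timeouts, mirroring the advice to raise `HTTP Client Timeout` when a
# collection interval covers a long time span.
def fetch_with_growing_timeout(fetch, timeouts=(60, 120, 300)):
    """Call `fetch(timeout)` with each timeout in turn until one succeeds."""
    last_err = None
    for timeout in timeouts:
        try:
            return fetch(timeout)
        except TimeoutError as err:
            last_err = err  # timed out; retry with a larger budget
    raise last_err
```

Raising the `Interval` alongside the timeout keeps requests from piling up while a slow response is still being read.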

## Logs reference

### Mongod Audit

This is the `mongod_audit` data stream. This data stream allows administrators and users to track system activity for deployments with multiple users and applications.


## Metrics reference

### Process
This data stream collects host metrics per process for all the hosts of the specified group, such as CPU usage, memory usage, and the number of I/O operations.

{{event "process"}}

{{fields "process"}}
6 changes: 6 additions & 0 deletions packages/mongodb_atlas/changelog.yml
@@ -0,0 +1,6 @@
# newer versions go on top
- version: "0.0.1"
changes:
- description: MongoDB Atlas integration package with "process" data stream.
type: enhancement
link: https://github.com/elastic/integrations/pull/1 # FIXME Replace with the real PR link
Contributor:

Fix the PR number

@@ -0,0 +1,2 @@
dynamic_fields:
"event.ingested": ".*"
@@ -0,0 +1,141 @@
{
"events": [
{
"response": {
"ASSERT_MSG": 0,
"ASSERT_REGULAR": 0,
"ASSERT_USER": 0,
"ASSERT_WARNING": 0,
"BACKGROUND_FLUSH_AVG": 0,
"CACHE_DIRTY_BYTES": 0,
"CACHE_BYTES_READ_INTO": 0,
"CACHE_USED_BYTES": 0,
"CACHE_BYTES_WRITTEN_FROM": 0,
"CONNECTIONS": 0,
Contributor:
Let's have some values populated.

"MAX_PROCESS_NORMALIZED_CPU_CHILDREN_KERNEL": 0,
"MAX_PROCESS_CPU_CHILDREN_KERNEL": 0,
"PROCESS_CPU_CHILDREN_KERNEL": 0,
"MAX_PROCESS_CPU_CHILDREN_USER": 0,
"PROCESS_CPU_CHILDREN_USER": 0,
"MAX_PROCESS_CPU_KERNEL": 0,
"PROCESS_CPU_KERNEL": 0,
"PROCESS_NORMALIZED_CPU_CHILDREN_KERNEL": 0,
"MAX_PROCESS_NORMALIZED_CPU_CHILDREN_USER": 0,
"PROCESS_NORMALIZED_CPU_CHILDREN_USER": 0,
"MAX_PROCESS_NORMALIZED_CPU_KERNEL": 0,
"PROCESS_NORMALIZED_CPU_KERNEL": 0,
"MAX_PROCESS_NORMALIZED_CPU_USER": 0,
"PROCESS_NORMALIZED_CPU_USER": 0,
"MAX_PROCESS_CPU_USER": 0,
"PROCESS_CPU_USER": 0,
"CURSORS_TOTAL_OPEN": 0,
"CURSORS_TOTAL_TIMED_OUT": 0,
"DB_DATA_SIZE_TOTAL": 0,
"DB_STORAGE_TOTAL": 0,
"DOCUMENT_METRICS_DELETED": 0,
"DOCUMENT_METRICS_INSERTED": 0,
"DOCUMENT_METRICS_RETURNED": 0,
"DOCUMENT_METRICS_UPDATED": 0,
"FTS_PROCESS_CPU_KERNEL": 0,
"FTS_PROCESS_NORMALIZED_CPU_KERNEL": 0,
"FTS_PROCESS_NORMALIZED_CPU_USER": 0,
"FTS_PROCESS_CPU_USER": 0,
"FTS_DISK_UTILIZATION": 0,
"FTS_MEMORY_MAPPED": 0,
"FTS_MEMORY_RESIDENT": 0,
"FTS_MEMORY_VIRTUAL": 0,
"GLOBAL_ACCESSES_NOT_IN_MEMORY": 0,
"GLOBAL_LOCK_CURRENT_QUEUE_READERS": 0,
"GLOBAL_LOCK_CURRENT_QUEUE_TOTAL": 0,
"GLOBAL_LOCK_CURRENT_QUEUE_WRITERS": 0,
"GLOBAL_PAGE_FAULT_EXCEPTIONS_THROWN": 0,
"EXTRA_INFO_PAGE_FAULTS": 0,
"INDEX_COUNTERS_BTREE_ACCESSES": 0,
"INDEX_COUNTERS_BTREE_HITS": 0,
"INDEX_COUNTERS_BTREE_MISS_RATIO": 0,
"INDEX_COUNTERS_BTREE_MISSES": 0,
"JOURNALING_COMMITS_IN_WRITE_LOCK": 0,
"JOURNALING_MB": 0,
"JOURNALING_WRITE_DATA_FILES_MB": 0,
"MAX_SYSTEM_NORMALIZED_CPU_USER": 0,
"COMPUTED_MEMORY": 0,
"MEMORY_MAPPED": 0,
"MEMORY_RESIDENT": 0,
"MEMORY_VIRTUAL": 0,
"NETWORK_BYTES_IN": 0,
"NETWORK_BYTES_OUT": 0,
"NETWORK_NUM_REQUESTS": 0,
"OPCOUNTER_CMD": 0,
"OPCOUNTER_DELETE": 0,
"OPCOUNTER_GETMORE": 0,
"OPCOUNTER_INSERT": 0,
"OPCOUNTER_QUERY": 0,
"OPCOUNTER_REPL_CMD": 0,
"OPCOUNTER_REPL_DELETE": 0,
"OPCOUNTER_REPL_INSERT": 0,
"OPCOUNTER_REPL_UPDATE": 0,
"OPCOUNTER_UPDATE": 0,
"OP_EXECUTION_TIME_COMMANDS": 0,
"OP_EXECUTION_TIME_READS": 0,
"OP_EXECUTION_TIME_WRITES": 0,
"OPERATIONS_SCAN_AND_ORDER": 0,
"OPLOG_MASTER_LAG_TIME_DIFF": 0,
"OPLOG_MASTER_TIME": 0,
"OPLOG_RATE_GB_PER_HOUR": 0,
"OPLOG_REPLICATION_LAG": 0,
"OPLOG_SLAVE_LAG_MASTER_TIME": 0,
"QUERY_EXECUTOR_SCANNED": 0,
"QUERY_EXECUTOR_SCANNED_OBJECTS": 0,
"QUERY_TARGETING_SCANNED_OBJECTS_PER_RETURNED": 0,
"QUERY_TARGETING_SCANNED_PER_RETURNED": 0,
"RESTARTS_IN_LAST_HOUR": 0,
"MAX_SWAP_USAGE_FREE": 0,
"SWAP_USAGE_FREE": 0,
"SWAP_USAGE_USED": 0,
"MAX_SWAP_USAGE_USED": 0,
"MAX_SYSTEM_CPU_GUEST": 0,
"SYSTEM_CPU_GUEST": 0,
"MAX_SYSTEM_CPU_IOWAIT": 0,
"SYSTEM_CPU_IOWAIT": 0,
"MAX_SYSTEM_CPU_IRQ": 0,
"SYSTEM_CPU_IRQ": 0,
"MAX_SYSTEM_CPU_KERNEL": 0,
"SYSTEM_CPU_KERNEL": 0,
"SYSTEM_CPU_NICE": 0,
"MAX_SYSTEM_CPU_SOFTIRQ": 0,
"SYSTEM_CPU_SOFTIRQ": 0,
"MAX_SYSTEM_CPU_STEAL": 0,
"SYSTEM_CPU_STEAL": 0,
"MAX_SYSTEM_CPU_USER": 0,
"SYSTEM_CPU_USER": 0,
"SYSTEM_MEMORY_AVAILABLE": 0,
"MAX_SYSTEM_MEMORY_AVAILABLE": 0,
"SYSTEM_MEMORY_FREE": 0,
"MAX_SYSTEM_MEMORY_FREE": 0,
"SYSTEM_MEMORY_USED": 0,
"MAX_SYSTEM_MEMORY_USED": 0,
"SYSTEM_NETWORK_IN": 0,
"MAX_SYSTEM_NETWORK_IN": 0,
"MAX_SYSTEM_NETWORK_OUT": 0,
"SYSTEM_NETWORK_OUT": 0,
"MAX_SYSTEM_NORMALIZED_CPU_GUEST": 0,
"SYSTEM_NORMALIZED_CPU_GUEST": 0,
"MAX_SYSTEM_NORMALIZED_CPU_IOWAIT": 0,
"SYSTEM_NORMALIZED_CPU_IOWAIT": 0,
"MAX_SYSTEM_NORMALIZED_CPU_IRQ": 0,
"SYSTEM_NORMALIZED_CPU_IRQ": 0,
"MAX_SYSTEM_NORMALIZED_CPU_KERNEL": 0,
"SYSTEM_NORMALIZED_CPU_KERNEL": 0,
"MAX_SYSTEM_NORMALIZED_CPU_NICE": 0,
"SYSTEM_NORMALIZED_CPU_NICE": 0,
"MAX_SYSTEM_NORMALIZED_CPU_SOFTIRQ": 0,
"SYSTEM_NORMALIZED_CPU_SOFTIRQ": 0,
"MAX_SYSTEM_NORMALIZED_CPU_STEAL": 0,
"SYSTEM_NORMALIZED_CPU_STEAL": 0,
"SYSTEM_NORMALIZED_CPU_USER": 0,
"TICKETS_AVAILABLE_READS": 0,
"TICKETS_AVAILABLE_WRITE": 0
}
}
]
}
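A sketch of how a measurements payload shaped like the sample above could be flattened into one document per event; the lower-cased field naming is illustrative only, not the integration's actual mapping:

```python
# Flatten an Atlas measurements payload into one flat dict per event.
def flatten_events(payload: dict) -> list:
    docs = []
    for event in payload.get("events", []):
        response = event.get("response", {})
        # lower-case measurement names, e.g. CACHE_USED_BYTES -> cache_used_bytes
        docs.append({name.lower(): value for name, value in response.items()})
    return docs

sample = {"events": [{"response": {"CONNECTIONS": 7, "MEMORY_RESIDENT": 0}}]}
print(flatten_events(sample))  # [{'connections': 7, 'memory_resident': 0}]
```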