DOCS-2530 Lint Integrations section (part 8) (#10964)
Co-authored-by: Kaylyn <[email protected]>
ruthnaebeck and Kaylyn authored Jan 4, 2022
1 parent d99d5c0 commit f06afb9
Showing 9 changed files with 57 additions and 41 deletions.
3 changes: 2 additions & 1 deletion lighttpd/README.md
@@ -105,7 +105,8 @@ Need help? Contact [Datadog support][10].

## Further Reading

To get a better idea of how (or why) to monitor Lighttpd web server metrics with Datadog, check out our [series of blog posts][11] about it.
- [Monitor Lighttpd web server metrics with Datadog][11].


[1]: https://raw.githubusercontent.com/DataDog/integrations-core/master/lighttpd/images/lighttpddashboard.png
[2]: https://app.datadoghq.com/account/settings#agent
12 changes: 6 additions & 6 deletions linkerd/README.md
@@ -67,32 +67,32 @@ Collecting logs is disabled by default in the Datadog Agent. To enable it, see [
| -------------- | ---------------------------------------------------- |
| `<LOG_CONFIG>` | `{"source": "linkerd", "service": "<SERVICE_NAME>"}` |

To increase the verbosity of the data plane logs, see [the official Linkerd documentation][9].
To increase the verbosity of the data plane logs, see [Modifying the Proxy Log Level][9].

<!-- xxz tab xxx -->
<!-- xxz tabs xxx -->

### Validation

[Run the Agent's status subcommand][10] and look for `linkerd` under the Checks section.
Run the [Agent's status subcommand][10] and look for `linkerd` under the Checks section.

## Data Collected

### Metrics

See [metadata.csv][11] for a list of metrics provided by this integration.

For linkerd v1, see [finagle metrics docs][12] for a detailed description of some of the available metrics and [this gist][13] for an example of metrics exposed by linkerd.
For Linkerd v1, see the [finagle metrics guide][12] for metric descriptions and [this gist][13] for an example of metrics exposed by Linkerd.

Attention: Depending on your linkerd configuration, some metrics might not be exposed by linkerd.
**Note**: Depending on your Linkerd configuration, some metrics might not be exposed by Linkerd.

To list the metrics exposed by your current configuration, run
To list the metrics exposed by your current configuration, run:

```bash
curl <linkerd_prometheus_endpoint>
```

Where `linkerd_prometheus_endpoint` is the linkerd prometheus endpoint (you should use the same value as the `prometheus_url` config key in your `linkerd.yaml`)
where `<linkerd_prometheus_endpoint>` is the Linkerd Prometheus endpoint (use the same value as the `prometheus_url` config key in your `linkerd.yaml`).
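The endpoint returns metrics in the Prometheus text exposition format. As an illustrative sketch (the sample payload below is hypothetical, not real Linkerd output), you can extract just the metric names like this:

```shell
# Write a small hypothetical sample of Prometheus text output.
cat <<'EOF' > /tmp/linkerd_metrics.txt
# HELP request_total Total count of requests.
# TYPE request_total counter
request_total{direction="inbound"} 12
request_total{direction="outbound"} 7
process_cpu_seconds_total 1.5
EOF

# Drop HELP/TYPE comment lines, strip labels and values, de-duplicate.
grep -v '^#' /tmp/linkerd_metrics.txt | sed 's/[{ ].*//' | sort -u
```

Against a live cluster, pipe `curl <linkerd_prometheus_endpoint>` into the same filter instead of using a sample file.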

If you need to use a metric that is not provided by default, you can add an entry to `linkerd.yaml`.
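As a sketch of what such an entry can look like, assuming the `metrics` list format used by the Agent's Prometheus-based checks (the metric names below are hypothetical):

```yaml
instances:
  - prometheus_url: http://<LINKERD_PROMETHEUS_ENDPOINT>
    metrics:
      # Collect a metric under its exposed name (hypothetical name).
      - request_total
      # Collect a metric and rename it on ingestion (hypothetical mapping).
      - response_latency_ms: linkerd.response_latency
```

Check the sample configuration file shipped with your Agent version for the exact keys supported.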

10 changes: 5 additions & 5 deletions mapr/README.md
@@ -16,7 +16,7 @@ The MapR check is included in the [Datadog Agent][2] package but requires additi

- [MapR monitoring][3] is running correctly.
- You have an available [MapR user][4] (with name, password, UID, and GID) with the 'consume' permission on the `/var/mapr/mapr.monitoring/metricstreams` stream. This may be an already existing user or a newly created user.
- **On a non-secure cluster**: Follow [this guide][5] so that the `dd-agent` user can impersonate this MapR user.
- **On a non-secure cluster**: Follow [Configuring Impersonation without Cluster Security][5] so that the `dd-agent` user can impersonate this MapR user.
- **On a secure cluster**: Generate a [long-lived service ticket][6] for this user that is readable by the `dd-agent` user.

Installation steps for each node:
@@ -79,11 +79,11 @@ Then update the `/opt/mapr/fluentd/fluentd-<VERSION>/etc/fluentd/fluentd.conf` w
</store>
```

Refer to [fluent_datadog_plugin][9] documentation for more details about the options you can use.
See the [fluent_datadog_plugin][9] documentation for more details about the options you can use.

### Validation

[Run the Agent's status subcommand][10] and look for `mapr` under the Checks section.
Run the [Agent's status subcommand][10] and look for `mapr` under the Checks section.

## Data Collected

@@ -103,11 +103,11 @@ See [service_checks.json][12] for a list of service checks provided by this inte

- **The Agent is on a crash loop after configuring the MapR integration**

There have been a few cases where the C library within _mapr-streams-python_ segfaults because of permissions issues. Make sure the `dd-agent` user has read permission on the ticket file, that the `dd-agent` user is able to run maprcli commands when the MAPR_TICKETFILE_LOCATION environment variable points to the ticket.
There have been a few cases where the C library within _mapr-streams-python_ segfaults because of permissions issues. Ensure the `dd-agent` user has read permission on the ticket file, and that it can run `maprcli` commands when the `MAPR_TICKETFILE_LOCATION` environment variable points to the ticket.
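  A quick way to check both conditions from a shell (the ticket path below is a placeholder for your actual service ticket location):

  ```shell
  # Verify the dd-agent user can read the service ticket.
  sudo -u dd-agent test -r /path/to/ticket && echo "ticket readable"
  # Verify maprcli works for dd-agent with that ticket.
  sudo -u dd-agent env MAPR_TICKETFILE_LOCATION=/path/to/ticket maprcli node list
  ```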

- **The integration seems to work correctly but doesn't send any metric**.

Make sure to let the Agent run for at least a couple of minutes, as the integration pulls data from a topic and MapR needs to push data into that topic.
Make sure to let the Agent run for at least a couple of minutes, because the integration pulls data from a topic and MapR needs to push data into that topic.
If that doesn't help but running the Agent manually with `sudo` shows data, it is a permissions problem. Double-check everything: the `dd-agent` Linux user should be able to use a locally stored ticket, allowing it to run queries against MapR as user X (which may or may not be `dd-agent` itself). Additionally, user X needs the `consume` permission on the `/var/mapr/mapr.monitoring/metricstreams` stream.

- **You see the message `confluent_kafka was not imported correctly ...`**
45 changes: 30 additions & 15 deletions mapreduce/README.md
@@ -28,19 +28,6 @@ To configure this check for an Agent running on a host:

2. [Restart the Agent][5].

<!-- xxz tab xxx -->
<!-- xxx tab "Containerized" xxx -->

#### Containerized

For containerized environments, see the [Autodiscovery Integration Templates][6] for guidance on applying the parameters below.

| Parameter | Value |
| -------------------- | --------------------------------------------------------------------------------------------- |
| `<INTEGRATION_NAME>` | `mapreduce` |
| `<INIT_CONFIG>` | blank or `{}` |
| `<INSTANCE_CONFIG>` | `{"resourcemanager_uri": "https://%%host%%:8088", "cluster_name":"<MAPREDUCE_CLUSTER_NAME>"}` |

##### Log collection

<!-- partial
@@ -72,14 +59,41 @@ partial -->

3. [Restart the Agent][5].

See [Datadog's documentation][7] for additional information on how to configure the Agent for log collection in Docker environments.
<!-- xxz tab xxx -->
<!-- xxx tab "Containerized" xxx -->

#### Containerized

For containerized environments, see the [Autodiscovery Integration Templates][6] for guidance on applying the parameters below.

| Parameter | Value |
| -------------------- | --------------------------------------------------------------------------------------------- |
| `<INTEGRATION_NAME>` | `mapreduce` |
| `<INIT_CONFIG>` | blank or `{}` |
| `<INSTANCE_CONFIG>` | `{"resourcemanager_uri": "https://%%host%%:8088", "cluster_name":"<MAPREDUCE_CLUSTER_NAME>"}` |

##### Log collection

<!-- partial
{{< site-region region="us3" >}}
**Log collection is not supported for the Datadog {{< region-param key="dd_site_name" >}} site**.
{{< /site-region >}}
partial -->

Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Docker Log Collection][7].

Then, set [log integrations][16] as Docker labels:

```yaml
LABEL "com.datadoghq.ad.logs"='[{"source": "mapreduce", "service": "<SERVICE_NAME>"}]'
```
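For reference, the same label can also be attached at container run time rather than baked into the image; a sketch with a placeholder image name:

```shell
docker run -l com.datadoghq.ad.logs='[{"source": "mapreduce", "service": "<SERVICE_NAME>"}]' <YOUR_IMAGE>
```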

<!-- xxz tab xxx -->
<!-- xxz tabs xxx -->

### Validation

[Run the Agent's status subcommand][8] and look for `mapreduce` under the Checks section.
Run the [Agent's status subcommand][8] and look for `mapreduce` under the Checks section.

## Data Collected

@@ -121,3 +135,4 @@ Need help? Contact [Datadog support][11].
[13]: https://www.datadoghq.com/blog/monitor-hadoop-metrics
[14]: https://www.datadoghq.com/blog/collecting-hadoop-metrics
[15]: https://www.datadoghq.com/blog/monitor-hadoop-metrics-datadog
[16]: https://docs.datadoghq.com/agent/docker/log/?tab=containerinstallation#log-integrations
6 changes: 3 additions & 3 deletions marathon/README.md
@@ -11,7 +11,7 @@ The Agent's Marathon check lets you:

### Installation

The Marathon check is included in the [Datadog Agent][1] package, so you don't need to install anything else on your Marathon master.
The Marathon check is included in the [Datadog Agent][1] package. No additional installation is needed on your server.

### Configuration

@@ -44,7 +44,7 @@ To configure this check for an Agent running on a host:
password: "<PASSWORD>"
```
The function of `username` and `password` depends on whether or not you configure `acs_url`; If you do, the Agent uses them to request an authentication token from ACS, which it then uses to authenticate to the Marathon API. Otherwise, the Agent uses `username` and `password` to directly authenticate to the Marathon API.
The function of `username` and `password` depends on whether or not you configure `acs_url`. If you do, the Agent uses them to request an authentication token from ACS, which it then uses to authenticate to the Marathon API. Otherwise, the Agent uses `username` and `password` to directly authenticate to the Marathon API.

2. [Restart the Agent][4].
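To illustrate the two authentication modes, a sketch of an `instances` entry for each (the URLs, port, and key layout are assumptions based on the sample configuration, not verified values):

```yaml
instances:
  # With acs_url: the Agent exchanges username/password for an ACS token,
  # then authenticates to the Marathon API with that token.
  - url: "https://<MARATHON_URL>:8443"
    acs_url: "https://<ACS_URL>"
    username: "<USERNAME>"
    password: "<PASSWORD>"

  # Without acs_url: username/password are sent directly to the Marathon API.
  - url: "https://<MARATHON_URL>:8443"
    username: "<USERNAME>"
    password: "<PASSWORD>"
```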

@@ -136,7 +136,7 @@ partial -->

_Available for Agent versions >6.0_

Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][6].
Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][6].

| Parameter | Value |
| -------------- | ----------------------------------------------------- |
6 changes: 3 additions & 3 deletions marklogic/README.md
@@ -25,7 +25,7 @@ If you plan to use the `enable_health_service_checks` configuration, give the Da
curl -X POST --anyauth --user <ADMIN_USER>:<ADMIN_PASSWORD> -i -H "Content-Type: application/json" -d '{"user-name": "<USER>", "password": "<PASSWORD>", "roles": {"role": "manage-user"}}' http://<HOSTNAME>:8002/manage/v2/users
```
Use the correct `<ADMIN_USER>` and `<ADMIN_PASSWORD>`, and replace `<USER>` and `<PASSWORD>` with the username and password that the Datadog Agent uses.
For more information about the endpoint, see the [MarkLogic documentation][6].
For more details, see the MarkLogic documentation: [POST /manage/v2/users][6].

2. To verify the user was created with enough permissions:
```shell
@@ -53,7 +53,7 @@ If you plan to use the `enable_health_service_checks` configuration, give the Da
("http://marklogic.com/dev_modules"))

```
For more information about the query, see the [MarkLogic documentation][7].
For more details, see the MarkLogic documentation: [sec:create-user][7].

4. To verify that the user was created with enough permissions, use `<USER>` and `<PASSWORD>` to authenticate at `http://<HOSTNAME>:8002` (default port).
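   For example, a sketch using the same any-auth style as the earlier curl commands:

   ```shell
   curl --anyauth --user <USER>:<PASSWORD> -i http://<HOSTNAME>:8002/manage/v2
   ```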

@@ -99,7 +99,7 @@ _Available for Agent versions >6.0_
### Validation
[Run the Agent's status subcommand][10] and look for `marklogic` under the Checks section.
Run the [Agent's status subcommand][10] and look for `marklogic` under the Checks section.

## Data Collected

4 changes: 2 additions & 2 deletions mcache/README.md
@@ -42,7 +42,7 @@ To configure this check for an Agent running on a host:
Datadog APM integrates with Memcache to see the traces across your distributed system. Trace collection is enabled by default in the Datadog Agent v6+. To start collecting traces:
1. [Enable trace collection in Datadog][5].
2. [Instrument your application that makes requests to Memchache][6].
2. [Instrument your application that makes requests to Memcache][6].
<!-- xxz tab xxx -->
<!-- xxx tab "Containerized" xxx -->
@@ -102,7 +102,7 @@ _Available for Agent versions >6.0_

### Validation

[Run the Agent's `status` subcommand][10] and look for `mcache` under the Checks section.
Run the [Agent's `status` subcommand][10] and look for `mcache` under the Checks section.

## Data Collected

6 changes: 3 additions & 3 deletions mesos_master/README.md
@@ -1,6 +1,6 @@
# Mesos_master Check

This check collects metrics for Mesos masters. If you are looking for the metrics for Mesos slave, see the [Mesos Slave Integration documentation][1].
This check collects metrics for Mesos masters. For Mesos slave metrics, see the [Mesos Slave integration][1].

![Mesos master Dashboard][2]

@@ -36,7 +36,7 @@ Substitute your Datadog API key and Mesos Master's API URL into the command abov

### Configuration

If you passed the correct Master URL when starting datadog-agent, the Agent is already using a default `mesos_master.d/conf.yaml` to collect metrics from your masters; you don't need to configure anything else. See the [sample mesos_master.d/conf.yaml][3] for all available configuration options.
If you passed the correct Master URL when starting datadog-agent, the Agent is already using a default `mesos_master.d/conf.yaml` to collect metrics from your masters. See the [sample mesos_master.d/conf.yaml][3] for all available configuration options.

If your masters' API uses a self-signed certificate, set `disable_ssl_validation: true` in `mesos_master.d/conf.yaml`.
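As a sketch, the relevant `mesos_master.d/conf.yaml` fragment (the `url` key and port here are assumptions based on a typical Mesos master setup, not verified values):

```yaml
instances:
  - url: "https://<MESOS_MASTER_HOST>:5050"
    # Only needed when the master's API uses a self-signed certificate.
    disable_ssl_validation: true
```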

@@ -75,7 +75,7 @@ partial -->

3. [Restart the Agent][4].

See [Datadog's documentation][5] for additional information on how to configure the Agent for log collection in Kubernetes environments.
To enable logs for Kubernetes environments, see [Kubernetes Log Collection][5].

### Validation

6 changes: 3 additions & 3 deletions mesos_slave/README.md
@@ -18,7 +18,7 @@ This check also creates a service check for every executor task.

### Installation

Follow the instructions in our [blog post][2] to install the Datadog Agent on each Mesos agent node via the DC/OS web UI.
See [Installing Datadog on Mesos with DC/OS][2] to install the Datadog Agent on each Mesos agent node with the DC/OS web UI.

### Configuration

@@ -32,7 +32,7 @@ Follow the instructions in our [blog post][2] to install the Datadog Agent on ea

#### Marathon

If you are not using DC/OS, then use either the Marathon web UI or post to the API URL the following JSON to define the Datadog Agent application. You must change `<YOUR_DATADOG_API_KEY>` with your API Key and the number of instances with the number of slave nodes on your cluster. You may also need to update the docker image used to more recent tag. You can find the latest [on Docker Hub][3]
If you are not using DC/OS, use the Marathon web UI or post the following JSON to the API URL to define the Datadog Agent application. Replace `<YOUR_DATADOG_API_KEY>` with your API key and set the number of instances to the number of slave nodes in your cluster. You may also need to update the Docker image to a more recent tag; you can find the latest [on Docker Hub][3].

```json
{
@@ -137,7 +137,7 @@ partial -->

3. [Restart the Agent][5].

See [Datadog's documentation][6] for additional information on how to configure the Agent for log collection in Kubernetes environments.
To enable logs for Kubernetes environments, see [Kubernetes Log Collection][6].

### Validation

