diff --git a/integrations/README.md b/integrations/README.md index 8e2c9b968c4e9..ea3cb6b63c430 100644 --- a/integrations/README.md +++ b/integrations/README.md @@ -12,79 +12,11 @@ also improve the protection of your workloads, applications, and data. Security Open Cybersecurity Schema Framework (OCSF), an open standard. With OCSF support, the service normalizes and combines security data from AWS and a broad range of enterprise security data sources. -#### Development guide +Refer to these documents for more information about this integration: -A demo of the integration can be started using the content of this folder and Docker. +* [User Guide](./amazon-security-lake/README.md). +* [Developer Guide](./amazon-security-lake/CONTRIBUTING.md). -```console -docker compose -f ./docker/amazon-security-lake.yml up -d -``` - -This docker compose project will bring a _wazuh-indexer_ node, a _wazuh-dashboard_ node, -a _logstash_ node, our event generator and an AWS Lambda Python container. On the one hand, the event generator will push events -constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events -generator](./tools/events-generator/README.md) documentation for customization options). -On the other hand, logstash will constantly query for new data and deliver it to output configured in the -pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`. - -The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers -the data to an S3 bucket, from which the data is processed using a Lambda function, to finally -be sent to the Amazon Security Lake bucket in Parquet format. - - - -Attach a terminal to the container and start the integration by starting logstash, as follows: - -```console -/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --path.settings /etc/logstash -``` - -After 5 minutes, the first batch of data will show up in http://localhost:9444/ui/wazuh-indexer-aux-bucket. -You'll need to invoke the Lambda function manually, selecting the log file to process. - -```bash -bash amazon-security-lake/src/invoke-lambda.sh -``` - -Processed data will be uploaded to http://localhost:9444/ui/wazuh-indexer-amazon-security-lake-bucket. Click on any file to download it, -and check it's content using `parquet-tools`. Just make sure of installing the virtual environment first, through [requirements.txt](./amazon-security-lake/). - -```bash -parquet-tools show -``` - -Bucket names can be configured editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file. - -For development or debugging purposes, you may want to enable hot-reload, test or debug on these files, -by using the `--config.reload.automatic`, `--config.test_and_exit` or `--debug` flags, respectively. - -For production usage, follow the instructions in our documentation page about this matter. -(_when-its-done_) - -As a last note, we would like to point out that we also use this Docker environment for development. - -#### Deployment guide - -- Create one S3 bucket to store the raw events, for example: `wazuh-security-lake-integration` -- Create a new AWS Lambda function - - Create an IAM role with access to the S3 bucket created above. - - Select Python 3.12 as the runtime - - Configure the runtime to have 512 MB of memory and 30 seconds timeout - - Configure an S3 trigger so every created object in the bucket with `.txt` extension invokes the Lambda. 
- - Run `make` to generate a zip deployment package, or create it manually as per the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-create-dependencies). - - Upload the zip package to the bucket. Then, upload it to the Lambda from the S3 as per these instructions: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip -- Create a Custom Source within Security Lake for the Wazuh Parquet files as per the following guide: https://docs.aws.amazon.com/security-lake/latest/userguide/custom-sources.html -- Set the **AWS account ID** for the Custom Source **AWS account with permission to write data**. - - - - -The instructions on this section have been based on the following AWS tutorials and documentation. - -- [Tutorial: Using an Amazon S3 trigger to create thumbnail images](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html) -- [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) -- [Working with .zip file archives for Python Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html) -- [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) ### Other integrations diff --git a/integrations/amazon-security-lake/CONTRIBUTING.md b/integrations/amazon-security-lake/CONTRIBUTING.md new file mode 100644 index 0000000000000..7675aa03c7961 --- /dev/null +++ b/integrations/amazon-security-lake/CONTRIBUTING.md @@ -0,0 +1,59 @@ +# Wazuh to Amazon Security Lake Integration Development Guide + +## Deployment guide on Docker + +A demo of the integration can be started using the content of this folder and Docker. Open a terminal in the `wazuh-indexer/integrations` folder and start the environment. + +```console +docker compose -f ./docker/amazon-security-lake.yml up -d +``` + +This Docker Compose project will bring up these services: + +- a _wazuh-indexer_ node +- a _wazuh-dashboard_ node +- a _logstash_ node +- our [events generator](./tools/events-generator/README.md) +- an AWS Lambda Python container. + +On the one hand, the event generator will push events constantly to the indexer, to the `wazuh-alerts-4.x-sample` index by default (refer to the [events generator](./tools/events-generator/README.md) documentation for customization options). On the other hand, Logstash will query for new data and deliver it to output configured in the pipeline, which can be one of `indexer-to-s3` or `indexer-to-file`. + +The `indexer-to-s3` pipeline is the method used by the integration. This pipeline delivers the data to an S3 bucket, from which the data is processed using a Lambda function, to finally be sent to the Amazon Security Lake bucket in Parquet format. + + +Attach a terminal to the container and start the integration by starting Logstash, as follows: + +```console +/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/indexer-to-s3.conf --path.settings /etc/logstash +``` + +After 5 minutes, the first batch of data will show up in http://localhost:9444/ui/wazuh-aws-security-lake-raw. You'll need to invoke the Lambda function manually, selecting the log file to process. + +```bash +bash amazon-security-lake/src/invoke-lambda.sh +``` + +Processed data will be uploaded to http://localhost:9444/ui/wazuh-aws-security-lake-parquet. Click on any file to download it, and check it's content using `parquet-tools`. 
Just make sure of installing the virtual environment first, through [requirements.txt](./amazon-security-lake/). + +```bash +parquet-tools show +``` + +If the `S3_BUCKET_OCSF` variable is set in the container running the AWS Lambda function, intermediate data in OCSF and JSON format will be written to a dedicated bucket. This is enabled by default, writing to the `wazuh-aws-security-lake-ocsf` bucket. Bucket names and additional environment variables can be configured editing the [amazon-security-lake.yml](./docker/amazon-security-lake.yml) file. + +For development or debugging purposes, you may want to enable hot-reload, test or debug on these files, by using the `--config.reload.automatic`, `--config.test_and_exit` or `--debug` flags, respectively. + +For production usage, follow the instructions in our documentation page about this matter. +See [README.md](README.md). The instructions on that section have been based on the following AWS tutorials and documentation. + +- [Tutorial: Using an Amazon S3 trigger to create thumbnail images](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html) +- [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) +- [Working with .zip file archives for Python Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html) +- [Best practices for working with AWS Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) + +## Makefile + +**Docker is required**. + +The [Makefile](./Makefile) in this folder automates the generation of a zip deployment package containing the source code and the required dependencies for the AWS Lambda function. Simply run `make` and it will generate the `wazuh_to_amazon_security_lake.zip` file. The main target runs a Docker container to install the Python3 dependencies locally, and zips the source code and the dependencies together. + diff --git a/integrations/amazon-security-lake/Makefile b/integrations/amazon-security-lake/Makefile index 9a6dd674b37e7..d1c11a0b01585 100644 --- a/integrations/amazon-security-lake/Makefile +++ b/integrations/amazon-security-lake/Makefile @@ -25,4 +25,6 @@ $(TARGET): clean: @rm -rf $(TARGET) - @py3clean . \ No newline at end of file + docker run -v `pwd`:/src -w /src \ + python:3.12 \ + py3clean . \ No newline at end of file diff --git a/integrations/amazon-security-lake/README.md b/integrations/amazon-security-lake/README.md index 1dbe1dd4ebb23..7af236b61b6bb 100644 --- a/integrations/amazon-security-lake/README.md +++ b/integrations/amazon-security-lake/README.md @@ -1,62 +1,281 @@ -### Amazon Security Lake integration - Logstash - -Follow the [Wazuh indexer integration using Logstash](https://documentation.wazuh.com/current/integrations-guide/opensearch/index.html#wazuh-indexer-integration-using-logstash) -to install `Logstash` and the `logstash-input-opensearch` plugin. 
- -> RPM: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html#_yum -```markdown - -# Install plugins (logstash-output-s3 is already installed) -sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch - -# Copy certificates -mkdir -p /etc/logstash/wi-certs/ -cp /etc/wazuh-indexer/certs/root-ca.pem /etc/logstash/wi-certs/root-ca.pem -chown logstash:logstash /etc/logstash/wi-certs/root-ca.pem - -# Configuring new indexes -SKIP - -# Configuring a pipeline - -# Keystore -## Prepare keystore -set +o history -echo 'LOGSTASH_KEYSTORE_PASS="123456"'| sudo tee /etc/sysconfig/logstash -export LOGSTASH_KEYSTORE_PASS=123456 -set -o history -sudo chown root /etc/sysconfig/logstash -sudo chmod 600 /etc/sysconfig/logstash -sudo systemctl start logstash - -## Create keystore -sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create - -## Store Wazuh indexer credentials (admin user) -sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_USERNAME -sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_PASSWORD - -# Pipeline -sudo touch /etc/logstash/conf.d/wazuh-s3.conf -# Replace with cp /vagrant/wazuh-s3.conf /etc/logstash/conf.d/wazuh-s3.conf -sudo systemctl stop logstash -sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-s3.conf --path.settings /etc/logstash/ - |- Success: `[INFO ][logstash.agent ] Pipelines running ...` - -# Start Logstash -sudo systemctl enable logstash -sudo systemctl start logstash -``` +# Wazuh to Amazon Security Lake Integration Guide +## Table of Contents -### Building the Docker image +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Integration guide](#integration-guide) + - [Configure Amazon Security Lake](#configure-amazon-security-lake) + - [Create an AWS S3 bucket](#create-an-s3-bucket-to-store-events) + - [Configure the AWS Lambda function](#create-an-aws-lambda-function) + - [Validation](#validation) + - [Install and configure Logstash](#install-and-configure-logstash) +- [OCSF mapping](#ocsf-mapping) +- [Troubleshooting](#troubleshooting) +- [Support](#support) -```console -docker build -t wazuh/indexer-security-lake-integration:latest . --progress=plain -``` +## Introduction + +### Amazon Security Lake + +Amazon Security Lake automatically centralizes security data from AWS environments, SaaS providers, on premises, and cloud sources into a purpose-built data lake stored in your account. With Security Lake, you can get a more complete understanding of your security data across your entire organization. You can also improve the protection of your workloads, applications, and data. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard. With OCSF support, the service normalizes and combines security data from AWS and a broad range of enterprise security data sources. + +### Open Cybersecurity Schema Framework + +The Open Cybersecurity Schema Framework is an open-source project, delivering an extensible framework for developing schemas, along with a vendor-agnostic core security schema. Vendors and other data producers can adopt and extend the schema for their specific domains. Data engineers can map differing schemas to help security teams simplify data ingestion and normalization, so that data scientists and analysts can work with a common language for threat detection and investigation. 
The goal is to provide an open standard, adopted in any environment, application, or solution, while complementing existing security standards and processes.
+
+### Wazuh Security Events
+
+Wazuh uses rules to monitor the events and logs in your network to detect security threats. When the events and logs meet the test criteria defined in the rules, an alert is created to show that a security attack or policy breach is suspected.
+
+**References**:
+
+- https://documentation.wazuh.com/current/user-manual/ruleset/getting-started.html#github-repository
+- https://github.com/wazuh/wazuh/tree/master/ruleset/rules
+- https://github.com/wazuh/wazuh/blob/master/extensions/elasticsearch/7.x/wazuh-template.json
+
+### Wazuh Security Events to Amazon Security Lake
+
+Wazuh Security Events can be converted to OCSF events and Parquet format, as required by Amazon Security Lake, by using an AWS Lambda Python function, a Logstash instance and an AWS S3 bucket.
+
+A properly configured Logstash instance can send the Wazuh Security Events to an AWS S3 bucket, automatically invoking the AWS Lambda function that will transform and send the events to the dedicated Amazon Security Lake S3 bucket.
+
+The diagram below illustrates the process of converting Wazuh Security Events to OCSF events and to Parquet format for Amazon Security Lake:
+
+![Overview diagram of the Wazuh integration with Amazon Security Lake](./images/asl-overview.jpeg)
+
+## Prerequisites
+
+1. Amazon Security Lake is enabled.
+2. At least one up and running `wazuh-indexer` instance with populated `wazuh-alerts-4.x-*` indices.
+3. A Logstash instance.
+4. An S3 bucket to store raw events.
+5. An AWS Lambda function, using the Python 3.12 runtime.
+6. (Optional) An S3 bucket to store OCSF events, mapped from raw events.
+
+## Integration guide
+
+### Configure Amazon Security Lake
+
+Enable Amazon Security Lake as per the [official instructions](https://docs.aws.amazon.com/security-lake/latest/userguide/what-is-security-lake.html).
+
+#### Create a custom source for Wazuh
+
+Follow the [official documentation](https://docs.aws.amazon.com/security-lake/latest/userguide/custom-sources.html) to register Wazuh as a custom source.
+
+To create the custom source:
+
+1. From the Amazon Security Lake console, click on _Custom Sources_.
+2. Click on the _Create custom source_ button.
+3. Enter "Wazuh" as the _Data source name_.
+4. Select "Security Finding" as the _OCSF Event class_.
+5. For _AWS account with permission to write data_, enter the AWS account ID and External ID of the custom source that will write logs and events to the data lake.
+6. For _Service Access_, create and use a new service role or use an existing service role that gives Security Lake permission to invoke AWS Glue.
+   ![*Custom source* creation form](./images/asl-custom-source-form.jpeg)
+7. Choose _Create_. Upon creation, Amazon Security Lake automatically creates an AWS Service Role with permissions to push files into the Security Lake bucket, under the proper prefix named after the custom source name. An AWS Glue Crawler is also created to populate the AWS Glue Data Catalog automatically.
+   ![*Custom source* after creation](./images/asl-custom-source.jpeg)
+8. Finally, collect the S3 bucket details, as these will be needed in the next step. Make sure you have the following information:
+   - The Amazon Security Lake S3 region.
+   - The S3 bucket name (e.g., `aws-security-data-lake-us-east-1-AAABBBCCCDDD`).
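+
+Once the integration is running, the Lambda function writes its Parquet output into this bucket under a prefix named after the custom source, partitioned by account ID, region and date (see the [Validation](#validation) section). As a quick sanity check, you can list that prefix with the AWS CLI. This is an illustrative sketch: the bucket name is the example shown above and the `ext/Wazuh/` prefix is the typical layout for custom sources, so confirm the exact path in your Security Lake console.
+
+```console
+# Illustrative only: bucket name and prefix are placeholders taken from this guide.
+aws s3 ls --recursive s3://aws-security-data-lake-us-east-1-AAABBBCCCDDD/ext/Wazuh/
+```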
+
+### Create an S3 bucket to store events
+
+Follow the [official documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to create an S3 bucket within your organization. Use a descriptive name, for example: `wazuh-aws-security-lake-raw`.
+
+### Create an AWS Lambda function
+
+Follow the [official documentation](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) to create an AWS Lambda function:
+
+- Select Python 3.12 as the runtime.
+- Configure the runtime to have 512 MB of memory and a 30-second timeout.
+- Configure a trigger so every object with `.txt` extension uploaded to the S3 bucket created previously invokes the Lambda.
+  ![AWS Lambda trigger](./images/asl-lambda-trigger.jpeg)
+- Use the [Makefile](./Makefile) to generate the zip package `wazuh_to_amazon_security_lake.zip`, and upload it to the S3 bucket created previously as per [these instructions](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-package.html#gettingstarted-package-zip). See [CONTRIBUTING](./CONTRIBUTING.md) for details about the Makefile.
+- Configure the Lambda function with at least the required _Environment Variables_ below:
+
+  | Environment variable | Required | Value                                                                                                |
+  | -------------------- | -------- | ---------------------------------------------------------------------------------------------------- |
+  | AWS_BUCKET           | True     | The name of the Amazon S3 bucket in which Security Lake stores your custom source data                |
+  | SOURCE_LOCATION      | True     | The _Data source name_ of the _Custom Source_                                                          |
+  | ACCOUNT_ID           | True     | Enter the ID that you specified when creating your Amazon Security Lake custom source                 |
+  | AWS_REGION           | True     | AWS Region to which the data is written                                                                |
+  | S3_BUCKET_OCSF       | False    | S3 bucket to which the mapped events are written                                                       |
+  | OCSF_CLASS           | False    | The OCSF class to map the events into. Can be "SECURITY_FINDING" (default) or "DETECTION_FINDING".    |
+
+### Validation
+
+To validate that the Lambda function works as expected, save the sample events below to a `sample.txt` file and upload it to the S3 bucket, as shown in the example that follows.
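+
+For example, assuming the raw events bucket uses the example name from this guide (`wazuh-aws-security-lake-raw`) and your AWS CLI credentials can write to it, the upload can be done as follows. The sample events themselves are listed right after this example.
+
+```console
+# Replace the bucket name with the raw events bucket created earlier.
+# The .txt extension matches the Lambda trigger configured in the previous section.
+aws s3 cp sample.txt s3://wazuh-aws-security-lake-raw/sample.txt
+```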
-Run with: -```console -docker run -it --name=wazuh-indexer-security-lake-integration --rm wazuh/indexer-security-lake-integration ls ``` +{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:20:46.976+0000","rule":{"mail":false,"gdpr":["IV_30.1.g"],"groups":["audit","audit_command"],"level":3,"firedtimes":1,"id":"80791","description":"Audit: Command: /usr/sbin/crond"},"location":"","agent":{"id":"004","ip":"47.204.15.21","name":"Ubuntu"},"data":{"audit":{"type":"NORMAL","file":{"name":"/etc/sample/file"},"success":"yes","command":"cron","exe":"/usr/sbin/crond","cwd":"/home/wazuh"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:20:46.976Z"} +{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:22:03.034+0000","rule":{"mail":false,"gdpr":["IV_30.1.g"],"groups":["audit","audit_command"],"level":3,"firedtimes":1,"id":"80790","description":"Audit: Command: /usr/sbin/bash"},"location":"","agent":{"id":"007","ip":"24.273.97.14","name":"Debian"},"data":{"audit":{"type":"PATH","file":{"name":"/bin/bash"},"success":"yes","command":"bash","exe":"/usr/sbin/bash","cwd":"/home/wazuh"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:22:03.034Z"} +{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:22:08.087+0000","rule":{"id":"1740","mail":false,"description":"Sample alert 1","groups":["ciscat"],"level":9},"location":"","agent":{"id":"006","ip":"207.45.34.78","name":"Windows"},"data":{"cis":{"rule_title":"CIS-CAT 5","timestamp":"2024-04-22T14:22:08.087+0000","benchmark":"CIS Ubuntu Linux 16.04 LTS Benchmark","result":"notchecked","pass":52,"fail":0,"group":"Access, Authentication and Authorization","unknown":61,"score":79,"notchecked":1,"@timestamp":"2024-04-22T14:22:08.087+0000"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:22:08.087Z"} +``` + +A successful execution of the Lambda function will map these events into the OCSF Security Finding Class and write them to the Amazon Security Lake S3 bucket in Paquet format, properly partitioned based on the Custom Source name, Account ID, AWS Region and date, as described in the [official documentation](https://docs.aws.amazon.com/security-lake/latest/userguide/custom-sources.html#custom-sources-best-practices). + +### Install and configure Logstash + +Install Logstash on a dedicated server or on the server hosting the `wazuh-indexer`. Logstash forwards the data from the `wazuh-indexer` to the [AWS S3 bucket created previously](#create-an-s3-bucket-to-store-events). + +1. Follow the [official documentation](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html) to install Logstash. +2. Install the [logstash-input-opensearch](https://github.com/opensearch-project/logstash-input-opensearch) plugin and the [logstash-output-s3](https://www.elastic.co/guide/en/logstash/8.13/plugins-outputs-s3.html) plugin (this one is installed by default in most cases). + + ```console + sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch + ``` + +3. Copy the `wazuh-indexer` root certificate on the Logstash server, to any folder of your choice (e.g, `/usr/share/logstash/root-ca.pem`). +4. Give the `logstash` user the required permissions to read the certificate. 
+ + ```console + sudo chmod -R 755 /root-ca.pem + ``` + +#### Configure the Logstash pipeline + +A [Logstash pipeline](https://www.elastic.co/guide/en/logstash/current/configuration.html) allows Logstash to use plugins to read the data from the `wazuh-indexer`and send them to an AWS S3 bucket. + +The Logstash pipeline requires access to the following secrets: + +- `wazuh-indexer` credentials: `INDEXER_USERNAME` and `INDEXER_PASSWORD`. +- AWS credentials for the account with permissions to write to the S3 bucket: `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. +- AWS S3 bucket details: `AWS_REGION` and `S3_BUCKET` (bucket name). + +1. Use the [Logstash keystore](https://www.elastic.co/guide/en/logstash/current/keystore.html) to securely store these values. + + +2. Create the configuration file `indexer-to-s3.conf` in the `/etc/logstash/conf.d/` folder: + + ```console + sudo touch /etc/logstash/conf.d/indexer-to-s3.conf + ``` + +3. Add the following configuration to the `indexer-to-s3.conf` file. + + ```console + input { + opensearch { + hosts => [":9200"] + user => "${INDEXER_USERNAME}" + password => "${INDEXER_PASSWORD}" + ssl => true + ca_file => "/root-ca.pem" + index => "wazuh-alerts-4.x-*" + query => '{ + "query": { + "range": { + "@timestamp": { + "gt": "now-5m" + } + } + } + }' + schedule => "*/5 * * * *" + } + } + + output { + stdout { + id => "output.stdout" + codec => json_lines + } + s3 { + id => "output.s3" + access_key_id => "${AWS_ACCESS_KEY_ID}" + secret_access_key => "${AWS_SECRET_ACCESS_KEY}" + region => "${AWS_REGION}" + bucket => "${S3_BUCKET}" + codec => "json_lines" + retry_count => 0 + validate_credentials_on_root_bucket => false + prefix => "%{+YYYY}%{+MM}%{+dd}" + server_side_encryption => true + server_side_encryption_algorithm => "AES256" + additional_settings => { + "force_path_style" => true + } + time_file => 5 + } + } + ``` + +#### Running Logstash + +1. Once you have everything set, run Logstash from the CLI with your configuration: + + ```console + sudo systemctl stop logstash + sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/indexer-to-s3.conf --path.settings /etc/logstash ----config.test_and_exit + ``` + +2. After confirming that the configuration loads correctly without errors, run Logstash as a service. + + ```console + sudo systemctl enable logstash + sudo systemctl start logstash + ``` + +## OCSF Mapping + +The integration maps Wazuh Security Events to the **OCSF v1.1.0** [Security Finding (2001)](https://schema.ocsf.io/classes/security_finding) Class. +The tables below represent how the Wazuh Security Events are mapped into the OCSF Security Finding Class. + +> **NOTE**: This does not reflect any transformations or evaluations of the data. Some data evaluation and transformation will be necessary for a correct representation in OCSF that matches all requirements. + +### Metadata + +| **OCSF Key** | **OCSF Value Type** | **Value** | +| ---------------------------- | ------------------- | ------------------ | +| category_uid | Integer | 2 | +| category_name | String | "Findings" | +| class_uid | Integer | 2001 | +| class_name | String | "Security Finding" | +| type_uid | Long | 200101 | +| metadata.product.name | String | "Wazuh" | +| metadata.product.vendor_name | String | "Wazuh, Inc." 
| +| metadata.product.version | String | "4.9.0" | +| metadata.product.lang | String | "en" | +| metadata.log_name | String | "Security events" | +| metadata.log_provider | String | "Wazuh" | + +#### Security events + +| **OCSF Key** | **OCSF Value Type** | **Wazuh Event Value** | +| ---------------------- | ------------------- | -------------------------------------- | +| activity_id | Integer | 1 | +| time | Timestamp | timestamp | +| message | String | rule.description | +| count | Integer | rule.firedtimes | +| finding.uid | String | id | +| finding.title | String | rule.description | +| finding.types | String Array | input.type | +| analytic.category | String | rule.groups | +| analytic.name | String | decoder.name | +| analytic.type | String | "Rule" | +| analytic.type_id | Integer | 1 | +| analytic.uid | String | rule.id | +| risk_score | Integer | rule.level | +| attacks.tactic.name | String | rule.mitre.tactic | +| attacks.technique.name | String | rule.mitre.technique | +| attacks.technique.uid | String | rule.mitre.id | +| attacks.version | String | "v13.1" | +| nist | String Array | rule.nist_800_53 | +| severity_id | Integer | convert(rule.level) | +| status_id | Integer | 99 | +| resources.name | String | agent.name | +| resources.uid | String | agent.id | +| data_sources | String Array | ['_index', 'location', 'manager.name'] | +| raw_data | String | full_log | + +## Troubleshooting + +| **Issue** | **Resolution** | +| --------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| The Wazuh alert data is available in the Amazon Security Lake S3 bucket, but the Glue Crawler fails to parse the data into the Security Lake. | This issue typically occurs when the custom source that is created for the integration is using the wrong event class. Make sure you create the custom source with the Security Finding event class. | + +## Support + +The integration guide is an open source project and not a Wazuh product. As such, it carries no formal support, expressed, or implied. If you encounter any issues while deploying the integration guide, you can create an issue on our GitHub repository for bugs, enhancements, or other requests. + +Amazon Security Lake is an AWS product. As such, any questions or problems you experience with this service should be handled through a support ticket with AWS Support. 
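+
+## Appendix: example severity mapping
+
+The `convert(rule.level)` entry in the mapping tables above denotes a transformation from the Wazuh rule level (0 to 15) to the OCSF `severity_id` enumeration. The snippet below is a minimal illustrative sketch of such a conversion; the thresholds are assumptions made for this example, so refer to the Lambda function source code for the mapping actually used by the integration.
+
+```python
+# Illustrative only: map a Wazuh rule.level (0-15) to an OCSF severity_id.
+# Thresholds are assumed for demonstration and may differ from the integration's code.
+def convert_severity(rule_level: int) -> int:
+    if rule_level < 4:
+        return 1  # Informational
+    if rule_level < 7:
+        return 2  # Low
+    if rule_level < 10:
+        return 3  # Medium
+    if rule_level < 13:
+        return 4  # High
+    if rule_level <= 15:
+        return 5  # Critical
+    return 0  # Unknown
+
+
+print(convert_severity(9))  # prints 3 (Medium)
+```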
diff --git a/integrations/amazon-security-lake/images/asl-custom-source-form.jpeg b/integrations/amazon-security-lake/images/asl-custom-source-form.jpeg new file mode 100644 index 0000000000000..c14d960f7370d Binary files /dev/null and b/integrations/amazon-security-lake/images/asl-custom-source-form.jpeg differ diff --git a/integrations/amazon-security-lake/images/asl-custom-source.jpeg b/integrations/amazon-security-lake/images/asl-custom-source.jpeg new file mode 100644 index 0000000000000..71fb91088ce1e Binary files /dev/null and b/integrations/amazon-security-lake/images/asl-custom-source.jpeg differ diff --git a/integrations/amazon-security-lake/images/asl-lambda-trigger.jpeg b/integrations/amazon-security-lake/images/asl-lambda-trigger.jpeg new file mode 100644 index 0000000000000..8efb04895779b Binary files /dev/null and b/integrations/amazon-security-lake/images/asl-lambda-trigger.jpeg differ diff --git a/integrations/amazon-security-lake/images/asl-overview.jpeg b/integrations/amazon-security-lake/images/asl-overview.jpeg new file mode 100644 index 0000000000000..294cf4024ba49 Binary files /dev/null and b/integrations/amazon-security-lake/images/asl-overview.jpeg differ