From 06c42d5bde75cd89e1cff0e5470528844dc74461 Mon Sep 17 00:00:00 2001 From: wallrony Date: Tue, 16 Aug 2022 10:45:12 -0300 Subject: [PATCH 1/4] Standardize text formatting in env variables docs Co-authored-by: vassalo --- docs/CONFIGURE_LOG_EXPORT_CONTAINER.md | 4 ++-- docs/{inputs => audit}/CONFIGURE_SDM_AUDIT.md | 7 +++---- docs/inputs/CONFIGURE_FILE_INPUT.md | 2 +- docs/monitoring/CONFIGURE_PROMETHEUS.md | 4 +++- docs/outputs/CONFIGURE_ELASTICSEARCH.md | 6 +++--- docs/outputs/CONFIGURE_KAFKA.md | 2 +- docs/outputs/CONFIGURE_LOGZ.md | 2 +- docs/outputs/CONFIGURE_LOKI.md | 3 +-- docs/outputs/CONFIGURE_MONGO.md | 2 +- docs/outputs/CONFIGURE_REMOTE_SYSLOG.md | 2 +- 10 files changed, 17 insertions(+), 17 deletions(-) rename docs/{inputs => audit}/CONFIGURE_SDM_AUDIT.md (87%) diff --git a/docs/CONFIGURE_LOG_EXPORT_CONTAINER.md b/docs/CONFIGURE_LOG_EXPORT_CONTAINER.md index 1456e5b..31ca9eb 100644 --- a/docs/CONFIGURE_LOG_EXPORT_CONTAINER.md +++ b/docs/CONFIGURE_LOG_EXPORT_CONTAINER.md @@ -10,8 +10,8 @@ nav_order: 2 ### Required configuration -- **LOG_EXPORT_CONTAINER_INPUT**. Container input format (`syslog-json`, `syslog-csv`, `tcp-json`, `tcp-csv`, `file-json` or `file-csv`). Default: `syslog-json` -- **LOG_EXPORT_CONTAINER_OUTPUT**. Container output storage (`stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `logz`, `loki`, `elasticsearch` and/or `bigquery`). Default: `stdout`. You could configure multiple storages, for example: `stdout s3 datadog`. +* **LOG_EXPORT_CONTAINER_INPUT**. Container input format (`syslog-json`, `syslog-csv`, `tcp-json`, `tcp-csv`, `file-json` or `file-csv`). Default = `syslog-json`. +* **LOG_EXPORT_CONTAINER_OUTPUT**. Container output storage (`stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `logz`, `loki`, `elasticsearch` and/or `bigquery`). Default = `stdout`. 
You can configure multiple storages. E.g., `stdout s3 datadog`.

When using `LOG_EXPORT_CONTAINER_INPUT=file-json` or `LOG_EXPORT_CONTAINER_INPUT=file-csv` add variables listed in [CONFIGURE_FILE_INPUT.md](inputs/CONFIGURE_FILE_INPUT.md) diff --git a/docs/inputs/CONFIGURE_SDM_AUDIT.md b/docs/audit/CONFIGURE_SDM_AUDIT.md similarity index 87% rename from docs/inputs/CONFIGURE_SDM_AUDIT.md rename to docs/audit/CONFIGURE_SDM_AUDIT.md index 5d7596c..14e4cc0 100644 --- a/docs/inputs/CONFIGURE_SDM_AUDIT.md +++ b/docs/audit/CONFIGURE_SDM_AUDIT.md @@ -9,8 +9,7 @@ nav_order: 9 First, to make this work, you need to provide the following variable: -- **SDM_ADMIN_TOKEN**. Admin Token created in SDM Web UI. You need to check the options `Activities`, `Datasources`, `Users`, `Roles` and `Gateways` -to have permissions to extract all logs from the SDM CLI audit command. +* **SDM_ADMIN_TOKEN**. Admin Token created in SDM Web UI. The token must have the audit permissions for `Activities`, `Datasources`, `Users`, `Roles` and `Gateways`. **NOTE**: If you intend to run LEC locally, you'll need to install the [SDM CLI](https://www.strongdm.com/docs/user-guide/client-installation). @@ -29,8 +28,8 @@ It is worth noting that if you do not specify the interval value after each `/`, If you want to specifically extract the activity logs you can also use the variables below: -- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES=true` Variable responsible for indicating whether activity logs will be extracted, default = false. -- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL=15` Interval in minutes for running the extractor script, default = 15. +- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES=true` Variable responsible for indicating whether activity logs will be extracted. Default = `false`. +- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL=15` Interval in minutes for running the extractor script. Default = `15`. 
However, be aware that if these variables are provided together with `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT`, their content will have priority over `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT`. diff --git a/docs/inputs/CONFIGURE_FILE_INPUT.md b/docs/inputs/CONFIGURE_FILE_INPUT.md index 3061b13..a8d3338 100644 --- a/docs/inputs/CONFIGURE_FILE_INPUT.md +++ b/docs/inputs/CONFIGURE_FILE_INPUT.md @@ -9,7 +9,7 @@ nav_order: 9 The Log Export Container uses [fluent plugin tail](https://docs.fluentd.org/input/tail). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_INPUT=file-json` or `LOG_EXPORT_CONTAINER_INPUT=file-csv` and provide the following variables: -- **LOG_FILE_PATH**. Log file path, e.g. `/var/log/sdm/logs.log` +* **LOG_FILE_PATH**. Log file path. E.g., `/var/log/sdm/logs.log`. ## Configuration using docker diff --git a/docs/monitoring/CONFIGURE_PROMETHEUS.md b/docs/monitoring/CONFIGURE_PROMETHEUS.md index a43244e..725d78c 100644 --- a/docs/monitoring/CONFIGURE_PROMETHEUS.md +++ b/docs/monitoring/CONFIGURE_PROMETHEUS.md @@ -6,7 +6,9 @@ When Prometheus is enabled, an endpoint is available on port `24321` allowing you to collect the following metrics: - `fluentd_output_status_emit_count` - the total count of forwarded logs by output (e.g.: `stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `loki`, `elasticsearch` and `bigquery`) - `fluentd_output_status_num_errors` - the count of total errors by output match -To enable it, you need to set the variable `LOG_EXPORT_CONTAINER_ENABLE_MONITORING=true`. + +To enable it, you need to configure the following variable:
+* **LOG_EXPORT_CONTAINER_ENABLE_MONITORING**. Boolean variable to enable the monitoring endpoint (set it to `true`). To see an example, you can use `docker-compose-prometheus.yml` to run Log Export Container with Prometheus and Grafana. Then you can access the `Log Export Container Metrics` dashboard in Grafana (on port `3000`) and see how it's used. 
The dashboard contains the following panels: diff --git a/docs/outputs/CONFIGURE_ELASTICSEARCH.md b/docs/outputs/CONFIGURE_ELASTICSEARCH.md index 93499d6..a35e21d 100644 --- a/docs/outputs/CONFIGURE_ELASTICSEARCH.md +++ b/docs/outputs/CONFIGURE_ELASTICSEARCH.md @@ -8,6 +8,6 @@ nav_order: 5 The Log Export Container uses [fluentd elasticsearch output plugin](https://docs.fluentd.org/output/elasticsearch/). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=elasticsearch` and provide the following variables: -* **ELASTICSEARCH_HOST**. ElasticSearch server host, e.g. `127.0.0.1` -* **ELASTICSEARCH_PORT**. ElasticSearch server port, e.g. `9201`. Default: `9200` -* **ELASTICSEARCH_INDEX_NAME**. ElasticSearch index name, e.g. `my-index` +* **ELASTICSEARCH_HOST**. ElasticSearch server host. E.g., `127.0.0.1`. +* **ELASTICSEARCH_PORT**. ElasticSearch server port. E.g., `9201`. Default = `9200`. +* **ELASTICSEARCH_INDEX_NAME**. ElasticSearch index name. E.g., `my-index`. diff --git a/docs/outputs/CONFIGURE_KAFKA.md b/docs/outputs/CONFIGURE_KAFKA.md index b61635f..f12f556 100644 --- a/docs/outputs/CONFIGURE_KAFKA.md +++ b/docs/outputs/CONFIGURE_KAFKA.md @@ -9,7 +9,7 @@ nav_order: 6 The Log Export Container uses a [fluentd kafka output plugin](https://github.com/fluent/fluent-plugin-kafka). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=kafka` and provide the following variables: * **KAFKA_BROKERS**. List of brokers, following the format: `<host>:<port>,<host>:<port>` * **KAFKA_TOPIC**. Topic name -* **KAFKA_FORMAT_TYPE**. Input text type, for example: `text, json, ltsv, msgpack`. Default = json +* **KAFKA_FORMAT_TYPE**. Input text type, for example: `text, json, ltsv, msgpack`. Default = `json`. 
## Plugin changes diff --git a/docs/outputs/CONFIGURE_LOGZ.md b/docs/outputs/CONFIGURE_LOGZ.md index 5968c76..60f805d 100644 --- a/docs/outputs/CONFIGURE_LOGZ.md +++ b/docs/outputs/CONFIGURE_LOGZ.md @@ -8,4 +8,4 @@ nav_order: 7 The Log Export Container uses [fluent plugin logzio](https://github.com/logzio/fluent-plugin-logzio). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=logz` and provide the following variables: -* **LOGZ_ENDPOINT**. Logz.io Endpoint URL, e.g. `https://listener.logz.io:8071?token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&type=my_type` +* **LOGZ_ENDPOINT**. Logz.io Endpoint URL. E.g., `https://listener.logz.io:8071?token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&type=my_type`. diff --git a/docs/outputs/CONFIGURE_LOKI.md b/docs/outputs/CONFIGURE_LOKI.md index 0c23977..32ff560 100644 --- a/docs/outputs/CONFIGURE_LOKI.md +++ b/docs/outputs/CONFIGURE_LOKI.md @@ -8,5 +8,4 @@ nav_order: 8 The Log Export Container uses [fluent plugin grafana loki](https://grafana.com/docs/loki/latest/clients/fluentd/). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=loki` and provide the following variables: -* **LOKI_URL**. Loki Endpoint URL, e.g. `http://localhost:3100` - +* **LOKI_URL**. Loki Endpoint URL. E.g., `http://localhost:3100`. diff --git a/docs/outputs/CONFIGURE_MONGO.md b/docs/outputs/CONFIGURE_MONGO.md index 040508d..44123e7 100644 --- a/docs/outputs/CONFIGURE_MONGO.md +++ b/docs/outputs/CONFIGURE_MONGO.md @@ -8,4 +8,4 @@ nav_order: 9 The Log Export Container uses [fluentd mongo output plugin](https://docs.fluentd.org/output/mongo). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=mongo` and provide the following variables: * **MONGO_URI**. Mongo Connection URI -* **MONGO_COLLECTION**. Mongo Collection to store the Log Export Container events. Default=sdm_logs. +* **MONGO_COLLECTION**. Mongo Collection to store the Log Export Container events. Default = `sdm_logs`. 
diff --git a/docs/outputs/CONFIGURE_REMOTE_SYSLOG.md b/docs/outputs/CONFIGURE_REMOTE_SYSLOG.md index 8333c0f..d7aeedd 100644 --- a/docs/outputs/CONFIGURE_REMOTE_SYSLOG.md +++ b/docs/outputs/CONFIGURE_REMOTE_SYSLOG.md @@ -9,4 +9,4 @@ nav_order: 10 The Log Export Container uses [fluent remote_syslog plugin](https://github.com/fluent-plugins-nursery/fluent-plugin-remote_syslog). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=remote-syslog` and provide the following variables: * **REMOTE_SYSLOG_HOST**. Remote Syslog host address. * **REMOTE_SYSLOG_PORT**. Remote Syslog port. -* **REMOTE_SYSLOG_PROTOCOL**. Remote Syslog protocol. Possible values: `tcp` or `udp`. Default: `tcp`. +* **REMOTE_SYSLOG_PROTOCOL**. Remote Syslog protocol. Possible values: `tcp` or `udp`. Default = `tcp`. From f5b787642bad9fcdf4538be38ca6faa464b9b382 Mon Sep 17 00:00:00 2001 From: wallrony Date: Tue, 16 Aug 2022 11:35:32 -0300 Subject: [PATCH 2/4] Replace 'for example' with 'E.g.' in env variables docs Co-authored-by: vassalo --- docs/outputs/CONFIGURE_CLOUDWATCH.md | 6 +++--- docs/outputs/CONFIGURE_KAFKA.md | 2 +- docs/outputs/CONFIGURE_S3.md | 6 +++--- docs/outputs/CONFIGURE_SPLUNK_HEC.md | 6 +++--- docs/outputs/CONFIGURE_SUMOLOGIC.md | 2 +- 5 files changed, 11 insertions(+), 11 deletions(-) diff --git a/docs/outputs/CONFIGURE_CLOUDWATCH.md b/docs/outputs/CONFIGURE_CLOUDWATCH.md index 7386288..ff1dd59 100644 --- a/docs/outputs/CONFIGURE_CLOUDWATCH.md +++ b/docs/outputs/CONFIGURE_CLOUDWATCH.md @@ -9,9 +9,9 @@ nav_order: 3 The Log Export Container uses a [fluentd cloudwatch output plugin](https://github.com/fluent-plugins-nursery/fluent-plugin-cloudwatch-logs). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=cloudwatch` and provide the following variables: * **AWS_ACCESS_KEY_ID**. AWS Access Key * **AWS_SECRET_ACCESS_KEY**. AWS Access Secret -* **AWS_REGION**. AWS Region Name, for example: `us-west-2` -* **CLOUDWATCH_LOG_GROUP_NAME**. 
AWS CloudWatch Log Group Name to store logs, for example: `aws/sdm-logs` -* **CLOUDWATCH_LOG_STREAM_NAME**. AWS CloudWatch Log Stream Name to store logs, for example: `test` +* **AWS_REGION**. AWS Region Name. E.g., `us-west-2` +* **CLOUDWATCH_LOG_GROUP_NAME**. AWS CloudWatch Log Group Name to store logs. E.g., `aws/sdm-logs` +* **CLOUDWATCH_LOG_STREAM_NAME**. AWS CloudWatch Log Stream Name to store logs. E.g., `test` ## IAM permissions Add at least the following policy to your IAM user: diff --git a/docs/outputs/CONFIGURE_KAFKA.md b/docs/outputs/CONFIGURE_KAFKA.md index f12f556..c22b879 100644 --- a/docs/outputs/CONFIGURE_KAFKA.md +++ b/docs/outputs/CONFIGURE_KAFKA.md @@ -9,7 +9,7 @@ nav_order: 6 The Log Export Container uses a [fluentd kafka output plugin](https://github.com/fluent/fluent-plugin-kafka). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=kafka` and provide the following variables: * **KAFKA_BROKERS**. List of brokers, following the format: `<host>:<port>,<host>:<port>` * **KAFKA_TOPIC**. Topic name -* **KAFKA_FORMAT_TYPE**. Input text type, for example: `text, json, ltsv, msgpack`. Default = `json`. +* **KAFKA_FORMAT_TYPE**. Input text type. E.g., `text, json, ltsv, msgpack`. Default = `json`. ## Plugin changes diff --git a/docs/outputs/CONFIGURE_S3.md b/docs/outputs/CONFIGURE_S3.md index 5a230fb..0718256 100644 --- a/docs/outputs/CONFIGURE_S3.md +++ b/docs/outputs/CONFIGURE_S3.md @@ -9,9 +9,9 @@ nav_order: 11 The Log Export Container uses [fluentd s3 output plugin](https://docs.fluentd.org/output/s3). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=s3` and provide the following variables: * **AWS_ACCESS_KEY_ID**. AWS Access Key * **AWS_SECRET_ACCESS_KEY**. AWS Access Secret -* **S3_BUCKET**. AWS S3 Bucket Name, for example: `log-export-container` -* **S3_REGION**. AWS S3 Bucket Region Name, for example: `us-west-2` -* **S3_PATH**. AWS S3 Path to Append to your Logs, for example: `logs`. 
The actual path on S3 will be: `{path}{container_id}{time_slice_format}_{sequential_index}.gz (see s3_object_key_format)` +* **S3_BUCKET**. AWS S3 Bucket Name. E.g., `log-export-container` +* **S3_REGION**. AWS S3 Bucket Region Name. E.g., `us-west-2` +* **S3_PATH**. AWS S3 Path to Append to your Logs. E.g., `logs`. The actual path on S3 will be: `{path}{container_id}{time_slice_format}_{sequential_index}.gz` (see `s3_object_key_format`) ## Plugin changes diff --git a/docs/outputs/CONFIGURE_SPLUNK_HEC.md b/docs/outputs/CONFIGURE_SPLUNK_HEC.md index 601a2c8..f3bd369 100644 --- a/docs/outputs/CONFIGURE_SPLUNK_HEC.md +++ b/docs/outputs/CONFIGURE_SPLUNK_HEC.md @@ -9,9 +9,9 @@ nav_order: 12 # Configure Splunk HEC The Log Export Container uses [fluentd splunk hec output plugin](https://github.com/splunk/fluent-plugin-splunk-hec). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=splunk-hec` and provide the following variables: -* **SPLUNK_HEC_HOST**. The hostname/IP for the HEC token or the HEC load balancer, for example: `prd-p-xxxxx.splunkcloud.com` -* **SPLUNK_HEC_PORT**. The port number for the HEC token or the HEC load balancer, for example: `8088` -* **SPLUNK_HEC_TOKEN**. Identifier for the HEC token. 
E.g., `xxxxxxxx-yyyy-yyyy-yyyy-zzzzzzzzzzzz` IMPORTANT: SSL validation is disabled by default; you can pass different [SSL Params](https://github.com/splunk/fluent-plugin-splunk-hec#ssl-parameters), overriding the built-in configuration as commented below. diff --git a/docs/outputs/CONFIGURE_SUMOLOGIC.md b/docs/outputs/CONFIGURE_SUMOLOGIC.md index 7304f5d..4dafdf8 100644 --- a/docs/outputs/CONFIGURE_SUMOLOGIC.md +++ b/docs/outputs/CONFIGURE_SUMOLOGIC.md @@ -8,7 +8,7 @@ nav_order: 13 The Log Export Container uses a [fluentd sumologic output plugin](https://github.com/SumoLogic/fluentd-output-sumologic). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=sumologic` and provide the following variables: * **SUMOLOGIC_ENDPOINT**. SumoLogic HTTP Collector URL -* **SUMOLOGIC_SOURCE_CATEGORY**. Source Category metadata field within SumoLogic, for example: `/prod/sdm/logs` +* **SUMOLOGIC_SOURCE_CATEGORY**. Source Category metadata field within SumoLogic. E.g., `/prod/sdm/logs` ## Plugin changes From 4efdf1fdc9787853a4313ae9538f28751161f0d3 Mon Sep 17 00:00:00 2001 From: wallrony Date: Tue, 16 Aug 2022 11:44:14 -0300 Subject: [PATCH 3/4] Fix "e.g." standard text formatting Co-authored-by: vassalo --- docs/outputs/CONFIGURE_CLOUDWATCH.md | 6 +++--- docs/outputs/CONFIGURE_S3.md | 4 ++-- docs/outputs/CONFIGURE_SPLUNK_HEC.md | 6 +++--- docs/outputs/CONFIGURE_SUMOLOGIC.md | 2 +- 4 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/outputs/CONFIGURE_CLOUDWATCH.md b/docs/outputs/CONFIGURE_CLOUDWATCH.md index ff1dd59..0cea449 100644 --- a/docs/outputs/CONFIGURE_CLOUDWATCH.md +++ b/docs/outputs/CONFIGURE_CLOUDWATCH.md @@ -9,9 +9,9 @@ nav_order: 3 The Log Export Container uses a [fluentd cloudwatch output plugin](https://github.com/fluent-plugins-nursery/fluent-plugin-cloudwatch-logs). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=cloudwatch` and provide the following variables: * **AWS_ACCESS_KEY_ID**. 
AWS Access Key * **AWS_SECRET_ACCESS_KEY**. AWS Access Secret -* **AWS_REGION**. AWS Region Name. E.g., `us-west-2` -* **CLOUDWATCH_LOG_GROUP_NAME**. AWS CloudWatch Log Group Name to store logs. E.g., `aws/sdm-logs` -* **CLOUDWATCH_LOG_STREAM_NAME**. AWS CloudWatch Log Stream Name to store logs. E.g., `test` +* **AWS_REGION**. AWS Region Name. E.g., `us-west-2`. +* **CLOUDWATCH_LOG_GROUP_NAME**. AWS CloudWatch Log Group Name to store logs. E.g., `aws/sdm-logs`. +* **CLOUDWATCH_LOG_STREAM_NAME**. AWS CloudWatch Log Stream Name to store logs. E.g., `test`. ## IAM permissions Add at least the following policy to your IAM user: diff --git a/docs/outputs/CONFIGURE_S3.md b/docs/outputs/CONFIGURE_S3.md index 0718256..fd31d8d 100644 --- a/docs/outputs/CONFIGURE_S3.md +++ b/docs/outputs/CONFIGURE_S3.md @@ -9,8 +9,8 @@ nav_order: 11 The Log Export Container uses [fluentd s3 output plugin](https://docs.fluentd.org/output/s3). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=s3` and provide the following variables: * **AWS_ACCESS_KEY_ID**. AWS Access Key * **AWS_SECRET_ACCESS_KEY**. AWS Access Secret -* **S3_BUCKET**. AWS S3 Bucket Name. E.g., `log-export-container` -* **S3_REGION**. AWS S3 Bucket Region Name. E.g., `us-west-2` +* **S3_BUCKET**. AWS S3 Bucket Name. E.g., `log-export-container`. +* **S3_REGION**. AWS S3 Bucket Region Name. E.g., `us-west-2`. * **S3_PATH**. AWS S3 Path to Append to your Logs. E.g., `logs`. The actual path on S3 will be: `{path}{container_id}{time_slice_format}_{sequential_index}.gz` (see `s3_object_key_format`) ## Plugin changes diff --git a/docs/outputs/CONFIGURE_SPLUNK_HEC.md b/docs/outputs/CONFIGURE_SPLUNK_HEC.md index f3bd369..81dd27b 100644 --- a/docs/outputs/CONFIGURE_SPLUNK_HEC.md +++ b/docs/outputs/CONFIGURE_SPLUNK_HEC.md @@ -9,9 +9,9 @@ nav_order: 12 # Configure Splunk HEC The Log Export Container uses [fluentd splunk hec output plugin](https://github.com/splunk/fluent-plugin-splunk-hec). 
In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=splunk-hec` and provide the following variables: -* **SPLUNK_HEC_HOST**. The hostname/IP for the HEC token or the HEC load balancer. E.g., `prd-p-xxxxx.splunkcloud.com` -* **SPLUNK_HEC_PORT**. The port number for the HEC token or the HEC load balancer. E.g., `8088` -* **SPLUNK_HEC_TOKEN**. Identifier for the HEC token. E.g., `xxxxxxxx-yyyy-yyyy-yyyy-zzzzzzzzzzzz` +* **SPLUNK_HEC_HOST**. The hostname/IP for the HEC token or the HEC load balancer. E.g., `prd-p-xxxxx.splunkcloud.com`. +* **SPLUNK_HEC_PORT**. The port number for the HEC token or the HEC load balancer. E.g., `8088`. +* **SPLUNK_HEC_TOKEN**. Identifier for the HEC token. E.g., `xxxxxxxx-yyyy-yyyy-yyyy-zzzzzzzzzzzz`. IMPORTANT: SSL validation is disabled by default; you can pass different [SSL Params](https://github.com/splunk/fluent-plugin-splunk-hec#ssl-parameters), overriding the built-in configuration as commented below. diff --git a/docs/outputs/CONFIGURE_SUMOLOGIC.md b/docs/outputs/CONFIGURE_SUMOLOGIC.md index 4dafdf8..1214da0 100644 --- a/docs/outputs/CONFIGURE_SUMOLOGIC.md +++ b/docs/outputs/CONFIGURE_SUMOLOGIC.md @@ -8,7 +8,7 @@ nav_order: 13 The Log Export Container uses a [fluentd sumologic output plugin](https://github.com/SumoLogic/fluentd-output-sumologic). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=sumologic` and provide the following variables: * **SUMOLOGIC_ENDPOINT**. SumoLogic HTTP Collector URL -* **SUMOLOGIC_SOURCE_CATEGORY**. Source Category metadata field within SumoLogic. E.g., `/prod/sdm/logs` +* **SUMOLOGIC_SOURCE_CATEGORY**. Source Category metadata field within SumoLogic. E.g., `/prod/sdm/logs`. 
## Plugin changes From dcc11e8d7f43cadeaaddd8f0a45e7d8380b25622 Mon Sep 17 00:00:00 2001 From: wallrony Date: Thu, 18 Aug 2022 10:20:53 -0300 Subject: [PATCH 4/4] Fix audit documentation following the env variables standard Co-authored-by: vassalo --- docs/audit/CONFIGURE_SDM_AUDIT.md | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/docs/audit/CONFIGURE_SDM_AUDIT.md b/docs/audit/CONFIGURE_SDM_AUDIT.md index 14e4cc0..8186718 100644 --- a/docs/audit/CONFIGURE_SDM_AUDIT.md +++ b/docs/audit/CONFIGURE_SDM_AUDIT.md @@ -17,23 +17,20 @@ First, to make this work, you need to provide the following variable: The Log Export Container uses [fluentd input exec plugin](https://docs.fluentd.org/input/exec) to extract the logs from the strongDM Audit command. To export the logs about activities, resources, users and roles coming from the strongDM Audit command, you need to specify the value of the following -variable with the name of the entity (activities, resources, users or roles) and the extract interval in minutes (you should follow the syntax -shown below where we have `entity_name/extract_interval` space-separated): +variable: -``` -LOG_EXPORT_CONTAINER_EXTRACT_AUDIT=activities/15 resources/480 users/480 roles/480 -``` +* **LOG_EXPORT_CONTAINER_EXTRACT_AUDIT**. The value should be the names of the entities (activities, resources, users or roles) and their extraction intervals in minutes, following the syntax `entity_name/extract_interval` (space-separated for each entity). E.g., `activities/15 resources/480 users/480 roles/480`. It is worth noting that if you do not specify the interval value after each `/`, the default interval values for each entity will be as defined above. If you want to specifically extract the activity logs you can also use the variables below: -- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES=true` Variable responsible for indicating whether activity logs will be extracted. Default = `false`. 
- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL=15` Interval in minutes for running the extractor script. Default = `15`. +* **LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES**. Variable responsible for indicating whether activity logs will be extracted. Default = `false`. +* **LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL**. Interval in minutes for running the extractor script for activities. Default = `15`. However, be aware that if these variables are provided together with `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT`, their content will have priority over `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT`. -**NOTE**: the variables `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES` and `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL` +**NOTE**: The variables `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES` and `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL` will be deprecated, so we encourage you to use the `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT` variable instead. ## Configure Stream
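---

The `entity_name/extract_interval` syntax that patch 4 documents for `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT` can be sketched as a small parser. This helper is purely illustrative (it is not part of LEC); the per-entity defaults it falls back to (`15` for activities, `480` for resources, users and roles) are taken from the example value in the docs:

```shell
#!/bin/sh
# Illustrative parser for the LOG_EXPORT_CONTAINER_EXTRACT_AUDIT syntax:
# space-separated "entity_name/extract_interval" pairs, where the interval
# (in minutes) may be omitted to fall back to the entity's default.

# Defaults mirror the documented example value
# "activities/15 resources/480 users/480 roles/480".
default_interval() {
  case "$1" in
    activities) echo 15 ;;
    resources|users|roles) echo 480 ;;
    *) echo "unknown entity: $1" >&2; return 1 ;;
  esac
}

# Prints one "entity=interval" line per pair in the given value.
parse_extract_audit() {
  for pair in $1; do
    entity=${pair%%/*}
    interval=${pair#*/}
    # A bare entity name (or a trailing "/") means: use the default.
    if [ "$interval" = "$pair" ] || [ -z "$interval" ]; then
      interval=$(default_interval "$entity") || return 1
    fi
    echo "$entity=$interval"
  done
}

parse_extract_audit "activities/15 resources/480 users/480 roles/480"
parse_extract_audit "activities users"
```

The second call shows the fallback behavior described in the docs: with no interval after the entity name, each entity gets its default.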