The purpose of the `default.yml` is to define a standard set of variables that controls how Splunk is set up and configured.

#### Generation

The image contains a script that dynamically generates this file. Run the following command to generate a `default.yml`:

```bash
$ docker run --rm -it splunk/splunk:latest create-defaults > default.yml
```

You can also pre-seed some settings based on environment variables during this `default.yml` generation process. For example, you can define `SPLUNK_PASSWORD` with the following command:
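
```bash
# Hedged example: pre-seed the admin password while generating default.yml.
# The -e flag and create-defaults entrypoint follow the command shown above.
$ docker run --rm -it -e SPLUNK_PASSWORD=<password> splunk/splunk:latest create-defaults > default.yml
```
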
When starting the Docker container, the `default.yml` can be mounted at `/tmp/defaults/default.yml` or fetched dynamically with `SPLUNK_DEFAULTS_URL`. Ansible provisioning will read in and honor these settings.
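
A minimal sketch of both options, assuming a `default.yml` in the current working directory; the URL and password are placeholders, and `SPLUNK_START_ARGS=--accept-license` follows the image's standard usage:

```bash
# Option 1: bind-mount the file at the expected path
$ docker run -d -p 8000:8000 \
    -v "$(pwd)/default.yml:/tmp/defaults/default.yml" \
    -e SPLUNK_START_ARGS=--accept-license \
    -e SPLUNK_PASSWORD=<password> \
    splunk/splunk:latest

# Option 2: fetch the file dynamically at start-up
$ docker run -d -p 8000:8000 \
    -e SPLUNK_DEFAULTS_URL=http://company.com/path/to/default.yml \
    -e SPLUNK_START_ARGS=--accept-license \
    -e SPLUNK_PASSWORD=<password> \
    splunk/splunk:latest
```
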
Environment variables specified at runtime will take precedence over anything defined in `default.yml`.

Variables at the root level of `default.yml` have global scope and influence the behavior of everything in the container.

Example:

```yaml
---
retry_num: 100
```

The major object `splunk` in the YAML file contains variables that control how Splunk operates.

Sample:

<!-- {% raw %} -->
```yaml
---
splunk:
  opt: /opt
  home: /opt/splunk
  # ... (additional keys elided in this excerpt)
  hec:
    # hec.token is used only for ingestion (receiving Splunk events)
    token: <default_hec_token>
  smartstore: null
...
```
<!-- {% endraw %} -->

| Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering |
| --- | --- | --- | --- | --- | --- | --- |

The `app_paths` section under `splunk` controls how apps are installed inside the container.

Sample:

```yaml
---
splunk:
  app_paths:
    default: /opt/splunk/etc/apps
    shc: /opt/splunk/etc/shcluster/apps
    idxc: /opt/splunk/etc/master-apps
    httpinput: /opt/splunk/etc/apps/splunk_httpinput
...
```

| Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering |
| --- | --- | --- | --- | --- | --- | --- |

Search Head Clustering is configured using the `shc` section under `splunk`.

Sample:

```yaml
---
splunk:
  shc:
    enable: false
    secret: <secret_key>
    replication_factor: 3
    replication_port: 9887
...
```

| Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering |
| --- | --- | --- | --- | --- | --- | --- |

Indexer Clustering is configured using the `idxc` section under `splunk`.

Sample:

```yaml
---
splunk:
  idxc:
    secret: <secret_key>
    search_factor: 2
    replication_factor: 3
    replication_port: 9887
...
```

| Variable Name | Description | Parent Object | Default Value | Required for Standalone | Required for Search Head Clustering | Required for Index Clustering |
| --- | --- | --- | --- | --- | --- | --- |

## Install apps

Apps can be installed using the `SPLUNK_APPS_URL` environment variable when creating the Splunk container:
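
A hedged example; the app URL is a placeholder, and the remaining flags follow the image's standard usage:

```bash
$ docker run -d -p 8000:8000 \
    -e SPLUNK_START_ARGS=--accept-license \
    -e SPLUNK_PASSWORD=<password> \
    -e SPLUNK_APPS_URL=http://company.com/path/to/app.tgz \
    splunk/splunk:latest
```
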
See the [full app installation guide](advanced/APP_INSTALL.md) to learn how to specify multiple apps and how to install apps in a distributed environment.

## Apply Splunk license

Licenses can be added with the `SPLUNK_LICENSE_URI` environment variable when creating the Splunk container:
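
A hedged example mirroring the pattern above; the license URI is a placeholder:

```bash
$ docker run -d -p 8000:8000 \
    -e SPLUNK_START_ARGS=--accept-license \
    -e SPLUNK_PASSWORD=<password> \
    -e SPLUNK_LICENSE_URI=http://company.com/path/to/splunk.lic \
    splunk/splunk:latest
```
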
See the [full license installation guide](advanced/LICENSE_INSTALL.md) to learn how to specify multiple licenses and how to use a central, containerized license manager.

When Splunk boots, it registers all the config files in various locations on the filesystem.

Using the Splunk Docker image, users can also create their own config files, following the same INI file format that drives Splunk. This is a power-user/admin-level feature, as invalid config files can break or prevent start-up of your Splunk installation.

User-specified config files are set in `default.yml` by creating a `conf` key under `splunk`, in the format below:

```yaml
---
splunk:
  conf:
    # reconstructed from the description below: this sample generates
    # user-prefs.conf under /opt/splunkforwarder/etc/users/admin/user-prefs/local
    user-prefs:
      directory: /opt/splunkforwarder/etc/users/admin/user-prefs/local
      content:
        general:
          search_syntax_highlighting: dark
          default_namespace: appboilerplate
```

This generates a file `user-prefs.conf`, owned by the correct Splunk user and group and located in the given directory (in this case, `/opt/splunkforwarder/etc/users/admin/user-prefs/local`).

Following INI format, the contents of `user-prefs.conf` will resemble the following:

```ini
[general]
search_syntax_highlighting = dark
default_namespace = appboilerplate
```

This is a capability only available for indexer clusters (cluster_master + indexers).

The Splunk Docker image supports SmartStore in a bring-your-own backend storage provider format. Due to the complexity of this option, SmartStore is only enabled if you specify all the parameters in your `default.yml` file.

Sample configuration that persists *all* indexes (default) with a SmartStore backend:

```yaml
---
splunk:
  smartstore:
    # hedged reconstruction: the full sample is elided in this excerpt; the keys
    # follow the cachemanager/index example later in this section, and an
    # indexName of "default" applies the remote store to all indexes
    index:
      - indexName: default
        remoteName: remote_store
        scheme: s3
        remoteLocation: <bucket_name>
```

The SmartStore cache manager controls data movement between the indexer and the remote storage tier.

* The `index` stanza corresponds to [indexes.conf options](https://docs.splunk.com/Documentation/Splunk/latest/admin/Indexesconf).

This example defines cache settings and retention policy:

```yaml
splunk:
  smartstore:
    cachemanager:
      max_cache_size: 500
      max_concurrent_uploads: 7
    index:
      - indexName: custom_index
        remoteName: my_storage
        scheme: http
        remoteLocation: my_storage.net
        maxGlobalDataSizeMB: 500
        maxGlobalRawDataSizeMB: 200
        hotlist_recency_secs: 30
        hotlist_bloom_filter_recency_hours: 1
...
```

## Use a deployment server

To secure network traffic from one Splunk instance to another (e.g. forwarders to indexers), you can enable SSL on Splunk's internal communication.

If you are enabling SSL on one tier of your Splunk topology, it's likely all instances will need it. To achieve this, generate your server and CA certificates and add them to the `default.yml`, which gets shared across all Splunk docker containers.
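
A hedged sketch of generating a self-signed CA and server certificate with `openssl` (filenames and subjects are illustrative; production deployments should follow Splunk's documentation for certificate requirements):

```bash
# create a CA key and self-signed CA certificate
$ openssl genrsa -out ca.key 2048
$ openssl req -new -x509 -key ca.key -out ca.pem -days 365 -subj "/CN=my-splunk-ca"

# create a server key and a CA-signed server certificate
$ openssl genrsa -out server.key 2048
$ openssl req -new -key server.key -out server.csr -subj "/CN=splunk.example.com"
$ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.pem -days 365
```
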
Sample `default.yml` snippet to configure Splunk TCP with SSL:

```yaml
splunk:
  ...
  s2s:
    # hedged reconstruction: the remaining keys are elided in this excerpt;
    # certificate paths and the password are placeholders
    ca: /mnt/certs/ca.pem
    cert: /mnt/certs/server.pem
    enable: true
    password: <password>
    port: 9997
    ssl: true
```

Building your own images from source is possible, but neither supported nor recommended.

The supplied `Makefile` in the root of this project contains commands to control the build:

1. Fork the [docker-splunk GitHub repository](https://github.com/splunk/docker-splunk/)
1. Clone your fork using git and create a branch off `develop`

---

**docs/ARCHITECTURE.md**

## Architecture

From a design perspective, the containers brought up with the `docker-splunk` images are meant to provision themselves locally and asynchronously. The execution flow of the provisioning process is meant to gracefully handle interoperability in this manner, while also maintaining idempotency and reliability.

## Navigation

* [Supported platforms](#supported-platforms)

## Networking

By default, the Docker image exposes a variety of ports for both external interaction as well as internal use.

```dockerfile
EXPOSE 8000 8065 8088 8089 8191 9887 9997
```
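
A hedged usage sketch: publishing Splunk Web (8000) and the HTTP Event Collector (8088) from the EXPOSE list above when running the container; the password is a placeholder:

```bash
$ docker run -d -p 8000:8000 -p 8088:8088 \
    -e SPLUNK_START_ARGS=--accept-license \
    -e SPLUNK_PASSWORD=<password> \
    splunk/splunk:latest
```
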

## Design

#### Remote networking

Particularly when bringing up distributed Splunk topologies, one Splunk instance often needs to make a request against another Splunk instance in order to construct the cluster. These networking requests are prone to failure, since Ansible is executed asynchronously and there are no guarantees that the requestee is online/ready to receive the message.

While developing new playbooks that require remote Splunk-to-Splunk connectivity, we employ `retry` and `delay` options for tasks. For instance, in the example below, we add indexers as search peers of an individual search head. To overcome error-prone networking, we embed retry counts with delays in the task. There are also break-early conditions that maintain idempotency so we can progress if successful:

<!-- {% raw %} -->
```yaml
# (leading task options elided in this excerpt; retries/delay shown
#  illustratively per the description above)
  retries: "{{ retry_num }}"
  delay: 3
  no_log: "{{ hide_password }}"
  when: "'splunk_indexer' in groups"
```
<!-- {% endraw %} -->

Another utility you can add when creating new plays is an implicit wait. For more information on this, see the `roles/splunk_common/tasks/wait_for_splunk_instance.yml` play, which will wait for another Splunk instance to be online before making any connections against it.

<!-- {% raw %} -->
```yaml
# (leading task options elided in this excerpt)
  ignore_errors: true
  no_log: "{{ hide_password }}"
```
<!-- {% endraw %} -->

## Supported platforms

At this time, the project officially supports running Splunk Enterprise only on `debian:stretch-slim`. We plan to incorporate other operating systems, including Windows, in the future.

---

**docs/CONTRIBUTING.md**

There are multiple types of tests. The location of the test code varies with type.

```bash
$ make medium-tests
```

3. **Large:** Exercises the entire system, end-to-end; used to identify crucial performance and basic functionality that will be run for every code check-in and commit; may launch or interact with services in a data center, preferably with a staging environment to avoid affecting production