18 changes: 18 additions & 0 deletions .github/dependabot.yaml
@@ -10,3 +10,21 @@ updates:
schedule:
# Check for updates to GitHub Actions every month
interval: monthly
- package-ecosystem: terraform
directory: /deployment/dev
groups:
terraform_dev:
patterns:
- "*"
schedule:
# Check for updates to Terraform every month
interval: monthly
- package-ecosystem: terraform
directory: /deployment/prod
groups:
terraform_prod:
patterns:
- "*"
schedule:
# Check for updates to Terraform every month
interval: monthly
29 changes: 29 additions & 0 deletions .github/workflows/dev-checks.yaml
@@ -0,0 +1,29 @@
---
name: dev-checks

"on":
pull_request:
branches:
- main
paths:
- "deployment/dev/**"

jobs:
linting:
uses: broadinstitute/shared-workflows/.github/workflows/[email protected]
with:
working_directory: "./deployment/dev"
validation:
uses: broadinstitute/shared-workflows/.github/workflows/terraform-validate.yaml@hf_use_tfenv
with:
working_directory: "./deployment/dev"
# NOTE: using tfsec because trivy tries to scan remote terraform modules and trivy-ignores
# at root level do not work for remote terraform modules
static_analysis:
uses: broadinstitute/shared-workflows/.github/workflows/[email protected]
secrets:
wf_github_token: ${{ secrets.github_token }}
with:
working_directory: "./deployment/dev"
run_tfsec: true
run_trivy: false
37 changes: 37 additions & 0 deletions .github/workflows/prod-checks.yaml
@@ -0,0 +1,37 @@
---
name: prod-checks

"on":
pull_request:
branches:
- main
paths:
- "deployment/prod/**"

defaults:
run:
working-directory: "./deployment/prod/"

jobs:
terraform-docs:
uses: broadinstitute/shared-workflows/.github/workflows/[email protected]
with:
working_directory: "./deployment/prod"
linting:
uses: broadinstitute/shared-workflows/.github/workflows/[email protected]
with:
working_directory: "./deployment/prod"
validation:
uses: broadinstitute/shared-workflows/.github/workflows/terraform-validate.yaml@hf_use_tfenv
with:
working_directory: "./deployment/prod"
# NOTE: using tfsec because trivy tries to scan remote terraform modules and trivy-ignores
# at root level do not work for remote terraform modules
static_analysis:
uses: broadinstitute/shared-workflows/.github/workflows/[email protected]
secrets:
wf_github_token: ${{ secrets.github_token }}
with:
working_directory: "./deployment/prod"
run_tfsec: true
run_trivy: false
45 changes: 45 additions & 0 deletions .gitignore
@@ -51,3 +51,48 @@ $RECYCLE.BIN/

# Windows shortcuts
*.lnk

# Local .terraform directories
.terraform/

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log
crash.*.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Ignore transient lock info files created by terraform apply
.terraform.tfstate.lock.info

# Include override files you do wish to add to version control using negated pattern
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc

# Optional: ignore graph output files generated by `terraform graph`
# *.dot

# Optional: ignore plan files saved before destroying Terraform configuration
# Uncomment the line below if you want to ignore planout files.
# planout
44 changes: 44 additions & 0 deletions .terraform-docs.yml
@@ -0,0 +1,44 @@
---
formatter: "markdown" # this is required

version: ""

header-from: HEADER.md
footer-from: FOOTER.md

recursive:
enabled: false
path: modules

content: ""

output:
file: ""
mode: inject
template: |-
<!-- BEGIN_TF_DOCS -->
{{ .Content }}
<!-- END_TF_DOCS -->

output-values:
enabled: false
from: ""

sort:
enabled: true
by: required

settings:
anchor: true
color: true
default: true
description: false
escape: true
hide-empty: false
html: true
indent: 2
lockfile: true
read-comments: true
required: true
sensitive: true
type: true
8 changes: 8 additions & 0 deletions atlantis.yaml
@@ -0,0 +1,8 @@
---
version: 3
projects:
- name: prod
dir: deployment/prod
apply_requirements: [approved]
- name: dev
dir: deployment/dev
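The Atlantis config above registers the two Terraform root modules as separate projects, with prod gated on pull-request approval before an apply. A minimal sketch of the corresponding PR comment commands, assuming Atlantis's default comment syntax and the project names defined above:

# Plan and apply only the dev project from a PR comment
atlantis plan -p dev
atlantis apply -p dev

# prod is planned the same way, but the apply is accepted only after
# the pull request has been approved (apply_requirements: [approved])
atlantis plan -p prod
atlantis apply -p prod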
1 change: 1 addition & 0 deletions deployment/app/.gitignore
@@ -0,0 +1 @@
terraform-output-*.json
10 changes: 10 additions & 0 deletions deployment/app/README.md
@@ -0,0 +1,10 @@

Updating the application after the initial deploy requires passing some additional password values to the `helmfile apply` command.

Below are some shell commands that can be used for manual deploys:

export VALKEY_PASSWORD=$(kubectl get secret --namespace <NAMESPACE> netbox-<INSTANCE>-valkey -o jsonpath="{.data.valkey-password}" | base64 -d)

export PASSWORD=$(kubectl get secret --namespace <NAMESPACE> netbox-<INSTANCE>-superuser -o jsonpath="{.data.password}" | base64 -d)

helmfile --set global.valkey.password=$VALKEY_PASSWORD --set superuser.password=$PASSWORD apply
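For illustration, a dev deploy might look like the following. This is only a sketch: it assumes the `broad-netbox-dev` namespace from `dev.yaml`, an instance value of `dev` (so the release is named `netbox-dev`), and that the helmfile environment is selected with `-e dev`.

export VALKEY_PASSWORD=$(kubectl get secret --namespace broad-netbox-dev netbox-dev-valkey -o jsonpath="{.data.valkey-password}" | base64 -d)

export PASSWORD=$(kubectl get secret --namespace broad-netbox-dev netbox-dev-superuser -o jsonpath="{.data.password}" | base64 -d)

helmfile -e dev --set global.valkey.password=$VALKEY_PASSWORD --set superuser.password=$PASSWORD apply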
Empty file added deployment/app/default.yaml
Empty file.
2 changes: 2 additions & 0 deletions deployment/app/dev.yaml
@@ -0,0 +1,2 @@
Google_Project: "broad-netbox-dev"
Namespace: "broad-netbox-dev"
158 changes: 158 additions & 0 deletions deployment/app/helmfile.yaml.gotmpl
@@ -0,0 +1,158 @@
repositories:
- name: netbox
url: https://charts.netbox.oss.netboxlabs.com/

helmDefaults:
kubeContext: gke_bits-gke-clusters_us-east4_gke-autopilot-internal-01-prod

environments:
dev:
values:
- default.yaml
- dev.yaml
- terraform-output-dev.json
prod:
values:
- default.yaml
- prod.yaml
- terraform-output-prod.json

---

# kubeContext: gke_bits-gke-clusters_us-east4_gke-autopilot-01-prod
# annotations:
# kubernetes.io/ingress.class: "gce"
# networking.gke.io/managed-certificates: netbox-hjf
# tls:
# - hosts:
# - "netbox-hjf.broadinstitute.org"
# - allowedHosts:
# - "netbox{{ if ne .Values.instance.value "prod" }}-{{ .Values.instance.value }}{{ end }}.broadinstitute.org"
# - debug: true

releases:
- name: netbox-{{ .Values.instance.value }}
namespace: {{ .Values.Namespace }}
chart: netbox/netbox
version: 7.1.18
values:
# to debug database connections
- extraEnvs:
- name: DB_WAIT_DEBUG
value: "1"
- commonLabels:
environment: {{ .Values.instance.value }}
team: "science-and-technology"
app: "netbox"
- ingress:
enabled: false
className: "gce"
hostname: "netbox{{ if ne .Values.instance.value "prod" }}-{{ .Values.instance.value }}{{ end }}.broadinstitute.org"
hosts:
- host: "netbox{{ if ne .Values.instance.value "prod" }}-{{ .Values.instance.value }}{{ end }}.broadinstitute.org"
paths:
- "/"
- resources:
limits:
cpu: "1"
memory: "2Gi"
requests:
cpu: "1"
memory: "2Gi"
- serviceAccount:
annotations:
iam.gke.io/gcp-service-account: netbox-{{ .Values.instance.value }}@{{ .Values.Google_Project }}.iam.gserviceaccount.com
# Superuser password is stored in the database and can be changed via the UI
- superuser:
password: "TempPW4Now!"
# As long as we use PVCs for media, reports, etc., we cannot do a rolling update because
# the PVCs only allow a single pod to write
- updateStrategy:
type: Recreate
- worker:
sidecars:
- name: cloud-sql-proxy
image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.18.3
imagePullPolicy: Always
ports:
- name: database
containerPort: 5432
args:
# Enable structured logging with LogEntry format:
- "--structured-logs"
# Replace DB_PORT with the port the proxy should listen on
- "--port=5432"
# Use auto iam authn to authenticate with the Cloud SQL instance
- "--auto-iam-authn"
- {{ .Values.application_database.value | quote }}
# You should use resource requests/limits as a best practice to prevent
# pods from consuming too many resources and affecting the execution of
# other pods. You should adjust the following values based on what your
# application needs. For details, see
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
limits:
cpu: "1"
memory: "1Gi"
requests:
# The proxy's CPU use scales linearly with the amount of IO between
# the database and the application. Adjust this value based on your
# application's requirements.
cpu: "1"
# The proxy's memory use scales linearly with the number of active
# connections. Fewer open connections will use less memory. Adjust
# this value based on your application's requirements.
memory: "1Gi"
securityContext:
# The default Cloud SQL Auth Proxy image runs as the
# "nonroot" user and group (uid: 65532) by default.
runAsNonRoot: true
# As long as we use PVCs for media, reports, etc., we cannot do a rolling update because
# the PVCs only allow a single pod to write
updateStrategy:
type: Recreate
- sidecars:
- name: cloud-sql-proxy
image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.18.3
imagePullPolicy: Always
ports:
- name: database
containerPort: 5432
args:
# Enable structured logging with LogEntry format:
- "--structured-logs"
# Replace DB_PORT with the port the proxy should listen on
- "--port=5432"
# Use auto iam authn to authenticate with the Cloud SQL instance
- "--auto-iam-authn"
- {{ .Values.application_database.value | quote }}
# You should use resource requests/limits as a best practice to prevent
# pods from consuming too many resources and affecting the execution of
# other pods. You should adjust the following values based on what your
# application needs. For details, see
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
limits:
cpu: "1"
memory: "1Gi"
requests:
# The proxy's CPU use scales linearly with the amount of IO between
# the database and the application. Adjust this value based on your
# application's requirements.
cpu: "1"
# The proxy's memory use scales linearly with the number of active
# connections. Fewer open connections will use less memory. Adjust
# this value based on your application's requirements.
memory: "1Gi"
securityContext:
# The default Cloud SQL Auth Proxy image runs as the
# "nonroot" user and group (uid: 65532) by default.
runAsNonRoot: true
- postgresql:
enabled: false
- externalDatabase:
host: localhost
port: 5432
database: netbox
username: netbox
password: ref+gcpsecrets://{{ .Values.Google_Project }}/{{ .Values.application_database_password_secret.value }}?version=1
2 changes: 2 additions & 0 deletions deployment/app/prod.yaml
@@ -0,0 +1,2 @@
Google_Project: "broad-netbox-prod"
Namespace: "broad-netbox-prod"
1 change: 1 addition & 0 deletions deployment/dev/.gitignore
@@ -0,0 +1 @@
!terraform.tfvars
1 change: 1 addition & 0 deletions deployment/dev/.terraform-version
@@ -0,0 +1 @@
1.13.5