3 changes: 2 additions & 1 deletion docs.json
Expand Up @@ -34,7 +34,8 @@
"pages": [
"quickstart",
"guide",
-  "self-hosting"
+  "self-hosting",
+  "self-hosting-helm"
]
},
{
376 changes: 376 additions & 0 deletions self-hosting-helm.mdx
@@ -0,0 +1,376 @@
---
title: Self-hosting with Helm
description: Deploy Sure on Kubernetes using the official Helm chart
---

This guide shows you how to deploy Sure on Kubernetes using the official Helm chart. The chart supports web (Rails) and worker (Sidekiq) workloads, optional in-cluster PostgreSQL and Redis, and production-grade features like pre-upgrade migrations, pod security contexts, and horizontal pod autoscaling.

## Prerequisites

- Kubernetes >= 1.25
- Helm >= 3.10
- Basic familiarity with Kubernetes and Helm

## Features

- Web (Rails) deployment with service and optional ingress
- Worker (Sidekiq) deployment
- Optional database migrations via Helm hook job or initContainer
- Optional subcharts for PostgreSQL (CloudNativePG) and Redis (OT-CONTAINER-KIT redis-operator)
- Security best practices: runAsNonRoot, readOnlyRootFilesystem, no hardcoded secrets
- Scalability: replicas, resources, topology spread constraints, optional HPAs
- Optional CronJobs for custom tasks

## Installation

### Add Helm repositories

Add the Sure Helm repository:

```bash
helm repo add sure https://we-promise.github.io/sure
helm repo update
```

If you plan to use the bundled PostgreSQL or Redis subcharts, add their repositories as well:

```bash
helm repo add cloudnative-pg https://cloudnative-pg.github.io/charts
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
helm repo update
```

### Quickstart (turnkey self-hosting)

This installs the CloudNativePG operator with a PostgreSQL cluster, plus Redis managed by the OT redis-operator.

<Warning>
For production stability, use immutable image tags (for example, `image.tag=v1.2.3`) instead of `latest`.
</Warning>

```bash
# Create namespace
kubectl create ns sure || true

# Install chart with a pinned image tag
helm upgrade --install sure sure/sure \
  -n sure \
  --set image.tag=v1.2.3 \
  --set rails.secret.enabled=true \
  --set rails.secret.values.SECRET_KEY_BASE=$(openssl rand -hex 32)
```

<Warning>
Generating `SECRET_KEY_BASE` inline is convenient for a quick test, but the value lands in your shell history and in the stored Helm release values. For production, create a Kubernetes Secret first and reference it with `rails.existingSecret` (see [Secrets management](#secrets-management)).
</Warning>

Expose the app via an ingress (see configuration below) or port-forward:

```bash
kubectl port-forward svc/sure 8080:80 -n sure
```

Navigate to `http://localhost:8080` to access Sure.

## Configuration

### Using external Postgres and Redis

To use external managed databases instead of the bundled subcharts:

```yaml
cnpg:
  enabled: false

redisOperator:
  managed:
    enabled: false

redisSimple:
  enabled: false

rails:
  extraEnv:
    DATABASE_URL: postgresql://user:pass@db.example.com:5432/sure
    REDIS_URL: redis://:pass@redis.example.com:6379/0
```
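Before pointing the chart at external services, it can help to sanity-check the endpoint encoded in your `DATABASE_URL`. A minimal sketch (the URL and the `pg-check` pod name are illustrative):

```shell
DATABASE_URL="postgresql://user:pass@db.example.com:5432/sure"

# Pull host and port out of the URL with shell parameter expansion
hostport="${DATABASE_URL#*@}"   # db.example.com:5432/sure
hostport="${hostport%%/*}"      # db.example.com:5432
host="${hostport%%:*}"
port="${hostport##*:}"
echo "checking $host:$port"

# Probe the endpoint from inside the cluster (uncomment against a live cluster):
# kubectl run -n sure pg-check --rm -i --restart=Never \
#   --image=postgres:16 -- pg_isready -h "$host" -p "$port"
```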

### Deployment profiles

#### Simple single-node

A minimal setup for development or small deployments:

```yaml
image:
  repository: ghcr.io/we-promise/sure
  tag: "v1.0.0"
  # With an immutable tag, IfNotPresent avoids redundant image pulls.
  pullPolicy: IfNotPresent

rails:
  existingSecret: sure-secrets
  # Injects the Active Record Encryption keys (required for self-hosted mode).
  encryptionEnv:
    enabled: true
  settings:
    # Runs Sure in self-hosted mode.
    SELF_HOSTED: "true"

cnpg:
  enabled: true
  cluster:
    enabled: true
    name: sure-db
    instances: 1
    storage:
      size: 8Gi
      # Replace with a storage class that exists in your cluster.
      storageClassName: longhorn

redisOperator:
  enabled: true
  managed:
    enabled: true
    mode: replication
    # A single replica is enough for this profile; raise it for HA.
    replicas: 1
    persistence:
      enabled: true
      className: longhorn
      size: 8Gi

migrations:
  strategy: job
```

#### HA k3s profile

High availability setup with multiple replicas and synchronous replication:

```yaml
cnpg:
  enabled: true
  cluster:
    enabled: true
    name: sure-db
    instances: 3
    storage:
      size: 20Gi
      storageClassName: longhorn
    minSyncReplicas: 1
    maxSyncReplicas: 2
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            cnpg.io/cluster: sure-db

redisOperator:
  enabled: true
  managed:
    enabled: true
    mode: replication
    replicas: 3
    persistence:
      enabled: true
      className: longhorn
      size: 8Gi

migrations:
  strategy: job
  initContainer:
    enabled: true

hpa:
  web:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
```
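Rather than passing many `--set` flags, either profile can be saved to a values file and applied with `-f` (the filename is arbitrary):

```shell
# Save the chosen profile to a file...
cat > sure-values.yaml <<'EOF'
migrations:
  strategy: job
EOF

# ...and pass it to helm (requires the chart repo and namespace from above):
# helm upgrade --install sure sure/sure -n sure -f sure-values.yaml
```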

### Secrets management

Create a Kubernetes secret with the required credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sure-secrets
type: Opaque
stringData:
  # Rails secrets
  SECRET_KEY_BASE: "__SET_SECRET__"

  # Active Record Encryption keys (required for self-hosted mode)
  ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY: "__SET_SECRET__"
  ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY: "__SET_SECRET__"
  ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT: "__SET_SECRET__"

  # Redis password
  redis-password: "__SET_SECRET__"
```
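Each `__SET_SECRET__` placeholder should be a strong random value. One way to generate them (hex-encoded random bytes via `openssl`, the same approach the quickstart uses for `SECRET_KEY_BASE`):

```shell
# Generate a strong random value for each placeholder
openssl rand -hex 32   # SECRET_KEY_BASE
openssl rand -hex 32   # ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY
openssl rand -hex 32   # ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY
openssl rand -hex 32   # ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT
openssl rand -hex 16   # redis-password
```

Paste each value into the manifest, or create the secret directly with `kubectl create secret generic sure-secrets -n sure --from-literal=...` to avoid writing secret material to disk.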

Apply the secret:

```bash
kubectl apply -f sure-secrets.yaml -n sure
```

Reference the secret in your values:

```yaml
rails:
  existingSecret: sure-secrets

redisOperator:
  managed:
    enabled: true
    auth:
      existingSecret: sure-secrets
      passwordKey: redis-password
```

### Ingress configuration

Enable ingress to expose Sure externally:

```yaml
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: finance.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts: [finance.example.com]
      secretName: finance-tls
```

### Horizontal pod autoscaling

Enable HPAs for automatic scaling based on CPU utilization:

```yaml
hpa:
  web:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70

  worker:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
```

## Updating

To update to a new version of Sure:

```bash
# Update the Helm repository
helm repo update sure

# Update the deployment with a new image tag
helm upgrade sure sure/sure -n sure \
  --set image.tag=v0.6.6-alpha.7 \
  -f your-values.yaml
```

The chart will automatically run database migrations before deploying the new version.

## Backup and restore

### PostgreSQL backups with CloudNativePG

CloudNativePG supports volume snapshot backups:

```yaml
cnpg:
  cluster:
    backup:
      method: volumeSnapshot
      volumeSnapshot:
        className: longhorn
```

### Manual backup

Create a manual backup of your PostgreSQL database:

```bash
# Get the primary pod name (bare name, so it also works with kubectl cp below)
PRIMARY_POD=$(kubectl get pod -n sure -l cnpg.io/cluster=sure-db,role=primary \
  -o jsonpath='{.items[0].metadata.name}')

# Create a backup
kubectl exec -n sure $PRIMARY_POD -- pg_dump -U sure sure_production > backup.sql

```

These commands assume the default PostgreSQL user `sure` and database `sure_production`; adjust them if you customized those values in your chart configuration.
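Before relying on a dump file, it is worth confirming that it is non-empty and starts with pg_dump's header. A small local check (assumes a plain-format dump named `backup.sql`):

```shell
backup=backup.sql
if [ -s "$backup" ] && head -n 5 "$backup" | grep -q "PostgreSQL database dump"; then
  echo "dump looks ok"
else
  echo "dump missing or empty" >&2
fi
```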

### Restore from backup

```bash
# Copy backup to pod
kubectl cp backup.sql sure/$PRIMARY_POD:/tmp/backup.sql

# Restore from the copy inside the pod
kubectl exec -n sure $PRIMARY_POD -- psql -U sure -d sure_production -f /tmp/backup.sql
```

## Troubleshooting

### View logs

```bash
# Web logs
kubectl logs -n sure -l app.kubernetes.io/component=web

# Worker logs
kubectl logs -n sure -l app.kubernetes.io/component=worker

# Migration job logs
kubectl logs -n sure -l job-name=sure-migrate
```

### Check pod status

```bash
kubectl get pods -n sure
```

### Verify database connectivity

```bash
# Test connection from web pod
kubectl exec -n sure deploy/sure-web -- rails runner "puts ActiveRecord::Base.connection.execute('SELECT 1').first"
```

### Run Helm tests

After installation, verify the deployment:

```bash
helm test sure -n sure
```

## Uninstall

To remove Sure from your cluster:

```bash
helm uninstall sure -n sure
```

<Warning>
This will not delete PersistentVolumeClaims. To completely remove all data, manually delete the PVCs after uninstalling. The command below deletes **every** PVC in the `sure` namespace; if other applications share the namespace, delete only the Sure-related PVCs instead (for example, by name).
</Warning>

```bash
kubectl delete pvc -n sure --all
```

## Getting help

If you find bugs or have feature requests:
- Read the [contributing guide](https://github.com/we-promise/sure/wiki/How-to-Contribute-Effectively-to-Sure)
- Ask in the [Discord](https://discord.gg/36ZGBsxYEK)
- Open an [issue](https://github.com/we-promise/sure/issues/new/choose)