
Commit 8033d70

chore: add longhorn backup restore blog

1 parent ae7035a commit 8033d70

File tree

2 files changed (+11, −44 lines)

_posts/2025-9-11-longhorn-backup-restore.md

Lines changed: 11 additions & 44 deletions
@@ -1,23 +1,23 @@
 ---
-title: "Restoring from Longhorn Backups: A Complete Guide"
+title: "Restoring from Longhorn Backups"
 date: 2025-09-11 14:30:00 +0300
-categories: [kubernetes, disaster-recovery]
+categories: [infrastructure]
 #tags: [kubernetes,longhorn,backup,restore,disaster-recovery,k8s,storage,minio,prometheus,grafana,jellyfin,flux,gitops]
-description: Complete step-by-step guide to restoring Kubernetes applications from Longhorn backups stored in MinIO, including scaling strategies, PVC management, and real-world lessons learned.
+description: Oops, My App Died! The Complete Human's Guide to Rescuing Kubernetes Applications from Longhorn Backups in MinIO
 image:
   path: /assets/img/posts/k8s-longhorn-restore.webp
   alt: Kubernetes Longhorn backup restoration guide
-draft: true
+draft: false
 ---


-# Restoring Your Kubernetes Applications from Longhorn Backups: A Complete Guide
+# Restoring Your Kubernetes Applications from Longhorn Backups

-When disaster strikes your Kubernetes cluster, having a solid backup strategy isn't enough—you need to know how to restore your applications quickly and reliably. Recently, I had to rebuild my entire K8S cluster from scratch and restore all my applications from 3-month-old Longhorn backups stored in MinIO. Here's the complete step-by-step process that got my media stack and observability tools back online.
+When disaster strikes your Kubernetes cluster, having a solid backup strategy isn't enough—you need to know how to restore your applications quickly and reliably. Recently, I had to rebuild my entire K8S cluster from scratch and restore all my applications from Longhorn backups stored in MinIO. Here's the complete process that got my media stack and observability tools back online.

 ## The Situation

-After redeploying my K8S cluster with Flux GitOps, I found myself with:
+After redeploying my K8S cluster with [Flux GitOps](https://merox.dev/blog/homelab-tour/), I found myself with:
 - ✅ Fresh cluster with all applications deployed via Flux
 - ✅ Longhorn storage configured and connected to MinIO backend
 - ✅ All backup data visible in Longhorn UI
@@ -42,7 +42,7 @@ Before starting, ensure you have:
 - Kubernetes cluster with kubectl access
 - Longhorn installed and configured
 - Backup storage backend accessible (MinIO/S3)
-- Applications deployed (scaled up or down doesn't matter)
+- Applications deployed (scaled up or down doesn't really matter)
 - Longhorn UI access for backup management

 ## Step 1: Assess Current State
@@ -128,7 +128,7 @@ For each backup, click the **⟲ (restore)** button and configure:
 Once restoration completes, the restored Longhorn volumes need PersistentVolumes to be accessible by Kubernetes:

 ```yaml
-# Example for Jellyfin - repeat for all applications
+# Example for Jellyfin - repeat for all applications you want to be restored
 apiVersion: v1
 kind: PersistentVolume
 metadata:
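The manifest in the hunk above is cut off at the diff boundary. A complete Longhorn-backed PV of this shape might look like the following sketch; the name, size, and volume handle are illustrative assumptions, not values from the post:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-config-pv          # illustrative name, not from the post
spec:
  capacity:
    storage: 10Gi                   # should match or exceed the restored volume size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeHandle: jellyfin-config   # must match the restored Longhorn volume name exactly
```

The `volumeHandle` is the field the post calls out: it must equal the name of the volume Longhorn created during the restore, or the PV will never bind.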
@@ -222,25 +222,6 @@ kubectl get pods -n default -o wide
 kubectl get pods -n observability -o wide
 ```

-## Results and Key Lessons
-
-### What Was Restored Successfully ✅
-- **Jellyfin**: Complete media library, metadata, and user settings
-- **Grafana**: All dashboards, data sources, and alerting rules
-- **Prometheus**: Historical metrics and configuration
-- **Loki**: Log retention policies and stored logs
-- **QBittorrent**: Torrent configurations and download states
-- **Sonarr**: TV show monitoring and quality profiles
-
-### Important Considerations
-
-1. **Data Age**: My backups were 3 months old, so any data created after that point was lost. Plan backup frequency accordingly.
-
-2. **Storage Sizes**: Pay attention to backup sizes vs. current PVC sizes. My Prometheus backup was 45GB while the current PVC was only 15GB—the restore process required updating the PVC size.
-
-3. **Volume Naming**: Longhorn creates restored volumes with specific names. The PV `volumeHandle` must match exactly.
-
-4. **Application Dependencies**: Some applications have interdependencies. Restore core infrastructure (Prometheus, Grafana) before application-specific services.

 ## Alternative: CLI-Based Restoration

@@ -260,23 +241,9 @@ spec:

 ## Conclusion

-Restoring Kubernetes applications from Longhorn backups requires careful orchestration of scaling, PVC management, and volume binding. The process took about 45 minutes for 6 applications, but the result was a complete restoration to the previous backup state.
-
-Key takeaways:
-- **Always scale down applications first** to prevent corruption
-- **Understand the relationship** between Longhorn volumes, PVs, and PVCs
-- **Test your backup restoration process** before you need it
-- **Document your PVC naming conventions** for faster recovery
-- **Monitor backup age** vs. acceptable data loss
+Restoring Kubernetes applications from Longhorn backups requires careful orchestration of scaling, PVC management, and volume binding. The process took about 30 minutes for 6 applications, but the result was a complete restoration to the previous backup state.

 Having a solid backup strategy is crucial, but knowing how to restore efficiently under pressure is what separates good infrastructure management from great infrastructure management.

-## Next Steps
-
-Consider implementing:
-- **Automated backup validation** to ensure restorability
-- **Backup age monitoring** with alerts
-- **Documentation of critical PVC mappings**
-- **Regular disaster recovery drills**

-Your future self will thank you when disaster strikes again.
+Your future self will thank you when disaster strikes again. 😆
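The takeaways removed in this commit repeatedly lean on the scale-down/restore/scale-up pattern. A rough sketch of that pattern is below; the deployment name and namespace are assumptions for illustration, and the commands require kubectl access to a live cluster:

```shell
# Scale the workload to zero so nothing writes to the volume during restore
kubectl scale deployment jellyfin --replicas=0 -n default

# ... perform the restore in the Longhorn UI, then recreate the PV/PVC ...

# Scale back up and wait until the rollout completes
kubectl scale deployment jellyfin --replicas=1 -n default
kubectl rollout status deployment/jellyfin -n default
```

For StatefulSets (e.g. Prometheus), substitute `kubectl scale statefulset` with the same replica values.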
File renamed without changes.
