chore: link vault wiki to Gitea
02-selfhosting/dns-networking/.keep (new file, 0 lines)
02-selfhosting/dns-networking/tailscale-homelab-remote-access.md (new file, 145 lines)

@@ -0,0 +1,145 @@
---
title: "Tailscale for Homelab Remote Access"
domain: selfhosting
category: dns-networking
tags: [tailscale, vpn, remote-access, wireguard, homelab]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Tailscale for Homelab Remote Access

Tailscale is how I access my home services from anywhere. It creates a private encrypted mesh network between all your devices — no port forwarding, no dynamic DNS, no exposing anything to the internet. It just works, which is what you want from networking infrastructure.

## The Short Answer

Install Tailscale on every device you want connected. Sign in with the same account. Done — all devices can reach each other by hostname.

```bash
# Install on Linux
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Check status
tailscale status

# See your devices and IPs
tailscale status --json | jq '.Peer[] | {Name: .HostName, IP: .TailscaleIPs[0]}'
```

## How It Works

Tailscale sits on top of WireGuard — the modern, fast, audited VPN protocol. Each device gets a `100.x.x.x` address on your tailnet (private network). Traffic between devices travels over encrypted WireGuard tunnels — peer-to-peer when possible, routed through Tailscale's DERP relay servers when a direct connection isn't possible (restrictive NAT, cellular, etc.).

The key thing: your home server's services never need to be exposed to the public internet. They stay bound to `localhost` or your LAN IP, and Tailscale makes them accessible from your other devices over the tailnet.

## MagicDNS

Enable MagicDNS in the Tailscale admin console (login.tailscale.com → DNS → Enable MagicDNS). This assigns each device a stable hostname based on its machine name.

```
# Instead of remembering 100.64.x.x
http://homelab:3000

# Or the full MagicDNS name (works from anywhere on the tailnet)
http://homelab.tail-xxxxx.ts.net:3000
```

No manual DNS configuration on any device. When Tailscale is running, hostnames resolve automatically.

## Installation by Platform

**Linux (server/desktop):**
```bash
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```

**macOS:**
Download from the Mac App Store or tailscale.com/download/mac. Sign in through the menu bar app.

**iOS/iPadOS:**
App Store → Tailscale. Sign in, enable the VPN.

**Windows:**
Download the installer from tailscale.com. Runs as a system service.

## Making Services Accessible Over Tailscale

By default, Docker publishes container ports on `0.0.0.0`, so they're already reachable on the Tailscale interface. Verify:

```bash
docker ps --format "table {{.Names}}\t{{.Ports}}"
```

Look for `0.0.0.0:PORT` in the output. If you see `127.0.0.1:PORT`, the service is bound to localhost only and won't be reachable. Fix it in the compose file by removing the `127.0.0.1:` prefix from the port mapping.
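
For illustration, the two forms side by side in a compose file (the service name and port are hypothetical):

```yaml
services:
  myapp:
    image: myapp:latest
    ports:
      # localhost-only — reachable from the host itself, but NOT over Tailscale
      # - "127.0.0.1:8080:8080"
      # all interfaces (the default short form) — reachable over the tailnet
      - "8080:8080"
```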

Ollama on Linux defaults to localhost. Override it:

```ini
# Add to /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

```bash
sudo systemctl daemon-reload && sudo systemctl restart ollama
```

## Access Control

By default, all devices on your tailnet can reach all other devices. For a personal homelab this is fine. If you want to restrict access, Tailscale ACLs (in the admin console) let you define which devices can reach which others.

Simple ACL example — allow all devices to reach all others (the default):

```json
{
  "acls": [
    {"action": "accept", "src": ["*"], "dst": ["*:*"]}
  ]
}
```

More restrictive — only your laptop can reach the server:

```json
{
  "acls": [
    {
      "action": "accept",
      "src": ["tag:laptop"],
      "dst": ["tag:server:*"]
    }
  ]
}
```
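
For the tag-based rule to work, the tags also need to be declared in the policy's `tagOwners` section and applied to the devices. A minimal sketch (the tag names match the example above; the owner group is an assumption):

```json
{
  "tagOwners": {
    "tag:laptop": ["autogroup:admin"],
    "tag:server": ["autogroup:admin"]
  }
}
```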

## Subnet Router (Optional)

If you want all your home LAN devices accessible over Tailscale (not just devices with Tailscale installed), set up a subnet router on your home server:

```bash
# Advertise your home subnet
sudo tailscale up --advertise-routes=192.168.1.0/24

# Approve the route in the admin console
# Then on client devices, enable accepting routes:
sudo tailscale up --accept-routes
```

Now any device on your home LAN is reachable from anywhere on the tailnet, even if Tailscale isn't installed on that device.

## Gotchas & Notes

- **Tailscale is not a replacement for a firewall.** It secures device-to-device communication, but your server still needs proper firewall rules for LAN access.
- **Devices need to be approved in the admin console** unless you've enabled auto-approval. If a new device can't connect, check the admin console first.
- **Mobile devices will disconnect in the background** depending on OS settings. iOS aggressively kills VPN connections. Enable background app refresh for Tailscale in iOS Settings.
- **DERP relay adds latency** when direct connections aren't possible (common on cellular). Still encrypted and functional, just slower than direct peer-to-peer.
- **Exit nodes** let you route all traffic through a specific tailnet device — useful as a simple home VPN if you want all your internet traffic going through your home IP when traveling. Set with `--advertise-exit-node` on the server and `--exit-node=hostname` on the client.

## See Also

- [[self-hosting-starter-guide]]
- [[linux-server-hardening-checklist]]
- [[setting-up-caddy-reverse-proxy]]
02-selfhosting/docker/.keep (new file, 0 lines)
02-selfhosting/docker/debugging-broken-docker-containers.md (new file, 168 lines)

@@ -0,0 +1,168 @@
---
title: "Debugging Broken Docker Containers"
domain: selfhosting
category: docker
tags: [docker, troubleshooting, debugging, containers, logs]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Debugging Broken Docker Containers

When something in Docker breaks, there's a sequence to work through. Check logs first, inspect the container second, deal with network and permissions third. Resist the urge to rebuild immediately — most failures tell you what's wrong if you look.

## The Short Answer

```bash
# Step 1: check logs
docker logs containername

# Step 2: inspect the container
docker inspect containername

# Step 3: get a shell inside
docker exec -it containername /bin/bash
# or /bin/sh if bash isn't available
```

## The Debugging Sequence

### 1. Check the logs first

Before anything else:

```bash
docker logs containername

# Follow live output
docker logs -f containername

# Last 100 lines
docker logs --tail 100 containername

# With timestamps
docker logs --timestamps containername
```

Most failures announce themselves in the logs. Crash loops, config errors, missing environment variables — it's usually right there.

### 2. Check if the container is actually running

```bash
docker ps

# Show stopped containers too
docker ps -a
```

If the container shows as `Exited (1)` or any non-zero exit code, it crashed. The logs will usually tell you why.

### 3. Inspect the container

`docker inspect` dumps everything — environment variables, mounts, network config, the actual command being run:

```bash
docker inspect containername
```

Too verbose? Filter for the part you need:

```bash
# Just the mounts
docker inspect --format='{{json .Mounts}}' containername | jq

# Just the environment variables
docker inspect --format='{{json .Config.Env}}' containername | jq

# Exit code
docker inspect --format='{{.State.ExitCode}}' containername
```

### 4. Get a shell inside

If the container is running but misbehaving:

```bash
docker exec -it containername /bin/bash
```

If it crashed and won't stay up, override the entrypoint to get in anyway:

```bash
docker run -it --entrypoint /bin/bash imagename:tag
```

### 5. Check port conflicts

Container starts but the service isn't reachable:

```bash
# See what ports are mapped
docker ps --format "table {{.Names}}\t{{.Ports}}"

# See what's using a port on the host
sudo ss -tlnp | grep :8080
```

Two containers can't bind the same host port — Docker refuses to start the second one with a "port is already allocated" error. If a non-Docker process grabbed the port first, the bind fails the same way; `ss -tlnp` shows you who has it.

### 6. Check volume permissions

Permission errors inside containers are almost always a UID mismatch. The user inside the container doesn't own the files on the host volume.

```bash
# Check ownership of the mounted directory on the host
ls -la /path/to/host/volume

# Find out what UID the container runs as
docker inspect --format='{{.Config.User}}' containername
```

Fix by chowning the host directory to match, or by explicitly setting the user in your compose file:

```yaml
services:
  myapp:
    image: myapp:latest
    user: "1000:1000"
    volumes:
      - ./data:/app/data
```

### 7. Recreate cleanly when needed

If you've made config changes and things are in a weird state:

```bash
docker compose down
docker compose up -d

# Nuclear option — also removes volumes (data gone)
docker compose down -v
```

Don't jump straight to the nuclear option. Only use `-v` if you want a completely clean slate and don't care about the data.

## Common Failure Patterns

| Symptom | Likely cause | Where to look |
|---|---|---|
| Exits immediately | Config error, missing env var | `docker logs` |
| Keeps restarting | Crash loop — app failing to start | `docker logs`, exit code |
| Port not reachable | Port conflict or wrong binding | `docker ps`, `ss -tlnp` |
| Permission denied inside container | UID mismatch on volume | `ls -la` on host path |
| "No such file or directory" | Wrong mount path or missing file | `docker inspect` mounts |
| Container runs but service is broken | App config error, not Docker | shell in, check app logs |

## Gotchas & Notes

- **`docker restart` doesn't pick up compose file changes.** Use `docker compose up -d` to apply changes.
- **Logs persist after a container exits.** You can still `docker logs` a stopped container — useful for post-mortem on crashes.
- **If a container won't stay up long enough to exec into it**, use the entrypoint override (`--entrypoint /bin/bash`) with `docker run` against the image directly.
- **Watch out for cached layers on rebuild.** If you're rebuilding an image and the behavior doesn't change, add `--no-cache` to `docker build`.

## See Also

- [[docker-vs-vms-homelab]]
- [[tuning-netdata-web-log-alerts]]
02-selfhosting/docker/docker-vs-vms-homelab.md (new file, 95 lines)

@@ -0,0 +1,95 @@
---
title: "Docker vs VMs in the Homelab: Why Not Both?"
domain: selfhosting
category: docker
tags: [docker, vm, homelab, virtualization, containers]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Docker vs VMs in the Homelab: Why Not Both?

People treat this like an either/or decision. It's not. Docker and VMs solve different problems, and the right homelab runs both. Here's how I think about which one to reach for.

## The Short Answer

Use Docker for services. Use VMs for things that need full OS isolation, a different kernel, or Windows. Run them side by side — they're complementary, not competing.

## What Docker Is Good At

Docker containers are great for running services — apps, databases, reverse proxies, monitoring stacks. They start fast, they're easy to move, and Docker Compose makes multi-service setups manageable with a single file.

```yaml
# docker-compose.yml — a simple example
services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/app/data
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pgdata:
```

The key advantages:

- **Density** — you can run a lot of containers on modest hardware
- **Portability** — move a service to another machine by copying the compose file and a data directory
- **Isolation from other services** (but not from the host kernel)
- **Easy updates** — pull a new image, recreate the container

## What VMs Are Good At

VMs give you a completely separate kernel and OS. That matters when:

- You need a **Windows environment** on Linux hardware (gaming server, specific Windows-only tools)
- You're running something that needs a **different kernel version** than the host
- You want **stronger isolation** — a compromised container can potentially escape to the host; a compromised VM is much harder to escape
- You're testing a full OS install, distro setup, or something destructive
- You need **hardware passthrough** — GPU, USB devices, etc.

On Linux, KVM + QEMU is the stack. `virt-manager` gives you a GUI if you want it.

```bash
# Install KVM stack on Fedora/RHEL
sudo dnf install qemu-kvm libvirt virt-install virt-manager

# Start and enable the libvirt daemon
sudo systemctl enable --now libvirtd

# Verify KVM is available
sudo virt-host-validate
```

## How I Actually Use Both

In practice:

- **Self-hosted services** (Nextcloud, Gitea, Jellyfin, monitoring stacks) → Docker Compose
- **Gaming/Windows stuff that needs the real deal** → VM with GPU passthrough
- **Testing a new distro or destructive experiments** → VM, snapshot before anything risky
- **Network appliances** (pfSense, OPNsense) → VM, not a container

The two coexist fine on the same host. Docker handles the service layer, KVM handles the heavier isolation needs.

## Gotchas & Notes

- **Containers share the host kernel.** That's a feature for performance and density, but it means a kernel exploit affects everything on the host. For sensitive workloads, VM isolation is worth the overhead.
- **Networking gets complicated when both are running.** Docker creates its own bridge networks, KVM does the same. Know which traffic is going where. Naming your Docker networks explicitly helps.
- **Backups are different.** Backing up a Docker service means backing up volumes + the compose file. Backing up a VM means snapshotting the QCOW2 disk file. Don't treat them the same.
- **Don't run Docker inside a VM on your homelab unless you have a real reason.** It works, but you're layering virtualization overhead for no benefit in most cases.

## See Also

- [[managing-linux-services-systemd-ansible]]
- [[tuning-netdata-web-log-alerts]]
02-selfhosting/docker/self-hosting-starter-guide.md (new file, 115 lines)

@@ -0,0 +1,115 @@
---
title: "Self-Hosting Starter Guide"
domain: selfhosting
category: docker
tags: [selfhosting, homelab, docker, getting-started, privacy]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Self-Hosting Starter Guide

Self-hosting is running your own services on hardware you control instead of depending on someone else's platform. It's more work than just signing up for a SaaS product, but you own your data, you don't have to worry about a company changing its pricing or disappearing, and there's no profit motive to do weird things with your stuff.

This is where to start if you want in but don't know where to begin.

## What You Actually Need

You don't need a rack full of servers. A single machine with Docker is enough to run most things people want to self-host. Options in rough order of entry cost:

- **Old laptop or desktop** — works fine for home use, just leave it on
- **Raspberry Pi or similar SBC** — low power, always-on, limited compute
- **Mini PC** (Intel NUC, Beelink, etc.) — better than an SBC, still low power, actually useful
- **Old server hardware** — more capable, louder, more power draw
- **VPS** — if you want public internet access and don't want to deal with networking

For a first setup, a mini PC or a spare desktop you already have is the right call.

## The Core Stack

Three tools cover almost everything:

**Docker** — runs your services as containers. One command to start a service, one command to update it.

**Docker Compose** — defines multi-service setups in a YAML file. Most self-hosted apps publish a `compose.yml` you can use directly.

**A reverse proxy** (Nginx Proxy Manager, Caddy, or Traefik) — routes traffic to your services by hostname or path, handles SSL certificates.

```bash
# Install Docker (Linux)
curl -fsSL https://get.docker.com | sh

# Add your user to the docker group (so you don't need sudo every time)
sudo usermod -aG docker $USER
# Log out and back in after this

# Verify
docker --version
docker compose version
```

## Starting a Service

Most self-hosted apps have a Docker Compose file in their documentation. The pattern is the same for almost everything:

1. Create a directory for the service
2. Put the `compose.yml` in it
3. Run `docker compose up -d`

Example — Uptime Kuma (monitoring):

```yaml
# uptime-kuma/compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
    restart: unless-stopped
```

```bash
mkdir uptime-kuma && cd uptime-kuma
# paste the compose.yml
docker compose up -d
# open http://your-server-ip:3001
```

## What to Self-Host First

Start with things that are low-stakes and high-value:

- **Uptime Kuma** — monitors your other services, alerts when things go down. Easy to set up, immediately useful.
- **Portainer** — web UI for managing Docker. Makes it easier to see what's running and pull updates.
- **Vaultwarden** — self-hosted Bitwarden-compatible password manager. Your passwords on your hardware.
- **Nextcloud** — file sync and storage. Replaces Dropbox/Google Drive.
- **Jellyfin** — media server for your own video library.

Don't try to spin everything up at once. Get one service working, understand how it runs, then add the next one.

## Networking Basics

By default, services are only accessible on your home network. You have options for accessing them remotely:

- **Tailscale** — install on your server and your other devices, and everything is accessible over a private encrypted network. Zero port forwarding required. This is what I use.
- **Cloudflare Tunnel** — exposes services publicly through Cloudflare's network without opening ports. Good if you want things internet-accessible without exposing your home IP.
- **Port forwarding** — traditional method, opens a port on your router to the server. Works but exposes your home IP.

Tailscale is the easiest and safest starting point for personal use.

## Gotchas & Notes

- **Persistent storage:** Always map volumes for your service's data directory. If you run a container without a volume and it gets recreated, your data is gone.
- **Restart policies:** Use `restart: unless-stopped` on services you want to survive reboots. `always` also works but will restart even if you manually stopped the container.
- **Updates:** Pull the new image and recreate the container. `docker compose pull && docker compose up -d` is the standard pattern. Check the app's changelog first for anything that requires migration steps.
- **Backups:** Self-hosting means you're responsible for your own backups. Back up the data directories for your services regularly. The `compose.yml` files should be in version control or backed up separately.
- **Don't expose everything to the internet.** If you don't need public access to a service, don't expose it. Tailscale for private access is safer than punching holes in your firewall.
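
On the backups point, the simplest version is a shell loop that tars each service's data directory. A minimal sketch — the one-folder-per-service layout and the paths are assumptions, adjust for yours:

```shell
# backup_services SRC DEST — archive each subdirectory of SRC into DEST,
# one dated tarball per service (assumes one folder per service under SRC).
backup_services() {
  src=$1
  dest=$2
  stamp=$(date +%Y-%m-%d)
  mkdir -p "$dest"
  for dir in "$src"/*/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    tar -czf "$dest/$name-$stamp.tar.gz" -C "$src" "$name"
  done
}

# e.g. run nightly from cron:
# backup_services /home/you/services /home/you/backups
```

Stop or pause write-heavy services (databases especially) before archiving, or use the app's own dump tool, so you don't capture a mid-write state.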

## See Also

- [[docker-vs-vms-homelab]]
- [[debugging-broken-docker-containers]]
- [[linux-server-hardening-checklist]]
02-selfhosting/monitoring/.keep (new file, 0 lines)
02-selfhosting/monitoring/tuning-netdata-web-log-alerts.md (new file, 88 lines)

@@ -0,0 +1,88 @@
---
title: Tuning Netdata Web Log Alerts
domain: selfhosting
category: monitoring
tags:
  - netdata
  - apache
  - monitoring
  - alerts
  - ubuntu
status: published
created: '2026-03-06'
updated: '2026-03-08'
---

# Tuning Netdata Web Log Alerts

To stop Netdata's `web_log_1m_redirects` alert from firing on normal HTTP-to-HTTPS redirect traffic, edit `/etc/netdata/health.d/web_log.conf` and raise the redirect threshold to 80%, then reload with `netdatacli reload-health`. The default threshold is too sensitive for any server that forces HTTPS — automated traffic hits port 80, gets a 301, and Netdata flags it as a WARNING even though nothing is wrong.

## The Short Answer

```bash
sudo /etc/netdata/edit-config health.d/web_log.conf
```

Change the `warn` line in the `web_log_1m_redirects` template to:

```bash
warn: ($web_log_1m_requests > 120) ? ($this > (($status >= $WARNING ) ? ( 1 ) : ( 80 )) ) : ( 0 )
```

Then reload:

```bash
netdatacli reload-health
```

## Background

Production nodes forcing HTTPS see a lot of 301s. The default Netdata threshold is too sensitive for sites with a high bot-to-human ratio — it was designed for environments where redirects are unexpected, not standard operating procedure.

The tuned logic warns only when there are more than 120 requests/minute AND redirects exceed 80% of traffic (dropping to 1% once already in WARNING state to prevent flapping).
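
To make the hysteresis concrete, here's a toy shell re-implementation of that warn expression (the sample numbers are made up):

```shell
# Toy evaluation of the tuned warn expression, to show the hysteresis.
check() {
  # $1 = redirects as % of traffic, $2 = requests/min, $3 = 1 if already WARNING
  threshold=80
  [ "$3" -eq 1 ] && threshold=1   # once warned, only clear below 1%
  if [ "$2" -gt 120 ] && [ "$1" -gt "$threshold" ]; then
    echo WARNING
  else
    echo CLEAR
  fi
}

check 85 150 0   # fresh state, 85% redirects -> WARNING (85 > 80)
check 50 150 1   # already warned, 50% -> stays WARNING (50 > 1)
check 50 150 0   # fresh state, 50% -> CLEAR (50 <= 80)
check 85 100 0   # under the 120 req/min floor -> CLEAR
```

The middle case is the point: once the alert has fired, the effective threshold drops to 1%, so the alert doesn't flap as traffic hovers around 80%.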

## Steps

1. Identify what's actually generating the redirects

```bash
# Check status code distribution
awk '{print $9}' /var/log/apache2/access.log | sort | uniq -c | sort -nr

# Top redirected URLs
awk '$9 == "301" {print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -nr | head -n 10
```

2. Open the Netdata health config

```bash
sudo /etc/netdata/edit-config health.d/web_log.conf
```

3. Find the `web_log_1m_redirects` template and update the `warn` line

```bash
warn: ($web_log_1m_requests > 120) ? ($this > (($status >= $WARNING ) ? ( 1 ) : ( 80 )) ) : ( 0 )
```

4. Reload without restarting the service

```bash
netdatacli reload-health
```

5. Verify the alert cleared

```bash
curl -s http://localhost:19999/api/v1/alarms?all | grep -A 15 "web_log_1m_redirec"
```

## Gotchas & Notes

- **Ubuntu/Debian only:** The `edit-config` path and `apache2` log location are Debian-specific. On Fedora/RHEL the log is at `/var/log/httpd/access_log`.
- **The 1% recovery threshold is intentional:** Without it, the alert will flap between WARNING and CLEAR constantly on busy sites. The hysteresis keeps it stable once triggered.
- **Adjust the 120 req/min floor to your traffic:** Low-traffic sites may need a lower threshold; high-traffic sites may need higher.

## See Also

- [[Netdata service monitoring]]
02-selfhosting/reverse-proxy/.keep (new file, 0 lines)
02-selfhosting/reverse-proxy/setting-up-caddy-reverse-proxy.md (new file, 140 lines)

@@ -0,0 +1,140 @@
---
title: "Setting Up a Reverse Proxy with Caddy"
domain: selfhosting
category: reverse-proxy
tags: [caddy, reverse-proxy, ssl, https, selfhosting]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Setting Up a Reverse Proxy with Caddy

A reverse proxy sits in front of your services and routes traffic to the right one based on hostname or path. It also handles SSL certificates so you don't have to manage them per-service. Caddy is the one I reach for first because it gets HTTPS right automatically with zero configuration.

## The Short Answer

```bash
# Install Caddy (Debian/Ubuntu)
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
```

Or via Docker (simpler for a homelab):

```yaml
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    restart: unless-stopped

volumes:
  caddy_data:
  caddy_config:
```

## The Caddyfile

Caddy's config format is refreshingly simple compared to nginx. A Caddyfile that proxies two services:

```
# Caddyfile

nextcloud.yourdomain.com {
    reverse_proxy localhost:8080
}

jellyfin.yourdomain.com {
    reverse_proxy localhost:8096
}
```

That's it. Caddy automatically gets TLS certificates from Let's Encrypt for both hostnames, handles HTTP-to-HTTPS redirects, and renews certificates before they expire. No certbot, no cron jobs.

## For Local/Home Network Use (Without a Public Domain)

If you don't have a public domain or want to use Caddy purely on your LAN:

**Option 1: Local hostnames with a self-signed certificate**

```
:443 {
    tls internal
    reverse_proxy localhost:8080
}
```

Caddy generates a local CA and signs the cert. You'll get browser warnings unless you trust the Caddy root CA on your devices. Run `caddy trust` to install it locally.

**Option 2: Use a real domain with DNS challenge (home IP, no port forwarding)**

If you own a domain but don't want to expose your home IP:

```
nextcloud.yourdomain.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy localhost:8080
}
```

This uses a DNS challenge instead of HTTP — Caddy creates a TXT record to prove domain ownership, so no ports need to be open. Requires the Caddy DNS plugin for your DNS provider.

## Reloading After Changes

```bash
# Validate config before reloading
caddy validate --config /etc/caddy/Caddyfile

# Reload without restart
caddy reload --config /etc/caddy/Caddyfile

# Or with systemd
sudo systemctl reload caddy
```

## Adding Headers and Custom Options

```
yourdomain.com {
    # Security headers
    header {
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
    }

    # Proxy with an increased timeout for a slow upstream
    reverse_proxy localhost:3000 {
        transport http {
            response_header_timeout 120s
        }
    }
}
```

## Gotchas & Notes

- **Ports 80 and 443 must be open.** For automatic HTTPS, Caddy needs to reach Let's Encrypt. If you're behind a firewall, open those ports. For local-only setups, use `tls internal` instead.
- **The `caddy_data` volume is important.** Caddy stores its certificates there. Lose the volume, lose the certs (they'll be re-issued but it causes brief downtime).
- **Caddy hot-reloads config cleanly.** Unlike nginx, `caddy reload` doesn't drop connections. Safe to use on production.
- **Docker networking:** When proxying to other Docker containers, use the container name or Docker network IP instead of `localhost`. If everything is in the same compose stack, use the service name: `reverse_proxy app:8080`.
- **nginx comparison:** nginx is more widely documented and more feature-complete for edge cases, but Caddy's automatic HTTPS and simpler config make it faster to set up for personal use. Both are good choices.

## See Also

- [[self-hosting-starter-guide]]
- [[linux-server-hardening-checklist]]
- [[debugging-broken-docker-containers]]
0
02-selfhosting/security/.keep
Normal file
208
02-selfhosting/security/linux-server-hardening-checklist.md
Normal file
@@ -0,0 +1,208 @@
---
title: "Linux Server Hardening Checklist"
domain: selfhosting
category: security
tags: [security, hardening, linux, ssh, firewall, server]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Linux Server Hardening Checklist

When I set up a fresh Linux server, there's a standard set of things I do before I put anything on it. None of this is exotic — it's the basics that prevent the most common attacks. Do these before the server touches the public internet.

## The Short Answer

New server checklist: create a non-root user, disable root SSH login, use key-based auth only, configure a firewall, keep packages updated. That covers 90% of what matters.

## 1. Create a Non-Root User

Don't work as root. Create a user, give it sudo:

```bash
# Create user
adduser yourname

# Add to sudo group (Debian/Ubuntu)
usermod -aG sudo yourname

# Add to wheel group (Fedora/RHEL)
usermod -aG wheel yourname
```

Log out and log back in as that user before doing anything else.
## 2. SSH Key Authentication

Passwords over SSH are a liability. Set up key-based auth and disable password login.

On your local machine, generate a key if you don't have one:

```bash
ssh-keygen -t ed25519 -C "yourname@hostname"
```

Copy the public key to the server:

```bash
ssh-copy-id yourname@server-ip
```

Or manually append your public key to `~/.ssh/authorized_keys` on the server.
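The manual route, sketched. Permissions matter here: with sshd's default `StrictModes yes`, a loosely-permissioned `authorized_keys` file is silently ignored. `yourkey.pub` is a placeholder for your actual public key file.

```shell
# On the server, as the new user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat yourkey.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```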
Test that key auth works **before** disabling passwords.
## 3. Harden sshd_config

Edit `/etc/ssh/sshd_config`:

```
# Disable root login
PermitRootLogin no

# Disable password authentication
PasswordAuthentication no

# Disable empty passwords
PermitEmptyPasswords no

# Limit to specific users (optional but good)
AllowUsers yourname

# Change the port (optional — reduces log noise, not real security)
Port 2222
```

Restart SSH after changes:

```bash
# Validate the config first (a syntax error can lock you out)
sudo sshd -t

sudo systemctl restart sshd
```

Keep your current session open when testing — if you lock yourself out you'll need console access to fix it.
## 4. Configure a Firewall

**ufw (Ubuntu/Debian):**

```bash
sudo apt install ufw

# Default: deny incoming, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH (use your actual port if you changed it)
sudo ufw allow 22/tcp

# Allow whatever services you're running
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Enable
sudo ufw enable

# Check status
sudo ufw status verbose
```

**firewalld (Fedora/RHEL):**

```bash
sudo systemctl enable --now firewalld

# Allow SSH
sudo firewall-cmd --permanent --add-service=ssh

# Allow HTTP/HTTPS
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https

# Apply changes
sudo firewall-cmd --reload

# Check
sudo firewall-cmd --list-all
```
## 5. Keep Packages Updated

Security patches come through package updates. Automate this or do it regularly:

```bash
# Ubuntu/Debian — manual
sudo apt update && sudo apt upgrade

# Enable unattended security upgrades (Ubuntu)
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Fedora/RHEL — manual
sudo dnf upgrade

# Enable automatic updates (Fedora)
sudo dnf install dnf-automatic
sudo systemctl enable --now dnf-automatic.timer
```
## 6. Fail2ban

Fail2ban watches log files and bans IPs that fail authentication too many times. Helps with brute force noise.

```bash
# Ubuntu/Debian
sudo apt install fail2ban

# Fedora/RHEL
sudo dnf install fail2ban

# Start and enable
sudo systemctl enable --now fail2ban
```

Create `/etc/fail2ban/jail.local` to override defaults:

```ini
[DEFAULT]
bantime = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```

```bash
sudo systemctl restart fail2ban

# Check status
sudo fail2ban-client status sshd

# Unban an address (example IP) without waiting out the bantime
sudo fail2ban-client set sshd unbanip 203.0.113.7
```
## 7. Disable Unnecessary Services

Less running means less attack surface:

```bash
# See what's running
systemctl list-units --type=service --state=active

# See what's listening on the network
ss -tlnp

# Disable something you don't need
sudo systemctl disable --now servicename
```

Common ones to disable on a dedicated server: `avahi-daemon`, `cups`, `bluetooth`.
## Gotchas & Notes

- **Don't lock yourself out.** Test SSH key auth in a second terminal before disabling passwords. Keep the original session open.
- **If you changed the SSH port**, make sure the firewall allows the new port before restarting sshd. Block the old port after you've confirmed the new one works.
- **fail2ban and Docker don't always play nicely.** Docker bypasses iptables rules in some configurations. If you're running services in Docker, test that fail2ban is actually seeing traffic.
- **SELinux on RHEL/Fedora** may block things your firewall allows. Check `ausearch -m avc` if a service stops working after hardening.
- **This is a baseline, not a complete security posture.** For anything holding sensitive data, also look at: disk encryption, intrusion detection (AIDE, Tripwire), log shipping to a separate system, and regular audits.

## See Also

- [[managing-linux-services-systemd-ansible]]
- [[debugging-broken-docker-containers]]
0
02-selfhosting/services/.keep
Normal file
0
02-selfhosting/storage-backup/.keep
Normal file
162
02-selfhosting/storage-backup/rsync-backup-patterns.md
Normal file
@@ -0,0 +1,162 @@
---
title: "rsync Backup Patterns"
domain: selfhosting
category: storage-backup
tags: [rsync, backup, linux, storage, automation]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# rsync Backup Patterns

rsync is the tool for moving files on Linux. Fast, resumable, bandwidth-efficient — it only transfers what changed. For local and remote backups, it's what I reach for first.

## The Short Answer

```bash
# Sync source to destination (local)
rsync -av /source/ /destination/

# Sync to remote server
rsync -avz /source/ user@server:/destination/

# Dry run first — see what would change
rsync -avnP /source/ /destination/
```
## Core Flags

| Flag | What it does |
|---|---|
| `-a` | Archive mode: preserves permissions, timestamps, symlinks, owner/group. Use this almost always. |
| `-v` | Verbose output — shows files being transferred. |
| `-z` | Compress during transfer. Useful over slow connections, pure overhead on a fast LAN. |
| `-P` | Progress + partial transfers (resumes interrupted transfers). |
| `-n` | Dry run — shows what would happen without doing it. |
| `--delete` | Removes files from destination that no longer exist in source. Makes it a true mirror. |
| `--exclude` | Exclude patterns. |
| `--bwlimit` | Limit bandwidth in KB/s. |
## Local Backup

```bash
# Basic sync
rsync -av /home/major/ /backup/home/

# Mirror — destination matches source exactly (deletes removed files)
rsync -av --delete /home/major/ /backup/home/

# Exclude directories
rsync -av --exclude='.cache' --exclude='Downloads' /home/major/ /backup/home/

# Multiple excludes from a file
rsync -av --exclude-from=exclude.txt /home/major/ /backup/home/
```
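The exclude file is one pattern per line, using the same syntax as `--exclude` (a trailing `/` limits a pattern to directories). A sketch of what `exclude.txt` might contain:

```shell
cat > exclude.txt <<'EOF'
.cache/
Downloads/
node_modules/
*.tmp
EOF
```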
**The trailing slash matters:**

- `/source/` — sync the contents of source into destination
- `/source` — sync the source directory itself into destination (creates `/destination/source/`)

You almost always want the trailing slash on the source.
## Remote Backup

```bash
# Local → remote
rsync -avz /home/major/ user@server:/backup/major/

# Remote → local (pull backup)
rsync -avz user@server:/var/data/ /local/backup/data/

# Specify SSH port or key
rsync -avz -e "ssh -p 2222 -i ~/.ssh/id_ed25519" /source/ user@server:/dest/
```
## Incremental Backups with Hard Links

The `--link-dest` pattern creates space-efficient incremental backups. Each backup looks like a full copy but only stores changed files — unchanged files are hard links to previous versions.
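If hard links are new to you: a hard link is a second directory entry pointing at the same inode, so it costs no extra disk space, and the data survives until the last name is deleted. A quick demonstration:

```shell
printf 'data' > a
ln a b                # second name for the same inode
stat -c '%h' a        # link count is now 2
cmp a b               # byte-identical: same underlying blocks
```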
```bash
#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="/backup"
SOURCE="/home/major"
DATE="$(date +%Y-%m-%d)"
LATEST="${BACKUP_DIR}/latest"
DEST="${BACKUP_DIR}/${DATE}"

# On the first run 'latest' won't exist yet; rsync warns and does a full copy
rsync -av --delete \
  --link-dest="$LATEST" \
  "$SOURCE/" \
  "$DEST/"

# Update the 'latest' symlink
rm -f "$LATEST"
ln -s "$DEST" "$LATEST"
```

Each dated directory looks like a complete backup. Storage is only used for changed files. You can delete any dated directory without affecting others.
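Snapshots accumulate indefinitely, so you eventually want pruning. A hedged sketch, assuming the `/backup` layout and `YYYY-MM-DD` names from the script above (the `latest` symlink isn't a directory, so `-type d` skips it):

```shell
# Remove dated snapshots older than 30 days; dry-run by swapping 'rm -rf' for 'echo'
find /backup -maxdepth 1 -type d -name '20??-??-??' -mtime +30 -exec rm -rf {} +
```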
## Backup Script with Logging

```bash
#!/usr/bin/env bash
set -euo pipefail

SOURCE="/home/major/"
DEST="/backup/home/"
LOG="/var/log/rsync-backup.log"

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"; }

log "Backup started"

rsync -av --delete \
  --exclude='.cache' \
  --exclude='Downloads' \
  --log-file="$LOG" \
  "$SOURCE" "$DEST"

log "Backup complete"
```
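One thing the script doesn't guard against is overlapping runs (a slow backup still going when the next scheduled one starts). A common sketch using `flock` from util-linux, added near the top of the script:

```shell
# Hold an exclusive lock for the lifetime of the script; abort if another run has it
exec 9>/var/lock/rsync-backup.lock
flock -n 9 || { echo "backup already running" >&2; exit 1; }
```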
Run via cron:

```bash
# Daily at 2am
0 2 * * * /usr/local/bin/backup.sh
```

Or systemd timer (preferred):
```ini
# /etc/systemd/system/rsync-backup.timer
[Unit]
Description=Daily rsync backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```
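The timer activates a service unit of the same name, which also has to exist. A minimal sketch, assuming the script lives at `/usr/local/bin/backup.sh` as in the cron example:

```ini
# /etc/systemd/system/rsync-backup.service
[Unit]
Description=rsync backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```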
```bash
sudo systemctl enable --now rsync-backup.timer
```
## Gotchas & Notes

- **Test with `--dry-run` first.** Especially when using `--delete`. See what would be removed before actually removing it.
- **`--delete` is destructive.** It removes files from the destination that don't exist in the source. That's the point, but know what you're doing.
- **Large files and slow connections:** Add `-P` for progress and partial transfer resume. An interrupted rsync picks up where it left off.
- **For network backups to untrusted locations**, consider using rsync over SSH + encryption at rest. rsync over SSH handles transit encryption; storage encryption is separate.
- **rsync vs Restic:** rsync is fast and simple. Restic gives you deduplication, encryption, and multiple backend support (S3, B2, etc.). For local backups, rsync. For offsite with encryption needs, Restic.

## See Also

- [[self-hosting-starter-guide]]
- [[bash-scripting-patterns]]