chore: link vault wiki to Gitea
0
02-selfhosting/docker/.keep
Normal file
168
02-selfhosting/docker/debugging-broken-docker-containers.md
Normal file
@@ -0,0 +1,168 @@
---
title: "Debugging Broken Docker Containers"
domain: selfhosting
category: docker
tags: [docker, troubleshooting, debugging, containers, logs]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Debugging Broken Docker Containers

When something in Docker breaks, there's a sequence to work through. Check logs first, inspect the container second, deal with network and permissions third. Resist the urge to rebuild immediately — most failures tell you what's wrong if you look.

## The Short Answer

```bash
# Step 1: check logs
docker logs containername

# Step 2: inspect the container
docker inspect containername

# Step 3: get a shell inside
docker exec -it containername /bin/bash
# or /bin/sh if bash isn't available
```

## The Debugging Sequence

### 1. Check the logs first

Before anything else:

```bash
docker logs containername

# Follow live output
docker logs -f containername

# Last 100 lines
docker logs --tail 100 containername

# With timestamps
docker logs --timestamps containername
```

Most failures announce themselves in the logs. Crash loops, config errors, missing environment variables — it's usually right there.

### 2. Check if the container is actually running

```bash
docker ps

# Show stopped containers too
docker ps -a
```

If the container shows as `Exited (1)` or any non-zero exit code, it crashed. The logs will usually tell you why.
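
The exit code is the same convention as any shell command: zero is success, anything else is failure. A quick demonstration outside Docker — and note that `docker ps -a` can filter on it directly with `--filter "exited=1"`:

```shell
# Non-zero exit status signals failure — the same convention Docker
# reports as the container's exit code. To list only containers that
# died with a given code:
#   docker ps -a --filter "exited=1"
sh -c 'exit 1'
echo "status: $?"   # → status: 1
```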

### 3. Inspect the container

`docker inspect` dumps everything — environment variables, mounts, network config, the actual command being run:

```bash
docker inspect containername
```

Too verbose? Filter for the part you need:

```bash
# Just the mounts
docker inspect --format='{{json .Mounts}}' containername | jq

# Just the environment variables
docker inspect --format='{{json .Config.Env}}' containername | jq

# Exit code
docker inspect --format='{{.State.ExitCode}}' containername
```

### 4. Get a shell inside

If the container is running but misbehaving:

```bash
docker exec -it containername /bin/bash
```

If it crashed and won't stay up, override the entrypoint to get in anyway:

```bash
docker run -it --entrypoint /bin/bash imagename:tag
```

### 5. Check port conflicts

Container starts but the service isn't reachable:

```bash
# See what ports are mapped
docker ps --format "table {{.Names}}\t{{.Ports}}"

# See what's using a port on the host
sudo ss -tlnp | grep :8080
```

Two containers can't publish the same host port. If something else grabbed the port first, Docker will typically refuse to start the container with an error like "address already in use" or "port is already allocated."
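
The "one process per host port" rule isn't a Docker quirk — it's OS-level socket behavior, which you can reproduce without Docker at all (assumes `python3` is available):

```shell
# Two binds on the same address/port fail the same way two containers
# publishing the same host port would
python3 - <<'EOF'
import socket
a = socket.socket()
a.bind(("127.0.0.1", 0))         # grab any free port
port = a.getsockname()[1]
b = socket.socket()
try:
    b.bind(("127.0.0.1", port))  # second bind on the same port
    print("second bind succeeded (unexpected)")
except OSError:
    print("second bind failed: address already in use")
EOF
```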

### 6. Check volume permissions

Permission errors inside containers are almost always a UID mismatch. The user inside the container doesn't own the files on the host volume.

```bash
# Check ownership of the mounted directory on the host
ls -la /path/to/host/volume

# Find out what UID the container runs as
# (empty output means the image default, usually root)
docker inspect --format='{{.Config.User}}' containername
```

Fix by chowning the host directory to match, or by explicitly setting the user in your compose file:

```yaml
services:
  myapp:
    image: myapp:latest
    user: "1000:1000"
    volumes:
      - ./data:/app/data
```
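
A quick check for whether the UIDs actually line up — compare the host directory's owner against the UID the container runs as. Here your own UID stands in for the container's, and `./data` is an illustrative path:

```shell
mkdir -p ./data
dir_uid=$(stat -c '%u' ./data)   # owner of the host directory
run_uid=$(id -u)                 # stand-in for the container's UID
if [ "$dir_uid" = "$run_uid" ]; then
    echo "UIDs match ($dir_uid)"
else
    echo "mismatch: dir owned by $dir_uid, process runs as $run_uid"
fi
```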

### 7. Recreate cleanly when needed

If you've made config changes and things are in a weird state:

```bash
docker compose down
docker compose up -d

# Nuclear option — also removes volumes (data gone)
docker compose down -v
```

Don't jump straight to the nuclear option. Only use `-v` if you want a completely clean slate and don't care about the data.
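
If you're going to reach for `-v` anyway, copy the data out first. For bind-mounted directories that's a plain tar (sketched below on stand-in paths); for named volumes the usual pattern is to mount the volume into a throwaway container and tar from there:

```shell
# Bind-mount version: archive the data directory before wiping anything
mkdir -p demo-service/data backups
echo "important" > demo-service/data/app.db        # stand-in data
tar czf "backups/data-$(date +%F).tar.gz" -C demo-service/data .
tar tzf backups/data-*.tar.gz                      # verify the archive contents

# Named-volume version (not run here; volume name is hypothetical):
#   docker run --rm -v myvolume:/data -v "$PWD/backups":/backup \
#       alpine tar czf /backup/myvolume.tar.gz -C /data .
```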

## Common Failure Patterns

| Symptom | Likely cause | Where to look |
|---|---|---|
| Exits immediately | Config error, missing env var | `docker logs` |
| Keeps restarting | Crash loop — app failing to start | `docker logs`, exit code |
| Port not reachable | Port conflict or wrong binding | `docker ps`, `ss -tlnp` |
| Permission denied inside container | UID mismatch on volume | `ls -la` on host path |
| "No such file or directory" | Wrong mount path or missing file | `docker inspect` mounts |
| Container runs but service is broken | App config error, not Docker | shell in, check app logs |

## Gotchas & Notes

- **`docker restart` doesn't pick up compose file changes.** Use `docker compose up -d` to apply changes.
- **Logs persist after a container exits.** You can still `docker logs` a stopped container — useful for post-mortem on crashes.
- **If a container won't stay up long enough to exec into it**, use the entrypoint override (`--entrypoint /bin/bash`) with `docker run` against the image directly.
- **Watch out for cached layers on rebuild.** If you're rebuilding an image and the behavior doesn't change, add `--no-cache` to `docker build`.

## See Also

- [[docker-vs-vms-homelab]]
- [[tuning-netdata-web-log-alerts]]
95
02-selfhosting/docker/docker-vs-vms-homelab.md
Normal file
@@ -0,0 +1,95 @@
---
title: "Docker vs VMs in the Homelab: Why Not Both?"
domain: selfhosting
category: docker
tags: [docker, vm, homelab, virtualization, containers]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Docker vs VMs in the Homelab: Why Not Both?

People treat this like an either/or decision. It's not. Docker and VMs solve different problems, and the right homelab runs both. Here's how I think about which one to reach for.

## The Short Answer

Use Docker for services. Use VMs for things that need full OS isolation, a different kernel, or Windows. Run them side by side — they're complementary, not competing.

## What Docker Is Good At

Docker containers are great for running services — apps, databases, reverse proxies, monitoring stacks. They start fast, they're easy to move, and Docker Compose makes multi-service setups manageable with a single file.

```yaml
# docker-compose.yml — a simple example
services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/app/data
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pgdata:
```

The key advantages:

- **Density** — you can run a lot of containers on modest hardware
- **Portability** — move a service to another machine by copying the compose file and a data directory
- **Isolation from other services** (but not from the host kernel)
- **Easy updates** — pull a new image, recreate the container

## What VMs Are Good At

VMs give you a completely separate kernel and OS. That matters when:

- You need a **Windows environment** on Linux hardware (gaming server, specific Windows-only tools)
- You're running something that needs a **different kernel version** than the host
- You want **stronger isolation** — a compromised container can potentially escape to the host; a compromised VM is much harder to escape
- You're testing a full OS install, distro setup, or something destructive
- You need **hardware passthrough** — GPU, USB devices, etc.

On Linux, KVM + QEMU is the stack. `virt-manager` gives you a GUI if you want it.

```bash
# Install KVM stack on Fedora/RHEL
sudo dnf install qemu-kvm libvirt virt-install virt-manager

# Start and enable the libvirt daemon
sudo systemctl enable --now libvirtd

# Verify KVM is available
sudo virt-host-validate
```

## How I Actually Use Both

In practice:

- **Self-hosted services** (Nextcloud, Gitea, Jellyfin, monitoring stacks) → Docker Compose
- **Gaming/Windows stuff that needs the real deal** → VM with GPU passthrough
- **Testing a new distro or destructive experiments** → VM, snapshot before anything risky
- **Network appliances** (pfSense, OPNsense) → VM, not a container

The two coexist fine on the same host. Docker handles the service layer, KVM handles the heavier isolation needs.
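
The "snapshot before anything risky" habit above can be sketched with `virsh` — the VM name `testvm` is hypothetical, and internal snapshots like this assume qcow2-backed disks:

```shell
# Take a snapshot before a risky experiment
sudo virsh snapshot-create-as testvm pre-experiment

# List snapshots for the VM
sudo virsh snapshot-list testvm

# Roll back if it goes sideways
sudo virsh snapshot-revert testvm pre-experiment
```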

## Gotchas & Notes

- **Containers share the host kernel.** That's a feature for performance and density, but it means a kernel exploit affects everything on the host. For sensitive workloads, VM isolation is worth the overhead.
- **Networking gets complicated when both are running.** Docker creates its own bridge networks, KVM does the same. Know which traffic is going where. Naming your Docker networks explicitly helps.
- **Backups are different.** Backing up a Docker service means backing up volumes + the compose file. Backing up a VM means snapshotting the QCOW2 disk file. Don't treat them the same.
- **Don't run Docker inside a VM on your homelab unless you have a real reason.** It works, but you're layering virtualization overhead for no benefit in most cases.

## See Also

- [[managing-linux-services-systemd-ansible]]
- [[tuning-netdata-web-log-alerts]]
115
02-selfhosting/docker/self-hosting-starter-guide.md
Normal file
@@ -0,0 +1,115 @@
---
title: "Self-Hosting Starter Guide"
domain: selfhosting
category: docker
tags: [selfhosting, homelab, docker, getting-started, privacy]
status: published
created: 2026-03-08
updated: 2026-03-08
---

# Self-Hosting Starter Guide

Self-hosting is running your own services on hardware you control instead of depending on someone else's platform. It's more work than just signing up for a SaaS product, but you own your data, you don't have to worry about a company changing their pricing or disappearing, and there's no profit motive to do weird things with your stuff.

This is where to start if you want in but don't know where to begin.

## What You Actually Need

You don't need a rack full of servers. A single machine with Docker is enough to run most things people want to self-host. Options in rough order of entry cost:

- **Old laptop or desktop** — works fine for home use, just leave it on
- **Raspberry Pi or similar SBC** — low power, always-on, limited compute
- **Mini PC** (Intel NUC, Beelink, etc.) — better than an SBC, still low power, actually useful
- **Old server hardware** — more capable, louder, more power draw
- **VPS** — if you want public internet access and don't want to deal with networking

For a first setup, a mini PC or a spare desktop you already have is the right call.

## The Core Stack

Three tools cover almost everything:

**Docker** — runs your services as containers. One command to start a service, one command to update it.

**Docker Compose** — defines multi-service setups in a YAML file. Most self-hosted apps publish a `compose.yml` you can use directly.

**A reverse proxy** (Nginx Proxy Manager, Caddy, or Traefik) — routes traffic to your services by hostname or path, handles SSL certificates.

```bash
# Install Docker (Linux)
curl -fsSL https://get.docker.com | sh

# Add your user to the docker group (so you don't need sudo every time)
sudo usermod -aG docker $USER
# Log out and back in after this

# Verify
docker --version
docker compose version
```

## Starting a Service

Most self-hosted apps have a Docker Compose file in their documentation. The pattern is the same for almost everything:

1. Create a directory for the service
2. Put the `compose.yml` in it
3. Run `docker compose up -d`

Example — Uptime Kuma (monitoring):

```yaml
# uptime-kuma/compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
    restart: unless-stopped
```

```bash
mkdir uptime-kuma && cd uptime-kuma
# paste the compose.yml
docker compose up -d
# open http://your-server-ip:3001
```

## What to Self-Host First

Start with things that are low-stakes and high-value:

- **Uptime Kuma** — monitors your other services, alerts when things go down. Easy to set up, immediately useful.
- **Portainer** — web UI for managing Docker. Makes it easier to see what's running and pull updates.
- **Vaultwarden** — self-hosted Bitwarden-compatible password manager. Your passwords on your hardware.
- **Nextcloud** — file sync and storage. Replaces Dropbox/Google Drive.
- **Jellyfin** — media server for your own video library.

Don't try to spin everything up at once. Get one service working, understand how it runs, then add the next one.

## Networking Basics

By default, services are only accessible on your home network. You have options for accessing them remotely:

- **Tailscale** — install on your server and your other devices, and everything is accessible over a private encrypted network. Zero port forwarding required. This is what I use.
- **Cloudflare Tunnel** — exposes services publicly through Cloudflare's network without opening ports. Good if you want things internet-accessible without exposing your home IP.
- **Port forwarding** — traditional method, opens a port on your router to the server. Works but exposes your home IP.

Tailscale is the easiest and safest starting point for personal use.
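
Getting Tailscale onto the server is short — roughly this, per Tailscale's install docs (run the same install on your laptop or phone, then log both into the same account):

```shell
# Install the tailscale client (official convenience script)
curl -fsSL https://tailscale.com/install.sh | sh

# Bring the machine onto your tailnet (prints a login URL to visit)
sudo tailscale up
```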

## Gotchas & Notes

- **Persistent storage:** Always map volumes for your service's data directory. If you run a container without a volume and it gets recreated, your data is gone.
- **Restart policies:** Use `restart: unless-stopped` on services you want to survive reboots. `always` also works but will restart even if you manually stopped the container.
- **Updates:** Pull the new image and recreate the container. `docker compose pull && docker compose up -d` is the standard pattern. Check the app's changelog first for anything that requires migration steps.
- **Backups:** Self-hosting means you're responsible for your own backups. Back up the data directories for your services regularly. The `compose.yml` files should be in version control or backed up separately.
- **Don't expose everything to the internet.** If you don't need public access to a service, don't expose it. Tailscale for private access is safer than punching holes in your firewall.
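
The version-control note above is cheap to act on — one repo holding all your service directories, with the data dirs ignored. A sketch (paths and the placeholder compose content are illustrative):

```shell
# One repo for all compose files, data directories excluded
mkdir -p services/uptime-kuma && cd services
printf 'services: {}\n' > uptime-kuma/compose.yml    # placeholder compose file
git init -q
printf '*/data/\n' > .gitignore                      # keep service data out of git
git add .
git -c user.email=you@example.com -c user.name=you commit -qm "track compose files"
git ls-files                                         # compose files are tracked, data is not
```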

## See Also

- [[docker-vs-vms-homelab]]
- [[debugging-broken-docker-containers]]
- [[linux-server-hardening-checklist]]