Compare commits: main...4cf2a8e0a6 (19 commits)

4cf2a8e0a6, 016072e972, 994c0c9191, 21988a2fa9, 58cb5e7b2a, 394d5200ad, 1790aa771a, 34aadae03a, 29333fbe0a, afae561e7e, 6e67c2b0b1, 01981e0610, a689d8203a, 2861cade55, 64df4b8cfb, 70d9657b7f, c4673f70e0, 9d537dec5f, 639b23f861
.gitattributes (vendored, new file, 18 lines)
@@ -0,0 +1,18 @@
# Normalize line endings to LF for all text files
* text=auto eol=lf

# Explicitly handle markdown
*.md text eol=lf

# Explicitly handle config files
*.yml text eol=lf
*.yaml text eol=lf
*.json text eol=lf
*.toml text eol=lf

# Binary files — don't touch
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.pdf binary
@@ -1,29 +1,29 @@
# 🐧 Linux & Sysadmin

A collection of guides covering Linux administration, shell scripting, networking, and distro-specific topics.

## Files & Permissions

- [Linux File Permissions and Ownership](files-permissions/linux-file-permissions.md)

## Networking

- [SSH Config & Key Management](networking/ssh-config-key-management.md)

## Package Management

- [Package Management Reference](packages/package-management-reference.md)

## Process Management

- [Managing Linux Services with systemd](process-management/managing-linux-services-systemd-ansible.md)

## Shell & Scripting

- [Ansible Getting Started](shell-scripting/ansible-getting-started.md)
- [Bash Scripting Patterns](shell-scripting/bash-scripting-patterns.md)

## Distro-Specific

- [Linux Distro Guide for Beginners](distro-specific/linux-distro-guide-beginners.md)
- [WSL2 Instance Migration to Fedora 43](distro-specific/wsl2-instance-migration-fedora43.md)
01-linux/storage/snapraid-mergerfs-setup.md (new file, 74 lines)
@@ -0,0 +1,74 @@
# SnapRAID & MergerFS Storage Setup

## Problem

Managing a collection of mismatched hard drives as a single pool while maintaining data redundancy (parity) without the overhead or risk of a traditional RAID 5/6 array.

## Solution

A combination of **MergerFS** for pooling and **SnapRAID** for parity. This is ideal for "mostly static" media storage (like MajorRAID) where files aren't changing every second.

### 1. Concepts

- **MergerFS:** A FUSE-based union filesystem. It takes multiple drives/folders and presents them as a single mount point. It does NOT provide redundancy.
- **SnapRAID:** A backup/parity tool for disk arrays. It creates parity information on a dedicated drive. It is NOT real-time (you must run `snapraid sync`).

### 2. Implementation Strategy

1. **Clean the Pool:** Use `rmlint` to clear duplicates and reclaim space.
2. **Identify the Parity Drive:** Choose your largest drive (or one equal to the largest data drive) to hold the parity information. In my setup, `/mnt/usb` (sdc) was cleared of 4TB of duplicates to be repurposed for this.
3. **Configure MergerFS:** Pool the data drives (e.g., `/mnt/disk1`, `/mnt/disk2`) into `/storage`.
4. **Configure SnapRAID:** Point SnapRAID to the data drives and the parity drive.

### 3. MergerFS Config (/etc/fstab)

```fstab
# Example MergerFS pool
/mnt/disk*:/mnt/usb-data /storage fuse.mergerfs defaults,allow_other,cache.files=off,use_ino,category.create=mfs,minfreespace=20G,fsname=mergerfsPool 0 0
```

### 4. SnapRAID Config (/etc/snapraid.conf)

```conf
# Parity file location
parity /mnt/parity/snapraid.parity

# Content files (array state databases)
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
content /mnt/disk2/.snapraid.content

# Data drives
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# Exclusions
exclude /lost+found/
exclude /tmp/
exclude .DS_Store
```

---

## Maintenance

### SnapRAID Sync

Run this daily (via cron) or after adding large amounts of data:

```bash
snapraid sync
```

### SnapRAID Scrub

Run this weekly to check for bitrot:

```bash
snapraid scrub
```
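
A minimal cron sketch tying the two schedules together (the file path, times, log location, and the `-p 10` scrub percentage are assumptions, adjust to taste):

```bash
# /etc/cron.d/snapraid — hypothetical schedule
# Daily sync at 03:00; weekly scrub of 10% of the array on Sunday at 04:00
0 3 * * * root /usr/bin/snapraid sync >> /var/log/snapraid.log 2>&1
0 4 * * 0 root /usr/bin/snapraid scrub -p 10 >> /var/log/snapraid.log 2>&1
```

Scrubbing a percentage per run spreads the bitrot check across the array over several weeks instead of hammering every disk weekly.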

---

## Tags

#snapraid #mergerfs #linux #storage #homelab #raid
@@ -1,29 +1,29 @@
# 🏠 Self-Hosting & Homelab

Guides for running your own services at home, including Docker, reverse proxies, DNS, storage, monitoring, and security.

## Docker & Containers

- [Self-Hosting Starter Guide](docker/self-hosting-starter-guide.md)
- [Docker vs VMs for the Homelab](docker/docker-vs-vms-homelab.md)
- [Debugging Broken Docker Containers](docker/debugging-broken-docker-containers.md)

## Reverse Proxies

- [Setting Up Caddy as a Reverse Proxy](reverse-proxy/setting-up-caddy-reverse-proxy.md)

## DNS & Networking

- [Tailscale for Homelab Remote Access](dns-networking/tailscale-homelab-remote-access.md)

## Storage & Backup

- [rsync Backup Patterns](storage-backup/rsync-backup-patterns.md)

## Monitoring

- [Tuning Netdata Web Log Alerts](monitoring/tuning-netdata-web-log-alerts.md)

## Security

- [Linux Server Hardening Checklist](security/linux-server-hardening-checklist.md)
03-opensource/alternatives/freshrss.md (new file, 89 lines)
@@ -0,0 +1,89 @@
# FreshRSS — Self-Hosted RSS Reader

## Problem

RSS is the best way to follow websites, blogs, and podcasts without algorithmic feeds, engagement bait, or data harvesting. But hosted RSS services like Feedly gate features behind subscriptions and still have access to your reading habits. Google killed Google Reader in 2013 and has been trying to kill RSS ever since.

## Solution

[FreshRSS](https://freshrss.org) is a self-hosted RSS aggregator. It fetches and stores your feeds on your own server, presents a clean reading interface, and syncs with mobile apps via standard APIs (Fever, Google Reader, Nextcloud News). No subscription, no tracking, no feed limits.

---

## Deployment (Docker)

```yaml
services:
  freshrss:
    image: freshrss/freshrss:latest
    container_name: freshrss
    restart: unless-stopped
    ports:
      - "8086:80"
    volumes:
      - ./freshrss/data:/var/www/FreshRSS/data
      - ./freshrss/extensions:/var/www/FreshRSS/extensions
    environment:
      - TZ=America/New_York
      - CRON_MIN=*/15  # fetch feeds every 15 minutes
```

### Caddy reverse proxy

```
rss.yourdomain.com {
    reverse_proxy localhost:8086
}
```

---

## Initial Setup

1. Browse to your FreshRSS URL and run through the setup wizard
2. Create an admin account
3. Go to **Settings → Authentication** — enable API access if you want mobile app sync
4. Start adding feeds under **Subscriptions → Add a feed**

---

## Mobile App Sync

FreshRSS exposes a Google Reader-compatible API that most RSS apps support:

| App | Platform | Protocol |
|---|---|---|
| NetNewsWire | iOS / macOS | Fever or GReader |
| Reeder | iOS / macOS | GReader |
| ReadYou | Android | GReader |
| FeedMe | Android | GReader / Fever |

**API URL format:** `https://rss.yourdomain.com/api/greader.php`

Enable the API in FreshRSS: **Settings → Authentication → Allow API access**

---

## Feed Auto-Refresh

The `CRON_MIN=*/15` environment variable runs feed fetching every 15 minutes inside the container. For more control, add a host-level cron job:

```bash
# Fetch all feeds every 10 minutes
*/10 * * * * docker exec freshrss php /var/www/FreshRSS/app/actualize_script.php
```

---

## Why RSS Over Social Media

- **You control the feed** — no algorithm decides what you see or in what order
- **No engagement optimization** — content ranked by publish date, not outrage potential
- **Portable** — OPML export lets you move your subscriptions to any reader
- **Works forever** — RSS has been around since 1999 and isn't going anywhere

---

## Tags

#freshrss #rss #self-hosting #docker #linux #alternatives #privacy
03-opensource/alternatives/gitea.md (new file, 95 lines)
@@ -0,0 +1,95 @@
# Gitea — Self-Hosted Git

## Problem

GitHub is the default home for code, but it's a Microsoft-owned centralized service. Your repositories, commit history, issues, and CI/CD pipelines are all under someone else's control. For personal projects and private infrastructure, there's no reason to depend on it.

## Solution

[Gitea](https://gitea.com) is a lightweight, self-hosted Git service. It provides the full GitHub-style workflow — repositories, branches, pull requests, webhooks, and a web UI — in a single binary or Docker container that runs comfortably on low-spec hardware.

---

## Deployment (Docker)

```yaml
services:
  gitea:
    image: docker.gitea.com/gitea:latest
    container_name: gitea
    restart: unless-stopped
    ports:
      - "3002:3000"
      - "222:22"  # SSH git access
    volumes:
      - ./gitea:/data
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=sqlite3
```

SQLite is fine for personal use. For team use, swap in PostgreSQL or MySQL.

### Caddy reverse proxy

```
git.yourdomain.com {
    reverse_proxy localhost:3002
}
```

---

## Initial Setup

1. Browse to your Gitea URL — the first-run wizard handles configuration
2. Set the server URL to your public domain
3. Create an admin account
4. Configure SSH access if you want `git@git.yourdomain.com` cloning
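
Because the compose file maps SSH to host port 222, clients have to be told about the non-standard port. A minimal `~/.ssh/config` sketch (the key path is an assumption):

```
Host git.yourdomain.com
    Port 222
    User git
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, `git clone git@git.yourdomain.com:user/repo.git` works without specifying the port on every command.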

---

## Webhooks

Gitea's webhook system is how automated pipelines get triggered on push. Example use case — auto-deploy a MkDocs wiki on every push:

1. Go to repo → **Settings → Webhooks → Add Webhook**
2. Set the payload URL to your webhook endpoint (e.g. `https://notes.yourdomain.com/webhook`)
3. Set content type to `application/json`
4. Select **Push events**

The webhook fires on every `git push`, allowing the receiving server to pull and rebuild automatically. See [MajorWiki Setup & Pipeline](../../05-troubleshooting/majwiki-setup-and-pipeline.md) for a complete example.
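
If you also set a webhook secret, Gitea sends a hex-encoded HMAC-SHA256 of the request body in the `X-Gitea-Signature` header, which the receiver should verify before acting on the payload. A sketch of the recomputation with `openssl` (the secret and payload here are hypothetical):

```shell
#!/bin/sh
# Recompute the signature Gitea would attach for a given body + secret
SECRET='hypothetical-webhook-secret'
BODY='{"ref":"refs/heads/main"}'

# openssl prints "SHA2-256(stdin)= <hex>"; awk keeps only the hex digest
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "$SIG"

# Compare this against the X-Gitea-Signature header value before deploying
```

A constant-time comparison is preferable in a real receiver, but the recomputation itself is the important part.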

---

## Migrating from GitHub

Gitea can mirror GitHub repos and import them directly:

```bash
# Clone from GitHub, push to Gitea
git clone --mirror https://github.com/user/repo.git
cd repo.git
git remote set-url origin https://git.yourdomain.com/user/repo.git
git push --mirror
```

Or use the Gitea web UI: **+ → New Migration → GitHub**

---

## Why Not Just Use GitHub?

For public open source — GitHub is fine, the network effects are real. For private infrastructure code, personal projects, and anything you'd rather not hand to Microsoft:

- Full control over your data and access
- No rate limits, no storage quotas on your own hardware
- Webhooks and integrations without paying for GitHub Actions minutes
- Works entirely over Tailscale — no public exposure required

---

## Tags

#gitea #git #self-hosting #docker #linux #alternatives #vcs
03-opensource/alternatives/searxng.md (new file, 88 lines)
@@ -0,0 +1,88 @@
# SearXNG — Private Self-Hosted Search

## Problem

Every search query sent to Google, Bing, or DuckDuckGo is logged, profiled, and used to build an advertising model of you. Even "private" search engines are still third-party services with their own data retention policies.

## Solution

[SearXNG](https://github.com/searxng/searxng) is a self-hosted metasearch engine. It queries multiple search engines simultaneously on your behalf — without sending any identifying information — and aggregates the results. The search engines see a request from your server, not from you.

Your queries stay on your infrastructure.

---

## Deployment (Docker)

```yaml
services:
  searxng:
    image: searxng/searxng:latest
    container_name: searxng
    restart: unless-stopped
    ports:
      - "8090:8080"
    volumes:
      - ./searxng:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=https://search.yourdomain.com/
```

SearXNG requires a `settings.yml` in the mounted config directory. Generate one from the default:

```bash
docker run --rm searxng/searxng cat /etc/searxng/settings.yml > ./searxng/settings.yml
```

Key settings to configure in `settings.yml`:

```yaml
server:
  secret_key: "generate-a-random-string-here"
  bind_address: "0.0.0.0"

search:
  safe_search: 0
  default_lang: "en"

engines:
  # Enable/disable specific engines here
```
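
The `secret_key` should be a long random string; one way to generate one, assuming `openssl` is available:

```shell
# Produce a 64-character hex string suitable for secret_key
openssl rand -hex 32
```

Paste the output into `settings.yml` in place of the placeholder.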

### Caddy reverse proxy

```
search.yourdomain.com {
    reverse_proxy localhost:8090
}
```

---

## Using SearXNG as an AI Search Backend

SearXNG integrates directly with Open WebUI as a web search provider, giving your local AI access to current web results without any third-party API keys:

**Open WebUI → Settings → Web Search:**

- Enable web search
- Set provider to `searxng`
- Set URL to `http://searxng:8080` (internal Docker network) or your Tailscale/local address

This is how MajorTwin gets current web context — queries go through SearXNG, not Google.

---

## Why Not DuckDuckGo?

DDG is better than Google for privacy, but it's still a centralized third-party service. SearXNG:

- Runs on your own hardware
- Has no account, no cookies, no session tracking
- Lets you choose which upstream engines to use and weight
- Can be kept entirely off the public internet (Tailscale-only)

---

## Tags

#searxng #search #privacy #self-hosting #docker #linux #alternatives
03-opensource/dev-tools/rsync.md (new file, 102 lines)
@@ -0,0 +1,102 @@
# rsync — Fast, Resumable File Transfers

## Problem

Copying large files or directory trees between drives or servers is slow, fragile, and unresumable with `cp`. A dropped connection or a single error means starting over. You also want to skip files that already exist at the destination without re-copying them.

## Solution

`rsync` is a file synchronization tool that only transfers what has changed, preserves metadata, and can resume interrupted transfers. It works locally and over SSH.

### Installation (Fedora)

```bash
sudo dnf install rsync
```

### Basic Local Copy

```bash
rsync -av /source/ /destination/
```

- `-a` — archive mode: preserves permissions, timestamps, symlinks, ownership
- `-v` — verbose: shows what's being transferred

**Trailing slash on source matters:**

- `/source/` — copy the *contents* of source into destination
- `/source` — copy the source *directory itself* into destination
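
A throwaway demo makes the distinction concrete (paths are hypothetical):

```shell
# Set up a source tree and two empty destinations
mkdir -p /tmp/slashdemo/src && touch /tmp/slashdemo/src/file.txt
mkdir -p /tmp/slashdemo/a /tmp/slashdemo/b

rsync -a /tmp/slashdemo/src/ /tmp/slashdemo/a/   # contents copied: a/file.txt
rsync -a /tmp/slashdemo/src  /tmp/slashdemo/b/   # directory copied: b/src/file.txt

ls -R /tmp/slashdemo/a /tmp/slashdemo/b
rm -rf /tmp/slashdemo
```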
|
||||||
|
|
||||||
|
### Resume an Interrupted Transfer
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av --partial --progress /source/ /destination/
|
||||||
|
```
|
||||||
|
|
||||||
|
- `--partial` — keeps partially transferred files so they can be resumed
|
||||||
|
- `--progress` — shows per-file progress and speed
|
||||||
|
|
||||||
|
### Skip Already-Transferred Files
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av --ignore-existing /source/ /destination/
|
||||||
|
```
|
||||||
|
|
||||||
|
Useful when restarting a migration — skips anything already at the destination regardless of timestamp comparison.
|
||||||
|
|
||||||
|
### Dry Run First
|
||||||
|
|
||||||
|
Always preview what rsync will do before committing:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av --dry-run /source/ /destination/
|
||||||
|
```
|
||||||
|
|
||||||
|
No files are moved. Output shows exactly what would happen.
|
||||||
|
|
||||||
|
### Transfer Over SSH
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av -e ssh /source/ user@remotehost:/destination/
|
||||||
|
```
|
||||||
|
|
||||||
|
Or with a non-standard port:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av -e "ssh -p 2222" /source/ user@remotehost:/destination/
|
||||||
|
```
|
||||||
|
|
||||||
|
### Exclude Patterns
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av --exclude='*.tmp' --exclude='.Trash*' /source/ /destination/
|
||||||
|
```
|
||||||
|
|
||||||
|
### Real-World Use
|
||||||
|
|
||||||
|
Migrating ~286 files from `/majorRAID` to `/majorstorage` during a RAID dissolution project:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rsync -av --partial --progress --ignore-existing \
|
||||||
|
/majorRAID/ /majorstorage/ \
|
||||||
|
2>&1 | tee /root/raid_migrate.log
|
||||||
|
```
|
||||||
|
|
||||||
|
Run inside a `tmux` or `screen` session so it survives SSH disconnects:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
tmux new-session -d -s rsync-migrate \
|
||||||
|
"rsync -av --partial --progress /majorRAID/ /majorstorage/ | tee /root/raid_migrate.log"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Check Progress on a Running Transfer
|
||||||
|
|
||||||
|
```bash
|
||||||
|
tail -f /root/raid_migrate.log
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Tags
|
||||||
|
|
||||||
|
#rsync #linux #storage #file-transfer #sysadmin #dev-tools
|
||||||
03-opensource/dev-tools/screen.md (new file, 76 lines)
@@ -0,0 +1,76 @@
# screen — Simple Persistent Terminal Sessions

## Problem

Same problem as tmux: SSH sessions die, jobs get killed, long-running tasks need to survive disconnects. screen is the older, simpler alternative to tmux — universally available and gets the job done with minimal setup.

## Solution

`screen` creates detachable terminal sessions. It's installed by default on many systems, making it useful when tmux isn't available.

### Installation (Fedora)

```bash
sudo dnf install screen
```

### Core Workflow

```bash
# Start a named session
screen -S mysession

# Detach (keeps running)
Ctrl+a, d

# List sessions
screen -list

# Reattach
screen -r mysession

# If session shows as "Attached" (stuck)
screen -d -r mysession
```

### Start a Background Job Directly

```bash
screen -dmS mysession bash -c "long-running-command 2>&1 | tee /root/output.log"
```

- `-d` — start detached
- `-m` — create new session even if already inside screen
- `-S` — name the session

### Capture Current Output Without Attaching

```bash
screen -S mysession -X hardcopy /tmp/screen_output.txt
cat /tmp/screen_output.txt
```

### Send a Command to a Running Session

```bash
screen -S mysession -X stuff "tail -f /root/output.log\n"
```

---

## screen vs tmux

| Feature | screen | tmux |
|---|---|---|
| Availability | Installed by default on most systems | Usually needs installing |
| Split panes | Basic (Ctrl+a, S) | Better (Ctrl+b, ") |
| Scripting | Limited | More capable |
| Config complexity | Simple | More options |

Use screen when it's already there or for quick throwaway sessions. Use tmux for anything more complex. See [tmux](tmux.md).

---

## Tags

#screen #terminal #linux #ssh #productivity #dev-tools
03-opensource/dev-tools/tmux.md (new file, 93 lines)
@@ -0,0 +1,93 @@
# tmux — Persistent Terminal Sessions

## Problem

SSH sessions die when your connection drops, your laptop closes, or you walk away. Long-running jobs — storage migrations, file scans, downloads — get killed mid-run. You need a way to detach from a session, come back later, and pick up exactly where you left off.

## Solution

`tmux` is a terminal multiplexer. It runs sessions that persist independently of your SSH connection. You can detach, disconnect, reconnect from a different machine, and reattach to find everything still running.

### Installation (Fedora)

```bash
sudo dnf install tmux
```

### Core Workflow

```bash
# Start a named session
tmux new-session -s mysession

# Detach from a session (keeps it running)
Ctrl+b, d

# List running sessions
tmux ls

# Reattach to a session
tmux attach -t mysession

# Kill a session when done
tmux kill-session -t mysession
```

### Start a Background Job Directly

Skip the interactive session entirely — start a job in a new detached session in one command:

```bash
tmux new-session -d -s rmlint2 "rmlint /majorstorage// /mnt/usb// /majorRAID 2>&1 | tee /majorRAID/rmlint_scan2.log"
```

The job runs immediately in the background. Attach later to check progress:

```bash
tmux attach -t rmlint2
```

### Capture Output Without Attaching

Read the current state of a session without interrupting it:

```bash
tmux capture-pane -t rmlint2 -p
```

### Split Panes

Monitor multiple things in one terminal window:

```bash
# Horizontal split (top/bottom)
Ctrl+b, "

# Vertical split (left/right)
Ctrl+b, %

# Switch between panes
Ctrl+b, arrow keys
```

### Real-World Use

On **majorhome**, all long-running storage operations run inside named tmux sessions so they survive SSH disconnects:

```bash
tmux new-session -d -s rmlint2 "rmlint ..."        # dedup scan
tmux new-session -d -s rsync-migrate "rsync ..."   # file migration
tmux ls                                            # check what's running
```

---

## tmux vs screen

Both work. tmux has better split-pane support and scripting. screen is simpler and more universally installed. I use both — tmux for new jobs, screen for legacy ones. See the [screen](screen.md) article for reference.

---

## Tags

#tmux #terminal #linux #ssh #productivity #dev-tools
03-opensource/index.md (new file, 22 lines)
@@ -0,0 +1,22 @@
# 📂 Open Source & Alternatives

A curated collection of my favorite open-source tools and privacy-respecting alternatives to mainstream software.

## 🔄 Alternatives

- [SearXNG: Private Self-Hosted Search](alternatives/searxng.md)
- [FreshRSS: Self-Hosted RSS Reader](alternatives/freshrss.md)
- [Gitea: Self-Hosted Git](alternatives/gitea.md)

## 🚀 Productivity

- [rmlint: Duplicate File Scanning](productivity/rmlint-duplicate-scanning.md)

## 🛠️ Development Tools

- [tmux: Persistent Terminal Sessions](dev-tools/tmux.md)
- [screen: Simple Persistent Sessions](dev-tools/screen.md)
- [rsync: Fast, Resumable File Transfers](dev-tools/rsync.md)

## 🎨 Media & Creative

- [yt-dlp: Video Downloading](media-creative/yt-dlp.md)

## 🔐 Privacy & Security

- [Vaultwarden: Self-Hosted Password Manager](privacy-security/vaultwarden.md)
03-opensource/media-creative/yt-dlp.md (new file, 129 lines)
@@ -0,0 +1,129 @@
|
|||||||
|
# yt-dlp — Video Downloading
|
||||||
|
|
||||||
|
## What It Is
|
||||||
|
|
||||||
|
`yt-dlp` is a feature-rich command-line video downloader, forked from youtube-dl with active maintenance and significantly better performance. It supports YouTube, Twitch, and hundreds of other sites.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Installation
|
||||||
|
|
||||||
|
### Fedora
|
||||||
|
```bash
|
||||||
|
sudo dnf install yt-dlp
|
||||||
|
# or latest via pip:
|
||||||
|
sudo pip install yt-dlp --break-system-packages
|
||||||
|
```
|
||||||
|
|
||||||
|
### Update
|
||||||
|
```bash
|
||||||
|
sudo pip install -U yt-dlp --break-system-packages
|
||||||
|
# or if installed as standalone binary:
|
||||||
|
yt-dlp -U
|
||||||
|
```
|
||||||
|
|
||||||
|
Keep it current — YouTube pushes extractor changes frequently and old versions break.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Basic Usage
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Download a single video (best quality)
|
||||||
|
yt-dlp https://www.youtube.com/watch?v=VIDEO_ID
|
||||||
|
|
||||||
|
# Download to a specific directory with title as filename
|
||||||
|
yt-dlp -o "/path/to/output/%(title)s.%(ext)s" URL
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Plex-Optimized Download
|
||||||
|
|
||||||
|
Download best quality and auto-convert to HEVC for Apple TV direct play:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
yt-dlp URL
|
||||||
|
```
|
||||||
|
|
||||||
|
That's it — if your config is set up correctly (see Config File section below). The config handles format selection, output path, subtitles, and automatic AV1/VP9 → HEVC conversion.
|
||||||
|
|
||||||
|
> [!note] `bestvideo[ext=mp4]` caps at 1080p because YouTube only serves H.264 up to 1080p. Use `bestvideo+bestaudio` to get true 4K, then let the post-download hook convert AV1/VP9 to HEVC. See [Plex 4K Codec Compatibility](../../04-streaming/plex/plex-4k-codec-compatibility.md) for the full setup.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Playlists and Channels
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Download a full playlist
|
||||||
|
yt-dlp -o "%(playlist_index)s - %(title)s.%(ext)s" PLAYLIST_URL
|
||||||
|
|
||||||
|
# Download only videos not already present
|
||||||
|
yt-dlp --download-archive archive.txt PLAYLIST_URL
|
||||||
|
```
|
||||||
|
|
||||||
|
`--download-archive` maintains a file of completed video IDs — re-running the command skips already-downloaded videos automatically.
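The archive is a plain-text file, one `extractor video_id` pair per line, which makes it easy to audit from a script. A minimal sketch (the sample contents below are hypothetical):

```python
def parse_archive(text: str) -> set[tuple[str, str]]:
    """Parse yt-dlp download-archive lines into (extractor, video_id) pairs."""
    entries = set()
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2:  # each line looks like: "youtube dQw4w9WgXcQ"
            entries.add((parts[0], parts[1]))
    return entries

# Hypothetical archive contents for illustration:
sample = "youtube abc123\nyoutube def456\n"
done = parse_archive(sample)
print(len(done))                      # 2
print(("youtube", "abc123") in done)  # True
```

Because the format is this simple, you can also merge archives from several machines with `sort -u`.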

---

## Format Selection

```bash
# List all available formats for a video
yt-dlp --list-formats URL

# Download best video + best audio, merge to mp4
yt-dlp -f 'bestvideo+bestaudio' --merge-output-format mp4 URL

# Download audio only (MP3)
yt-dlp -x --audio-format mp3 URL
```

---

## Config File

Persist your preferred flags so you don't repeat them on every command:

```bash
mkdir -p ~/.config/yt-dlp
cat > ~/.config/yt-dlp/config << 'EOF'
--remote-components ejs:github
--format bestvideo+bestaudio
--merge-output-format mp4
--output /plex/plex/%(title)s.%(ext)s
--write-auto-subs
--embed-subs
--exec /usr/local/bin/yt-dlp-hevc-convert.sh {}
EOF
```

After this, a bare `yt-dlp URL` downloads best quality, saves to `/plex/plex/`, embeds subtitles, and auto-converts AV1/VP9 to HEVC. See [Plex 4K Codec Compatibility](../../04-streaming/plex/plex-4k-codec-compatibility.md) for the conversion hook setup.

---

## Running Long Downloads in the Background

For large downloads or playlists, run inside `screen` or `tmux` so they survive SSH disconnects:

```bash
screen -dmS yt-download bash -c \
  "yt-dlp -o '/plex/plex/%(title)s.%(ext)s' PLAYLIST_URL 2>&1 | tee ~/yt-download.log"

# Check progress
screen -r yt-download
# or
tail -f ~/yt-download.log
```

---

## Troubleshooting

For YouTube JS challenge errors, missing formats, and n-challenge failures on Fedora — see [yt-dlp YouTube JS Challenge Fix](../../05-troubleshooting/yt-dlp-fedora-js-challenge.md).

---

## Tags

#yt-dlp #youtube #video #plex #linux #media #dev-tools
95
03-opensource/privacy-security/vaultwarden.md
Normal file
@@ -0,0 +1,95 @@
# Vaultwarden — Self-Hosted Password Manager

## Problem

Password managers are a necessity, but handing your credentials to a third-party cloud service is a trust problem. Bitwarden is open source and privacy-respecting, but if you're already running a homelab, there's no reason to depend on their servers.

## Solution

[Vaultwarden](https://github.com/dani-garcia/vaultwarden) is an unofficial, lightweight Bitwarden-compatible server written in Rust. It exposes the same API that all official Bitwarden clients speak — desktop apps, browser extensions, mobile apps — so you get the full Bitwarden UX pointed at your own hardware.

Your passwords never leave your network.

---

## Deployment (Docker + Caddy)

### docker-compose.yml

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      - DOMAIN=https://vault.yourdomain.com
      - SIGNUPS_ALLOWED=false  # disable after creating your account
    volumes:
      - ./vw-data:/data
    ports:
      - "8080:80"
```

Start it:

```bash
sudo docker compose up -d
```

### Caddy reverse proxy

```
vault.yourdomain.com {
    reverse_proxy localhost:8080
}
```

Caddy handles TLS automatically. No extra cert config needed.

---

## Initial Setup

1. Browse to `https://vault.yourdomain.com` and create your account
2. Set `SIGNUPS_ALLOWED=false` in the compose file and restart the container
3. Install any official Bitwarden client (browser extension, desktop, mobile)
4. In the client, set the **Server URL** to `https://vault.yourdomain.com` before logging in

That's it. The client has no idea it's not talking to Bitwarden's servers.

---

## Access Model

On MajorInfrastructure, Vaultwarden runs on **majorlab** and is accessible:

- **Internally** — via Caddy on the local network
- **Remotely** — via Tailscale; the vault is reachable from any device on the tailnet without exposing it to the public internet

This means the Caddy vhost does not need to be publicly routable. You can choose to expose it publicly (Let's Encrypt works fine) or keep it Tailscale-only.

---

## Backup

Vaultwarden stores everything in a single SQLite database at `./vw-data/db.sqlite3`. Back it up like any file:

```bash
# Simple copy (stop container first for consistency, or use sqlite backup mode)
sqlite3 /path/to/vw-data/db.sqlite3 ".backup '/path/to/backup/vw-backup-$(date +%F).sqlite3'"
```

Or include the `vw-data/` directory in your regular rsync backup run.
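If you'd rather script the backup without the `sqlite3` CLI, Python's standard `sqlite3` module exposes the same online backup API. A small sketch (the paths in the usage comment are illustrative):

```python
import sqlite3

def backup_vault(db_path: str, dest_path: str) -> None:
    """Consistent copy of a live SQLite database via SQLite's online
    backup API — safe even while the source is being written to."""
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest_path)
    with dst:
        src.backup(dst)  # page-by-page copy in a single consistent snapshot
    dst.close()
    src.close()

# Usage (paths are illustrative):
#   backup_vault("/path/to/vw-data/db.sqlite3",
#                "/path/to/backup/vw-backup-2025-01-01.sqlite3")
```

This avoids having to stop the container, since the backup API takes a consistent snapshot of the database pages.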

---

## Why Not Bitwarden (Official)?

The official Bitwarden server is also open source but requires significantly more resources (multiple services, SQL Server). Vaultwarden runs in a single container on minimal RAM and handles everything a personal or family vault needs.

---

## Tags

#vaultwarden #bitwarden #passwords #privacy #self-hosting #docker #linux
58
03-opensource/productivity/rmlint-duplicate-scanning.md
Normal file
@@ -0,0 +1,58 @@
# rmlint — Extreme Duplicate File Scanning

## Problem

Over time, backups and media collections can accumulate massive amounts of duplicate data. Traditional duplicate finders are often slow and limited in how they handle results. On MajorRAID, I identified **~4.0 TB (113,584 files)** of duplicate data across three different storage points.

## Solution

`rmlint` is an extremely fast tool for finding (and optionally removing) duplicates. It is significantly faster than `fdupes` or `rdfind` because it uses a multi-stage approach to avoid unnecessary hashing.

### 1. Installation (Fedora)

```bash
sudo dnf install rmlint
```

### 2. Scanning Multiple Directories

To scan for duplicates across multiple mount points and compare them:

```bash
rmlint /majorstorage /majorRAID /mnt/usb
```

This generates a removal script named `rmlint.sh` plus a machine-readable summary of the findings.

### 3. Reviewing Results

**DO NOT** run the generated script without reviewing it first. Use the JSON report to see which paths contain the most duplicates:

```bash
# View the JSON report
cat rmlint.json | jq .
```
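The JSON report can also be summarized programmatically, e.g. to total reclaimable bytes per top-level path before deleting anything. A sketch, assuming rmlint's default JSON schema (entries carrying `type`, `path`, `size`, and `is_original` fields — verify against your report before relying on it):

```python
import json
from collections import defaultdict

def duplicate_bytes_by_root(report_path: str, depth: int = 2) -> dict[str, int]:
    """Sum sizes of duplicate (non-original) files, grouped by the first
    `depth` path components. Schema assumption: rmlint marks the copy it
    keeps with is_original=true."""
    with open(report_path) as f:
        entries = json.load(f)
    totals = defaultdict(int)
    for e in entries:
        if isinstance(e, dict) and e.get("type") == "duplicate_file" and not e.get("is_original"):
            root = "/".join(e["path"].split("/")[: depth + 1])
            totals[root] += e.get("size", 0)
    return dict(totals)

# Usage: duplicate_bytes_by_root("rmlint.json")
```

This gives a quick per-mount-point breakdown without wading through the full `jq` dump.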

### 4. Advanced Usage: Hidden Files and Hardlinks

rmlint matches by content hash, not filename, so renamed duplicates are found by default. To widen the scan to hidden files and hardlinked copies:

```bash
rmlint --hidden --hard-links /path/to/search
```

### 5. Repurposing Storage

After scanning and clearing duplicates, you can reclaim significant space. In my case, this was the first step in repurposing a 12TB USB drive as a **SnapRAID parity drive**.

---

## Maintenance

Run a scan monthly or before any major storage consolidation project.

---

## Tags

#rmlint #linux #storage #cleanup #duplicates
@@ -1,7 +1,11 @@
# 🎙️ Streaming & Podcasting

Guides for live streaming and podcast production, with a focus on OBS Studio.

## OBS Studio

- [OBS Studio Setup & Encoding](obs/obs-studio-setup-encoding.md)

## Plex

- [Plex 4K Codec Compatibility (Apple TV)](plex/plex-4k-codec-compatibility.md)
148
04-streaming/plex/plex-4k-codec-compatibility.md
Normal file
@@ -0,0 +1,148 @@
# Plex 4K Codec Compatibility (Apple TV)

4K content on YouTube is delivered in AV1 or VP9 — neither of which the Plex app on Apple TV can direct play. This forces Plex to transcode, and most home server CPUs can't transcode 4K in real time. The fix is converting to HEVC before Plex ever sees the file.

## Codec Compatibility Matrix

| Codec | Apple TV (Plex direct play) | YouTube 4K | Notes |
|---|---|---|---|
| H.264 (AVC) | ✅ | ❌ (max 1080p) | Most compatible, but no 4K |
| HEVC (H.265) | ✅ | ❌ | Best choice: 4K compatible, widely supported |
| VP9 | ❌ | ✅ | Google's royalty-free codec, forces transcode |
| AV1 | ❌ | ✅ | Best compression, requires modern hardware to decode |

**Target format: HEVC.** Direct plays on Apple TV, supports 4K/HDR, and modern hardware can encode it quickly.

## Why AV1 and VP9 Cause Problems

When Plex can't direct play a file, it transcodes it on the server. AV1 and VP9 decoding is CPU-intensive — most home server CPUs can't keep up with 4K60 in real time. Intel Quick Sync (HD 630 era) supports VP9 hardware decode but not AV1. AV1 hardware support requires 11th-gen Intel or RTX 30-series+.

## Batch Converting Existing Files

For files already in your Plex library, use this script to find all AV1/VP9 files and convert them to HEVC via VAAPI (Intel Quick Sync):

```bash
#!/bin/bash
VAAPI_DEV=/dev/dri/renderD128
PLEX_DIR="/plex/plex"
LOG="/root/av1_to_hevc.log"
TMPDIR="/tmp/av1_convert"

mkdir -p "$TMPDIR"
echo "=== AV1→HEVC batch started $(date) ===" | tee -a "$LOG"

find "$PLEX_DIR" \( -iname "*.mp4" -o -iname "*.mkv" \) | while IFS= read -r f; do
    codec=$(mediainfo --Inform='Video;%Format%' "$f" 2>/dev/null)
    [ "$codec" != "AV1" ] && [ "$codec" != "VP9" ] && continue

    echo "[$(date +%H:%M:%S)] Converting: $(basename "$f")" | tee -a "$LOG"
    tmp="${TMPDIR}/$(basename "${f%.*}").mp4"

    ffmpeg -hide_banner -loglevel error \
        -vaapi_device "$VAAPI_DEV" \
        -i "$f" \
        -vf 'format=nv12,hwupload' \
        -c:v hevc_vaapi \
        -qp 22 \
        -c:a copy \
        -movflags +faststart \
        "$tmp"

    if [ $? -eq 0 ] && [ -s "$tmp" ]; then
        mv "$tmp" "${f%.*}_hevc.mp4"
        rm -f "$f"
    else
        rm -f "$tmp"
        echo "  FAILED — original kept." | tee -a "$LOG"
    fi
done
```

Run in a tmux session so it survives SSH disconnect:

```bash
tmux new-session -d -s av1-convert '/root/av1_to_hevc.sh'
tail -f /root/av1_to_hevc.log
```

After completion, trigger a Plex library scan to pick up the renamed files.

## Automating Future Downloads (yt-dlp)

Prevent the problem at the source with a post-download conversion hook.

### 1. Create the conversion script

Save to `/usr/local/bin/yt-dlp-hevc-convert.sh`:

```bash
#!/bin/bash
INPUT="$1"
VAAPI_DEV=/dev/dri/renderD128
LOG=/var/log/yt-dlp-convert.log

[ -z "$INPUT" ] && exit 0
[ ! -f "$INPUT" ] && exit 0

CODEC=$(mediainfo --Inform='Video;%Format%' "$INPUT" 2>/dev/null)
if [ "$CODEC" != "AV1" ] && [ "$CODEC" != "VP9" ]; then
    exit 0
fi

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Converting ($CODEC): $(basename "$INPUT")" >> "$LOG"
TMPOUT="${INPUT%.*}_hevc_tmp.mp4"

ffmpeg -hide_banner -loglevel error \
    -vaapi_device "$VAAPI_DEV" \
    -i "$INPUT" \
    -vf 'format=nv12,hwupload' \
    -c:v hevc_vaapi \
    -qp 22 \
    -c:a copy \
    -movflags +faststart \
    "$TMPOUT"

if [ $? -eq 0 ] && [ -s "$TMPOUT" ]; then
    mv "$TMPOUT" "${INPUT%.*}.mp4"
    [ "${INPUT%.*}.mp4" != "$INPUT" ] && rm -f "$INPUT"
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] OK: $(basename "${INPUT%.*}.mp4")" >> "$LOG"
else
    rm -f "$TMPOUT"
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] FAILED — original kept: $(basename "$INPUT")" >> "$LOG"
fi
```

```bash
chmod +x /usr/local/bin/yt-dlp-hevc-convert.sh
```

### 2. Configure yt-dlp

`~/.config/yt-dlp/config`:

```
--remote-components ejs:github
--format bestvideo+bestaudio
--merge-output-format mp4
--output /plex/plex/%(title)s.%(ext)s
--write-auto-subs
--embed-subs
--exec /usr/local/bin/yt-dlp-hevc-convert.sh {}
```

With this config, `yt-dlp <URL>` downloads the best available quality (including 4K AV1/VP9), then immediately converts any AV1 or VP9 output to HEVC before Plex indexes it.

> [!note] The `--format bestvideo+bestaudio` selector gets true 4K from YouTube (served as AV1 or VP9). The hook converts it to HEVC. Without the hook, using `bestvideo[ext=mp4]` would cap downloads at 1080p since YouTube only serves H.264 up to 1080p.

## Enabling Hardware Transcoding in Plex

Even with automatic conversion in place, enable hardware acceleration in Plex as a fallback for any files that slip through:

**Plex Web → Settings → Transcoder → "Use hardware acceleration when available"**

This requires Plex Pass. On Intel systems with Quick Sync, VP9 will hardware transcode even without pre-conversion. AV1 will still fall back to CPU on pre-Alder Lake hardware.

## Related

- [yt-dlp: Video Downloading](../../03-opensource/media-creative/yt-dlp.md)
- [OBS Studio Setup & Encoding](../obs/obs-studio-setup-encoding.md)
47
05-troubleshooting/gemini-cli-manual-update.md
Normal file
@@ -0,0 +1,47 @@
# 🛠️ Gemini CLI: Manual Update Guide

If the automatic update fails or you need to force a specific version of the Gemini CLI, use these steps.

## 🔴 Symptom: Automatic Update Failed

You may see an error message like:

`✕ Automatic update failed. Please try updating manually`

## 🟢 Manual Update Procedure

### 1. Verify Current Version

Check the version currently installed on your system:

```bash
gemini --version
```

### 2. Check Latest Version

Query the npm registry for the latest available version:

```bash
npm show @google/gemini-cli version
```

### 3. Perform Manual Update

Use `npm` with `sudo` to update the global package:

```bash
sudo npm install -g @google/gemini-cli@latest
```

### 4. Confirm Update

Verify that the new version is active:

```bash
gemini --version
```

## 🛠️ Troubleshooting Update Failures

### Permissions Issues

If you encounter `EACCES` errors without `sudo`, ensure your user has write access to the global npm prefix, or use `sudo` as shown above.

### Registry Connectivity

If `npm` cannot reach the registry, check your internet connection and any local firewall/proxy settings.

### Cache Issues

If the version doesn't update, try clearing the npm cache:

```bash
npm cache clean --force
```
84
05-troubleshooting/gitea-runner-boot-race-network-target.md
Normal file
@@ -0,0 +1,84 @@
# Gitea Actions Runner: Boot Race Condition Fix

If your `gitea-runner` (act_runner) service fails to start on boot — crash-looping and eventually hitting systemd's restart rate limit — the service is likely starting before DNS is available.

## Symptoms

- `gitea-runner.service` enters a crash loop on boot
- `journalctl -u gitea-runner` shows connection/DNS errors on startup:

  ```
  dial tcp: lookup git.example.com: no such host
  ```

  or similar resolution failures
- Service eventually stops retrying (systemd restart rate limit reached)
- `systemctl status gitea-runner` shows `(Result: start-limit-hit)` after reboot
- Service works fine if started manually after boot completes

## Why It Happens

`After=network.target` only guarantees that the network **interfaces are configured** — not that DNS resolution is functional. systemd-resolved (or your local resolver) starts slightly later. `act_runner` tries to connect to the Gitea instance by hostname on startup, the DNS lookup fails, and the process exits.

With the default `Restart=always` and no `RestartSec`, systemd restarts the service immediately. After 5 rapid failures within the default burst window (`StartLimitBurst=5` starts within `StartLimitIntervalSec=10s`), systemd hits the rate limit and stops restarting.
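The burst limit itself is also tunable. As an alternative (or addition) to slowing the restarts down, you can give the unit more attempts before systemd gives up — a sketch via `systemctl edit gitea-runner`, with illustrative values, not settings from the original unit:

```ini
# Override: allow up to 10 start attempts within a 5-minute window
# before the unit enters the start-limit-hit state.
[Unit]
StartLimitIntervalSec=300
StartLimitBurst=10
```

This only papers over the race, though; fixing the ordering dependency (below) is the real cure.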

## Fix

### 1. Update the Service File

Edit `/etc/systemd/system/gitea-runner.service`:

```ini
[Unit]
Description=Gitea Actions Runner
After=network-online.target
Wants=network-online.target

[Service]
User=deploy
WorkingDirectory=/opt/gitea-runner
ExecStart=/opt/gitea-runner/act_runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Key changes:

- `After=network-online.target` + `Wants=network-online.target` — waits for the full network stack, including DNS
- `RestartSec=10` — adds a 10-second delay between restart attempts, preventing rapid failure bursts from hitting the rate limit

### 2. Add a Local /etc/hosts Entry (Optional but Recommended)

If your Gitea instance is on the same local network or reachable via Tailscale, add an entry to `/etc/hosts` so act_runner can resolve it without depending on external DNS:

```
127.0.0.1 git.example.com
```

Replace `git.example.com` with your Gitea hostname and the IP with the correct local address. This makes resolution instantaneous and eliminates the DNS dependency entirely for startup.
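To confirm the entry is actually in effect, resolve through NSS rather than querying DNS: `getent` consults `/etc/hosts`, while `dig` and `nslookup` go straight to the DNS servers and will miss it. A quick check (the hostname is the example one from above):

```shell
# Should print your /etc/hosts line once the entry is in place:
#   getent hosts git.example.com
# Sanity-check the mechanism with a name every system can resolve:
getent hosts localhost
```

If `getent` prints the expected address, act_runner's startup lookup no longer depends on the resolver being up.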

### 3. Reload and Restart

```bash
sudo systemctl daemon-reload
sudo systemctl restart gitea-runner
sudo systemctl status gitea-runner
```

Verify it shows `active (running)` and stays that way. Then reboot and confirm it comes up automatically.

## Why `network-online.target` and Not `network.target`

| Target | What it guarantees |
|---|---|
| `network.target` | Network interfaces are configured (IP assigned) |
| `network-online.target` | Network is fully operational (DNS resolvers reachable) |

Services that need to make outbound network connections (especially DNS lookups) on startup should always use `network-online.target`. This includes mail servers, monitoring agents, CI runners — anything that connects to an external host by name.

> [!note] `network-online.target` can add a few seconds to boot time since systemd waits for the network stack to fully initialize. For server contexts this is almost always the right tradeoff.

## Related

- [Managing Linux Services with systemd](../01-linux/process-management/managing-linux-services-systemd-ansible.md)
- [MajorWiki Setup & Publishing Pipeline](majwiki-setup-and-pipeline.md)
58
05-troubleshooting/gpu-display/qwen-14b-oom-3080ti.md
Normal file
@@ -0,0 +1,58 @@
# Qwen2.5-14B OOM on RTX 3080 Ti (12GB)

## Problem

When attempting to run or fine-tune **Qwen2.5-14B** on an NVIDIA RTX 3080 Ti with 12GB of VRAM, the process fails with an out-of-memory (OOM) error:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate X GiB (GPU 0; 12.00 GiB total capacity; Y GiB already allocated; Z GiB free; ...)
```

The 12GB VRAM limit is hit during the initial model load or immediately upon starting the first training step.

## Root Causes

1. **Model size:** A 14B-parameter model in FP16/BF16 requires ~28GB of VRAM just for the weights.
2. **Context length:** High context lengths (e.g., 4096+) significantly increase VRAM usage during training.
3. **Training overhead:** Even with QLoRA (4-bit quantization), the overhead of gradients, optimizer states, and activations can exceed 12GB for a 14B model.
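The ~28GB figure is simple arithmetic — bytes per parameter times parameter count. A quick sketch (parameter count rounded to 14B, decimal GB):

```python
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    """VRAM needed just to hold the weights, in decimal GB."""
    return n_params * bytes_per_param / 1e9

fp16 = weight_gb(14e9, 2)    # FP16/BF16: 2 bytes per parameter
int4 = weight_gb(14e9, 0.5)  # 4-bit quantized: 0.5 bytes per parameter
print(f"FP16 weights: {fp16:.0f} GB")   # FP16 weights: 28 GB
print(f"4-bit weights: {int4:.0f} GB")  # 4-bit weights: 7 GB
```

Even the 4-bit weights fit in 12GB only for inference; training stacks gradients, optimizer state, and activations on top, which is why QLoRA can still overflow.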

---

## Solutions

### 1. Pivot to a 7B Model (Recommended)

For a 12GB GPU, a 7B-parameter model (like **Qwen2.5-7B-Instruct**) is the sweet spot. It provides excellent performance while leaving enough VRAM for high context lengths and larger batch sizes.

- **VRAM usage (7B QLoRA):** ~6-8GB
- **Pros:** Stable, fast, supports long context.
- **Cons:** Slightly lower reasoning capability than 14B.

### 2. Aggressive Quantization

If you MUST run 14B, use 4-bit quantization (GGUF or EXL2) for inference only. Training 14B on 12GB is not reliably possible even with extreme offloading.

```bash
# Example Ollama run (uses 4-bit quantization by default)
ollama run qwen2.5:14b
```

### 3. Training Optimizations (if attempting 14B)

If you have no choice but to try 14B training:

- Set `max_seq_length` to 512 or 1024.
- Use `Unsloth` (it is highly memory-efficient).
- Enable `gradient_checkpointing`.
- Set `per_device_train_batch_size = 1`.

---

## Maintenance

Keep your NVIDIA drivers and CUDA toolkit updated. On Windows (MajorRig), ensure WSL2 has sufficient memory allocated in `.wslconfig`.

---

## Tags

#gpu #cuda #oom #qwen #majortwin #llm #fine-tuning
@@ -8,13 +8,21 @@ Practical fixes for common Linux, networking, and application problems.
## 🌐 Networking & Web

- [Apache Outage: Fail2ban Self-Ban + Missing iptables Rules](networking/fail2ban-self-ban-apache-outage.md)
- [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](networking/fail2ban-imap-self-ban-mail-client.md)
- [firewalld: Mail Ports Wiped After Reload](networking/firewalld-mail-ports-reset.md)
- [ISP SNI Filtering & Caddy](isp-sni-filtering-caddy.md)
- [yt-dlp YouTube JS Challenge Fix](yt-dlp-fedora-js-challenge.md)

## 📦 Docker & Systems

- [Docker & Caddy Recovery After Reboot (Fedora + SELinux)](docker-caddy-selinux-post-reboot-recovery.md)
- [Gitea Actions Runner: Boot Race Condition Fix](gitea-runner-boot-race-network-target.md)
- [MajorWiki Setup & Publishing Pipeline](majwiki-setup-and-pipeline.md)

## 🔒 SELinux

- [SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)](selinux-dovecot-vmail-context.md)

## 💾 Storage

- [mdadm RAID Recovery After USB Hub Disconnect](storage/mdadm-usb-hub-disconnect-recovery.md)

## 📝 Application Specific

- [Obsidian Vault Recovery — Loading Cache Hang](obsidian-cache-hang-recovery.md)
- [Gemini CLI Manual Update](gemini-cli-manual-update.md)
@@ -119,3 +119,20 @@ The webhook runs as a systemd service so it survives reboots:
systemctl status majwiki-webhook
systemctl restart majwiki-webhook
```

---

*Updated 2026-03-13: Obsidian Git plugin dropped. See canonical workflow below.*

## Canonical Publishing Workflow

The Obsidian Git plugin was evaluated but dropped — too convoluted for a simple push. Manual git from the terminal is the canonical workflow.

```bash
cd ~/Documents/MajorVault
git add 20-Projects/MajorTwin/08-Wiki/
git commit -m "wiki: describe your changes"
git push
```

From there: Gitea receives the push → fires webhook → majorlab pulls → MkDocs rebuilds → `notes.majorshouse.com` updates.
186
05-troubleshooting/networking/fail2ban-self-ban-apache-outage.md
Normal file
@@ -0,0 +1,186 @@
|
|||||||
|
# Apache Outage: Fail2ban Self-Ban + Missing iptables Rules
|
||||||
|
|
||||||
|
## 🛑 Problem
|
||||||
|
|
||||||
|
A web server running Apache2 becomes completely unreachable (`ERR_CONNECTION_TIMED_OUT`) despite Apache running normally. SSH access via Tailscale is unaffected.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔍 Diagnosis
|
||||||
|
|
||||||
|
### Step 1 — Confirm Apache is running
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo systemctl status apache2
|
||||||
|
```
|
||||||
|
|
||||||
|
If Apache is `active (running)`, the problem is at the firewall layer, not the application.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 2 — Test the public IP directly
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -I --max-time 5 http://<PUBLIC_IP>
|
||||||
|
```
|
||||||
|
|
||||||
|
A **timeout** means traffic is being dropped by the firewall. A **connection refused** means Apache is down.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 3 — Check the iptables INPUT chain
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo iptables -L INPUT -n -v
|
||||||
|
```
|
||||||
|
|
||||||
|
Look for ACCEPT rules on ports 80 and 443. If they're missing and the chain policy is `DROP`, HTTP/HTTPS traffic is being silently dropped.
|
||||||
|
|
||||||
|
**Example of broken state:**
|
||||||
|
```
|
||||||
|
Chain INPUT (policy DROP)
|
||||||
|
ACCEPT tcp -- lo * ... # loopback only
|
||||||
|
ACCEPT tcp -- tailscale0 * ... tcp dpt:22
|
||||||
|
# no rules for port 80 or 443
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 4 — Check the nftables ruleset for Fail2ban
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo nft list tables
|
||||||
|
```
|
||||||
|
|
||||||
|
Look for `table inet f2b-table` — this is Fail2ban's nftables table. It operates at **priority `filter - 1`**, meaning it is evaluated *before* the main iptables INPUT chain.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo nft list ruleset | grep -A 10 'f2b-table'
|
||||||
|
```
|
||||||
|
|
||||||
|
Fail2ban rejects banned IPs with rules like:
|
||||||
|
```
|
||||||
|
tcp dport { 80, 443 } ip saddr @addr-set-wordpress-hard reject with icmp port-unreachable
|
||||||
|
```
|
||||||
|
|
||||||
|
A banned admin IP will be rejected here regardless of any ACCEPT rules downstream.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Step 5 — Check if your IP is banned
|
||||||
|
|
||||||
|
```bash
|
||||||
|
for jail in $(sudo fail2ban-client status | grep "Jail list" | sed 's/.*://;s/,/ /g'); do
|
||||||
|
echo "=== $jail ==="; sudo fail2ban-client get $jail banip | tr ',' '\n' | grep <YOUR_IP>
|
||||||
|
done
|
||||||
|
```

---

## ✅ Solution

### Fix 1 — Add missing iptables ACCEPT rules for HTTP/HTTPS

If ports 80/443 are absent from the INPUT chain:

```bash
sudo iptables -I INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -i eth0 -p tcp --dport 443 -j ACCEPT
```

Persist the rules:

```bash
sudo netfilter-persistent save
```

If `netfilter-persistent` is not installed:

```bash
sudo apt install -y iptables-persistent
sudo netfilter-persistent save
```

---

### Fix 2 — Unban your IP from all Fail2ban jails

```bash
for jail in $(sudo fail2ban-client status | grep "Jail list" | sed 's/.*://;s/,/ /g'); do
  sudo fail2ban-client set $jail unbanip <YOUR_IP> 2>/dev/null && echo "Unbanned from $jail"
done
```

---

### Fix 3 — Add your IP to Fail2ban's ignore list

Edit `/etc/fail2ban/jail.local`:

```bash
sudo nano /etc/fail2ban/jail.local
```

Add or update the `[DEFAULT]` section:

```ini
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 <YOUR_IP>
```

Restart Fail2ban:

```bash
sudo systemctl restart fail2ban
```

---

## 🔁 Why This Happens

| Issue | Root Cause |
|---|---|
| Missing port 80/443 rules | iptables INPUT chain left incomplete after a manual firewall rework (e.g., SSH lockdown) |
| Still blocked after adding iptables rules | Fail2ban uses a separate nftables table at higher priority — iptables ACCEPT rules are never reached for banned IPs |
| Admin IP gets banned | Automated WordPress/Apache probes trigger Fail2ban jails against the admin's own IP |

---

## ⚠️ Key Architecture Note

On servers running both iptables and Fail2ban, the evaluation order is:

1. **`inet f2b-table`** (nftables, priority `filter - 1`) — Fail2ban ban sets; evaluated first
2. **`ip filter` INPUT chain** (iptables/nftables, policy DROP) — explicit ACCEPT rules
3. **UFW chains** — IP-specific rules; evaluated last

A banned IP is stopped at step 1 and never reaches the ACCEPT rules in step 2. Always check Fail2ban even after confirming that iptables looks correct.
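
You can see this ordering directly in the live ruleset: every chain that hooks `input` declares its priority, and lower numbers run first. This sketch (the helper name is illustrative, not from the original article) extracts those priorities from `nft list ruleset` output:

```shell
# Sketch: given `nft list ruleset` output on stdin, print the priority
# of each input-hooked chain. Lower priority runs first, which is why
# f2b-table (filter - 1) wins over the main filter chain.
input_priorities() {
  grep 'hook input' | sed -E 's/.*priority ([^;]+);.*/\1/'
}

# Usage on the server:
#   sudo nft list ruleset | input_priorities
```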

---

## 🔎 Quick Diagnostic Commands

```bash
# Check Apache
sudo systemctl status apache2

# Test public connectivity
curl -I --max-time 5 http://<PUBLIC_IP>

# Check iptables INPUT chain
sudo iptables -L INPUT -n -v

# List nftables tables (look for inet f2b-table)
sudo nft list tables

# Check Fail2ban jail status
sudo fail2ban-client status

# Check a specific jail's banned IPs
sudo fail2ban-client status wordpress-hard

# Unban an IP from all jails
for jail in $(sudo fail2ban-client status | grep "Jail list" | sed 's/.*://;s/,/ /g'); do
  sudo fail2ban-client set $jail unbanip <YOUR_IP> 2>/dev/null && echo "Unbanned from $jail"
done
```

`05-troubleshooting/networking/firewalld-mail-ports-reset.md` (new file, 70 lines)

# firewalld: Mail Ports Wiped After Reload (IMAP + Webmail Outage)

If IMAP, SMTP, and webmail all stop working simultaneously on a Fedora/RHEL mail server, firewalld may have reloaded and lost its mail port configuration.

## Symptoms

- `openssl s_client -connect mail.example.com:993` returns `Connection refused`
- Webmail returns connection refused or times out
- SSH still works (port 22 is typically in the persisted config)
- `firewall-cmd --list-services --zone=public` shows only `ssh dhcpv6-client mdns` or similar — no mail services
- Mail was working before a service restart or system event

## Why It Happens

firewalld uses two layers of configuration:

- **Runtime** — active rules in memory (lost on reload or restart)
- **Permanent** — written to `/etc/firewalld/zones/public.xml` (survives reloads)

If mail ports were added with `firewall-cmd --add-service=imaps` (without `--permanent`), they exist only in the runtime config. Any event that triggers a `firewall-cmd --reload` — including Fail2ban restarting, a system update, or a manual reload — wipes the runtime config back to the permanent state, dropping all non-permanent rules.
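
You can catch this drift before a reload bites. The helper below is a sketch (the function name is made up for illustration): given the two space-separated service lists, it prints anything that exists only in the runtime layer. Note that `firewall-cmd --runtime-to-permanent` can persist the entire current runtime config in one step.

```shell
# Sketch: print services present only in the runtime layer — these are
# exactly the rules that vanish on the next `firewall-cmd --reload`.
runtime_only() {
  # $1 = runtime service list, $2 = permanent service list
  for svc in $1; do
    case " $2 " in
      *" $svc "*) ;;       # also in the permanent config — safe
      *) echo "$svc" ;;    # runtime-only — will be lost on reload
    esac
  done
}

# Usage on the server:
#   runtime_only "$(firewall-cmd --list-services)" \
#                "$(firewall-cmd --permanent --list-services)"
# Re-add anything printed with --permanent, or persist everything at
# once with: firewall-cmd --runtime-to-permanent
```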

## Diagnosis

```bash
# Check what's currently allowed
firewall-cmd --list-services --zone=public

# Check nftables for catch-all reject rules
nft list ruleset | grep -E '(reject|accept|993|143)'

# Test port 993 from an external machine
openssl s_client -connect mail.example.com:993 -brief
```

If the only services listed are `ssh` and the port test shows `Connection refused`, the rules are gone.

## Fix

Add all mail services permanently and reload:

```bash
firewall-cmd --permanent \
  --add-service=smtp \
  --add-service=smtps \
  --add-service=smtp-submission \
  --add-service=imap \
  --add-service=imaps \
  --add-service=http \
  --add-service=https
firewall-cmd --reload

# Verify
firewall-cmd --list-services --zone=public
```

Expected output:

```
dhcpv6-client http https imap imaps mdns smtp smtp-submission smtps ssh
```

## Key Notes

- **Always use `--permanent`** when adding services to firewalld on a server. Without it, the rule exists only until the next reload.
- **Fail2ban + firewalld**: Fail2ban uses firewalld as its ban backend (`firewallcmd-rich-rules`). When Fail2ban restarts or crashes, it may trigger a `firewall-cmd --reload`, resetting any runtime-only rules.
- **Verify after any firewall event**: After Fail2ban restarts, system reboots, or `firewall-cmd --reload`, confirm that mail services are still present with `firewall-cmd --list-services --zone=public`.
- **Check the permanent config directly**: `cat /etc/firewalld/zones/public.xml` — if mail services aren't in this file, they'll be lost on the next reload.

## Related

- [Linux Server Hardening Checklist](../../02-selfhosting/security/linux-server-hardening-checklist.md)
- [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](fail2ban-imap-self-ban-mail-client.md)

`05-troubleshooting/selinux-dovecot-vmail-context.md` (new file, 103 lines)

# SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)

If Dovecot is generating SELinux AVC denials and mail delivery or retrieval is broken on a Fedora/RHEL system with SELinux enforcing, the `/var/vmail` directory tree likely has incorrect file contexts.

## Symptoms

- Thousands of AVC denials in `/var/log/audit/audit.log` for Dovecot processes
- Denials reference the `var_t` context on files under `/var/vmail/`
- Mail delivery may fail silently; IMAP folders may appear empty or inaccessible
- `ausearch -m avc -ts recent` shows denials like:
  ```
  type=AVC msg=audit(...): avc: denied { write } for pid=... comm="dovecot" name="..." scontext=system_u:system_r:dovecot_t:s0 tcontext=system_u:object_r:var_t:s0
  ```

## Why It Happens

SELinux requires files to have the correct security context for the process that accesses them. When Postfix/Dovecot are installed on a fresh system and `/var/vmail` is created manually (or by the mail stack installer), the directory may inherit the default `var_t` context from `/var/` rather than the mail-specific `mail_spool_t` context Dovecot expects.

The correct context for the entire `/var/vmail` tree is `mail_spool_t` — including the `tmp/` subdirectories inside each Maildir folder.

> [!warning] Do NOT apply `dovecot_tmp_t` to Maildir `tmp/` directories
> `dovecot_tmp_t` is for Dovecot's own process-level temp files, not for Maildir `tmp/` folders. Postfix's virtual delivery agent writes to `tmp/` when delivering new mail. Applying `dovecot_tmp_t` will block Postfix from delivering any mail, silently deferring all messages with `Permission denied`.

## Fix

### 1. Check Current Context

```bash
ls -Zd /var/vmail/
ls -Z /var/vmail/example.com/user/
ls -Zd /var/vmail/example.com/user/tmp/
```

If you see `var_t` instead of `mail_spool_t`, the contexts need to be set. If you see `dovecot_tmp_t` on `tmp/`, that needs to be corrected too.
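
On a tree with many users, eyeballing `ls -Z` output is error-prone. Here is a sketch (the helper name is made up for illustration) that reads `ls -Z`-style "context path" lines and flags every entry whose type field is not `mail_spool_t`:

```shell
# Sketch: read "context path" lines (as printed by `ls -Zd`) on stdin
# and flag any entry whose SELinux type (third colon-field of the
# context) is not mail_spool_t.
flag_bad_contexts() {
  awk 'NF >= 2 {
    split($1, c, ":")
    if (c[3] != "mail_spool_t") print $2 " -> " c[3]
  }'
}

# Usage on the server:
#   find /var/vmail -exec ls -Zd {} + | flag_bad_contexts
```

No output means every path already carries the expected type.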

### 2. Define the Correct File Context Rule

One rule covers everything — including `tmp/`:

```bash
sudo semanage fcontext -a -t mail_spool_t "/var/vmail(/.*)?"
```

If you previously added a `dovecot_tmp_t` rule for `tmp/` directories, remove it:

```bash
# Check for an erroneous dovecot_tmp_t rule
sudo semanage fcontext -l | grep vmail

# If you see one like "/var/vmail(/.*)*/tmp(/.*)?" with dovecot_tmp_t, delete it:
sudo semanage fcontext -d "/var/vmail(/.*)*/tmp(/.*)?"
```

### 3. Apply the Labels

```bash
sudo restorecon -Rv /var/vmail
```

This relabels all existing files. On a mail server with many users and messages, this may take a moment and will print every relabeled path.

### 4. Verify

```bash
ls -Zd /var/vmail/
ls -Zd /var/vmail/example.com/user/tmp/
```

Both should show `mail_spool_t`:

```
system_u:object_r:mail_spool_t:s0 /var/vmail/
system_u:object_r:mail_spool_t:s0 /var/vmail/example.com/user/tmp/
```

### 5. Flush Deferred Mail

If mail was queued while the context was wrong, flush it:

```bash
postqueue -f
postqueue -p   # should be empty shortly
```

### 6. Check That Denials Stopped

```bash
ausearch -m avc -ts recent | grep dovecot
```

No output means no new denials.

## Key Notes

- **One rule is enough** — `"/var/vmail(/.*)?"` with `mail_spool_t` covers every file and directory under `/var/vmail`, including all `tmp/` subdirectories.
- **`semanage fcontext` is persistent** — the rules survive reboots and `restorecon` calls. You only need to run `semanage` once.
- **`restorecon` applies current rules to existing files** — run it after any `semanage` change and any time you manually create directories.
- **New mail directories are labeled automatically** — SELinux applies the registered `semanage` rules to any new files created under `/var/vmail`.
- **`var_t` is the default context for `/var/`** — any directory created under `/var/` without a specific `semanage` rule will inherit `var_t`. This is almost never correct for service data directories.

## Related

- [Linux Server Hardening Checklist](../02-selfhosting/security/linux-server-hardening-checklist.md)
- [Docker & Caddy Recovery After Reboot (Fedora + SELinux)](docker-caddy-selinux-post-reboot-recovery.md)

`05-troubleshooting/storage/mdadm-usb-hub-disconnect-recovery.md` (new file, 105 lines)

# mdadm RAID Recovery After USB Hub Disconnect

A software RAID array managed by mdadm can appear to fail catastrophically when the drives are connected via USB rather than SATA. The array is fine — the hub dropped out. Here's how to diagnose and recover.

## Symptoms

- rsync or other I/O to the RAID mount returns `Input/output error`
- `cat /proc/mdstat` shows `broken raid0` or `FAILED`
- `mdadm --detail /dev/md0` shows `State: broken, FAILED`
- `lsblk` no longer lists the RAID member drives (e.g. `sdd`, `sde` gone)
- XFS (or another filesystem) logs in dmesg:
  ```
  XFS (md0): log I/O error -5
  XFS (md0): Filesystem has been shut down due to log error (0x2).
  ```
- `smartctl -H /dev/sdd` returns `No such device`

## Why It Happens

If your RAID drives are in a USB enclosure (e.g. a TerraMaster via an ASMedia hub), a USB disconnect — triggered by a power fluctuation, plugging in another device, or a hub reset — causes mdadm to see the drives disappear. mdadm cannot distinguish a USB dropout from a physical drive failure, so it declares the array failed.

The failure message in dmesg will show `hostbyte=DID_ERROR` rather than a drive-level error:

```
md/raid0md0: Disk failure on sdd1 detected, failing array.
sd X:0:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
```

`DID_ERROR` means the SCSI host adapter (the USB controller) reported the error — the drives themselves are likely fine.

## Diagnosis

### 1. Check if the USB hub recovered

```bash
lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL
```

After a hub reconnects, drives will reappear — often with **new device names** (e.g. `sdd`/`sde` become `sdg`/`sdh`). Look for drives with the `linux_raid_member` filesystem type.

```bash
dmesg | grep -iE 'usb|disconnect|DID_ERROR' | tail -30
```

A hub dropout looks like multiple devices disconnecting at the same time on the same USB port.

### 2. Confirm the drives have intact superblocks

```bash
mdadm --examine /dev/sdg1
mdadm --examine /dev/sdh1
```

If the superblocks are present and show matching UUID/array info, the data is intact.

## Recovery

### 1. Unmount and stop the degraded array

```bash
umount /majorRAID    # or wherever md0 is mounted
mdadm --stop /dev/md0
```

If umount fails because the mount is busy or the filesystem has already failed, the kernel may have unmounted it already. Proceed with `--stop`.

### 2. Reassemble with the new device names

```bash
mdadm --assemble /dev/md0 /dev/sdg1 /dev/sdh1
```

mdadm matches drives by their superblock UUID, not device name. As long as both drives are present, the assembly will succeed regardless of what they're called.
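
To confirm the match explicitly before assembling, you can compare the `Array UUID` field from each member's `mdadm --examine` output. The helper name below is made up for illustration:

```shell
# Sketch: extract the "Array UUID" value from `mdadm --examine` output.
# Members of the same array report the same UUID regardless of what the
# kernel has named the devices this boot.
array_uuid() {
  # Line format: "     Array UUID : aaaa:bbbb:cccc:dddd"
  awk '/Array UUID/ { print $4 }'
}

# Usage on the server:
#   [ "$(mdadm --examine /dev/sdg1 | array_uuid)" = \
#     "$(mdadm --examine /dev/sdh1 | array_uuid)" ] && echo "same array"
```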

### 3. Mount and verify

```bash
mount /dev/md0 /majorRAID
df -h /majorRAID
ls /majorRAID
```

If the filesystem mounts and data is visible, recovery is complete.

### 4. Create or update /etc/mdadm.conf

If `/etc/mdadm.conf` doesn't exist (or references old device names), update it:

```bash
mdadm --detail --scan > /etc/mdadm.conf
cat /etc/mdadm.conf
```

The output identifies the array by UUID rather than device names — it will reassemble correctly on reboot even if drive letters change again.

## Prevention

The root cause is drives on USB rather than SATA. Short of moving the drives to a SATA controller, options are limited. When planning a migration off the RAID array (e.g. to SnapRAID + MergerFS), prioritize getting the drives onto SATA connections.

> [!warning] RAID 0 has no redundancy. A USB dropout that causes the array to fail mid-write could corrupt data even if the drives themselves are healthy. Keep current backups before any maintenance involving the enclosure.

## Related

- [SnapRAID & MergerFS Storage Setup](../../01-linux/storage/snapraid-mergerfs-setup.md)
- [rsync Backup Patterns](../../02-selfhosting/storage-backup/rsync-backup-patterns.md)

@@ -31,7 +31,7 @@ DNS record and Caddy entry have been removed.

## Content

- 42 articles across 5 domains
- Source of truth: `MajorVault/20-Projects/MajorTwin/08-Wiki/`
- Deployed via Gitea webhook (push from MajorAir → auto-pull on majorlab)

@@ -63,7 +63,7 @@ rsync -av --include="*.md" --include="*/" --exclude="*" \

---

*Updated 2026-03-15*

## Canonical Update Workflow

`README.md` (21 changed lines)

@@ -2,8 +2,8 @@

> A growing reference of Linux, self-hosting, open source, streaming, and troubleshooting guides. Written by MajorLinux. Used by MajorTwin.
>
**Last updated:** 2026-03-15
**Article count:** 42

## Domains

@@ -12,8 +12,8 @@

| 🐧 Linux & Sysadmin | `01-linux/` | 9 |
| 🏠 Self-Hosting & Homelab | `02-selfhosting/` | 8 |
| 🔓 Open Source Tools | `03-opensource/` | 9 |
| 🎙️ Streaming & Podcasting | `04-streaming/` | 2 |
| 🔧 General Troubleshooting | `05-troubleshooting/` | 14 |

---

@@ -96,12 +96,16 @@

### OBS Studio
- [OBS Studio Setup & Encoding](04-streaming/obs/obs-studio-setup-encoding.md) — installation, NVENC/x264 settings, scene setup, audio filters, Linux Wayland notes

### Plex
- [Plex 4K Codec Compatibility (Apple TV)](04-streaming/plex/plex-4k-codec-compatibility.md) — AV1/VP9 vs HEVC, batch conversion script, yt-dlp auto-convert hook

---

## 🔧 General Troubleshooting

- [Apache Outage: Fail2ban Self-Ban + Missing iptables Rules](05-troubleshooting/networking/fail2ban-self-ban-apache-outage.md) — diagnosing and fixing Apache outages caused by missing firewall rules and Fail2ban self-bans
- [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](05-troubleshooting/networking/fail2ban-imap-self-ban-mail-client.md) — diagnosing why one device stops receiving email when the mail server is healthy
- [firewalld: Mail Ports Wiped After Reload](05-troubleshooting/networking/firewalld-mail-ports-reset.md) — recovering IMAP and webmail after a firewalld reload drops all mail service rules
- [Docker & Caddy Recovery After Reboot (Fedora + SELinux)](05-troubleshooting/docker-caddy-selinux-post-reboot-recovery.md) — fixing docker.socket, SELinux port blocks, and httpd_can_network_connect after reboot
- [ISP SNI Filtering with Caddy](05-troubleshooting/isp-sni-filtering-caddy.md) — troubleshooting why wiki.majorshouse.com was blocked by Google Fiber
- [Obsidian Cache Hang Recovery](05-troubleshooting/obsidian-cache-hang-recovery.md) — resolving "Loading cache" hang in Obsidian by cleaning Electron app data and ML artifacts

@@ -109,6 +113,9 @@

- [yt-dlp YouTube JS Challenge Fix on Fedora](05-troubleshooting/yt-dlp-fedora-js-challenge.md) — fixing YouTube JS challenge solver errors and missing formats on Fedora
- [Gemini CLI Manual Update](05-troubleshooting/gemini-cli-manual-update.md) — how to manually update the Gemini CLI when automatic updates fail
- [MajorWiki Setup & Pipeline](05-troubleshooting/majwiki-setup-and-pipeline.md) — setting up MajorWiki and the Obsidian → Gitea → MkDocs publishing pipeline
- [Gitea Actions Runner: Boot Race Condition Fix](05-troubleshooting/gitea-runner-boot-race-network-target.md) — fixing act_runner crash loop on boot caused by DNS not ready at startup
- [SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)](05-troubleshooting/selinux-dovecot-vmail-context.md) — fixing thousands of AVC denials when /var/vmail has the wrong SELinux context
- [mdadm RAID Recovery After USB Hub Disconnect](05-troubleshooting/storage/mdadm-usb-hub-disconnect-recovery.md) — diagnosing and recovering a failed mdadm array caused by a USB hub dropout

---

@@ -116,6 +123,12 @@

| Date | Article | Domain |
|---|---|---|
| 2026-03-15 | [firewalld: Mail Ports Wiped After Reload](05-troubleshooting/networking/firewalld-mail-ports-reset.md) | Troubleshooting |
| 2026-03-15 | [Plex 4K Codec Compatibility (Apple TV)](04-streaming/plex/plex-4k-codec-compatibility.md) | Streaming |
| 2026-03-15 | [mdadm RAID Recovery After USB Hub Disconnect](05-troubleshooting/storage/mdadm-usb-hub-disconnect-recovery.md) | Troubleshooting |
| 2026-03-15 | [yt-dlp: Video Downloading](03-opensource/media-creative/yt-dlp.md) | Open Source |
| 2026-03-14 | [SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)](05-troubleshooting/selinux-dovecot-vmail-context.md) | Troubleshooting |
| 2026-03-14 | [Gitea Actions Runner: Boot Race Condition Fix](05-troubleshooting/gitea-runner-boot-race-network-target.md) | Troubleshooting |
| 2026-03-14 | [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](05-troubleshooting/networking/fail2ban-imap-self-ban-mail-client.md) | Troubleshooting |
| 2026-03-14 | [SearXNG: Private Self-Hosted Search](03-opensource/alternatives/searxng.md) | Open Source |
| 2026-03-14 | [FreshRSS: Self-Hosted RSS Reader](03-opensource/alternatives/freshrss.md) | Open Source |

@@ -30,9 +30,11 @@

* [yt-dlp: Video Downloading](03-opensource/media-creative/yt-dlp.md)
* [Streaming & Podcasting](04-streaming/index.md)
* [OBS Studio Setup & Encoding](04-streaming/obs/obs-studio-setup-encoding.md)
* [Plex 4K Codec Compatibility (Apple TV)](04-streaming/plex/plex-4k-codec-compatibility.md)
* [Troubleshooting](05-troubleshooting/index.md)
* [Apache Outage: Fail2ban Self-Ban + Missing iptables Rules](05-troubleshooting/networking/fail2ban-self-ban-apache-outage.md)
* [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](05-troubleshooting/networking/fail2ban-imap-self-ban-mail-client.md)
* [firewalld: Mail Ports Wiped After Reload](05-troubleshooting/networking/firewalld-mail-ports-reset.md)
* [Docker & Caddy Recovery After Reboot (Fedora + SELinux)](05-troubleshooting/docker-caddy-selinux-post-reboot-recovery.md)
* [ISP SNI Filtering with Caddy](05-troubleshooting/isp-sni-filtering-caddy.md)
* [Obsidian Vault Recovery — Loading Cache Hang](05-troubleshooting/obsidian-cache-hang-recovery.md)

@@ -40,3 +42,6 @@

* [yt-dlp YouTube JS Challenge Fix on Fedora](05-troubleshooting/yt-dlp-fedora-js-challenge.md)
* [Gemini CLI Manual Update](05-troubleshooting/gemini-cli-manual-update.md)
* [MajorWiki Setup & Publishing Pipeline](05-troubleshooting/majwiki-setup-and-pipeline.md)
* [Gitea Actions Runner: Boot Race Condition Fix](05-troubleshooting/gitea-runner-boot-race-network-target.md)
* [SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)](05-troubleshooting/selinux-dovecot-vmail-context.md)
* [mdadm RAID Recovery After USB Hub Disconnect](05-troubleshooting/storage/mdadm-usb-hub-disconnect-recovery.md)

`index.md` (21 changed lines)

@@ -2,8 +2,8 @@

> A growing reference of Linux, self-hosting, open source, streaming, and troubleshooting guides. Written by MajorLinux. Used by MajorTwin.
|
> A growing reference of Linux, self-hosting, open source, streaming, and troubleshooting guides. Written by MajorLinux. Used by MajorTwin.
|
||||||
>
|
>
|
||||||
> **Last updated:** 2026-03-14
|
> **Last updated:** 2026-03-15
|
||||||
> **Article count:** 37
|
> **Article count:** 42
|
||||||
|
|
||||||
## Domains
|
## Domains
|
||||||
|
|
||||||
@@ -12,8 +12,8 @@
|
|||||||
| 🐧 Linux & Sysadmin | `01-linux/` | 9 |
|
| 🐧 Linux & Sysadmin | `01-linux/` | 9 |
|
||||||
| 🏠 Self-Hosting & Homelab | `02-selfhosting/` | 8 |
|
| 🏠 Self-Hosting & Homelab | `02-selfhosting/` | 8 |
|
||||||
| 🔓 Open Source Tools | `03-opensource/` | 9 |
|
| 🔓 Open Source Tools | `03-opensource/` | 9 |
|
||||||
| 🎙️ Streaming & Podcasting | `04-streaming/` | 1 |
|
| 🎙️ Streaming & Podcasting | `04-streaming/` | 2 |
|
||||||
| 🔧 General Troubleshooting | `05-troubleshooting/` | 10 |
|
| 🔧 General Troubleshooting | `05-troubleshooting/` | 14 |
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -96,12 +96,16 @@
|
|||||||
### OBS Studio
|
### OBS Studio
|
||||||
- [OBS Studio Setup & Encoding](04-streaming/obs/obs-studio-setup-encoding.md) — installation, NVENC/x264 settings, scene setup, audio filters, Linux Wayland notes
|
- [OBS Studio Setup & Encoding](04-streaming/obs/obs-studio-setup-encoding.md) — installation, NVENC/x264 settings, scene setup, audio filters, Linux Wayland notes
|
||||||
|
|
||||||
|
### Plex
|
||||||
|
- [Plex 4K Codec Compatibility (Apple TV)](04-streaming/plex/plex-4k-codec-compatibility.md) — AV1/VP9 vs HEVC, batch conversion script, yt-dlp auto-convert hook
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## 🔧 General Troubleshooting

- [Apache Outage: Fail2ban Self-Ban + Missing iptables Rules](05-troubleshooting/networking/fail2ban-self-ban-apache-outage.md) — diagnosing and fixing Apache outages caused by missing firewall rules and Fail2ban self-bans
- [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](05-troubleshooting/networking/fail2ban-imap-self-ban-mail-client.md) — diagnosing why one device stops receiving email when the mail server is healthy
- [firewalld: Mail Ports Wiped After Reload](05-troubleshooting/networking/firewalld-mail-ports-reset.md) — recovering IMAP and webmail after a firewalld reload drops all mail service rules
- [Docker & Caddy Recovery After Reboot (Fedora + SELinux)](05-troubleshooting/docker-caddy-selinux-post-reboot-recovery.md) — fixing docker.socket, SELinux port blocks, and httpd_can_network_connect after reboot
- [ISP SNI Filtering with Caddy](05-troubleshooting/isp-sni-filtering-caddy.md) — troubleshooting why wiki.majorshouse.com was blocked by Google Fiber
- [Obsidian Cache Hang Recovery](05-troubleshooting/obsidian-cache-hang-recovery.md) — resolving "Loading cache" hang in Obsidian by cleaning Electron app data and ML artifacts
- [yt-dlp YouTube JS Challenge Fix on Fedora](05-troubleshooting/yt-dlp-fedora-js-challenge.md) — fixing YouTube JS challenge solver errors and missing formats on Fedora
- [Gemini CLI Manual Update](05-troubleshooting/gemini-cli-manual-update.md) — how to manually update the Gemini CLI when automatic updates fail
- [MajorWiki Setup & Pipeline](05-troubleshooting/majwiki-setup-and-pipeline.md) — setting up MajorWiki and the Obsidian → Gitea → MkDocs publishing pipeline
- [Gitea Actions Runner: Boot Race Condition Fix](05-troubleshooting/gitea-runner-boot-race-network-target.md) — fixing act_runner crash loop on boot caused by DNS not ready at startup
- [SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)](05-troubleshooting/selinux-dovecot-vmail-context.md) — fixing thousands of AVC denials when /var/vmail has the wrong SELinux context
- [mdadm RAID Recovery After USB Hub Disconnect](05-troubleshooting/storage/mdadm-usb-hub-disconnect-recovery.md) — diagnosing and recovering a failed mdadm array caused by a USB hub dropout
---
| Date | Article | Domain |
|---|---|---|
| 2026-03-15 | [firewalld: Mail Ports Wiped After Reload](05-troubleshooting/networking/firewalld-mail-ports-reset.md) | Troubleshooting |
| 2026-03-15 | [Plex 4K Codec Compatibility (Apple TV)](04-streaming/plex/plex-4k-codec-compatibility.md) | Streaming |
| 2026-03-15 | [mdadm RAID Recovery After USB Hub Disconnect](05-troubleshooting/storage/mdadm-usb-hub-disconnect-recovery.md) | Troubleshooting |
| 2026-03-15 | [yt-dlp: Video Downloading](03-opensource/media-creative/yt-dlp.md) | Open Source |
| 2026-03-14 | [SELinux: Fixing Dovecot Mail Spool Context (/var/vmail)](05-troubleshooting/selinux-dovecot-vmail-context.md) | Troubleshooting |
| 2026-03-14 | [Gitea Actions Runner: Boot Race Condition Fix](05-troubleshooting/gitea-runner-boot-race-network-target.md) | Troubleshooting |
| 2026-03-14 | [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](05-troubleshooting/networking/fail2ban-imap-self-ban-mail-client.md) | Troubleshooting |
| 2026-03-14 | [SearXNG: Private Self-Hosted Search](03-opensource/alternatives/searxng.md) | Open Source |
| 2026-03-14 | [FreshRSS: Self-Hosted RSS Reader](03-opensource/alternatives/freshrss.md) | Open Source |