merge: resolve conflicts, keep new IMAP self-ban article
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
.gitattributes (vendored, new file, 18 lines)
# Normalize line endings to LF for all text files
* text=auto eol=lf

# Explicitly handle markdown
*.md text eol=lf

# Explicitly handle config files
*.yml text eol=lf
*.yaml text eol=lf
*.json text eol=lf
*.toml text eol=lf

# Binary files — don't touch
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.pdf binary
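The rules above can be sanity-checked with `git check-attr`, which reports how attributes resolve for a given path. A quick throwaway-repo check (the `/tmp/attr-demo` path is illustrative):

```bash
# Set up a throwaway repo with the eol rules and query them
git init -q /tmp/attr-demo
cd /tmp/attr-demo
printf '* text=auto eol=lf\n*.md text eol=lf\n' > .gitattributes
git check-attr text eol -- README.md
# one line per attribute, e.g. "README.md: eol: lf"
```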
@@ -1,29 +1,29 @@
# 🐧 Linux & Sysadmin

A collection of guides covering Linux administration, shell scripting, networking, and distro-specific topics.

## Files & Permissions

- [Linux File Permissions and Ownership](files-permissions/linux-file-permissions.md)

## Networking

- [SSH Config & Key Management](networking/ssh-config-key-management.md)

## Package Management

- [Package Management Reference](packages/package-management-reference.md)

## Process Management

- [Managing Linux Services with systemd](process-management/managing-linux-services-systemd-ansible.md)

## Shell & Scripting

- [Ansible Getting Started](shell-scripting/ansible-getting-started.md)
- [Bash Scripting Patterns](shell-scripting/bash-scripting-patterns.md)

## Distro-Specific

- [Linux Distro Guide for Beginners](distro-specific/linux-distro-guide-beginners.md)
- [WSL2 Instance Migration to Fedora 43](distro-specific/wsl2-instance-migration-fedora43.md)
01-linux/storage/snapraid-mergerfs-setup.md (new file, 74 lines)
# SnapRAID & MergerFS Storage Setup

## Problem

Managing a collection of mismatched hard drives as a single pool while maintaining data redundancy (parity) without the overhead or risk of a traditional RAID 5/6 array.

## Solution

A combination of **MergerFS** for pooling and **SnapRAID** for parity. This is ideal for "mostly static" media storage (like MajorRAID) where files aren't changing every second.

### 1. Concepts

- **MergerFS:** A FUSE-based union filesystem. It takes multiple drives/folders and presents them as a single mount point. It does NOT provide redundancy.
- **SnapRAID:** A backup/parity tool for disk arrays. It creates parity information on a dedicated drive. It is NOT real-time (you must run `snapraid sync`).

### 2. Implementation Strategy

1. **Clean the Pool:** Use `rmlint` to clear duplicates and reclaim space.
2. **Identify the Parity Drive:** Choose your largest drive (or one equal to the largest data drive) to hold the parity information. In my setup, `/mnt/usb` (sdc) was cleared of 4TB of duplicates to be repurposed for this.
3. **Configure MergerFS:** Pool the data drives (e.g., `/mnt/disk1`, `/mnt/disk2`) into `/storage`.
4. **Configure SnapRAID:** Point SnapRAID to the data drives and the parity drive.

### 3. MergerFS Config (/etc/fstab)

```fstab
# Example MergerFS pool
/mnt/disk*:/mnt/usb-data /storage fuse.mergerfs defaults,allow_other,cache.files=off,use_ino,category.create=mfs,minfreespace=20G,fsname=mergerfsPool 0 0
```

### 4. SnapRAID Config (/etc/snapraid.conf)

```conf
# Parity file location
parity /mnt/parity/snapraid.parity

# Content files (array metadata — keep copies on multiple drives)
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
content /mnt/disk2/.snapraid.content

# Data drives
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# Exclusions
exclude /lost+found/
exclude /tmp/
exclude .DS_Store
```

---

## Maintenance

### SnapRAID Sync

Run this daily (via cron) or after adding large amounts of data:

```bash
snapraid sync
```

### SnapRAID Scrub

Run this weekly to check for bitrot:

```bash
snapraid scrub
```
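Both jobs can be scheduled together. A minimal crontab sketch — the times, log path, and scrub percentage here are illustrative, not prescriptive:

```conf
# /etc/cron.d/snapraid (illustrative)
# Nightly sync at 03:00; weekly scrub of ~10% of the array on Sundays
0 3 * * *  root  /usr/bin/snapraid sync  >> /var/log/snapraid.log 2>&1
0 5 * * 0  root  /usr/bin/snapraid scrub -p 10 >> /var/log/snapraid.log 2>&1
```

`-p 10` limits each scrub pass to roughly 10% of the array, so the whole pool gets verified over a rolling window instead of hammering every disk weekly.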

---

## Tags

#snapraid #mergerfs #linux #storage #homelab #raid
@@ -1,29 +1,29 @@
# 🏠 Self-Hosting & Homelab

Guides for running your own services at home, including Docker, reverse proxies, DNS, storage, monitoring, and security.

## Docker & Containers

- [Self-Hosting Starter Guide](docker/self-hosting-starter-guide.md)
- [Docker vs VMs for the Homelab](docker/docker-vs-vms-homelab.md)
- [Debugging Broken Docker Containers](docker/debugging-broken-docker-containers.md)

## Reverse Proxies

- [Setting Up Caddy as a Reverse Proxy](reverse-proxy/setting-up-caddy-reverse-proxy.md)

## DNS & Networking

- [Tailscale for Homelab Remote Access](dns-networking/tailscale-homelab-remote-access.md)

## Storage & Backup

- [rsync Backup Patterns](storage-backup/rsync-backup-patterns.md)

## Monitoring

- [Tuning Netdata Web Log Alerts](monitoring/tuning-netdata-web-log-alerts.md)

## Security

- [Linux Server Hardening Checklist](security/linux-server-hardening-checklist.md)
03-opensource/alternatives/freshrss.md (new file, 89 lines)
# FreshRSS — Self-Hosted RSS Reader

## Problem

RSS is the best way to follow websites, blogs, and podcasts without algorithmic feeds, engagement bait, or data harvesting. But hosted RSS services like Feedly gate features behind subscriptions and still have access to your reading habits. Google killed Google Reader in 2013 and has been trying to kill RSS ever since.

## Solution

[FreshRSS](https://freshrss.org) is a self-hosted RSS aggregator. It fetches and stores your feeds on your own server, presents a clean reading interface, and syncs with mobile apps via standard APIs (Fever, Google Reader, Nextcloud News). No subscription, no tracking, no feed limits.

---

## Deployment (Docker)

```yaml
services:
  freshrss:
    image: freshrss/freshrss:latest
    container_name: freshrss
    restart: unless-stopped
    ports:
      - "8086:80"
    volumes:
      - ./freshrss/data:/var/www/FreshRSS/data
      - ./freshrss/extensions:/var/www/FreshRSS/extensions
    environment:
      - TZ=America/New_York
      - CRON_MIN=*/15  # fetch feeds every 15 minutes
```

### Caddy reverse proxy

```
rss.yourdomain.com {
    reverse_proxy localhost:8086
}
```

---

## Initial Setup

1. Browse to your FreshRSS URL and run through the setup wizard
2. Create an admin account
3. Go to **Settings → Authentication** — enable API access if you want mobile app sync
4. Start adding feeds under **Subscriptions → Add a feed**

---

## Mobile App Sync

FreshRSS exposes a Google Reader-compatible API that most RSS apps support:

| App | Platform | Protocol |
|---|---|---|
| NetNewsWire | iOS / macOS | Fever or GReader |
| Reeder | iOS / macOS | GReader |
| ReadYou | Android | GReader |
| FeedMe | Android | GReader / Fever |

**API URL format:** `https://rss.yourdomain.com/api/greader.php`

Enable the API in FreshRSS: **Settings → Authentication → Allow API access**

---

## Feed Auto-Refresh

The `CRON_MIN=*/15` environment variable runs feed fetching every 15 minutes inside the container. For more control, add a host-level cron job:

```bash
# Fetch all feeds every 10 minutes
*/10 * * * * docker exec freshrss php /var/www/FreshRSS/app/actualize_script.php
```

---

## Why RSS Over Social Media

- **You control the feed** — no algorithm decides what you see or in what order
- **No engagement optimization** — content ranked by publish date, not outrage potential
- **Portable** — OPML export lets you move your subscriptions to any reader
- **Works forever** — RSS has been around since 1999 and isn't going anywhere
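On the portability point: OPML is plain XML, so exports are trivially inspectable and hand-editable. A minimal subscription file looks like this (the feed entry is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>Subscriptions</title>
  </head>
  <body>
    <outline text="Example Blog" type="rss"
             xmlUrl="https://example.com/feed.xml"
             htmlUrl="https://example.com/"/>
  </body>
</opml>
```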

---

## Tags

#freshrss #rss #self-hosting #docker #linux #alternatives #privacy
03-opensource/alternatives/gitea.md (new file, 95 lines)
# Gitea — Self-Hosted Git

## Problem

GitHub is the default home for code, but it's a Microsoft-owned centralized service. Your repositories, commit history, issues, and CI/CD pipelines are all under someone else's control. For personal projects and private infrastructure, there's no reason to depend on it.

## Solution

[Gitea](https://gitea.com) is a lightweight, self-hosted Git service. It provides the full GitHub-style workflow — repositories, branches, pull requests, webhooks, and a web UI — in a single binary or Docker container that runs comfortably on low-spec hardware.

---

## Deployment (Docker)

```yaml
services:
  gitea:
    image: docker.gitea.com/gitea:latest
    container_name: gitea
    restart: unless-stopped
    ports:
      - "3002:3000"
      - "222:22"  # SSH git access
    volumes:
      - ./gitea:/data
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=sqlite3
```

SQLite is fine for personal use. For team use, swap in PostgreSQL or MySQL.

### Caddy reverse proxy

```
git.yourdomain.com {
    reverse_proxy localhost:3002
}
```

---

## Initial Setup

1. Browse to your Gitea URL — the first-run wizard handles configuration
2. Set the server URL to your public domain
3. Create an admin account
4. Configure SSH access if you want `git@git.yourdomain.com` cloning

---

## Webhooks

Gitea's webhook system is how automated pipelines get triggered on push. Example use case — auto-deploy a MkDocs wiki on every push:

1. Go to repo → **Settings → Webhooks → Add Webhook**
2. Set the payload URL to your webhook endpoint (e.g. `https://notes.yourdomain.com/webhook`)
3. Set content type to `application/json`
4. Select **Push events**

The webhook fires on every `git push`, allowing the receiving server to pull and rebuild automatically. See [MajorWiki Setup & Pipeline](../../05-troubleshooting/majwiki-setup-and-pipeline.md) for a complete example.
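The receiving end usually applies one filter before rebuilding: only act on pushes to the deploy branch. A minimal sketch of that check — the abridged payload below mimics the `ref` field Gitea sends in its push-event JSON, and the exact body is illustrative:

```bash
# Abridged push-event body (real payloads carry many more fields)
payload='{"ref":"refs/heads/main","repository":{"full_name":"user/wiki"}}'

# Rebuild only when the push targets the deploy branch
case "$payload" in
  *'"ref":"refs/heads/main"'*) echo "rebuild" ;;
  *) echo "ignore" ;;
esac
```

A real endpoint would also verify the webhook secret before trusting the payload.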

---

## Migrating from GitHub

Gitea can mirror GitHub repos and import them directly:

```bash
# Clone from GitHub, push to Gitea
git clone --mirror https://github.com/user/repo.git
cd repo.git
git remote set-url origin https://git.yourdomain.com/user/repo.git
git push --mirror
```

Or use the Gitea web UI: **+ → New Migration → GitHub**

---

## Why Not Just Use GitHub?

For public open source, GitHub is fine — the network effects are real. For private infrastructure code, personal projects, and anything you'd rather not hand to Microsoft:

- Full control over your data and access
- No rate limits, no storage quotas on your own hardware
- Webhooks and integrations without paying for GitHub Actions minutes
- Works entirely over Tailscale — no public exposure required

---

## Tags

#gitea #git #self-hosting #docker #linux #alternatives #vcs
03-opensource/alternatives/searxng.md (new file, 88 lines)
# SearXNG — Private Self-Hosted Search

## Problem

Every search query sent to Google, Bing, or DuckDuckGo is logged, profiled, and used to build an advertising model of you. Even "private" search engines are still third-party services with their own data retention policies.

## Solution

[SearXNG](https://github.com/searxng/searxng) is a self-hosted metasearch engine. It queries multiple search engines simultaneously on your behalf — without sending any identifying information — and aggregates the results. The search engines see a request from your server, not from you.

Your queries stay on your infrastructure.

---

## Deployment (Docker)

```yaml
services:
  searxng:
    image: searxng/searxng:latest
    container_name: searxng
    restart: unless-stopped
    ports:
      - "8090:8080"
    volumes:
      - ./searxng:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=https://search.yourdomain.com/
```

SearXNG requires a `settings.yml` in the mounted config directory. Generate one from the default:

```bash
docker run --rm searxng/searxng cat /etc/searxng/settings.yml > ./searxng/settings.yml
```

Key settings to configure in `settings.yml`:

```yaml
server:
  secret_key: "generate-a-random-string-here"
  bind_address: "0.0.0.0"

search:
  safe_search: 0
  default_lang: "en"

engines:
  # Enable/disable specific engines here
```
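The `secret_key` should be a long random value from a proper CSPRNG, not a hand-typed string. One common way to generate it:

```bash
# Generate 32 random bytes as a 64-character hex string for secret_key
openssl rand -hex 32
```

Paste the output into the `secret_key` field (quoted) before first start.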

### Caddy reverse proxy

```
search.yourdomain.com {
    reverse_proxy localhost:8090
}
```

---

## Using SearXNG as an AI Search Backend

SearXNG integrates directly with Open WebUI as a web search provider, giving your local AI access to current web results without any third-party API keys:

**Open WebUI → Settings → Web Search:**

- Enable web search
- Set provider to `searxng`
- Set URL to `http://searxng:8080` (internal Docker network) or your Tailscale/local address

This is how MajorTwin gets current web context — queries go through SearXNG, not Google.

---

## Why Not DuckDuckGo?

DDG is better than Google for privacy, but it's still a centralized third-party service. SearXNG:

- Runs on your own hardware
- Has no account, no cookies, no session tracking
- Lets you choose which upstream engines to use and weight
- Can be kept entirely off the public internet (Tailscale-only)

---

## Tags

#searxng #search #privacy #self-hosting #docker #linux #alternatives
03-opensource/dev-tools/rsync.md (new file, 102 lines)
# rsync — Fast, Resumable File Transfers

## Problem

Copying large files or directory trees between drives or servers is slow, fragile, and unresumable with `cp`. A dropped connection or a single error means starting over. You also want to skip files that already exist at the destination without re-copying them.

## Solution

`rsync` is a file synchronization tool that only transfers what has changed, preserves metadata, and can resume interrupted transfers. It works locally and over SSH.

### Installation (Fedora)

```bash
sudo dnf install rsync
```

### Basic Local Copy

```bash
rsync -av /source/ /destination/
```

- `-a` — archive mode: preserves permissions, timestamps, symlinks, ownership
- `-v` — verbose: shows what's being transferred

**Trailing slash on source matters:**

- `/source/` — copy the *contents* of source into destination
- `/source` — copy the source *directory itself* into destination

### Resume an Interrupted Transfer

```bash
rsync -av --partial --progress /source/ /destination/
```

- `--partial` — keeps partially transferred files so they can be resumed
- `--progress` — shows per-file progress and speed

### Skip Already-Transferred Files

```bash
rsync -av --ignore-existing /source/ /destination/
```

Useful when restarting a migration — skips anything already at the destination regardless of timestamp comparison.

### Dry Run First

Always preview what rsync will do before committing:

```bash
rsync -av --dry-run /source/ /destination/
```

No files are moved. Output shows exactly what would happen.

### Transfer Over SSH

```bash
rsync -av -e ssh /source/ user@remotehost:/destination/
```

Or with a non-standard port:

```bash
rsync -av -e "ssh -p 2222" /source/ user@remotehost:/destination/
```

### Exclude Patterns

```bash
rsync -av --exclude='*.tmp' --exclude='.Trash*' /source/ /destination/
```

### Real-World Use

Migrating ~286 files from `/majorRAID` to `/majorstorage` during a RAID dissolution project:

```bash
rsync -av --partial --progress --ignore-existing \
  /majorRAID/ /majorstorage/ \
  2>&1 | tee /root/raid_migrate.log
```

Run inside a `tmux` or `screen` session so it survives SSH disconnects:

```bash
tmux new-session -d -s rsync-migrate \
  "rsync -av --partial --progress /majorRAID/ /majorstorage/ | tee /root/raid_migrate.log"
```

### Check Progress on a Running Transfer

```bash
tail -f /root/raid_migrate.log
```

---

## Tags

#rsync #linux #storage #file-transfer #sysadmin #dev-tools
03-opensource/dev-tools/screen.md (new file, 76 lines)
# screen — Simple Persistent Terminal Sessions

## Problem

Same problem as tmux: SSH sessions die, jobs get killed, long-running tasks need to survive disconnects. screen is the older, simpler alternative to tmux — universally available and gets the job done with minimal setup.

## Solution

`screen` creates detachable terminal sessions. It's installed by default on many systems, making it useful when tmux isn't available.

### Installation (Fedora)

```bash
sudo dnf install screen
```

### Core Workflow

```bash
# Start a named session
screen -S mysession

# Detach (keeps running)
Ctrl+a, d

# List sessions
screen -list

# Reattach
screen -r mysession

# If session shows as "Attached" (stuck)
screen -d -r mysession
```

### Start a Background Job Directly

```bash
screen -dmS mysession bash -c "long-running-command 2>&1 | tee /root/output.log"
```

- `-d` — start detached
- `-m` — create new session even if already inside screen
- `-S` — name the session

### Capture Current Output Without Attaching

```bash
screen -S mysession -X hardcopy /tmp/screen_output.txt
cat /tmp/screen_output.txt
```

### Send a Command to a Running Session

```bash
screen -S mysession -X stuff "tail -f /root/output.log\n"
```

---

## screen vs tmux

| Feature | screen | tmux |
|---|---|---|
| Availability | Installed by default on most systems | Usually needs installing |
| Split panes | Basic (Ctrl+a, S) | Better (Ctrl+b, ") |
| Scripting | Limited | More capable |
| Config complexity | Simple | More options |

Use screen when it's already there or for quick throwaway sessions. Use tmux for anything more complex. See [tmux](tmux.md).

---

## Tags

#screen #terminal #linux #ssh #productivity #dev-tools
03-opensource/dev-tools/tmux.md (new file, 93 lines)
# tmux — Persistent Terminal Sessions

## Problem

SSH sessions die when your connection drops, your laptop closes, or you walk away. Long-running jobs — storage migrations, file scans, downloads — get killed mid-run. You need a way to detach from a session, come back later, and pick up exactly where you left off.

## Solution

`tmux` is a terminal multiplexer. It runs sessions that persist independently of your SSH connection. You can detach, disconnect, reconnect from a different machine, and reattach to find everything still running.

### Installation (Fedora)

```bash
sudo dnf install tmux
```

### Core Workflow

```bash
# Start a named session
tmux new-session -s mysession

# Detach from a session (keeps it running)
Ctrl+b, d

# List running sessions
tmux ls

# Reattach to a session
tmux attach -t mysession

# Kill a session when done
tmux kill-session -t mysession
```

### Start a Background Job Directly

Skip the interactive session entirely — start a job in a new detached session in one command:

```bash
tmux new-session -d -s rmlint2 "rmlint /majorstorage// /mnt/usb// /majorRAID 2>&1 | tee /majorRAID/rmlint_scan2.log"
```

The job runs immediately in the background. Attach later to check progress:

```bash
tmux attach -t rmlint2
```

### Capture Output Without Attaching

Read the current state of a session without interrupting it:

```bash
tmux capture-pane -t rmlint2 -p
```

### Split Panes

Monitor multiple things in one terminal window:

```bash
# Horizontal split (top/bottom)
Ctrl+b, "

# Vertical split (left/right)
Ctrl+b, %

# Switch between panes
Ctrl+b, arrow keys
```

### Real-World Use

On **majorhome**, all long-running storage operations run inside named tmux sessions so they survive SSH disconnects:

```bash
tmux new-session -d -s rmlint2 "rmlint ..."        # dedup scan
tmux new-session -d -s rsync-migrate "rsync ..."   # file migration
tmux ls                                            # check what's running
```

---

## tmux vs screen

Both work. tmux has better split-pane support and scripting. screen is simpler and more universally installed. I use both — tmux for new jobs, screen for legacy ones. See the [screen](screen.md) article for reference.

---

## Tags

#tmux #terminal #linux #ssh #productivity #dev-tools
03-opensource/index.md (new file, 22 lines)
# 📂 Open Source & Alternatives

A curated collection of my favorite open-source tools and privacy-respecting alternatives to mainstream software.

## 🔄 Alternatives

- [SearXNG: Private Self-Hosted Search](alternatives/searxng.md)
- [FreshRSS: Self-Hosted RSS Reader](alternatives/freshrss.md)
- [Gitea: Self-Hosted Git](alternatives/gitea.md)

## 🚀 Productivity

- [rmlint: Duplicate File Scanning](productivity/rmlint-duplicate-scanning.md)

## 🛠️ Development Tools

- [tmux: Persistent Terminal Sessions](dev-tools/tmux.md)
- [screen: Simple Persistent Sessions](dev-tools/screen.md)
- [rsync: Fast, Resumable File Transfers](dev-tools/rsync.md)

## 🎨 Media & Creative

- [yt-dlp: Video Downloading](media-creative/yt-dlp.md)

## 🔐 Privacy & Security

- [Vaultwarden: Self-Hosted Password Manager](privacy-security/vaultwarden.md)
03-opensource/media-creative/yt-dlp.md (new file, 131 lines)
# yt-dlp — Video Downloading
|
||||||
|
|
||||||
|
## What It Is
|
||||||
|
|
||||||
|
`yt-dlp` is a feature-rich command-line video downloader, forked from youtube-dl with active maintenance and significantly better performance. It supports YouTube, Twitch, and hundreds of other sites.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Installation
|
||||||
|
|
||||||
|
### Fedora
|
||||||
|
```bash
|
||||||
|
sudo dnf install yt-dlp
|
||||||
|
# or latest via pip:
|
||||||
|
sudo pip install yt-dlp --break-system-packages
|
||||||
|
```
|
||||||
|
|
||||||
|
### Update
|
||||||
|
```bash
|
||||||
|
sudo pip install -U yt-dlp --break-system-packages
|
||||||
|
# or if installed as standalone binary:
|
||||||
|
yt-dlp -U
|
||||||
|
```
|
||||||
|
|
||||||
|
Keep it current — YouTube pushes extractor changes frequently and old versions break.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Basic Usage
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Download a single video (best quality)
|
||||||
|
yt-dlp https://www.youtube.com/watch?v=VIDEO_ID
|
||||||
|
|
||||||
|
# Download to a specific directory with title as filename
|
||||||
|
yt-dlp -o "/path/to/output/%(title)s.%(ext)s" URL
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Plex-Optimized Download
|
||||||
|
|
||||||
|
For Plex direct play, you want H.264 video in an MP4 container with embedded subtitles:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
yt-dlp -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/bestvideo+bestaudio' \
|
||||||
|
--merge-output-format mp4 \
|
||||||
|
-o "/plex/plex/%(title)s.%(ext)s" \
|
||||||
|
--write-auto-subs --embed-subs \
|
||||||
|
URL
|
||||||
|
```
|
||||||
|
|
||||||
|
- `-f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/...'` — prefer MP4 video + M4A audio; fall back to best available
|
||||||
|
- `--merge-output-format mp4` — merge streams into MP4 container (requires ffmpeg)
|
||||||
|
- `--write-auto-subs --embed-subs` — download auto-generated subtitles and bake them in
---

## Playlists and Channels

```bash
# Download a full playlist
yt-dlp -o "%(playlist_index)s - %(title)s.%(ext)s" PLAYLIST_URL

# Download only videos not already present
yt-dlp --download-archive archive.txt PLAYLIST_URL
```

`--download-archive` maintains a file of completed video IDs — re-running the command skips already-downloaded videos automatically.
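The archive file itself is plain text, one `extractor video-id` pair per line, so you can pre-seed it to skip videos you already have (the IDs below are only examples):

```bash
# Pre-seed an archive with two example video IDs
cat > archive.txt << 'EOF'
youtube dQw4w9WgXcQ
youtube 9bZkp7q19f0
EOF

# A later run skips anything listed:
#   yt-dlp --download-archive archive.txt PLAYLIST_URL
wc -l < archive.txt   # prints 2
```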
---

## Format Selection

```bash
# List all available formats for a video
yt-dlp --list-formats URL

# Download best video + best audio, merge to mp4
yt-dlp -f 'bestvideo+bestaudio' --merge-output-format mp4 URL

# Download audio only (MP3)
yt-dlp -x --audio-format mp3 URL
```
---

## Config File

Persist your preferred flags so you don't have to repeat them on every command:

```bash
mkdir -p ~/.config/yt-dlp

cat > ~/.config/yt-dlp/config << 'EOF'
-f bestvideo[ext=mp4]+bestaudio[ext=m4a]/bestvideo+bestaudio
--merge-output-format mp4
--write-auto-subs
--embed-subs
--remote-components ejs:github
EOF
```

After this, a bare `yt-dlp URL` uses all your preferred settings automatically.
---

## Running Long Downloads in the Background

For large downloads or playlists, run inside `screen` or `tmux` so they survive SSH disconnects:

```bash
screen -dmS yt-download bash -c \
  "yt-dlp -o '/plex/plex/%(title)s.%(ext)s' PLAYLIST_URL 2>&1 | tee ~/yt-download.log"

# Check progress
screen -r yt-download
# or
tail -f ~/yt-download.log
```
---

## Troubleshooting

For YouTube JS challenge errors, missing formats, and n-challenge failures on Fedora — see [yt-dlp YouTube JS Challenge Fix](../../05-troubleshooting/yt-dlp-fedora-js-challenge.md).

---

## Tags

#yt-dlp #youtube #video #plex #linux #media #dev-tools
95
03-opensource/privacy-security/vaultwarden.md
Normal file
@@ -0,0 +1,95 @@
# Vaultwarden — Self-Hosted Password Manager

## Problem

Password managers are a necessity, but handing your credentials to a third-party cloud service is a trust problem. Bitwarden is open source and privacy-respecting, but if you're already running a homelab, there's no reason to depend on their servers.

## Solution

[Vaultwarden](https://github.com/dani-garcia/vaultwarden) is an unofficial, lightweight Bitwarden-compatible server written in Rust. It exposes the same API that all official Bitwarden clients speak — desktop apps, browser extensions, mobile apps — so you get the full Bitwarden UX pointed at your own hardware.

Your passwords never leave your network.

---

## Deployment (Docker + Caddy)

### docker-compose.yml

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      - DOMAIN=https://vault.yourdomain.com
      - SIGNUPS_ALLOWED=true  # set to false after creating your account
    volumes:
      - ./vw-data:/data
    ports:
      - "8080:80"
```

Start it:

```bash
sudo docker compose up -d
```

### Caddy reverse proxy

```
vault.yourdomain.com {
    reverse_proxy localhost:8080
}
```

Caddy handles TLS automatically. No extra cert config needed.

---

## Initial Setup

1. Browse to `https://vault.yourdomain.com` and create your account
2. Set `SIGNUPS_ALLOWED=false` in the compose file and restart the container
3. Install any official Bitwarden client (browser extension, desktop, mobile)
4. In the client, set the **Server URL** to `https://vault.yourdomain.com` before logging in

That's it. The client has no idea it's not talking to Bitwarden's servers.

---

## Access Model

On MajorInfrastructure, Vaultwarden runs on **majorlab** and is accessible:

- **Internally** — via Caddy on the local network
- **Remotely** — via Tailscale; the vault is reachable from any device on the tailnet without exposing it to the public internet

This means the Caddy vhost does not need to be publicly routable. You can choose to expose it publicly (Let's Encrypt works fine) or keep it Tailscale-only.

---

## Backup

Vaultwarden stores everything in a single SQLite database at `./vw-data/db.sqlite3`. Back it up like any file:

```bash
# Simple copy (stop the container first for consistency, or use sqlite's backup mode)
sqlite3 /path/to/vw-data/db.sqlite3 ".backup '/path/to/backup/vw-backup-$(date +%F).sqlite3'"
```

Or include the `vw-data/` directory in your regular rsync backup run.
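A simple retention policy pairs well with the dated backups above. This sketch runs against a throwaway directory; in practice, point `BACKUP_DIR` at your real backup path:

```bash
# Demo directory standing in for the real backup location
BACKUP_DIR=$(mktemp -d)

touch -d '20 days ago' "$BACKUP_DIR/vw-backup-2026-01-01.sqlite3"  # stale copy
touch "$BACKUP_DIR/vw-backup-$(date +%F).sqlite3"                  # today's copy

# Keep the last 14 days of dated backups, delete anything older
find "$BACKUP_DIR" -name 'vw-backup-*.sqlite3' -mtime +14 -delete

ls "$BACKUP_DIR"   # only today's backup remains
```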

---

## Why Not Bitwarden (Official)?

The official Bitwarden server is also open source but requires significantly more resources (multiple services, SQL Server). Vaultwarden runs in a single container on minimal RAM and handles everything a personal or family vault needs.

---

## Tags

#vaultwarden #bitwarden #passwords #privacy #self-hosting #docker #linux
58
03-opensource/productivity/rmlint-duplicate-scanning.md
Normal file
@@ -0,0 +1,58 @@
# rmlint — Extreme Duplicate File Scanning

## Problem

Over time, backups and media collections can accumulate massive amounts of duplicate data. Traditional duplicate finders are often slow and limited in how they handle results. On MajorRAID, I identified **~4.0 TB (113,584 files)** of duplicate data across three different storage points.

## Solution

`rmlint` is an extremely fast tool for finding (and optionally removing) duplicates. It is significantly faster than `fdupes` or `rdfind` because it uses a multi-stage approach to avoid unnecessary hashing.

### 1. Installation (Fedora)

```bash
sudo dnf install rmlint
```

### 2. Scanning Multiple Directories

To scan for duplicates across multiple mount points and compare them:

```bash
rmlint /majorstorage /majorRAID /mnt/usb
```

This will generate a script named `rmlint.sh` and a summary of the findings.
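As a sanity check of what content-based matching means (filenames are irrelevant, only bytes count), coreutils alone can group identical files by checksum. Crude, but fine for a small tree:

```bash
# Throwaway demo: two identical files, one different
demo=$(mktemp -d)
echo "same content" > "$demo/a.txt"
echo "same content" > "$demo/b.txt"
echo "different"    > "$demo/c.txt"

# Hash every file; any hash printed by `uniq -d` marks a duplicate set
md5sum "$demo"/*.txt | awk '{print $1}' | sort | uniq -d
```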

### 3. Reviewing Results

**DO NOT** run the generated script without reviewing it first. You can use the JSON summary to see which paths contain the most duplicates:

```bash
# View the summary
cat rmlint.json | jq .
```

### 4. Advanced Usage: Hidden Files and Hard Links

rmlint matches files by content hash, so duplicates with different filenames are found by default. To widen the scan to hidden files and hard-linked files:

```bash
rmlint --hidden --hard-links /path/to/search
```

### 5. Repurposing Storage

After scanning and clearing duplicates, you can reclaim significant space. In my case, this was the first step in repurposing a 12TB USB drive as a **SnapRAID parity drive**.

---

## Maintenance

Run a scan monthly or before any major storage consolidation project.

---

## Tags

#rmlint #linux #storage #cleanup #duplicates
@@ -1,7 +1,7 @@
# 🎙️ Streaming & Podcasting

Guides for live streaming and podcast production, with a focus on OBS Studio.

## OBS Studio

- [OBS Studio Setup & Encoding](obs/obs-studio-setup-encoding.md)
47
05-troubleshooting/gemini-cli-manual-update.md
Normal file
@@ -0,0 +1,47 @@
# 🛠️ Gemini CLI: Manual Update Guide

If the automatic update fails or you need to force a specific version of the Gemini CLI, use these steps.

## 🔴 Symptom: Automatic Update Failed

You may see an error message like:

`✕ Automatic update failed. Please try updating manually`

## 🟢 Manual Update Procedure

### 1. Verify Current Version

Check the version currently installed on your system:

```bash
gemini --version
```

### 2. Check Latest Version

Query the npm registry for the latest available version:

```bash
npm show @google/gemini-cli version
```
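To decide whether step 3 is needed at all, you can compare the two version strings with `sort -V`. This helper is not part of the CLI, just a convenience:

```bash
# Succeeds (exit 0) when $2 is strictly newer than $1
update_available() {
    installed=$1 latest=$2
    [ "$installed" != "$latest" ] &&
    [ "$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n1)" = "$latest" ]
}

update_available "0.4.1" "0.5.0" && echo "update available"
```

Feed it the outputs of steps 1 and 2, e.g. `update_available "$(gemini --version)" "$(npm show @google/gemini-cli version)"`.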

### 3. Perform Manual Update

Use `npm` with `sudo` to update the global package:

```bash
sudo npm install -g @google/gemini-cli@latest
```

### 4. Confirm Update

Verify that the new version is active:

```bash
gemini --version
```

## 🛠️ Troubleshooting Update Failures

### Permissions Issues

If you encounter `EACCES` errors without `sudo`, ensure your user has write access to the global npm prefix, or use `sudo` as shown above.

### Registry Connectivity

If `npm` cannot reach the registry, check your internet connection and any local firewall/proxy settings.

### Cache Issues

If the version doesn't update, try clearing the npm cache:

```bash
npm cache clean --force
```
58
05-troubleshooting/gpu-display/qwen-14b-oom-3080ti.md
Normal file
@@ -0,0 +1,58 @@
# Qwen2.5-14B OOM on RTX 3080 Ti (12GB)

## Problem

When attempting to run or fine-tune **Qwen2.5-14B** on an NVIDIA RTX 3080 Ti with 12GB of VRAM, the process fails with an Out of Memory (OOM) error:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate X GiB (GPU 0; 12.00 GiB total capacity; Y GiB already allocated; Z GiB free; ...)
```

The 12GB VRAM limit is hit during the initial model load or immediately upon starting the first training step.

## Root Causes

1. **Model Size:** A 14B-parameter model in FP16/BF16 requires ~28GB of VRAM just for the weights.
2. **Context Length:** High context lengths (e.g., 4096+) significantly increase VRAM usage during training.
3. **Training Overhead:** Even with QLoRA (4-bit quantization), the overhead of gradients, optimizer states, and activations can exceed 12GB for a 14B model.
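The weight-only estimate is simple arithmetic: parameters (in billions) times bits per parameter, divided by 8, gives decimal GB. A quick helper makes the 12GB ceiling obvious:

```bash
# Weight-only VRAM in decimal GB: params_in_billions * bits / 8
weight_gb() {
    awk -v p="$1" -v bits="$2" 'BEGIN { printf "%.1f\n", p * bits / 8 }'
}

weight_gb 14 16   # FP16 14B:  28.0 GB, far beyond 12GB before any overhead
weight_gb 14 4    # 4-bit 14B:  7.0 GB for weights, but training overhead still OOMs
weight_gb 7 4     # 4-bit 7B:   3.5 GB, comfortable headroom
```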

---

## Solutions

### 1. Pivot to a 7B Model (Recommended)

For a 12GB GPU, a 7B-parameter model (like **Qwen2.5-7B-Instruct**) is the sweet spot. It provides excellent performance while leaving enough VRAM for high context lengths and larger batch sizes.

- **VRAM Usage (7B QLoRA):** ~6-8GB
- **Pros:** Stable, fast, supports long context.
- **Cons:** Slightly lower reasoning capability than 14B.

### 2. Aggressive Quantization

If you MUST run 14B, use 4-bit quantization (GGUF or EXL2) for inference only. Training 14B on 12GB is not reliably possible even with extreme offloading.

```bash
# Example Ollama run (uses 4-bit quantization by default)
ollama run qwen2.5:14b
```

### 3. Training Optimizations (if attempting 14B)

If you have no choice but to try 14B training:

- Set `max_seq_length` to 512 or 1024.
- Use `Unsloth` (it is highly memory-efficient).
- Enable `gradient_checkpointing`.
- Set `per_device_train_batch_size = 1`.

---

## Maintenance

Keep your NVIDIA drivers and CUDA toolkit updated. On Windows (MajorRig), ensure WSL2 has sufficient memory allocation in `.wslconfig`.

---

## Tags

#gpu #cuda #oom #qwen #majortwin #llm #fine-tuning
@@ -119,3 +119,20 @@ The webhook runs as a systemd service so it survives reboots:
systemctl status majwiki-webhook
systemctl restart majwiki-webhook
```

---

*Updated 2026-03-13: Obsidian Git plugin dropped. See canonical workflow below.*

## Canonical Publishing Workflow

The Obsidian Git plugin was evaluated but dropped — too convoluted for a simple push. Manual git from the terminal is the canonical workflow.

```bash
cd ~/Documents/MajorVault
git add 20-Projects/MajorTwin/08-Wiki/
git commit -m "wiki: describe your changes"
git push
```

From there: Gitea receives the push → fires webhook → majorlab pulls → MkDocs rebuilds → `notes.majorshouse.com` updates.
186
05-troubleshooting/networking/fail2ban-self-ban-apache-outage.md
Normal file
@@ -0,0 +1,186 @@
# Apache Outage: Fail2ban Self-Ban + Missing iptables Rules

## 🛑 Problem

A web server running Apache2 becomes completely unreachable (`ERR_CONNECTION_TIMED_OUT`) despite Apache running normally. SSH access via Tailscale is unaffected.

---

## 🔍 Diagnosis

### Step 1 — Confirm Apache is running

```bash
sudo systemctl status apache2
```

If Apache is `active (running)`, the problem is at the firewall layer, not the application.

---

### Step 2 — Test the public IP directly

```bash
curl -I --max-time 5 http://<PUBLIC_IP>
```

A **timeout** means traffic is being dropped by the firewall. A **connection refused** means Apache is down.

---

### Step 3 — Check the iptables INPUT chain

```bash
sudo iptables -L INPUT -n -v
```

Look for ACCEPT rules on ports 80 and 443. If they're missing and the chain policy is `DROP`, HTTP/HTTPS traffic is being silently dropped.

**Example of broken state:**

```
Chain INPUT (policy DROP)
ACCEPT tcp -- lo * ...                      # loopback only
ACCEPT tcp -- tailscale0 * ... tcp dpt:22
# no rules for port 80 or 443
```

---

### Step 4 — Check the nftables ruleset for Fail2ban

```bash
sudo nft list tables
```

Look for `table inet f2b-table` — this is Fail2ban's nftables table. It operates at **priority `filter - 1`**, meaning it is evaluated *before* the main iptables INPUT chain.

```bash
sudo nft list ruleset | grep -A 10 'f2b-table'
```

Fail2ban rejects banned IPs with rules like:

```
tcp dport { 80, 443 } ip saddr @addr-set-wordpress-hard reject with icmp port-unreachable
```

A banned admin IP will be rejected here regardless of any ACCEPT rules downstream.

---

### Step 5 — Check if your IP is banned

```bash
for jail in $(sudo fail2ban-client status | grep "Jail list" | sed 's/.*://;s/,/ /g'); do
  echo "=== $jail ==="; sudo fail2ban-client get $jail banip | tr ',' '\n' | grep <YOUR_IP>
done
```
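The `grep | sed` pipeline above just turns the `Jail list:` line of `fail2ban-client status` into a whitespace-separated list. You can verify it against canned output (the jail names here are made up):

```bash
# Sample `fail2ban-client status` output (jail names are examples)
status_output='Status
|- Number of jail:      3
`- Jail list:   sshd, wordpress-hard, apache-auth'

jails=$(printf '%s\n' "$status_output" | grep "Jail list" | sed 's/.*://;s/,/ /g')
echo $jails   # sshd wordpress-hard apache-auth
```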

---

## ✅ Solution

### Fix 1 — Add missing iptables ACCEPT rules for HTTP/HTTPS

If ports 80/443 are absent from the INPUT chain:

```bash
sudo iptables -I INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -i eth0 -p tcp --dport 443 -j ACCEPT
```

Persist the rules:

```bash
sudo netfilter-persistent save
```

If `netfilter-persistent` is not installed:

```bash
sudo apt install -y iptables-persistent
sudo netfilter-persistent save
```

---

### Fix 2 — Unban your IP from all Fail2ban jails

```bash
for jail in $(sudo fail2ban-client status | grep "Jail list" | sed 's/.*://;s/,/ /g'); do
  sudo fail2ban-client set $jail unbanip <YOUR_IP> 2>/dev/null && echo "Unbanned from $jail"
done
```

---

### Fix 3 — Add your IP to Fail2ban's ignore list

Edit `/etc/fail2ban/jail.local`:

```bash
sudo nano /etc/fail2ban/jail.local
```

Add or update the `[DEFAULT]` section:

```ini
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 <YOUR_IP>
```

Restart Fail2ban:

```bash
sudo systemctl restart fail2ban
```

---

## 🔁 Why This Happens

| Issue | Root Cause |
|---|---|
| Missing port 80/443 rules | iptables INPUT chain left incomplete after a manual firewall rework (e.g., SSH lockdown) |
| Still blocked after adding iptables rules | Fail2ban uses a separate nftables table at higher priority — iptables ACCEPT rules are never reached for banned IPs |
| Admin IP gets banned | Automated WordPress/Apache probes trigger Fail2ban jails against the admin's own IP |

---

## ⚠️ Key Architecture Note

On servers running both iptables and Fail2ban, the evaluation order is:

1. **`inet f2b-table`** (nftables, priority `filter - 1`) — Fail2ban ban sets; evaluated first
2. **`ip filter` INPUT chain** (iptables/nftables, policy DROP) — explicit ACCEPT rules
3. **UFW chains** — IP-specific rules; evaluated last

A banned IP is stopped at step 1 and never reaches the ACCEPT rules in step 2. Always check Fail2ban *after* confirming iptables looks correct.
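For reference, the table Fail2ban installs looks roughly like this (names match its default nftables banaction; the chain body is abbreviated and the set name is an example), which is where the early priority comes from:

```
table inet f2b-table {
    set addr-set-wordpress-hard {
        type ipv4_addr
    }
    chain f2b-chain {
        # "filter - 1" means priority -1: evaluated before the normal filter hook (priority 0)
        type filter hook input priority filter - 1; policy accept;
        tcp dport { 80, 443 } ip saddr @addr-set-wordpress-hard reject with icmp port-unreachable
    }
}
```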

---

## 🔎 Quick Diagnostic Commands

```bash
# Check Apache
sudo systemctl status apache2

# Test public connectivity
curl -I --max-time 5 http://<PUBLIC_IP>

# Check iptables INPUT chain
sudo iptables -L INPUT -n -v

# List nftables tables (look for inet f2b-table)
sudo nft list tables

# Check Fail2ban jail status
sudo fail2ban-client status

# Check a specific jail's banned IPs
sudo fail2ban-client status wordpress-hard

# Unban an IP from all jails
for jail in $(sudo fail2ban-client status | grep "Jail list" | sed 's/.*://;s/,/ /g'); do
  sudo fail2ban-client set $jail unbanip <YOUR_IP> 2>/dev/null && echo "Unbanned from $jail"
done
```