wiki: add SELinux AVC chart, enriched alerts, new server setup, and pending articles; update indexes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 03:34:33 -04:00
parent 38fe720e63
commit fb2e3f6168
18 changed files with 881 additions and 15 deletions

@@ -0,0 +1,59 @@
# Ansible: Vault Password File Not Found
## Error
```
[WARNING]: Error getting vault password file (default): The vault password file /Users/majorlinux/.ansible/vault_pass was not found
[ERROR]: The vault password file /Users/majorlinux/.ansible/vault_pass was not found
```
## Cause
Ansible is configured to look for a vault password file at `~/.ansible/vault_pass`, but the file does not exist. This is typically set in `ansible.cfg` via the `vault_password_file` directive.
## Solutions
### Option 1: Remove the vault config (if you're not using Vault)
Check your `ansible.cfg` for this line and remove it if Vault is not needed:
```ini
[defaults]
vault_password_file = ~/.ansible/vault_pass
```
### Option 2: Create the vault password file
```bash
echo 'your_vault_password' > ~/.ansible/vault_pass
chmod 600 ~/.ansible/vault_pass
```
> **Security note:** Keep permissions tight (`600`) so only your user can read the file. The actual vault password is stored in Bitwarden under the "Ansible Vault Password" entry.
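If you want to avoid even a brief world-readable window, a slightly more defensive variant of the same step is possible (a sketch; the password value is a placeholder):

```bash
# Create the parent directory and write the file under a restrictive umask,
# so it is never world-readable even momentarily (password is a placeholder).
mkdir -p ~/.ansible
( umask 077 && printf '%s\n' 'your_vault_password' > ~/.ansible/vault_pass )
ls -l ~/.ansible/vault_pass   # should show -rw-------
```

The subshell keeps the `umask` change from leaking into the rest of your session.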
### Option 3: Pass the password at runtime (no file needed)
```bash
ansible-playbook test.yml --ask-vault-pass
```
## Diagnosing the Source of the Config
To find which config file is setting `vault_password_file`, run:
```bash
ansible-config dump --only-changed
```
This shows all non-default config values and their source files. Config is loaded in this order of precedence:
1. `ANSIBLE_CONFIG` environment variable
2. `./ansible.cfg` (current directory)
3. `~/.ansible.cfg`
4. `/etc/ansible/ansible.cfg`
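As a quick manual cross-check of the same precedence chain, you can grep each candidate file directly (a sketch; `ANSIBLE_CONFIG` may be unset, and missing files are simply skipped):

```bash
# Check each config location in precedence order for the directive;
# grep -H prints the file name, -n the line number.
for cfg in "${ANSIBLE_CONFIG:-}" ./ansible.cfg ~/.ansible.cfg /etc/ansible/ansible.cfg; do
  if [ -n "$cfg" ] && [ -f "$cfg" ]; then
    grep -Hn 'vault_password_file' "$cfg" || true   # no match is fine
  fi
done
```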
## Related
- [Ansible Getting Started](../01-linux/shell-scripting/ansible-getting-started.md)
- Vault password is stored in Bitwarden under **"Ansible Vault Password"**
- Ansible playbooks live at `~/MajorAnsible` on MajorAir/MajorMac

@@ -9,6 +9,7 @@ Practical fixes for common Linux, networking, and application problems.
- [Apache Outage: Fail2ban Self-Ban + Missing iptables Rules](networking/fail2ban-self-ban-apache-outage.md)
- [Mail Client Stops Receiving: Fail2ban IMAP Self-Ban](networking/fail2ban-imap-self-ban-mail-client.md)
- [firewalld: Mail Ports Wiped After Reload](networking/firewalld-mail-ports-reset.md)
- [Tailscale SSH: Unexpected Re-Authentication Prompt](networking/tailscale-ssh-reauth-prompt.md)
- [ISP SNI Filtering & Caddy](isp-sni-filtering-caddy.md)
- [yt-dlp YouTube JS Challenge Fix](yt-dlp-fedora-js-challenge.md)

@@ -0,0 +1,66 @@
# Tailscale SSH: Unexpected Re-Authentication Prompt
If a Tailscale SSH connection unexpectedly presents a browser authentication URL mid-session, the first instinct is to check the ACL policy. However, this is often a one-off Tailscale hiccup rather than a misconfiguration.
## Symptoms
- SSH connection to a fleet node displays a Tailscale auth URL:
```
To authenticate, visit: https://login.tailscale.com/a/xxxxxxxx
```
- The prompt appears even though the node worked fine previously
- Other nodes in the fleet connect without prompting
## What Causes It
Tailscale SSH supports two ACL `action` values:
| Action | Behavior |
|---|---|
| `accept` | Trusts Tailscale identity — no additional auth required |
| `check` | Requires periodic browser-based re-authentication |
If `action: "check"` is set, every session (or after token expiry) will prompt for browser auth. However, even with `action: "accept"`, a one-off prompt can appear due to a Tailscale daemon glitch or key refresh event.
## How to Diagnose
### 1. Verify the ACL policy
In the Tailscale admin console (or via `tailscale debug acl`), inspect the SSH rules. For a trusted homelab fleet, the rule should use `accept`:
```json
{
"src": ["autogroup:member"],
"dst": ["autogroup:self"],
"users": ["autogroup:nonroot", "root"],
	"action": "accept"
}
```
If `action` is `check`, that is the root cause — change it to `accept` for trusted source/destination pairs.
### 2. Confirm it was a one-off
If the ACL already shows `accept`, the prompt was transient. Test with:
```bash
ssh <hostname> "echo ok"
```
No auth prompt + `ok` output = resolved. Note that this test is only meaningful if the previous session's auth token has expired, or you test from a different device that hasn't recently authenticated.
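To sweep the whole fleet in one pass, the same check can be scripted; `BatchMode=yes` makes ssh fail instead of prompting, so an auth prompt shows up as a failure (hostnames below are placeholders for your tailnet nodes):

```bash
# Non-interactive SSH sweep: BatchMode=yes disables all prompts, so any node
# that would ask for (re)authentication reports as failed instead of hanging.
for host in node1 node2 node3; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" 'echo ok' >/dev/null 2>&1; then
    echo "$host: ok"
  else
    echo "$host: FAILED (auth prompt or unreachable)"
  fi
done
```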
## Fix
**If ACL shows `check`:** Change to `accept` in the Tailscale admin console under Access Controls. Takes effect immediately — no server changes needed.
**If ACL already shows `accept`:** No action required. The prompt was a one-off Tailscale event (daemon restart, key refresh, etc.). Monitor for recurrence.
## Notes
- ~~Port 2222 on **MajorRig** previously existed as a hard bypass for Tailscale SSH browser auth. This workaround was retired on 2026-03-25 after the Tailscale SSH authentication issue was resolved. The entire fleet now uses port 22 uniformly.~~
- The `autogroup:self` destination means the rule applies when connecting from your own devices to your own devices — appropriate for a personal homelab fleet.
## Related
- [[Network Overview]] — Tailscale fleet inventory and SSH access model
- [[SSH-Aliases]] — Fleet SSH access shortcuts

@@ -48,7 +48,7 @@ The Windows OpenSSH Server is installed as a Windows Feature (`Add-WindowsCapabi
- **This is a Windows-side issue** — WSL2 itself is unaffected. The service must be started and configured from Windows, not from within WSL2.
- **Elevated PowerShell required** — `Start-Service` and `Set-Service` for sshd will return "Access is denied" if run without Administrator privileges.
- **Port 2222 was retired (2026-03-25)** — the bypass port 2222 on MajorRig is no longer in use. The entire fleet now uses port 22 uniformly after the Tailscale SSH auth fix. Only port 22 needs to be verified when troubleshooting sshd.
- **Default shell still works once fixed** — MajorRig's sshd is configured to use `C:\Windows\System32\wsl.exe` as the default shell, dropping SSH sessions directly into WSL2/Bash. This config is preserved across service restarts.
---

@@ -0,0 +1,73 @@
# ClamAV Safe Scheduling on Live Servers
Running `clamscan` unthrottled on a live server will peg CPU until completion. On a small VPS (1 vCPU), a full recursive scan can sustain 70-100% CPU for an hour or more, degrading or taking down hosted services.
## The Problem
A common out-of-the-box ClamAV cron setup looks like this:
```cron
0 1 * * 0 clamscan --infected --recursive / --exclude=/sys
```
This runs at Linux's default scheduling priority (`nice 0`) with normal I/O priority. On a live server it will:
- Monopolize the CPU for the scan duration
- Cause high I/O wait, degrading web serving, databases, and other services
- Trigger monitoring alerts (e.g., Netdata `10min_cpu_usage`)
## The Fix
Throttle the scan with `nice` and `ionice`:
```cron
0 1 * * 0 nice -n 19 ionice -c 3 clamscan --infected --recursive / --exclude=/sys
```
| Flag | Meaning |
|------|---------|
| `nice -n 19` | Lowest CPU scheduling priority (range: -20 to 19) |
| `ionice -c 3` | Idle I/O class — only uses disk when no other process needs it |
The scan will take longer but will not impact server performance.
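The effect of the two flags can be verified on any process; a minimal sketch using `sleep` as a placeholder for the scan:

```bash
# Start a throttled placeholder process, then inspect its priorities.
nice -n 19 ionice -c 3 sleep 2 &
pid=$!
ps -o ni= -p "$pid"    # nice value: 19
ionice -p "$pid"       # I/O class: idle
wait "$pid"
```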
## Applying the Fix
Edit root's crontab:
```bash
crontab -e
```
Or apply non-interactively:
```bash
crontab -l | sed 's|clamscan|nice -n 19 ionice -c 3 clamscan|' | crontab -
```
Verify:
```bash
crontab -l | grep clam
```
## Diagnosing a Runaway Scan
If CPU is already pegged, identify and kill the process:
```bash
ps aux --sort=-%cpu | head -15
# Look for clamscan
kill <PID>
```
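If the scan is doing useful work, an alternative to killing it is to throttle it in place; `renice` and `ionice` both accept a running PID (sketch with a placeholder process standing in for the clamscan PID):

```bash
# Throttle an already-running process instead of killing it.
sleep 3 & pid=$!             # stand-in for a runaway clamscan PID
renice -n 19 -p "$pid"       # drop CPU priority (unprivileged users can only lower it)
ionice -c 3 -p "$pid"        # move its I/O to the idle class
wait "$pid"
```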
## Notes
- `ionice -c 3` (Idle) requires Linux kernel ≥ 2.6.13 and is only enforced by I/O schedulers that support priorities (CFQ historically, BFQ on current kernels); with `mq-deadline` or `none` it is effectively ignored, though `nice -n 19` still applies. Works on most Ubuntu/Debian/Fedora systems.
- On multi-core servers, consider also using `cpulimit` for a hard cap: `cpulimit -l 30 -- clamscan ...`
- Always keep `--exclude=/sys` (and optionally `--exclude=/proc`, `--exclude=/dev`) to avoid scanning virtual filesystems.
## Related
- [ClamAV Documentation](https://docs.clamav.net/)
- [[02-selfhosting/security/linux-server-hardening-checklist|Linux Server Hardening Checklist]]