docs: sync local vault content to remote

Update index pages, troubleshooting articles, README, and deploy status
to match current local vault state.
2026-03-12 18:03:52 -04:00
parent ca81761cb3
commit f35d1abdc6
8 changed files with 171 additions and 234 deletions


@@ -1,129 +1,22 @@
---
title: ISP SNI Filtering Blocking Caddy Reverse Proxy
domain: troubleshooting
category: networking
tags:
- caddy
- tls
- sni
- isp
- google-fiber
- reverse-proxy
- troubleshooting
status: published
created: '2026-03-11'
updated: '2026-03-11'
---
# ISP SNI Filtering Blocking Caddy Reverse Proxy
## 🛑 Problem
When deploying the MajorWiki at `wiki.majorshouse.com`, the site was unreachable over HTTPS. Browsers reported a `TLS_CONNECTION_REFUSED` error.
Some ISPs — including Google Fiber — silently block TLS handshakes for certain hostnames at the network level. The connection reaches your server, TCP completes, but the TLS handshake never finishes. The symptom looks identical to a misconfigured Caddy setup or a missing certificate, which makes it a frustrating thing to debug.
## 🔍 Diagnosis
1. **Direct IP Check:** Accessing the server via IP on port 8092 worked fine.
2. **Tailscale Check:** Accessing via the Tailscale magic DNS worked fine.
3. **SNI Analysis:** Using `openssl s_client -connect <IP>:443 -servername wiki.majorshouse.com` resulted in an immediate reset by peer.
4. **Root Cause:** Google Fiber (the local ISP) appears to be performing SNI-based filtering on hostnames containing the string "wiki".
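Steps 3 and 4 amount to probing the same IP with two different SNI values and comparing outcomes. That comparison can be scripted; a sketch, where the IP, hostnames, and the `check_sni` helper name are all placeholders:

```shell
# Probe one IP with a given SNI value. A reset or timeout for only one
# servername, on the same IP and port, is the signature of SNI-based filtering.
check_sni() {
  # $1 = server IP, $2 = hostname to present in the ClientHello SNI field
  if openssl s_client -connect "$1:443" -servername "$2" </dev/null >/dev/null 2>&1; then
    echo "$2: handshake OK"
  else
    echo "$2: handshake failed"
  fi
}

# Example (placeholder IP):
# check_sni 203.0.113.10 wiki.majorshouse.com    # suspected blocked name
# check_sni 203.0.113.10 notes.majorshouse.com   # control name
```

Run from an external network, not from the server itself, so the traffic actually crosses the ISP path.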
## ✅ Solution
The domain was changed from `wiki.majorshouse.com` to `notes.majorshouse.com`.
### Caddy Configuration Update
```caddy
notes.majorshouse.com {
	reverse_proxy :8092
}
```
## What Happened
Deployed a new Caddy vhost for `wiki.majorshouse.com` on a Google Fiber residential connection. Everything on the server was correct:
- Let's Encrypt cert provisioned successfully
- Caddy validated clean with `caddy validate`
- `curl --resolve wiki.majorshouse.com:443:127.0.0.1 https://wiki.majorshouse.com` returned 200 from loopback
- iptables had ACCEPT rules for ports 80 and 443
- All other Caddy vhosts on the same IP and port worked fine externally
But from any external host, `curl` timed out with no response. `ss -tn` showed SYN-RECV connections piling up on port 443: handshakes were arriving but connections never progressed to ESTAB, so the TLS handshake never even started.
## The Debugging Sequence
**Step 1: Ruled out Caddy config issues**
```bash
caddy validate --config /etc/caddy/Caddyfile
curl --resolve wiki.majorshouse.com:443:127.0.0.1 https://wiki.majorshouse.com
```
Both clean. Loopback returned 200.
**Step 2: Ruled out certificate issues**
```bash
ls /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wiki.majorshouse.com/
openssl x509 -in wiki.majorshouse.com.crt -noout -text | grep -E "Subject:|Not Before|Not After"
```
Valid cert, correct subject, not expired.
**Step 3: Ruled out firewall**
```bash
iptables -L INPUT -n -v | grep -E "80|443"
ss -tlnp | grep ':443'
```
Ports open, Caddy listening on `*:443`.
**Step 4: Ruled out hairpin NAT**
Testing `curl https://wiki.majorshouse.com` from the server itself returned "No route to host" — the server can't reach its own public IP. This is normal for residential connections without NAT loopback. It's not the problem.
**Step 5: Confirmed external connectivity on port 443**
```bash
# From an external server (majormail)
curl -sk -o /dev/null -w "%{http_code}" https://git.majorshouse.com # 200
curl -sk -o /dev/null -w "%{http_code}" https://wiki.majorshouse.com # 000
```
Same IP, same port, same Caddy process. `git` works, `wiki` doesn't.
**Step 6: Tested a different subdomain**
Added `notes.majorshouse.com` as a new Caddyfile entry pointing to the same upstream. Cert provisioned via HTTP-01 challenge successfully (proving port 80 is reachable). Then:
```bash
curl -sk -o /dev/null -w "%{http_code}" https://notes.majorshouse.com # 200
curl -sk -o /dev/null -w "%{http_code}" https://wiki.majorshouse.com # 000
```
`notes` worked immediately. `wiki` still timed out.
**Conclusion:** Google Fiber is performing SNI-based filtering and blocking TLS connections where the ClientHello contains `wiki.majorshouse.com` as the server name.
## The Fix
Rename the subdomain. Use anything that doesn't trigger the filter. `notes.majorshouse.com` works fine.
```bash
# Remove the blocked entry
sed -i '/^wiki\.majorshouse\.com/,/^}/d' /etc/caddy/Caddyfile
systemctl reload caddy
```
Update `mkdocs.yml` (or whichever service config references the old domain), add a DNS record for the new subdomain, and done.
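The `sed` range delete above is destructive, so it is worth previewing exactly which lines the range expression matches before running it with `-i`. A sketch using a throwaway scratch file rather than the live Caddyfile:

```shell
# Build a scratch Caddyfile with two vhosts to test the range expression on.
cat > /tmp/Caddyfile.test <<'EOF'
wiki.majorshouse.com {
	reverse_proxy :8092
}
git.majorshouse.com {
	reverse_proxy :3000
}
EOF

# -n with p prints only what the d command would delete: the wiki vhost
# block (from its opening line through the first closing brace) and nothing else.
sed -n '/^wiki\.majorshouse\.com/,/^}/p' /tmp/Caddyfile.test
```

If the preview shows more (or less) than the intended block, adjust the expression before touching `/etc/caddy/Caddyfile`.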
## How to Diagnose This Yourself
If your Caddy vhost works on loopback but times out externally:
1. Confirm other vhosts on the same IP and port work externally
2. Test the specific domain from multiple external networks (different ISP, mobile data)
3. Add a second vhost with a different subdomain pointing to the same upstream
4. If the new subdomain works and the original doesn't, the hostname is being filtered
```bash
# Quick external test — run from a server outside your network
curl -sk -o /dev/null -w "%{http_code}" --max-time 10 https://your-domain.com
```
If you get `000` (connection timeout, not a TLS error like `curl: (35)`), the TCP connection isn't completing — pointing to network-level blocking rather than a Caddy or cert issue.
## Gotchas & Notes
- **`curl: (35) TLS error` is different from `000`.** A TLS error means TCP connected but the handshake failed — usually a missing or invalid cert. A `000` timeout means TCP never completed — a network or firewall issue.
- **SYN-RECV in `ss -tn` means TCP is partially open.** If you see SYN-RECV entries for your domain but the connection never moves to ESTAB, something between the client and your TLS stack is dropping the handshake.
- **ISP SNI filtering is uncommon but real.** Residential ISPs sometimes filter on SNI for terms associated with piracy, proxies, or certain categories of content. "Wiki" may trigger a content-type heuristic.
- **Loopback testing isn't enough.** Always test from an external host before declaring a service working. The server can't test its own public IP on most residential connections.
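The `000`-versus-`curl: (35)` distinction from the first gotcha can be wrapped in a small helper for scripted probing; a sketch (the `classify_curl` name is ours, not a curl feature):

```shell
# classify_curl URL — prints one of:
#   no-connection  (code 000: TCP never completed; network/firewall/filtering)
#   tls-failure    (curl exit 35: TCP connected, TLS handshake failed)
#   http <code>    (a response came back)
classify_curl() {
  local url=$1 code rc
  code=$(curl -sk -o /dev/null -w '%{http_code}' --max-time 10 "$url")
  rc=$?
  if [ "$rc" -eq 35 ]; then
    echo "tls-failure"
  elif [ "$code" = "000" ]; then
    echo "no-connection"
  else
    echo "http $code"
  fi
}
```

A `no-connection` result for one vhost alongside `http 200` for a sibling vhost on the same IP is exactly the pattern described in Step 5.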
Once the hostname was changed to one without the "wiki" keyword, the TLS handshake completed successfully.
## See Also
- [[setting-up-caddy-reverse-proxy]]
- [[linux-server-hardening-checklist]]
- [[tailscale-homelab-remote-access]]


@@ -135,3 +135,42 @@ This is a YouTube-side experiment. yt-dlp falls back to other clients automatica
```bash
yt-dlp --version
pip show yt-dlp
```
### Format Not Available: Strict AVC+M4A Selector
The format selector `bestvideo[vcodec^=avc]+bestaudio[ext=m4a]` will hard-fail if YouTube doesn't serve H.264 (AVC) video for a given video:
```
ERROR: [youtube] Requested format is not available. Use --list-formats for a list of available formats
```
This is separate from the n-challenge issue — the format simply doesn't exist for that video (common with newer uploads that are VP9/AV1-only).
**Fix 1 — Relax the selector to mp4 container without enforcing codec:**
```bash
yt-dlp -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/bestvideo+bestaudio' \
--merge-output-format mp4 \
-o "/plex/plex/%(title)s.%(ext)s" \
--write-auto-subs --embed-subs \
https://youtu.be/VIDEO_ID
```
**Fix 2 — Let yt-dlp pick best and re-encode to H.264 via ffmpeg (Plex-safe, slower):**
```bash
yt-dlp -f 'bestvideo+bestaudio' \
--merge-output-format mp4 \
--recode-video mp4 \
-o "/plex/plex/%(title)s.%(ext)s" \
--write-auto-subs --embed-subs \
https://youtu.be/VIDEO_ID
```
Use `--recode-video mp4` when Plex direct play is required and the source stream may be VP9/AV1. Requires ffmpeg.
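To decide whether `--recode-video` is actually needed for a file that is already on disk, `ffprobe` (bundled with ffmpeg) can report the video codec; a sketch with an illustrative `needs_recode` helper and a placeholder path:

```shell
# Print "no" if the first video stream is already H.264 (Plex direct play),
# "yes" otherwise (e.g. vp9 or av1, which would need --recode-video mp4).
needs_recode() {
  local codec
  codec=$(ffprobe -v error -select_streams v:0 \
            -show_entries stream=codec_name \
            -of default=noprint_wrappers=1:nokey=1 "$1")
  if [ "$codec" = "h264" ]; then echo no; else echo yes; fi
}

# Example (placeholder path):
# needs_recode "/plex/plex/Some Video.mp4"
```

This avoids paying the re-encode cost on files that would have direct-played anyway.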
**Inspect available formats first:**
```bash
yt-dlp --list-formats https://youtu.be/VIDEO_ID
```