chore: link vault wiki to Gitea
05-troubleshooting/boot-system/.keep (new empty file)
05-troubleshooting/docker/.keep (new empty file)
05-troubleshooting/gpu-display/.keep (new empty file)

05-troubleshooting/isp-sni-filtering-caddy.md (new file, 129 lines)

---
title: ISP SNI Filtering Blocking Caddy Reverse Proxy
domain: troubleshooting
category: networking
tags:
  - caddy
  - tls
  - sni
  - isp
  - google-fiber
  - reverse-proxy
  - troubleshooting
status: published
created: '2026-03-11'
updated: '2026-03-11'
---

# ISP SNI Filtering Blocking Caddy Reverse Proxy

Some ISPs — including Google Fiber — silently block TLS connections for certain hostnames at the network level. Traffic reaches your server, but the handshake never completes. The symptom looks identical to a misconfigured Caddy setup or a missing certificate, which makes it a frustrating problem to debug.

## What Happened

Deployed a new Caddy vhost for `wiki.majorshouse.com` on a Google Fiber residential connection. Everything on the server was correct:

- Let's Encrypt cert provisioned successfully
- Caddy validated clean with `caddy validate`
- `curl --resolve wiki.majorshouse.com:443:127.0.0.1 https://wiki.majorshouse.com` returned 200 from loopback
- iptables had ACCEPT rules for ports 80 and 443
- All other Caddy vhosts on the same IP and port worked fine externally

But from any external host, `curl` timed out with no response. `ss -tn` showed SYN-RECV connections piling up on port 443: the server was answering SYNs, but the TCP three-way handshake never completed, so the TLS handshake never even started.
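
A quick way to spot this pile-up is to count half-open sockets directly (a sketch; the state-filter syntax is iproute2 `ss`, and `:443` assumes the HTTPS listener):

```shell
# One-shot count of half-open (SYN-RECV) sockets on :443.
# tail -n +2 drops ss's header row so wc -l counts only sockets.
ss -tn state syn-recv '( sport = :443 )' | tail -n +2 | wc -l
```

Wrap it in `watch -n 2 '…'` while an external client retries; a steadily growing count suggests the final ACK of the handshake is being dropped in transit.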

## The Debugging Sequence

**Step 1: Ruled out Caddy config issues**

```bash
caddy validate --config /etc/caddy/Caddyfile
curl --resolve wiki.majorshouse.com:443:127.0.0.1 https://wiki.majorshouse.com
```

Both clean. Loopback returned 200.

**Step 2: Ruled out certificate issues**

```bash
ls /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wiki.majorshouse.com/
openssl x509 -in wiki.majorshouse.com.crt -noout -text | grep -E "Subject:|Not Before|Not After"
```

Valid cert, correct subject, not expired.

**Step 3: Ruled out firewall**

```bash
iptables -L INPUT -n -v | grep -E "80|443"
ss -tlnp | grep ':443'
```

Ports open, Caddy listening on `*:443`.

**Step 4: Ruled out hairpin NAT**

Testing `curl https://wiki.majorshouse.com` from the server itself returned "No route to host" — the server can't reach its own public IP. This is normal for residential connections without NAT loopback. It's not the problem.

**Step 5: Confirmed external connectivity on port 443**

```bash
# From an external server (majormail)
curl -sk -o /dev/null -w "%{http_code}" https://git.majorshouse.com   # 200
curl -sk -o /dev/null -w "%{http_code}" https://wiki.majorshouse.com  # 000
```

Same IP, same port, same Caddy process. `git` works, `wiki` doesn't.
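
To make SNI the only variable, you can also handshake against the same IP and port while changing nothing but the server name in the ClientHello (a sketch to run from an external network; `203.0.113.10` is a placeholder for the server's public IP):

```shell
IP=203.0.113.10   # placeholder: the server's public IP
for name in git.majorshouse.com wiki.majorshouse.com; do
  # -servername sets the SNI value; nothing else about the probe changes.
  if timeout 10 openssl s_client -connect "$IP:443" -servername "$name" \
       </dev/null >/dev/null 2>&1; then
    echo "$name: TLS handshake OK"
  else
    echo "$name: no handshake (blocked or timed out)"
  fi
done
```

If one name handshakes and the other stalls against the identical endpoint, something on the path is keying on SNI.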

**Step 6: Tested a different subdomain**

Added `notes.majorshouse.com` as a new Caddyfile entry pointing to the same upstream. The cert provisioned successfully via HTTP-01 challenge (proving port 80 is reachable). Then:

```bash
curl -sk -o /dev/null -w "%{http_code}" https://notes.majorshouse.com  # 200
curl -sk -o /dev/null -w "%{http_code}" https://wiki.majorshouse.com   # 000
```

`notes` worked immediately. `wiki` still timed out.

**Conclusion:** Google Fiber is performing SNI-based filtering, blocking TLS connections whose ClientHello carries `wiki.majorshouse.com` as the server name.

## The Fix

Rename the subdomain. Use anything that doesn't trigger the filter; `notes.majorshouse.com` works fine.

```bash
# Remove the blocked entry
sed -i '/^wiki\.majorshouse\.com/,/^}/d' /etc/caddy/Caddyfile
systemctl reload caddy
```

Update `mkdocs.yml` (or whichever service config references the domain), add DNS for the new subdomain, and done.
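
The replacement vhost is then just the old block under the new name (a sketch; the upstream address is a placeholder for whatever the wiki actually runs on):

```
notes.majorshouse.com {
    # Same upstream the old wiki.majorshouse.com block pointed at
    reverse_proxy localhost:8080
}
```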

## How to Diagnose This Yourself

If your Caddy vhost works on loopback but times out externally:

1. Confirm other vhosts on the same IP and port work externally
2. Test the specific domain from multiple external networks (different ISP, mobile data)
3. Add a second vhost with a different subdomain pointing to the same upstream
4. If the new subdomain works and the original doesn't, the hostname is being filtered

```bash
# Quick external test — run from a server outside your network
curl -sk -o /dev/null -w "%{http_code}" --max-time 10 https://your-domain.com
```

If you get `000` (a connection timeout, not a TLS error like `curl: (35)`), the TCP connection isn't completing — pointing to network-level blocking rather than a Caddy or cert issue.
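
That exit-code logic can be wrapped in a small helper for scripted checks (a sketch; the mapping assumes curl's documented exit codes, where 28 is a timeout and 35 is an SSL connect error):

```shell
# Classify a probe: pass the %{http_code} curl printed and curl's exit status.
classify() {
  local http_code=$1 curl_exit=$2
  if [ "$http_code" != "000" ]; then
    echo "reachable (HTTP $http_code)"
  elif [ "$curl_exit" -eq 35 ]; then
    echo "TLS handshake failed: check certs, not the network"
  elif [ "$curl_exit" -eq 28 ]; then
    echo "timeout: suspect network-level blocking"
  else
    echo "connection failed (curl exit $curl_exit)"
  fi
}

code=$(curl -sk -o /dev/null -w "%{http_code}" --max-time 10 https://your-domain.com)
classify "$code" "$?"
```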

## Gotchas & Notes

- **A `curl: (35)` TLS error is different from `000`.** A TLS error means TCP connected but the handshake failed — usually a missing or invalid cert. A `000` timeout means TCP never completed — a network or firewall issue.
- **SYN-RECV in `ss -tn` means TCP is partially open.** If you see SYN-RECV entries for your domain but the connection never moves to ESTAB, something between the client and your TLS stack is dropping the handshake.
- **ISP SNI filtering is uncommon but real.** Residential ISPs sometimes filter on SNI for terms associated with piracy, proxies, or certain categories of content. "Wiki" may trigger a content-type heuristic.
- **Loopback testing isn't enough.** Always test from an external host before declaring a service working. The server can't test its own public IP on most residential connections.

## See Also

- [[setting-up-caddy-reverse-proxy]]
- [[linux-server-hardening-checklist]]
- [[tailscale-homelab-remote-access]]
05-troubleshooting/networking/.keep (new empty file)

05-troubleshooting/obsidian-cache-hang-recovery.md (new file, 116 lines)

---
tags:
  - obsidian
  - troubleshooting
  - windows
  - majortwin
created: '2026-03-11'
status: resolved
---

# Obsidian Vault Recovery — Loading Cache Hang

## Problem

Obsidian refused to open MajorVault, hanging indefinitely on "Loading cache" with no progress. The issue began with an `EACCES` permission error on a Python venv symlink inside the vault, then persisted even after the offending files were removed.

## Root Causes

Two compounding issues caused the hang:

1. **79GB of ML project files inside the vault.** The `20-Projects/MajorTwin` directory contained model weights, training artifacts, and venvs that Obsidian tried to index on every launch. Specifically:
   - `06-Models` — ~39GB of model weights
   - `09-Artifacts` — ~38GB of training artifacts
   - `10-Training` — ~1.8GB of training data
   - `11-Tools` — Python venvs and llama.cpp builds (with broken symlinks on Windows)

2. **Stale Electron app data.** After Obsidian attempted to index the 79GB, it wrote corrupt state into its global app data (`%APPDATA%\obsidian`). This persisted across vault config resets and caused the hang even after the large files were removed.

A secondary contributing factor was `"open": true` in `obsidian.json`, which forced Obsidian to resume the broken session on every launch.
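
The flag can also be flipped without hand-writing the JSON, which avoids typos in the vault ID (a sketch; it assumes each vault entry in `obsidian.json` already has an `open` property):

```powershell
# Load obsidian.json, set every vault's "open" to false, write it back.
$cfg  = "$env:APPDATA\obsidian\obsidian.json"
$json = Get-Content $cfg -Raw | ConvertFrom-Json
$json.vaults.PSObject.Properties | ForEach-Object { $_.Value.open = $false }
$json | ConvertTo-Json -Depth 5 -Compress | Set-Content $cfg
```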

## Resolution Steps

### 1. Remove large non-note directories from the vault

```powershell
Move-Item "C:\Users\majli\Documents\MajorVault\20-Projects\MajorTwin\06-Models" "D:\MajorTwin\06-Models"
Move-Item "C:\Users\majli\Documents\MajorVault\20-Projects\MajorTwin\09-Artifacts" "D:\MajorTwin\09-Artifacts"
Move-Item "C:\Users\majli\Documents\MajorVault\20-Projects\MajorTwin\10-Training" "D:\MajorTwin\10-Training"
Remove-Item -Recurse -Force "C:\Users\majli\Documents\MajorVault\20-Projects\MajorTwin\11-Tools\venv-unsloth"
Remove-Item -Recurse -Force "C:\Users\majli\Documents\MajorVault\20-Projects\MajorTwin\11-Tools\llama.cpp"
```

### 2. Reset the vault config

```powershell
Rename-Item "C:\Users\majli\Documents\MajorVault\.obsidian" "C:\Users\majli\Documents\MajorVault\.obsidian.bak"
```

### 3. Fix the open flag in obsidian.json

```powershell
'{"vaults":{"9147b890194dceb0":{"path":"C:\\Users\\majli\\Documents\\MajorVault","ts":1773207898521,"open":false}}}' | Set-Content "$env:APPDATA\obsidian\obsidian.json"
```

### 4. Wipe Obsidian global app data (the key fix)

```powershell
Stop-Process -Name "Obsidian" -Force -ErrorAction SilentlyContinue
Rename-Item "$env:APPDATA\obsidian" "$env:APPDATA\obsidian.bak"
```

### 5. Launch Obsidian and reselect the vault

Obsidian will treat it as a fresh install. Select MajorVault — it should load cleanly.

### 6. Restore vault config and plugins

```powershell
Copy-Item "$env:APPDATA\obsidian.bak\obsidian.json" "$env:APPDATA\obsidian\obsidian.json"
Copy-Item "C:\Users\majli\Documents\MajorVault\.obsidian.bak\*.json" "C:\Users\majli\Documents\MajorVault\.obsidian\"
Copy-Item "C:\Users\majli\Documents\MajorVault\.obsidian.bak\plugins.bak" "C:\Users\majli\Documents\MajorVault\.obsidian\plugins" -Recurse
```

### 7. Clean up backups

```powershell
Remove-Item -Recurse -Force "$env:APPDATA\obsidian.bak"
Remove-Item -Recurse -Force "C:\Users\majli\Documents\MajorVault\.obsidian.bak"
```

## Prevention

### Add a .obsidianignore file to the vault root

```
20-Projects/MajorTwin/06-Models
20-Projects/MajorTwin/09-Artifacts
20-Projects/MajorTwin/10-Training
20-Projects/MajorTwin/11-Tools
```

### Add exclusions in Obsidian settings

Settings → Files & Links → Excluded files → add `20-Projects/MajorTwin/11-Tools`

### Keep ML project files off the vault entirely

Model weights, venvs, training artifacts, and datasets do not belong in Obsidian. Store them on the D: drive or in WSL2. WSL2 (Fedora 43) can access the D: drive at `/mnt/d/MajorTwin/`.

## Key Diagnostic Commands

```powershell
# Check vault size by top-level directory
Get-ChildItem "C:\Users\majli\Documents\MajorVault" -Directory | ForEach-Object {
    $size = (Get-ChildItem $_.FullName -Recurse -File -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum).Sum
    [PSCustomObject]@{ Name = $_.Name; SizeMB = [math]::Round($size/1MB, 1) }
} | Sort-Object SizeMB -Descending

# Check obsidian.json vault state
Get-Content "$env:APPDATA\obsidian\obsidian.json"

# Check Obsidian log
Get-Content "$env:APPDATA\obsidian\obsidian.log" -Tail 50

# Check if Obsidian is running/frozen
Get-Process -Name "Obsidian" | Select-Object CPU, WorkingSet, PagedMemorySize
```
05-troubleshooting/performance/.keep (new empty file)

05-troubleshooting/yt-dlp-fedora-js-challenge.md (new file, 137 lines)

# yt-dlp YouTube JS Challenge Fix (Fedora)

## Problem

When running `yt-dlp` on Fedora, downloads may fail with the following warnings and errors:

```
WARNING: [youtube] No supported JavaScript runtime could be found.
WARNING: [youtube] [jsc:deno] Challenge solver lib script version 0.3.2 is not supported (supported version: 0.4.0)
WARNING: [youtube] 0qhgPKRzlvs: n challenge solving failed: Some formats may be missing.
ERROR: Did not get any data blocks
ERROR: fragment 1 not found, unable to continue
```

This causes subtitle downloads (and sometimes video formats) to fail silently, with the MP4 completing but subtitles being skipped.

### Root Causes

1. **No JavaScript runtime installed** — yt-dlp requires Deno or Node.js to solve YouTube's JS challenges
2. **Outdated yt-dlp** — the bundled challenge solver script is behind the required version
3. **Remote challenge solver not enabled** — the updated solver script must be explicitly fetched
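
A quick check for the first cause: see which JS runtimes are on `PATH` (a sketch; Deno and Node are the runtimes named in yt-dlp's warning, so absence of both triggers it):

```shell
# Print where each runtime lives, or "not found" if it's missing.
for rt in deno node; do
  if command -v "$rt" >/dev/null 2>&1; then
    echo "$rt: $(command -v "$rt")"
  else
    echo "$rt: not found"
  fi
done
```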

---

## Fix

### 1. Install Deno

Deno is not in the Fedora repos. Install via the official installer:

```bash
curl -fsSL https://deno.land/install.sh | sh
sudo mv ~/.deno/bin/deno /usr/local/bin/deno
deno --version
```

> Alternatively, Node.js works and is available via `sudo dnf install nodejs`.

### 2. Update yt-dlp

```bash
sudo pip install -U yt-dlp --break-system-packages
```

> If installed via standalone binary: `yt-dlp -U`

### 3. Enable Remote Challenge Solver

Add `--remote-components ejs:github` to the yt-dlp command. This fetches the latest JS challenge solver from GitHub at runtime:

```bash
yt-dlp -f 'bestvideo[vcodec^=avc]+bestaudio[ext=m4a]/bestvideo+bestaudio' \
  --merge-output-format mp4 \
  -o "/plex/plex/%(title)s.%(ext)s" \
  --write-auto-subs --embed-subs \
  --remote-components ejs:github \
  https://www.youtube.com/watch?v=VIDEO_ID
```

### 4. Persist the Config

Create a yt-dlp config file so `--remote-components` is applied automatically:

```bash
mkdir -p ~/.config/yt-dlp
echo '--remote-components ejs:github' > ~/.config/yt-dlp/config
```
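
If the config file may already contain other options, an append-only variant avoids clobbering it (a sketch; `grep -qxF` matches the whole line literally, so running it twice adds nothing):

```shell
# Add the option only if it isn't already present in the config file.
mkdir -p ~/.config/yt-dlp
grep -qxF -- '--remote-components ejs:github' ~/.config/yt-dlp/config 2>/dev/null \
  || echo '--remote-components ejs:github' >> ~/.config/yt-dlp/config
```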

---

## Maintenance

YouTube pushes extractor changes frequently. Keep yt-dlp current:

```bash
sudo pip install -U yt-dlp --break-system-packages
```

---

## Tags

#yt-dlp #fedora #youtube #plex #self-hosted

## Known Limitations

### n-Challenge Failure: "found 0 n function possibilities"

Even with Deno installed, the remote solver downloaded, and yt-dlp up to date, some YouTube player versions can still fail n-challenge solving:

```
WARNING: [youtube] [jsc] Error solving n challenge request using "deno" provider:
  Error running deno process (returncode: 1): error: Uncaught (in promise)
  "found 0 n function possibilities".
WARNING: [youtube] n challenge solving failed: Some formats may be missing.
ERROR: [youtube] Requested format is not available.
```

This is a known upstream issue tied to specific YouTube player builds (e.g. `e42f4bf8`). It is not fixable locally — it requires a yt-dlp patch when YouTube rotates the player.

**Workaround:** Use a permissive format fallback instead of forcing AVC:

```bash
yt-dlp -f 'bestvideo+bestaudio/best' \
  --merge-output-format mp4 \
  -o "/plex/plex/%(title)s.%(ext)s" \
  --write-auto-subs --embed-subs \
  --remote-components ejs:github \
  https://www.youtube.com/watch?v=VIDEO_ID
```

This lets yt-dlp pick the best available format rather than failing on a missing AVC stream. To inspect what formats are actually available:

```bash
yt-dlp --list-formats --remote-components ejs:github \
  https://www.youtube.com/watch?v=VIDEO_ID
```

### SABR-Only Streaming Warning

Some videos may show:

```
WARNING: [youtube] Some android_vr client https formats have been skipped as they
are missing a URL. YouTube may have enabled the SABR-only streaming experiment.
```

This is a YouTube-side experiment. yt-dlp falls back to other clients automatically — no action needed.

### pip Version Check

`pip show` does not accept `--break-system-packages`. Run separately:

```bash
yt-dlp --version
pip show yt-dlp
```