wiki: add 4 new articles from archive, merge 8 archive notes into existing articles (73 articles)
New: mdadm RAID rebuild, Mastodon instance tuning, Ventoy, Fedora networking/kernel recovery.

Merged: Glacier Deep Archive into rsync, SpamAssassin into hardening checklist, OBS captions/VLC capture into OBS setup, yt-dlp subtitles/temp fix into yt-dlp.

Updated index.md, README.md, SUMMARY.md with 21 previously missing articles. Fixed merge conflict in index.md Recently Updated table.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -194,6 +194,38 @@ sudo systemctl disable --now servicename

Common ones to disable on a dedicated server: `avahi-daemon`, `cups`, `bluetooth`.
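With several services to turn off, the disable-and-stop step can be wrapped in a loop; the service names below are just the examples from this list:

```bash
# Disable and stop each unneeded service in one pass
# (these names are the examples above; check what your server actually needs first)
for svc in avahi-daemon cups bluetooth; do
  sudo systemctl disable --now "$svc"
done
```

Running `systemctl is-enabled <name>` afterwards confirms each one is off the boot path.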

## 8. Mail Server: SpamAssassin

If you're running Postfix (like on majormail), SpamAssassin filters incoming spam before it hits your mailbox.

**Install (Fedora/RHEL):**

```bash
sudo dnf install spamassassin
sudo systemctl enable --now spamassassin
```

**Integrate with Postfix** by adding a content filter in `/etc/postfix/master.cf`. See the [full setup guide](https://www.davekb.com/browse_computer_tips:spamassassin_with_postfix:txt) for Postfix integration on RedHat-based systems.
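A minimal sketch of what that content filter looks like; the exact `spamc` arguments and the run-as user vary between guides and distros, so treat this as the shape, not a drop-in config:

```
# /etc/postfix/master.cf (illustrative): pipe inbound mail through spamc,
# then hand the filtered message back to sendmail for delivery
smtp      inet  n       -       n       -       -       smtpd
  -o content_filter=spamassassin
spamassassin unix -     n       n       -       -       pipe
  user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
```

After editing, reload Postfix and check that incoming mail picks up `X-Spam-*` headers.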

**Train the filter with sa-learn:**

SpamAssassin gets better when you feed it examples of spam and ham (legitimate mail):

```bash
# Train on known spam
sa-learn --spam /path/to/spam-folder/

# Train on known good mail
sa-learn --ham /path/to/ham-folder/

# Check what sa-learn knows
sa-learn --dump magic
```

Run `sa-learn` periodically against your Maildir to keep the Bayesian filter accurate. The more examples it sees, the fewer false positives and missed spam you'll get.
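"Periodically" is easy to automate with cron. A sketch assuming a standard Maildir layout with a `.Junk` folder (the paths are assumptions; point them at wherever your spam and ham actually live):

```bash
# Nightly Bayes training: Junk folder as spam, main inbox as ham
0 4 * * *  sa-learn --spam ~/Maildir/.Junk/cur/ >/dev/null
15 4 * * * sa-learn --ham ~/Maildir/cur/ >/dev/null
```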

Reference: [sa-learn documentation](https://spamassassin.apache.org/full/3.0.x/dist/doc/sa-learn.html)

## Gotchas & Notes

- **Don't lock yourself out.** Test SSH key auth in a second terminal before disabling passwords. Keep the original session open.
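One way to run that test from the second terminal: `BatchMode` makes OpenSSH refuse all interactive prompts, so the command fails fast instead of silently falling back to a password (`user@yourserver` is a placeholder):

```bash
# Succeeds only if key auth works end to end; never prompts for a password
ssh -o BatchMode=yes user@yourserver 'echo key auth OK'
```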

68
02-selfhosting/services/mastodon-instance-tuning.md
Normal file
@@ -0,0 +1,68 @@

---
title: "Mastodon Instance Tuning"
domain: selfhosting
category: services
tags: [mastodon, fediverse, self-hosting, majortoot, docker]
status: published
created: 2026-04-02
updated: 2026-04-02
---

# Mastodon Instance Tuning

Running your own Mastodon instance means you control the rules — including limits the upstream project imposes by default. These are the tweaks applied to **majortoot** (MajorsHouse's Mastodon instance).

## Increase Character Limit

Mastodon's default 500-character post limit is low for longer-form thoughts. You can raise it, but it requires modifying the source — there's no config toggle.

The process depends on your deployment method (Docker vs bare metal) and Mastodon version. The community-maintained guide covers the approaches:

- [How to increase the max number of characters of a post](https://qa.mastoadmin.social/questions/10010000000000011/how-do-i-increase-the-max-number-of-characters-of-a-post)

**Key points:**
- The limit is enforced in both the backend (Ruby) and frontend (React). Both must be changed or the UI will reject posts the API would accept.
- After changing, you need to rebuild assets and restart services.
- Other instances will still display the full post — the character limit is per-instance, not a federation constraint.
- Some Mastodon forks (Glitch, Hometown) expose this as a config option without source patches.
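As a rough illustration of the two-sided change. The file path and constant below are assumptions that shift between Mastodon versions; verify them against your own checkout before running anything:

```bash
# Backend: the Ruby status-length validator has historically held the limit
# (app/validators/status_length_validator.rb, MAX_CHARS = 500 -- verify first!)
sed -i 's/MAX_CHARS = 500/MAX_CHARS = 5000/' app/validators/status_length_validator.rb

# Frontend: the same 500 appears in the compose-form JavaScript and must match

# Then rebuild assets and restart (docker-compose deployment assumed)
docker-compose build && docker-compose up -d
```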

## Media Cache Management

Federated content (avatars, headers, media from remote posts) gets cached locally. On a small instance this grows slowly, but over months it adds up — especially if you follow active accounts on large instances.

Reference: [Fedicache — Understanding Mastodon's media cache](https://notes.neatnik.net/2024/08/fedicache)

**Clean up cached remote media:**

```bash
# Preview what would be removed (older than 7 days)
tootctl media remove --days 7 --dry-run

# Actually remove it
tootctl media remove --days 7

# For Docker deployments
docker exec mastodon-web tootctl media remove --days 7
```

**Automate with cron or systemd timer:**

```bash
# Weekly cache cleanup — crontab
0 3 * * 0 docker exec mastodon-web tootctl media remove --days 7
```

**What gets removed:** Only cached copies of remote media. Local uploads (your posts, your users' posts) are never touched. Remote media will be re-fetched on demand if someone views the post again.

**Storage impact:** On a single-user instance, remote media cache can still reach several GB over a few months of active federation. Regular cleanup keeps disk usage predictable.
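To see where you stand before and after a cleanup, `tootctl media usage` reports what the cached media is costing you (prefix with `docker exec mastodon-web` on Docker). Plain `du` works too; the cache path below assumes a default non-Docker install with local storage:

```bash
# Storage breakdown as Mastodon sees it
tootctl media usage

# Raw on-disk size of the remote-media cache
du -sh public/system/cache/
```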

## Gotchas & Notes

- **Character limit changes break on upgrades.** Any source patch gets overwritten when you pull a new Mastodon release. Track your changes and reapply after updates.
- **`tootctl` is your admin CLI.** It handles media cleanup, user management, federation diagnostics, and more. Run `tootctl --help` for the full list.
- **Monitor disk usage.** Even with cache cleanup, the PostgreSQL database and local media uploads grow over time. Keep an eye on it.

## See Also

- [[self-hosting-starter-guide]]
- [[docker-healthchecks]]
@@ -148,6 +148,29 @@ WantedBy=timers.target
sudo systemctl enable --now rsync-backup.timer
```

## Cold Storage — AWS Glacier Deep Archive

rsync handles local and remote backups, but for true offsite cold storage — disaster recovery, archival copies you rarely need to retrieve — AWS Glacier Deep Archive is the cheapest option at ~$1/TB/month.

Upload files directly to an S3 bucket with the `DEEP_ARCHIVE` storage class:

```bash
# Single file
aws s3 cp backup.tar.gz s3://your-bucket/ --storage-class DEEP_ARCHIVE

# Entire directory
aws s3 sync /backup/offsite/ s3://your-bucket/offsite/ --storage-class DEEP_ARCHIVE
```

**When to use it:** Long-term backups you'd only need in a disaster scenario — media archives, yearly snapshots, irreplaceable data. Not for anything you'd need to restore quickly.

**Retrieval tradeoffs:**
- **Standard retrieval:** 12 hours, cheapest restore cost
- **Bulk retrieval:** Up to 48 hours, even cheaper
- **Expedited:** Not available for Deep Archive — if you need faster access, use regular Glacier or S3 Infrequent Access
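Retrieval is a two-step process: request a restore, then poll until a temporary readable copy exists. A sketch with the AWS CLI (bucket and key names are placeholders):

```bash
# Ask for a Standard-tier restore, keeping the restored copy readable for 7 days
aws s3api restore-object --bucket your-bucket --key offsite/backup.tar.gz \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'

# Poll: the Restore field shows ongoing-request="false" once the copy is ready
aws s3api head-object --bucket your-bucket --key offsite/backup.tar.gz --query Restore
```

Only after the restore completes can you `aws s3 cp` the object back down.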

**In the MajorsHouse backup strategy**, rsync handles the daily local and cross-host backups. Glacier Deep Archive is the final tier — offsite, durable, cheap, and slow to retrieve by design. A good backup plan has both.

## Gotchas & Notes

- **Test with `--dry-run` first.** Especially when using `--delete`. See what would be removed before actually removing it.