chore: link vault wiki to Gitea

2026-03-11 11:20:12 -04:00
parent fe6ec20351
commit 9c22a661ea
54 changed files with 3069 additions and 3 deletions

View File


@@ -0,0 +1,90 @@
---
title: "Linux Distro Guide for Beginners"
domain: linux
category: distro-specific
tags: [linux, distros, beginners, ubuntu, fedora, mint]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# Linux Distro Guide for Beginners
If you're new to Linux and trying to figure out where to start, Ubuntu is the answer I give most often. I've been out of the beginner game for a while so there may be better options now, but Ubuntu has the widest community, the most documentation, and the best chance of finding a guide for whatever breaks on your hardware.
## The Short Answer
Start with **Ubuntu LTS** (the Long Term Support release). It's stable, well-documented, and has the largest community for getting help. Once you're comfortable, explore from there.
## Why Ubuntu for Beginners
Ubuntu hits the right marks for someone starting out:
- **Hardware support is broad.** Most laptops and desktops work out of the box or close to it, including NVIDIA drivers via the Additional Drivers tool.
- **Documentation everywhere.** Years of Ask Ubuntu, Ubuntu Forums, and community guides mean almost any problem you hit has already been solved somewhere.
- **LTS releases are supported for 5 years.** You're not chasing upgrades every six months while you're still learning the basics.
- **Software availability.** Most Linux software either provides Ubuntu packages first or builds for it. Snap and Flatpak are both available.
```bash
# Check your Ubuntu version
lsb_release -a
# Update the system
sudo apt update && sudo apt upgrade
# Install software
sudo apt install packagename
```
## Other Distros Worth Knowing About
Once you've got your footing, the Linux ecosystem is wide. Here's how I'd categorize the common options:
**If Ubuntu feels like too much hand-holding:**
- **Fedora** — cutting edge packages, great for developers, ships very recent software. More DIY than Ubuntu but well-documented. My go-to for anything development-focused.
- **Linux Mint** — Ubuntu base with a more Windows-like desktop. Good if the Ubuntu GNOME interface feels unfamiliar.
**If you want something rolling (always up to date):**
- **Arch Linux** — you build it from scratch. Not for beginners, but you'll learn a lot. The Arch Wiki is the best Linux documentation that exists and is useful even if you're not running Arch.
- **Manjaro** — Arch base with an installer and some guardrails. Middle ground between Arch and something like Fedora.
**If you're building a server:**
- **Ubuntu Server** or **Debian** — rock solid, wide support, what most tutorials assume.
- **RHEL/AlmaLinux/Rocky Linux** — if you want to learn the Red Hat ecosystem for professional reasons.
## The Desktop Environment Question
Most beginners don't realize that Linux separates the OS from the desktop environment. Ubuntu ships with GNOME by default, but you can install others or pick a distro that comes with a different one.
| Desktop | Feel | Distro that ships it |
|---|---|---|
| GNOME | Modern, minimal, touch-friendly | Ubuntu, Fedora |
| KDE Plasma | Feature-rich, highly customizable | Kubuntu, KDE Neon |
| XFCE | Lightweight, traditional | Xubuntu, MX Linux |
| MATE | Classic, stable | Ubuntu MATE |
| Cinnamon | Windows-like | Linux Mint |
If you're not sure, start with whatever comes default on your chosen distro. You can always install another desktop later or try a different distro flavor.
## Getting Help
The community is the best part of Linux. When you get stuck:
- **Ask Ubuntu** (askubuntu.com) — for Ubuntu-specific questions
- **The Arch Wiki** — for general Linux concepts even if you're not on Arch
- **r/linux4noobs** — beginner-friendly community
- **Your distro's forums** — most major distros have their own
Be specific when asking for help. Include your distro and version, what you tried, and the exact error message. People can't help you with "it doesn't work."
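A quick way to gather those basics before posting (should work on most distros; `lsb_release` isn't always installed, hence the fallback):

```bash
# Distro name and kernel version — the two things every help thread asks for
lsb_release -ds 2>/dev/null || grep PRETTY_NAME /etc/os-release
uname -r
```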
## Gotchas & Notes
- **Don't dual-boot as your first step.** It adds complexity. Use a VM (VirtualBox, VMware) or a spare machine first until you're confident.
- **NVIDIA on Linux** can be annoying. Ubuntu's additional drivers GUI makes it manageable, but know that going in. AMD graphics tend to work better out of the box.
- **The terminal is your friend, not something to fear.** You'll use it. The earlier you get comfortable with basic commands, the easier everything gets.
- **Backups before you start.** If you're installing on real hardware, back up your data first. Not because Linux will eat it, but because installation steps can go sideways on any OS.
## See Also
- [[wsl2-instance-migration-fedora43]]
- [[managing-linux-services-systemd-ansible]]

View File

@@ -0,0 +1,101 @@
---
title: WSL2 Instance Migration (Fedora 43)
domain: linux
category: distro-specific
tags:
- wsl2
- fedora
- windows
- migration
- majorrig
status: published
created: '2026-03-06'
updated: '2026-03-08'
---
# WSL2 Instance Migration (Fedora 43)
To move a WSL2 distro from C: to another drive, export it to a tar file with `wsl --export`, unregister the original, then re-import it at the new location with `wsl --import`. After import you'll need to fix the default user — WSL always resets it to root on import, which you patch via `/etc/wsl.conf`.
## The Short Answer
```powershell
wsl --terminate Fedora-43
wsl --export Fedora-43 D:\fedora_backup.tar
wsl --unregister Fedora-43
mkdir D:\WSL\Fedora43
wsl --import Fedora-43 D:\WSL\Fedora43 D:\fedora_backup.tar --version 2
```
Then fix the default user — see Steps below.
## Background
WSL2 stores each distro as a VHDX on whatever drive it was installed to, which is C: by default. If you're running Unsloth fine-tuning runs or doing anything that generates large files in WSL2, C: fills up fast. The migration is straightforward but the import resets your default user to root, which you have to fix manually.
## Steps
1. Shut down the instance cleanly
```powershell
wsl --terminate Fedora-43
```
2. Export to the destination drive
```powershell
wsl --export Fedora-43 D:\fedora_backup.tar
```
3. Remove the C: instance
```powershell
wsl --unregister Fedora-43
```
4. Create the new directory and import
```powershell
mkdir D:\WSL\Fedora43
wsl --import Fedora-43 D:\WSL\Fedora43 D:\fedora_backup.tar --version 2
```
5. Fix the default user and enable systemd — edit `/etc/wsl.conf` inside the distro
```ini
[boot]
systemd=true
[user]
default=majorlinux
```
6. Restart WSL to apply
```powershell
wsl --shutdown
wsl -d Fedora-43
```
## Gotchas & Notes
- **Default user always resets to root on import** — this is expected WSL behavior. The `/etc/wsl.conf` fix is mandatory, not optional.
- **Windows Terminal profiles:** If the GUID changed after re-registration, update the profile. The command line stays the same: `wsl.exe -d Fedora-43`.
- **Verify the VHDX landed correctly:** Check `D:\WSL\Fedora43\ext4.vhdx` exists before deleting the backup tar.
- **Keep the tar until verified:** Don't delete `D:\fedora_backup.tar` until you've confirmed the migrated instance works correctly.
- **systemd=true is required for Fedora 43** — without it, services (including Docker and Ollama in WSL) won't start properly.
## Maintenance Aliases (DNF5)
Fedora 43 ships with DNF5. Add these to `~/.bashrc`:
```bash
alias update='sudo dnf upgrade --refresh'
alias install='sudo dnf install'
alias clean='sudo dnf clean all'
```
## See Also
- [[Managing disk space on MajorRig]]
- [[Unsloth QLoRA fine-tuning setup]]

View File


@@ -0,0 +1,157 @@
---
title: "Linux File Permissions and Ownership"
domain: linux
category: files-permissions
tags: [permissions, chmod, chown, linux, acl, security]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# Linux File Permissions and Ownership
File permissions are how Linux controls who can read, write, and execute files. Misunderstanding them is responsible for a lot of broken setups — both "why can't I access this" and "why is this insecure." This is the reference I'd want when something permission-related goes wrong.
## The Short Answer
```bash
# Change permissions
chmod 755 /path/to/file
chmod +x script.sh # add execute for all
chmod u+w file # add write for owner
chmod o-r file # remove read for others
# Change ownership
chown user file
chown user:group file
chown -R user:group /directory/ # recursive
```
## Reading Permission Bits
When you run `ls -la`, you see something like:
```
-rwxr-xr-- 1 major wheel 4096 Mar 8 10:00 myscript.sh
drwxr-xr-x 2 root root 4096 Mar 8 10:00 mydir/
```
The first column breaks down as:
```
- rwx r-x r--
│ │ │ └── Others: read only
│ │ └────── Group: read + execute
│ └────────── Owner: read + write + execute
└──────────── Type: - = file, d = directory, l = symlink
```
**Permission bits:**
| Symbol | Octal | Meaning |
|---|---|---|
| `r` | 4 | Read |
| `w` | 2 | Write |
| `x` | 1 | Execute (or traverse for directories) |
| `-` | 0 | No permission |
Common permission patterns:
| Octal | Symbolic | Common use |
|---|---|---|
| `755` | `rwxr-xr-x` | Executables, directories |
| `644` | `rw-r--r--` | Regular files |
| `600` | `rw-------` | Private files (SSH keys, config with credentials) |
| `700` | `rwx------` | Private directories (e.g., `~/.ssh`) |
| `777` | `rwxrwxrwx` | Everyone can do everything — almost never use this |
## chmod
```bash
# Symbolic mode
chmod u+x file # add execute for user (owner)
chmod g-w file # remove write from group
chmod o=r file # set others to read-only exactly
chmod a+r file # add read for all (user, group, other)
chmod ug=rw file # set user and group to read+write
# Octal mode
chmod 755 file # rwxr-xr-x
chmod 644 file # rw-r--r--
chmod 600 file # rw-------
# Recursive
chmod -R 755 /var/www/html/
```
## chown
```bash
# Change owner
chown major file
# Change owner and group
chown major:wheel file
# Change group only
chown :wheel file
# or
chgrp wheel file
# Recursive
chown -R major:major /home/major/
```
## Special Permissions
**Setuid (SUID):** Execute as the file owner, not the caller. Used by system tools like `sudo`.
```bash
chmod u+s /path/to/executable
# Shows as 's' in owner execute position: rwsr-xr-x
```
**Setgid (SGID):** Files created in a directory inherit the directory's group. Useful for shared directories.
```bash
chmod g+s /shared/directory
# Shows as 's' in group execute position
```
**Sticky bit:** Only a file's owner (or the directory owner, or root) can delete or rename files in the directory. Used on `/tmp`.
```bash
chmod +t /shared/directory
# Shows as 't' in others execute position: drwxrwxrwt
```
## Finding and Fixing Permission Problems
```bash
# Find files writable by everyone
find /path -perm -o+w -type f
# Find SUID files (security audit)
find / -perm -4000 -type f 2>/dev/null
# Find files owned by a user
find /path -user major
# Fix common web server permissions (files 644, dirs 755)
find /var/www/html -type f -exec chmod 644 {} \;
find /var/www/html -type d -exec chmod 755 {} \;
```
## Gotchas & Notes
- **Directories need execute to traverse.** You can't `cd` into a directory without execute permission, even if you have read. This catches people off guard — `chmod 644` on a directory locks you out of it.
- **SSH is strict about permissions.** `~/.ssh` must be `700`, `~/.ssh/authorized_keys` must be `600`, and private keys must be `600`. SSH silently ignores keys with wrong permissions.
- **`chmod -R 777` is almost never the right answer.** If something isn't working because of permissions, find the actual issue. Blanket 777 creates security holes and usually breaks setuid/setgid behavior.
- **umask controls default permissions.** New files are created with `0666 & ~umask`, new directories with `0777 & ~umask`. The default umask is usually `022`, giving files `644` and directories `755`.
- **ACLs for more complex needs.** When standard user/group/other isn't enough (e.g., multiple users need different access to the same file), look at `setfacl` and `getfacl`.
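The umask arithmetic is easy to verify for yourself (GNU `stat` syntax, so Linux-specific):

```bash
# Show how umask shapes default permissions for new files and directories
umask 022                      # typical default
tmp="$(mktemp -d)"
touch "$tmp/newfile"
mkdir "$tmp/newdir"
stat -c '%a' "$tmp/newfile"    # 644 — from 0666 & ~022
stat -c '%a' "$tmp/newdir"     # 755 — from 0777 & ~022
rm -rf "$tmp"
```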
## See Also
- [[linux-server-hardening-checklist]]
- [[ssh-config-key-management]]
- [[bash-scripting-patterns]]

View File


@@ -0,0 +1,135 @@
---
title: "SSH Config and Key Management"
domain: linux
category: networking
tags: [ssh, keys, security, linux, remote-access]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# SSH Config and Key Management
SSH is how you get into remote servers. Key-based authentication is safer than passwords and, once set up, faster too. The `~/.ssh/config` file is what turns SSH from something you type long commands for into something that actually works the way you want.
## The Short Answer
```bash
# Generate a key (use ed25519 — it's faster and more secure than RSA now)
ssh-keygen -t ed25519 -C "yourname@hostname"
# Copy the public key to a server
ssh-copy-id user@server-ip
# SSH using a specific key
ssh -i ~/.ssh/id_ed25519 user@server-ip
```
## Key Generation
```bash
# ed25519 — preferred
ssh-keygen -t ed25519 -C "home-laptop"
# RSA 4096 — use this if the server is old and doesn't support ed25519
ssh-keygen -t rsa -b 4096 -C "home-laptop"
```
The `-C` comment is just a label — use something that tells you which machine the key came from. Comes in handy when you look at `authorized_keys` on a server and need to know what's what.
Keys land in `~/.ssh/`:
- `id_ed25519` — private key. **Never share this.**
- `id_ed25519.pub` — public key. This is what you put on servers.
## Copying Your Key to a Server
```bash
# Easiest way
ssh-copy-id user@server-ip
# If the server is on a non-standard port
ssh-copy-id -p 2222 user@server-ip
# Manual way (if ssh-copy-id isn't available)
cat ~/.ssh/id_ed25519.pub | ssh user@server-ip "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"
```
After copying, test that key auth works before doing anything else, especially before disabling password auth.
## SSH Config File
`~/.ssh/config` lets you define aliases for servers so you can type `ssh myserver` instead of `ssh -i ~/.ssh/id_ed25519 -p 2222 admin@192.168.1.50`.
```
# ~/.ssh/config
# Home server
Host homelab
HostName 192.168.1.50
User admin
IdentityFile ~/.ssh/id_ed25519
Port 22
# Remote VPS
Host vps
HostName vps.yourdomain.com
User ubuntu
IdentityFile ~/.ssh/vps_key
Port 2222
# Jump host pattern — SSH through a bastion to reach internal servers
Host internal-server
HostName 10.0.0.50
User admin
ProxyJump bastion.yourdomain.com
# Default settings for all hosts
Host *
ServerAliveInterval 60
ServerAliveCountMax 3
IdentityFile ~/.ssh/id_ed25519
```
After saving, `ssh homelab` connects with all those settings automatically.
## Managing Multiple Keys
One key per machine you connect from is reasonable. One key per server you connect to is overkill for personal use but correct for anything sensitive.
```bash
# List keys loaded in the SSH agent
ssh-add -l
# Add a key to the agent (so you don't type the passphrase every time)
ssh-add ~/.ssh/id_ed25519
# On macOS, persist the key in Keychain
ssh-add --apple-use-keychain ~/.ssh/id_ed25519
```
The SSH agent stores decrypted keys in memory for the session. You enter the passphrase once and the agent handles authentication for the rest of the session.
## Server-Side: Authorized Keys
Public keys live in `~/.ssh/authorized_keys` on the server. One key per line.
```bash
# Check permissions — wrong permissions break SSH key auth silently
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```
If key auth isn't working and the config looks right, permissions are the first thing to check.
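If you want to script that check, here's a small sketch — `check_ssh_perms` is a hypothetical helper, not a standard tool, and it assumes GNU `stat`:

```bash
# Flag the permission problems SSH silently rejects keys over
check_ssh_perms() {
  local d="${1:-$HOME/.ssh}"
  [ "$(stat -c '%a' "$d" 2>/dev/null)" = "700" ] || echo "fix: chmod 700 $d"
  if [ -f "$d/authorized_keys" ]; then
    [ "$(stat -c '%a' "$d/authorized_keys")" = "600" ] || echo "fix: chmod 600 $d/authorized_keys"
  fi
}
# Usage: check_ssh_perms               # audits ~/.ssh
#        check_ssh_perms /home/user/.ssh
```

Silent output means the basics are right and you can move on to `ssh -v` for the rest.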
## Gotchas & Notes
- **Permissions must be right.** SSH ignores `authorized_keys` if the file or directory is world-writable. `chmod 700 ~/.ssh` and `chmod 600 ~/.ssh/authorized_keys` are required.
- **ed25519 over RSA.** ed25519 keys are shorter, faster, and currently considered more secure. Use them unless you have a compatibility reason not to.
- **Add a passphrase to your private key.** If your machine is compromised, an unprotected private key gives the attacker access to everything it's authorized on. A passphrase mitigates that.
- **`ServerAliveInterval` in your config** keeps connections from timing out on idle sessions. Saves you from the annoyance of reconnecting after stepping away.
- **Never put private keys in cloud storage, Git repos, or Docker images.** It happens more than you'd think.
## See Also
- [[linux-server-hardening-checklist]]
- [[managing-linux-services-systemd-ansible]]

0
01-linux/packages/.keep Normal file
View File


@@ -0,0 +1,172 @@
---
title: "Linux Package Management Reference: apt, dnf, pacman"
domain: linux
category: packages
tags: [packages, apt, dnf, pacman, linux, distros]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# Linux Package Management Reference: apt, dnf, pacman
Every major Linux distro has a package manager. The commands are different but the concepts are the same: install, remove, update, search. Here are the equivalent commands across the three you're most likely to encounter.
## Common Tasks by Package Manager
| Task | apt (Debian/Ubuntu) | dnf (Fedora/RHEL) | pacman (Arch) |
|---|---|---|---|
| Update package index | `apt update` | `dnf check-update` | `pacman -Sy` |
| Upgrade all packages | `apt upgrade` | `dnf upgrade` | `pacman -Su` |
| Update index + upgrade | `apt update && apt upgrade` | `dnf upgrade --refresh` | `pacman -Syu` |
| Install a package | `apt install pkg` | `dnf install pkg` | `pacman -S pkg` |
| Remove a package | `apt remove pkg` | `dnf remove pkg` | `pacman -R pkg` |
| Remove + config files | `apt purge pkg` | `dnf remove pkg` | `pacman -Rn pkg` |
| Search for a package | `apt search term` | `dnf search term` | `pacman -Ss term` |
| Show package info | `apt show pkg` | `dnf info pkg` | `pacman -Si pkg` |
| List installed | `apt list --installed` | `dnf list installed` | `pacman -Q` |
| Which pkg owns file | `dpkg -S /path/to/file` | `rpm -qf /path/to/file` | `pacman -Qo /path/to/file` |
| List files in pkg | `dpkg -L pkg` | `rpm -ql pkg` | `pacman -Ql pkg` |
| Clean cache | `apt clean` | `dnf clean all` | `pacman -Sc` |
## apt (Debian, Ubuntu, and derivatives)
```bash
# Always update before installing
sudo apt update
# Install
sudo apt install nginx
# Install multiple
sudo apt install nginx curl git
# Remove
sudo apt remove nginx
# Remove with config files
sudo apt purge nginx
# Autoremove orphaned dependencies
sudo apt autoremove
# Search
apt search nginx
# Show package details
apt show nginx
# Upgrade a specific package
sudo apt install --only-upgrade nginx
```
**APT sources** live in `/etc/apt/sources.list` and `/etc/apt/sources.list.d/`. Third-party PPAs go here. After adding a source, run `apt update` before installing from it.
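For reference, a third-party entry in `/etc/apt/sources.list.d/` looks roughly like this — the repo URL and keyring path here are hypothetical placeholders, substitute the vendor's real ones:

```
# /etc/apt/sources.list.d/example.list
deb [signed-by=/etc/apt/keyrings/example.gpg] https://repo.example.com/apt stable main
```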
## dnf (Fedora, RHEL, AlmaLinux, Rocky)
```bash
# Update everything
sudo dnf upgrade
# Install
sudo dnf install nginx
# Remove
sudo dnf remove nginx
# Search
dnf search nginx
# Info
dnf info nginx
# List groups (collections of packages)
dnf group list
# Install a group
sudo dnf group install "Development Tools"
# History — see what was installed and when
dnf history
dnf history info <id>
# Undo a transaction
sudo dnf history undo <id>
```
dnf's history and undo features are underused and genuinely useful when you've installed something that broke things.
## pacman (Arch, Manjaro)
```bash
# Full system update (do this before anything else, Arch is rolling)
sudo pacman -Syu
# Install
sudo pacman -S nginx
# Remove
sudo pacman -R nginx
# Remove with dependencies not needed by anything else
sudo pacman -Rs nginx
# Search
pacman -Ss nginx
# Info
pacman -Si nginx
# Query installed packages
pacman -Q # all installed
pacman -Qs nginx # search installed
pacman -Qi nginx # info on installed package
# Find orphaned packages
pacman -Qdt
# Clean package cache
sudo pacman -Sc # keep installed versions
sudo pacman -Scc # remove all cached packages
```
**AUR (Arch User Repository)** — packages not in the official repos. Use an AUR helper like `yay` or `paru`:
```bash
# Install yay
git clone https://aur.archlinux.org/yay.git
cd yay && makepkg -si
# Then use like pacman
yay -S package-name
```
## Flatpak and Snap (Distro-Agnostic)
For software that isn't in your distro's repos or when you want a sandboxed installation:
```bash
# Flatpak
flatpak install flathub com.spotify.Client
flatpak run com.spotify.Client
flatpak update
# Snap
sudo snap install spotify
sudo snap refresh
```
Flatpak is what I prefer — better sandboxing story, Flathub has most things you'd want. Snap works fine but the infrastructure is more centralized.
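One setup note: some distros ship Flatpak without the Flathub remote configured. Adding it is a one-time step:

```bash
# One-time: register Flathub as a remote (--if-not-exists makes it safe to re-run)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```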
## Gotchas & Notes
- **Always `apt update` before `apt install`.** Installing from a stale index can grab outdated versions or fail entirely.
- **`apt upgrade` vs `apt full-upgrade`:** `full-upgrade` (or `dist-upgrade`) allows package removal to resolve conflicts. Use it for major upgrades. `upgrade` won't remove anything.
- **Arch is rolling — update frequently.** Partial upgrades on Arch cause breakage. Always do `pacman -Syu` (full update) before installing anything.
- **dnf is noticeably slower than apt on first run** due to metadata downloads. Gets faster after the cache is warm.
- **Don't mix package sources carelessly.** Adding random PPAs (apt) or COPR repos (dnf) can conflict with each other. Keep third-party sources to a minimum.
## See Also
- [[linux-distro-guide-beginners]]
- [[linux-server-hardening-checklist]]

View File


@@ -0,0 +1,150 @@
---
title: "Managing Linux Services: systemd and Ansible"
domain: linux
category: process-management
tags: [systemd, ansible, services, linux, automation]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# Managing Linux Services: systemd and Ansible
If you're running services on a Linux server, systemd is what you're working with day-to-day — starting, stopping, restarting, and checking on things. For managing services across multiple machines, Ansible is where I've landed. It handles the repetitive stuff so you can focus on what actually matters.
## The Short Answer
```bash
# Check a service status
systemctl status servicename
# Start / stop / restart
sudo systemctl start servicename
sudo systemctl stop servicename
sudo systemctl restart servicename
# Enable at boot
sudo systemctl enable servicename
# Disable at boot
sudo systemctl disable servicename
# Reload config without full restart (if supported)
sudo systemctl reload servicename
```
## systemd Basics
systemd is the init system on basically every major Linux distro now. Love it or not, it's what you're using. The `systemctl` command is your interface to it.
**Checking what's running:**
```bash
# List all active services
systemctl list-units --type=service --state=active
# List failed services (run this when something breaks)
systemctl list-units --type=service --state=failed
```
**Reading logs for a service:**
```bash
# Last 50 lines
journalctl -u servicename -n 50
# Follow live (like tail -f)
journalctl -u servicename -f
# Since last boot
journalctl -u servicename -b
```
`journalctl` is your friend. When a service fails, go here before you do anything else.
**Writing a simple service file:**
Drop a `.service` file in `/etc/systemd/system/` and systemd will pick it up.
```ini
[Unit]
Description=My Custom App
After=network.target
[Service]
Type=simple
User=myuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/start.sh
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
```
After creating or editing a service file, reload the daemon before doing anything else:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
```
## Ansible for Service Management Across Machines
For a single box, `systemctl` is fine. Once you've got two or more servers doing similar things, Ansible starts paying for itself fast. I've been using it heavily at work and it's changed how I think about managing infrastructure.
The `ansible.builtin.service` module handles start/stop/enable/disable:
```yaml
- name: Ensure nginx is started and enabled
ansible.builtin.service:
name: nginx
state: started
enabled: true
```
A more complete playbook pattern:
```yaml
---
- name: Manage web services
hosts: webservers
become: true
tasks:
- name: Install nginx
ansible.builtin.package:
name: nginx
state: present
- name: Start and enable nginx
ansible.builtin.service:
name: nginx
state: started
enabled: true
- name: Reload nginx after config change
ansible.builtin.service:
name: nginx
state: reloaded
```
Run it with:
```bash
ansible-playbook -i inventory.ini manage-services.yml
```
## Gotchas & Notes
- **daemon-reload is mandatory after editing service files.** Forgetting this is the most common reason changes don't take effect.
- **`restart` vs `reload`:** `restart` kills and relaunches the process. `reload` sends SIGHUP and asks the service to re-read its config without dropping connections — only works if the service supports it. nginx and most web servers do. Not everything does.
- **Ansible's `restarted` vs `reloaded` state:** Same distinction applies. Use `reloaded` in Ansible handlers when you're pushing config changes to a running service.
- **Checking if a service is masked:** A masked service can't be started at all. `systemctl status servicename` will tell you. Unmask with `sudo systemctl unmask servicename`.
- **On Fedora/RHEL:** SELinux can block a custom service from running even if systemd says it started fine. If you see permission errors in `journalctl`, check `ausearch -m avc` for SELinux denials.
## See Also
- [[wsl2-instance-migration-fedora43]]
- [[tuning-netdata-web-log-alerts]]

View File


@@ -0,0 +1,208 @@
---
title: "Ansible Getting Started: Inventory, Playbooks, and Ad-Hoc Commands"
domain: linux
category: shell-scripting
tags: [ansible, automation, infrastructure, linux, idempotent]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# Ansible Getting Started: Inventory, Playbooks, and Ad-Hoc Commands
Ansible is how I manage infrastructure at scale — or even just across a handful of machines. You write what you want the end state to look like, Ansible figures out how to get there. No agents needed on the remote machines, just SSH.
## The Short Answer
```bash
# Install Ansible
pip install ansible
# Run a one-off command on all hosts
ansible all -i inventory.ini -m ping
# Run a playbook
ansible-playbook -i inventory.ini site.yml
```
## Core Concepts
**Inventory** — the list of machines Ansible manages. Can be a static file or dynamically generated.
**Playbook** — a YAML file describing tasks to run on hosts. The main thing you write.
**Module** — the building blocks of tasks. `apt`, `dnf`, `service`, `copy`, `template`, `user`, etc. Ansible has modules for almost everything.
**Idempotency** — run the same playbook ten times, the result is the same as running it once. Ansible modules are designed this way. This matters because it means you can re-run playbooks safely without side effects.
## Inventory File
```ini
# inventory.ini
[webservers]
web1.example.com
web2.example.com ansible_user=admin
[databases]
db1.example.com ansible_user=ubuntu ansible_port=2222
[all:vars]
ansible_user=myuser
ansible_ssh_private_key_file=~/.ssh/id_ed25519
```
Test connectivity:
```bash
ansible all -i inventory.ini -m ping
```
A successful response looks like:
```
web1.example.com | SUCCESS => {
"ping": "pong"
}
```
## Ad-Hoc Commands
For quick one-offs without writing a playbook:
```bash
# Run a shell command
ansible all -i inventory.ini -m shell -a "uptime"
# Install a package
ansible webservers -i inventory.ini -m apt -a "name=nginx state=present" --become
# Restart a service
ansible webservers -i inventory.ini -m service -a "name=nginx state=restarted" --become
# Copy a file
ansible all -i inventory.ini -m copy -a "src=./myfile dest=/tmp/myfile"
```
`--become` escalates to sudo.
## Writing a Playbook
```yaml
---
# site.yml
- name: Configure web servers
hosts: webservers
become: true
vars:
app_port: 8080
tasks:
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600
- name: Install nginx
ansible.builtin.apt:
name: nginx
state: present
- name: Start and enable nginx
ansible.builtin.service:
name: nginx
state: started
enabled: true
- name: Deploy config file
ansible.builtin.template:
src: templates/nginx.conf.j2
dest: /etc/nginx/nginx.conf
owner: root
group: root
mode: '0644'
notify: Reload nginx
handlers:
- name: Reload nginx
ansible.builtin.service:
name: nginx
state: reloaded
```
Run it:
```bash
ansible-playbook -i inventory.ini site.yml
# Dry run — shows what would change without doing it
ansible-playbook -i inventory.ini site.yml --check
# Verbose output
ansible-playbook -i inventory.ini site.yml -v
```
## Handlers
Handlers run at the end of a play, only if notified. The canonical use is "reload service after config change":
```yaml
tasks:
- name: Deploy config
ansible.builtin.template:
src: templates/app.conf.j2
dest: /etc/app/app.conf
notify: Restart app
handlers:
- name: Restart app
ansible.builtin.service:
name: myapp
state: restarted
```
If the config file didn't change (the template task reports "ok" rather than "changed"), the notify never fires and the service isn't restarted.
## Roles
Once playbooks get complex, organize them into roles:
```
roles/
webserver/
tasks/
main.yml
handlers/
main.yml
templates/
nginx.conf.j2
defaults/
main.yml
```
Use a role in a playbook:
```yaml
- name: Set up web servers
hosts: webservers
become: true
roles:
- webserver
```
Roles keep things organized and reusable across projects.
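A minimal `tasks/main.yml` for that layout might look like this — a sketch mirroring the playbook above, not a prescribed structure:

```yaml
# roles/webserver/tasks/main.yml
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Start and enable nginx
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```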
## Gotchas & Notes
- **YAML indentation matters.** Two spaces is standard. Tab characters will break your playbooks.
- **`--check` is your friend.** Always dry-run against production before applying changes.
- **SSH key access is required.** Ansible connects over SSH — password auth works but key auth is what you want for automation.
- **`gather_facts: false`** speeds up playbooks when you don't need host facts (OS, IP, etc.). Add it at the play level for simple playbooks.
- **Ansible is not idempotent by magic.** Shell and command modules run every time regardless of state. Use the appropriate module (`apt`, `service`, `file`, etc.) instead of `shell` whenever possible.
- **The `ansible-lint` tool** catches common mistakes before they run. Worth adding to your workflow.
## See Also
- [[managing-linux-services-systemd-ansible]]
- [[linux-server-hardening-checklist]]

View File

@@ -0,0 +1,215 @@
---
title: "Bash Scripting Patterns for Sysadmins"
domain: linux
category: shell-scripting
tags: [bash, scripting, automation, linux, shell]
status: published
created: 2026-03-08
updated: 2026-03-08
---
# Bash Scripting Patterns for Sysadmins
These are the patterns I reach for when writing bash scripts for server automation and maintenance tasks. Not a tutorial from scratch — this assumes you know basic bash syntax and want to write scripts that don't embarrass you later.
## The Short Answer
```bash
#!/usr/bin/env bash
set -euo pipefail
```
Start every script with these two lines. `set -e` exits on error. `set -u` treats unset variables as errors. `set -o pipefail` catches errors in pipes. Together they prevent a lot of silent failures.
## Script Header
```bash
#!/usr/bin/env bash
set -euo pipefail
# ── Config ─────────────────────────────────────────────────────────────────────
SCRIPT_NAME="$(basename "$0")"
LOG_FILE="/var/log/myscript.log"
# ───────────────────────────────────────────────────────────────────────────────
```
## Logging
```bash
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
log "Script started"
log "Processing $1"
```
## Error Handling
```bash
# Exit with a message
die() {
echo "ERROR: $*" >&2
exit 1
}
# Check a condition
[ -f "$CONFIG_FILE" ] || die "Config file not found: $CONFIG_FILE"
# Trap to run cleanup on exit
cleanup() {
log "Cleaning up temp files"
rm -f "$TMPFILE"
}
trap cleanup EXIT
```
## Checking Dependencies
```bash
check_deps() {
local deps=("curl" "jq" "rsync")
for dep in "${deps[@]}"; do
command -v "$dep" &>/dev/null || die "Required dependency not found: $dep"
done
}
check_deps
```
## Argument Parsing
```bash
usage() {
cat <<EOF
Usage: $SCRIPT_NAME [OPTIONS] <target>
Options:
-v, --verbose Enable verbose output
-n, --dry-run Show what would be done without doing it
-h, --help Show this help
EOF
exit 0
}
VERBOSE=false
DRY_RUN=false
while [[ $# -gt 0 ]]; do
case $1 in
-v|--verbose) VERBOSE=true; shift ;;
-n|--dry-run) DRY_RUN=true; shift ;;
-h|--help) usage ;;
--) shift; break ;;
-*) die "Unknown option: $1" ;;
*) TARGET="$1"; shift ;;
esac
done
[[ -z "${TARGET:-}" ]] && die "Target is required"
```
## Running Commands
```bash
# Dry-run aware command execution
run() {
if $DRY_RUN; then
echo "DRY RUN: $*"
else
"$@"
fi
}
run rsync -av /source/ /dest/
run systemctl restart myservice
```
## Working with Files and Directories
```bash
# Check existence before use
[[ -d "$DIR" ]] || mkdir -p "$DIR"
[[ -f "$FILE" ]] || die "Expected file not found: $FILE"
# Safe temp files
TMPFILE="$(mktemp)"
trap 'rm -f "$TMPFILE"' EXIT
# Loop over files (null-delimited, so paths with spaces survive)
find /path/to/files -name "*.log" -mtime +30 -print0 | while IFS= read -r -d '' file; do
  log "Processing: $file"
  run gzip "$file"
done
```
## String Operations
```bash
# Extract filename without extension
filename="${filepath##*/}" # basename
stem="${filename%.*}" # strip extension
# Check if string contains substring
if [[ "$output" == *"error"* ]]; then
die "Error detected in output"
fi
# Convert to lowercase
lower="${str,,}"
# Trim leading whitespace
trimmed="${str#"${str%%[![:space:]]*}"}"
# Trim trailing whitespace
trimmed="${trimmed%"${trimmed##*[![:space:]]}"}"
```
## Common Patterns
**Backup with timestamp:**
```bash
backup() {
local source="$1"
local dest="${2:-/backup}"
local timestamp
timestamp="$(date '+%Y%m%d_%H%M%S')"
local backup_path="${dest}/$(basename "$source")_${timestamp}.tar.gz"
log "Backing up $source to $backup_path"
run tar -czf "$backup_path" -C "$(dirname "$source")" "$(basename "$source")"
}
```
**Retry on failure:**
```bash
retry() {
local max_attempts="${1:-3}"
local delay="${2:-5}"
shift 2
local attempt=1
until "$@"; do
if ((attempt >= max_attempts)); then
die "Command failed after $max_attempts attempts: $*"
fi
log "Attempt $attempt failed, retrying in ${delay}s..."
sleep "$delay"
((attempt++))
done
}
retry 3 10 curl -f https://example.com/health
```
## Gotchas & Notes
- **Always quote variables.** `"$var"` not `$var`. Unquoted variables break on spaces and glob characters.
- **Use `[[` not `[` for conditionals.** `[[` is a bash built-in with fewer edge cases.
- **`set -e` exits on the first error — including in pipes.** Add `set -o pipefail` or you'll miss failures in `cmd1 | cmd2`.
- **`$?` after `if` is almost always wrong.** Use `if command; then` not `command; if [[ $? -eq 0 ]]; then`.
- **Bash isn't great for complex data.** If your script needs real data structures or error handling beyond strings, consider Python.
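The pipefail gotcha is easy to see in isolation:

```bash
# Without pipefail, the pipe's exit status is cat's (success) and the
# failure of `false` goes unnoticed; with pipefail it surfaces.
set -o pipefail
if false | cat; then
  echo "missed it"
else
  echo "caught it"    # this branch runs
fi
```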
## See Also
- [[ansible-getting-started]]
- [[managing-linux-services-systemd-ansible]]

0
01-linux/storage/.keep Normal file
View File