butitsnotme

@[email protected]


Help with deployment

Hello nerds! I'm hosting a lot of things on my home lab using docker compose. I have a private repo in GitHub for the config files. This is working fine for me, but every time I want to make a change I have to push the changes, then ssh to the lab, pull the changes, and run docker compose up. This is of course working fine, but...

butitsnotme ,

For no. 1, that shouldn’t be dind (Docker-in-Docker); the container would be controlling the host’s Docker daemon, wouldn’t it?

If so, keep in mind that this is the same as giving root SSH access to the host machine.

As far as security goes, anything that allows GitHub to cause your server to pull and run an arbitrary set of Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine if someone compromises your GitHub account.

I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to only allow key-based access, it’s not much of a security risk to open it up to the internet. I would then configure that key to only be able to run a single command: a very simple bash script which runs git fetch, then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch.

You can sign commits fairly easily using SSH keys now, which combined with the above allows you to store your data on GitHub without having to trust them to have RCE on your host.
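A minimal sketch of that forced-command setup (the paths, repo location, and key comment here are assumptions for illustration, not from the original post):

```shell
#!/bin/bash -ue
# Hypothetical deploy script (/usr/local/bin/deploy.sh) that the dedicated
# GitHub key is pinned to via its authorized_keys entry, e.g.:
#   command="/usr/local/bin/deploy.sh",restrict ssh-ed25519 AAAA... github-deploy

cd /srv/homelab        # wherever the compose repo is checked out (assumption)

git fetch origin
# Refuse to deploy unless the tip of the deploy branch has a trusted signature
git verify-commit origin/main
git checkout --detach origin/main

docker compose up -d
```

With the `restrict` option in the authorized_keys entry, the key gets no port forwarding and no PTY, and runs only this script no matter what command the client asks for.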

butitsnotme ,

My recommendation would be to utilize LVM. Set up a PV on the new drive and create an LV filling the drive (with an FS), then move all the data off of one drive onto this new drive, reformat the first old drive as a second PV in the volume group, and expand the size of the LV. Repeat the process for the second old drive, but then, instead of extending the LV, set the parity option on the LV to 1. You can add further disks in the future, increasing the LV size or adding parity or mirroring, as needed. This also gives you the advantage that you can (once you have some free space) create another LV that has different mirroring or parity requirements.
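Roughly, with made-up device names (/dev/sdc as the new drive, /dev/sda and /dev/sdb as the old ones), the sequence could look like this; treat it as a sketch to adapt, not a ready-to-run script:

```shell
# Device names, VG/LV names and the filesystem are all illustrative.
pvcreate /dev/sdc                        # new drive becomes a PV
vgcreate data /dev/sdc
lvcreate -l 100%FREE -n storage data     # LV filling the drive
mkfs.ext4 /dev/data/storage

# ...move everything off the first old drive onto /dev/data/storage...

pvcreate /dev/sda                        # reformat first old drive as a PV
vgextend data /dev/sda
lvextend -l +100%FREE --resizefs /dev/data/storage

# Second old drive: add the PV, but convert to parity instead of growing.
pvcreate /dev/sdb
vgextend data /dev/sdb
lvconvert --type raid5 /dev/data/storage # may need to step through raid1 first
```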

butitsnotme ,

Getting a domain name may not be enough; if you don’t have a static IP you’ll still need a DDNS service.

What do you get for the paid no-ip service? Is it just a nice subdomain? You can get a custom domain and use a CNAME record to point one or more subdomains to a free DDNS subdomain.

butitsnotme ,

TiddlyWiki might be a good option. Technically it’s a wiki, but it is a single HTML page with all functionality built in JavaScript. You could host it on GH Pages, though you wouldn’t be able to use its save feature there (you would have to save to your local machine and then deploy a new version). It stores text in little (or large) cards which can be given a title, tags and other metadata, and it provides a full search system.

butitsnotme ,

It saves into the TiddlyWiki HTML file. The default behaviour is then to trigger the browser to download the file. You can absolutely store it in a git repository.

butitsnotme ,

I use WireGuard; it’s set to on-demand for any network or cellular data (so effectively always on), with no DNS overrides (I just use public DNS providing private-range IP addresses). It doesn’t make any sort of dent in my battery life. Also, only the WireGuard network traffic is routed through it, so if my server is down the phone/laptop’s internet continues to work. I borrowed my wife’s phone and laptop for 15 minutes to set it up, and now no one has to think about it.

butitsnotme ,

The peer range shouldn’t be your LAN; it should be a new network range, just for WireGuard. Make sure that the server running Immich is part of the WireGuard network.

My phone and laptop see three networks: the internet, the LAN (192.168.1.0/24, typically), and WireGuard (10.30.0.0/16). I can anonymize and share my WireGuard config if that would help.

butitsnotme ,

Here are a few more details of my setup:

Components:

  • server
  • clients (phone/laptop)
  • domain name (we'll call it custom.domain)
  • home router
  • dynamic DNS provider

The home router has WireGuard port forwarded to server, with no re-mapping (I'm using the default 51820). It's also providing DHCP services to my home network, using the 192.168.1.0/24 network.

The server is running the dynamic DNS client (keeping the dynamic domain name updated to my public IP), and I have a CNAME record on vpn.custom.domain pointing to the dynamic DNS name (which is an awful random string of characters). I also have server.custom.domain with an A record pointing to 10.30.0.1. All my DNS records are in public DNS, so there’s no need to change the DNS settings on the computer or phone, or to use DNS overrides with WireGuard.
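In zone-file terms those records look roughly like this (the dynamic DNS target shown is a made-up placeholder, since the real one is a random string):

```
vpn.custom.domain.     300  IN  CNAME  abc123xyz.some-ddns-provider.example.
server.custom.domain.  300  IN  A      10.30.0.1
```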

Immich config:

version: "3.8"

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:release
    entrypoint: ["/bin/sh", "./start-server.sh"]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    ports:
      - target: 3001
        published: 2283
        host_ip: 10.30.0.1
    depends_on:
      - redis
      - database
    restart: always
    networks:
      - immich

WireGuard is configured using wg-quick (/etc/wireguard/wg0.conf):

[Interface]
Address = 10.30.0.1/16
PrivateKey = <server-private-key>
ListenPort = 51820

[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.30.0.12/32

[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.30.0.11/32

Start WireGuard with systemctl enable --now wg-quick@wg0.

Phone WireGuard configuration (iOS):

[Interface]
Name = vpn.custom.domain

Private Key = <phone private key>
Public Key = <phone public key>

Addresses = 10.30.0.12/32
Listen port = <blank>
MTU = <blank>
DNS servers = <blank>

[Peer]
Public Key = <server public key>
Pre-shared key = <blank>
Endpoint = vpn.custom.domain:51820
Allowed IPs = 10.30.0.0/16
Persistent Keepalive = 25

[On Demand Activation]
Cellular = On
Wi-Fi = On
SSIDs = Any SSID

This connection is then left always enabled, and comes on whenever my phone has any kind of network connection.

My laptop (running Linux) is also using wg-quick (/etc/wireguard/wg0.conf):

[Interface]
Address = 10.30.0.11/32
PrivateKey = <laptop private key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.custom.domain:51820
AllowedIPs = 10.30.0.0/16

My wife's Windows laptop is configured using the official WireGuard Windows app, with similar settings.

No matter where we are (at home, on a WiFi hotspot, or using cellular data) we access Immich over the VPN: http://server.custom.domain:2283/.

Let me know if you have any further questions.

butitsnotme ,

If you’re seeing an OOM killer message, note that it doesn’t necessarily kill the problem process. By default the kernel hands out memory upon request, regardless of whether it has RAM to back the allocation. When a process then writes to that memory (at some later time) and the kernel determines that there is no physical RAM to store the write, it invokes the OOM killer, which selects a process and kills it. MySQL (and MariaDB) use large quantities of RAM for cache, and because by default the kernel lies about how much is available, they often end up using more than the system can handle.

If you have many databases in containers, set memory limits for those containers; that should make all the databases play nicer together. Additionally, you may want to disable overcommit in the kernel. This will cause the kernel to return an out-of-memory error to a process attempting to allocate RAM, and to stop lying about free RAM to processes that ask, often greatly increasing stability.
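As a sketch of both knobs (the service name and limit values are placeholders to adapt, not from my setup):

```shell
# In docker-compose.yml, cap a database container's memory (example values):
#   services:
#     database:
#       mem_limit: 1g

# Strict overcommit accounting: allocations beyond swap + ratio% of RAM fail
# immediately instead of being OOM-killed later. Tune the ratio for your host.
sysctl vm.overcommit_memory=2
sysctl vm.overcommit_ratio=80
# Persist via a file in /etc/sysctl.d/ to survive reboots.
```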

butitsnotme ,

I back up to an external hard disk that I keep in a fireproof and water resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up with borg, all into one repository. The backup is triggered by a udev rule so it happens automatically when I plug the drive in; the backup script uses ntfy.sh (running locally) to let me know when it is finished so I can put the drive back in the safe. I can share the script later, if anyone is interested.
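The triggering rule is just a one-liner; a rough sketch (the UUID, file name and unit name are placeholders for whatever your drive and service are called):

```
# /etc/udev/rules.d/99-backup.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", \
  RUN+="/usr/bin/systemctl --no-block start backup.service"
```

udev RUN commands must return quickly, hence kicking off a systemd service (which runs the actual script) rather than running borg directly from the rule.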

butitsnotme ,

I followed the guide found here, however with a few modifications.

Notably, I did not encrypt the borg repository, and heavily modified the backup script.

#!/bin/bash -ue

# The udev rule is not terribly accurate and may trigger our service before
# the kernel has finished probing partitions. Sleep for a bit to ensure
# the kernel is done.
#
# This can be avoided by using a more precise udev rule, e.g. matching
# a specific hardware path and partition.
sleep 5

#
# Script configuration
#

# The backup partition is mounted there
MOUNTPOINT=/mnt/external

# This is the location of the Borg repository
TARGET=$MOUNTPOINT/backups/backups.borg

# Archive name schema
DATE=$(date '+%Y-%m-%d-%H-%M-%S')-$(hostname)

# This is the file that will later contain UUIDs of registered backup drives
DISKS=/etc/backups/backup.disk

# Find whether the connected block device is a backup drive
for uuid in $(lsblk --noheadings --list --output uuid)
do
        if grep --quiet --fixed-strings "$uuid" "$DISKS"; then
                break
        fi
        uuid=
done

if [ -z "$uuid" ]; then
        echo "No backup disk found, exiting"
        exit 0
fi

echo "Disk $uuid is a backup disk"
partition_path=/dev/disk/by-uuid/$uuid
# Mount file system if not already done. This assumes that if something is already
# mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
# it was mounted somewhere else.
mountpoint --quiet "$MOUNTPOINT" || mount "$partition_path" "$MOUNTPOINT"
drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
echo "Drive path: $drive"

# Log Borg version
borg --version

echo "Starting backup for $DATE"

# Make sure all data is written before creating the snapshot
sync


# Options for borg create
BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"

# No one can answer if Borg asks these questions, it is better to just fail quickly
# instead of hanging.
export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no


#
# Create backups
#

function backup () {
  local DISK="$1"
  local LABEL="$2"
  shift 2

  local SNAPSHOT="$DISK-snapshot"
  local SNAPSHOT_DIR="/mnt/snapshot/$DISK"

  local DIRS=""
  while (( "$#" )); do
    DIRS="$DIRS $SNAPSHOT_DIR/$1"
    shift
  done

  # Make and mount the snapshot volume
  mkdir -p $SNAPSHOT_DIR
  lvcreate --size 50G --snapshot --name $SNAPSHOT /dev/data/$DISK
  mount /dev/data/$SNAPSHOT $SNAPSHOT_DIR

  # Create the backup
  borg create $BORG_OPTS $TARGET::$DATE-$DISK $DIRS


  # Check the snapshot usage before removing it
  lvs
  umount $SNAPSHOT_DIR
  lvremove --yes /dev/data/$SNAPSHOT
}

# usage: backup <lvm volume> <snapshot name> <list of folders to backup>
backup photos immich immich
# Other backups listed here

echo "Completed backup for $DATE"

# Just to be completely paranoid
sync

if [ -f /etc/backups/autoeject ]; then
        umount $MOUNTPOINT
        udisksctl power-off -b $drive
fi

# Send a notification
curl -H 'Title: Backup Complete' -d "Server backup for $DATE finished" 'http://10.30.0.1:28080/backups'

Most of my services are stored on individual LVM volumes, all mounted under /mnt, so immich is completely self-contained under /mnt/photos/immich/. The last line of my script sends a notification to my phone using ntfy.

butitsnotme ,

See my other reply here.
