
Should I stick with Docker Swarm for self-hosting?

Hi! I'm starting out with self-hosting. I was setting up Grafana for system monitoring of my mini-PC. However, I ran into the issue of keeping credentials secure in my Docker Compose file. I ended up using Docker Swarm since it was the path of least resistance. I've managed to set up a Grafana/Prometheus/Node stack and it's working...

Lem453 , (edited )

When I was starting out I almost went down the same pathway. In the end, docker secrets are mainly useful when the same key needs to be distributed around multiple nodes.

Storing the keys locally in an env file that is only accessible to the docker user is close enough to the same thing for home use and greatly simplifies your setup.

I would suggest using a folder for each stack that contains one Docker Compose file and one env file. The env file contains the passwords; the rest of the env variables are defined in the compose file itself. Exclude the env files from your git repo (if you use one for version control) so you never check a secret into git (in practice I have one folder for compose files that is in git, and my env files are stored in a different folder that isn't).
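As a concrete sketch of that layout (the stack name and values are made up), the compose file references the secret by variable name and Docker Compose substitutes it from the `.env` file sitting next to it:

```yaml
# grafana/docker-compose.yml — one folder per stack
services:
  grafana:
    image: grafana/grafana:latest
    environment:
      # non-secret settings live right in the compose file
      - GF_SERVER_ROOT_URL=https://grafana.example.com
      # the secret is substituted from grafana/.env at deploy time
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
```

The matching `grafana/.env` just contains a `GRAFANA_ADMIN_PASSWORD=` line; lock it down with `chmod 600 .env` and list `.env` in `.gitignore`.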

I do this all via Portainer; it will set up the above folder structure for you.
Each stack is a compose file that portainer pulls from my self hosted gitea (on another machine).
Portainer creates an env file itself when you add the env variables from the gui.

If someone gets access to your system and is able to access the env file, they already have high-level access and your system is compromised regardless of whether the secrets are encrypted via Swarm or not.

Lem453 ,

True, but the downside of Cloudflare is that they act as a reverse proxy and can see all your HTTPS traffic unencrypted.

Lem453 ,

I like Finamp as my Android music client for Jellyfin.

Help with deployment

Hello nerds! I'm hosting a lot of things on my home lab using docker compose. I have a private repo in GitHub for the config files. This is working fine for me, but every time I want to make a change I have to push the changes, then ssh to the lab, pull the changes, and run docker compose up. This is of course working fine, but...

Lem453 ,

I would strongly suggest a second device, like an RPi with Gitea. That's what I have.

I use portainer to pull straight from git and deploy

Lem453 ,

Yes, you should use something that makes sense to you but ignoring docker is likely going to cause more aggravation than not in the long term.

Lem453 ,

Not to mention the advantage of infrastructure as code. All my docker configs are just a dozen or so text files (compose). I can recreate my server apps from a bare VM in just a few minutes, then copy the data over to restore a backup, revert to a previous version, or migrate to another server. Massive advantages compared to bare metal.
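As a sketch of that rebuild flow (the repo URL and paths are hypothetical, and it assumes git and Docker are already on the fresh VM):

```shell
# pull the compose files back down from version control
git clone https://gitea.example.com/homelab/compose.git
cd compose

# restore application data into the paths the volumes expect
# (source path is illustrative)
rsync -a /mnt/backup/appdata/ /srv/appdata/

# bring each stack back up, one folder per stack
for stack in */; do
  docker compose --project-directory "$stack" up -d
done
```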

Lem453 ,

There is an issue with your database persistence. The file is being uploaded but it's not being recorded in your database for some reason.

Describe in detail what your hardware and software setup is, particularly the storage and OS.

You can probably check this by trying to upload something and then checking the database files to see whether the last-modified date changes.
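A quick way to do that check from the host. The directory and file here are stand-ins created just for illustration; point the same commands at wherever your app actually keeps its database:

```shell
# stand-in data directory and database file for illustration
mkdir -p /tmp/demo-data
printf 'placeholder' > /tmp/demo-data/app.db

# a healthy database should show a fresh modification time after an upload
stat -c 'size=%s bytes, modified %y' /tmp/demo-data/app.db

# list anything under the data dir touched in the last 10 minutes
find /tmp/demo-data -type f -mmin -10
```

If uploads succeed but the database file's size and mtime never change, the writes aren't reaching the database, which matches the symptom above.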

Lem453 ,

Start with this to learn how snapshots work

https://fedoramagazine.org/working-with-btrfs-snapshots/

Then read here to learn how to make automatic snapshots with retention

https://ounapuu.ee/posts/2022/04/05/btrfs-snapshots/

I do something very similar with ZFS snapshots and deduplication on. I take one every 5 minutes and keep an hour's worth, then keep 24 hourly snapshots each day, daily snapshots for a month, etc.

For backups to remote locations, you can send a snapshot offsite.
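A rough crontab-style sketch of that schedule (the dataset name `tank/data` is hypothetical; in practice a tool like sanoid or zfs-auto-snapshot handles both the scheduling and the retention for you):

```
# every 5 minutes, a time-stamped snapshot (% must be escaped in cron)
*/5 * * * *  zfs snapshot tank/data@frequent-$(date +\%H\%M)
# one hourly and one daily snapshot on their own schedules
0 * * * *    zfs snapshot tank/data@hourly-$(date +\%d-\%H)
0 0 * * *    zfs snapshot tank/data@daily-$(date +\%Y-\%m-\%d)
```

Pruning old snapshots (the retention part) is a separate cleanup job, and `zfs send` piped to `zfs recv` over SSH covers the offsite step.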

Lem453 ,

I think the biggest thing you're not taking into account is the amount of hardware they have compared to anyone else.

Of course Apollo would be shut down if they were losing Saturn Vs left and right. Each of those is $1.2 billion in 2019 dollars and they launched 13 of them in total. They are way too valuable.

The total estimated cost to date for the entire Starship program is $5 billion, and they have built around 30 Starships. They already have another one ready to go now; the only reason not to launch right away is that it needs upgrades based on the data they just collected.

You're also assuming that with more time and analysis they could predict things they have just discovered from a real launch. No man-made object of this size has ever made a controlled re-entry back to Earth. Not by a long shot.

The closest is the Space Shuttle, which had lots of issues that couldn't be fixed because each launch was so expensive it had to carry real payload (and people), and changes to human-rated flight hardware are near impossible.

The main thing that's different here is that the cost of a launch is way less than the cost of a year of lab testing and still not knowing the answer because it's never been done before. That's the hardest paradigm shift to accept and is true only of SpaceX and no one else right now until they go full force into reusable rockets.

Lem453 ,

The Apollo comparison above is even more ridiculous when you consider that Starship made it to orbit and could've deployed a payload. The part that 'failed' was the soft landing, and even that didn't fail. Only reuse failed.

Every Saturn V that was launched is currently sitting at the bottom of the ocean.

Taking shots at Starship for failing, even though Saturn V never attempted the same mission parameters, makes no sense.

Starship will likely have had 100+ missions before putting a human on it. Would you rather fly on something that's proven itself 100 times or something that is flying for the first time?

A social app for creatives, Cara grew from 40k to 650k users in a week because artists are fed up with Meta’s AI policies | TechCrunch ( techcrunch.com )

Artists have finally had enough with Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts....

Lem453 ,

This right here. I tried to join Mastodon today.

Download the most recommended app, Moshidon

Open the app and get asked which instance I want to join. There are no suggestions.

Do a search for instances and pick one, go to the website, and register with an email and password. It requires email confirmation. Still waiting on the confirmation link, 4 hours and 2 resends later.

Literally haven't been able to sign up yet.

Even if it had worked, the workflow would have been to switch back to the app, type out the instance, then log in again.

I'm not sure how anyone expects anyone other than the most hardcore to sign up for these services. Maybe that's the point, but if the point is to grow, the user sign-up process is a significant barrier overall.

Lem453 OP ,

Me messing about with other Docker applications. Seafile is one of the first things I set up on my server. I've been adding and playing around with dozens of different apps since then, many of them with numerous containers each. Usually I make the container without defined storage until I get the compose file working, then I set the volumes to the ZFS array. When that happens, the old default Docker volumes remain unused.

Need to remember to delete them periodically.
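The periodic cleanup is two commands (assuming the standard Docker CLI; review the list before pruning, since pruning deletes data):

```shell
# show volumes no longer referenced by any container
docker volume ls --filter dangling=true

# remove every unreferenced volume once nothing in the list matters
docker volume prune
```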

Lem453 OP ,

It's not so much a Seafile issue as it is a feature request for proper checksum verification of the files that are copied. The conditions where it happened were an odd combination of having enough persistent storage, RAM, and CPU but lacking in ephemeral space... I think. The issue is that Seafile failed silently.

Lem453 OP ,

Ya, exactly this. I get optimizing for speed, but there should at least be an option afterwards to check file integrity. Feels like a crucial feature for a critical system.

Lem453 ,

Is it not in the immich_pgdata or immich-app_pgdata folder?

The volumes themselves should be stored at
/var/lib/docker/volumes

For future reference, doing operations like this without backing up first is insane.

Get borgmatic installed to take automatic backups and send them to a backup target like another server or BorgBase.
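A minimal borgmatic config sketch along those lines (the repository URL and paths are placeholders, and the exact schema depends on your borgmatic version):

```yaml
# /etc/borgmatic/config.yaml — illustrative values only
location:
    source_directories:
        - /var/lib/docker/volumes
    repositories:
        - ssh://user@backup-host/./backups.borg

retention:
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6
```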

Secure portal between Internet and internal services

I thought I was going to use Authentik for this purpose but it just seems to redirect to an otherwise Internet accessible page. I'm looking for a way to remotely access my home network at a site like remote.mywebsite.com. I have Nginx proxy forwarding with SSL working appropriately, so I need an internal service that receives...

Lem453 ,

This is the way. This is the video I followed.

https://www.youtube.com/watch?v=liV3c9m_OX8

I use Traefik as my reverse proxy. I have externally accessible domains, and then extra-secure internal-only domains that require a WireGuard connection first as an extra layer of security.

Authentik can be used as a forward auth proxy and doesn't care if it's an internal or external domain.

Apps that don't have good login or user management just get the Authentik proxy for single sign-on (Sonarr, Radarr, etc.).

Apps that have OAuth integration get that for single sign-on (Seafile, Immich, etc.).

To make it work, the video talks about adding both the internal and external domains to your local DNS, so that access works whether you come from outside, over WireGuard, or from inside the LAN.
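That local-DNS step is just split-horizon DNS; in dnsmasq/Pi-hole form it's one line per domain (the hostnames and IP are placeholders for your reverse proxy's LAN address):

```
address=/app.example.com/192.168.1.10
address=/internal.example.com/192.168.1.10
```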

Microsoft is testing Game Pass ads on the Windows 11 Settings homepage ( www.ghacks.net )

Microsoft's announcement: "We are introducing a new Game Pass recommendation card on the Settings homepage. The Game Pass recommendation card on Settings Homepage will be shown to you if you actively play games on your PC. As a reminder – the Settings homepage will be shown only on the Home and Pro editions of Windows 11 and...

Lem453 , (edited )

If you don't want to think about your computer and just want a tool to use, there is Aurora. It's a variant of Fedora, but it uses an immutable file system, which makes it super stable and reliable. If there are any issues, you can easily roll back the entire OS to a previous version.

This is true of all Fedora Atomic desktops. Aurora is a variant that takes it to the next level by making updates and everything else require as little human interaction as possible, so you don't have to worry about how the computer runs and can just use it for your actual tasks.

https://getaurora.dev/
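The rollback itself is a single command on Fedora Atomic systems, since the previous OS deployment is kept on disk:

```shell
# show the current and previous deployments
rpm-ostree status

# set the previous deployment as the default for the next boot
sudo rpm-ostree rollback
```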

Lem453 ,

They advertise as being zero-maintenance, which is a huge deal for many Windows and Mac users who don't want to think about the tool itself; they just want to use it. From the site:

What's the difference between Vanilla Kinoite and Aurora?
Vanilla Kinoite is a very stock experience. Aurora includes many enhancements and tweaks, like included drivers for various printers, network adapters and more as well as included codecs. Aurora also features tweaks to enhance your battery life on a laptop.

Lem453 ,

By zero maintenance they mean you don't even have to hit the update button. It all just happens automatically. Many Linux users won't like that, but many Windows and Mac users will.

Lem453 ,

I'm curious how using ansible to deploy docker containers is easier than just using docker compose?

Ansible makes sense to setup the OS the way it needs to be (file systems, folder structure etc), but why make every container through ansible instead of just making a docker compose and maybe having ansible deploy that?

Even easier is probably to just run something like portainer and run the compose file through there
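The middle ground described above can look like this (the paths are hypothetical, and it uses the community.docker collection): Ansible just delivers the compose file, and Compose does the rest.

```yaml
# playbook snippet: copy the stack definition, then let Compose run it
- name: Copy stack definition to the host
  ansible.builtin.copy:
    src: stacks/grafana/docker-compose.yml
    dest: /opt/stacks/grafana/docker-compose.yml

- name: Bring the stack up
  community.docker.docker_compose_v2:
    project_src: /opt/stacks/grafana
    state: present
```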

Lem453 ,

What makes butter better than btrfs for ostree systems?

Lem453 , (edited )

How do I install this on Fedora? I'm not too keen on curling a bash script and running it. Thanks!

Edit: for fedora atomic, the answer is to download the rpm and overlay it with rpm-ostree install

Lem453 ,

If you submit the rpm to a package repository, then users can just get it from there with rpm-ostree install xpipe.

That requires an overlay but the alternative is a flatpak which won't work for an app like this I think anyways.

Users that install Brew can just get it from there as a proper containerized install rather than an overlay.

The script is definitely not great as the primary way to install; everyone doing that should be doing so very reluctantly. Getting the rpm into package managers will go a long way.

That being said, xpipe is amazing. Only used it for a few hours and already love it and can't believe I didn't have it sooner.

Lem453 ,

Sounds like they actually changed it to Go-language regex syntax instead of Perl syntax.

The documentation certainly makes it sound like they just got rid of regex, but this forum post seems to show otherwise.

https://community.traefik.io/t/pathprefix-regex/21819

I'm definitely in the "wait at least a month before attempting this upgrade" camp...
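If that forum post is right, a v3 rule would use Go's RE2 syntax through the `PathRegexp` matcher; as a compose label it might look like this (the router name and path are illustrative):

```yaml
labels:
  # traefik v3: Go (RE2) regex — no PCRE features like lookahead
  - "traefik.http.routers.myapp.rule=PathRegexp(`^/api/[0-9]+$`)"
```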

Lem453 ,

For others, beware that in Docker, each plugin needs its own container.

I run everything in Docker except for HA, which I run in a VM (HAOS); that makes it super easy to use.

Edit: by plugins I meant add-ons
