@TCB13@lemmy.world


Pros and cons of Proxmox in a home lab?

Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like in most home lab setups it’s overkill. But I feel like there may be something I’m missing. Let’s say I run my home lab on two or three different SBCs. Main server is an x86 i5 machine with 16 GB of memory and the others...

TCB13 , (edited )

If you know your way around Linux, you most likely don’t need Proxmox and its pseudo-open-source model... you can try Incus / LXD instead.

Avoid Proxmox and save yourself a LOT of headaches down the line. Go with Debian 12 + Incus/LXC, which runs VMs and containers very well. Proxmox ships with an old kernel that is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and crash your systems under certain circumstances.
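If it helps, here's a rough sketch of getting Incus running on a fresh Debian 12 box. It assumes the bookworm-backports repository is enabled (the Zabbly package repo is another option) and current Incus CLI syntax; adjust to taste:

# Install Incus from Debian 12 backports
apt install -t bookworm-backports incus

# Let your regular user manage Incus without sudo (placeholder username)
adduser youruser incus-admin

# Interactive first-run setup: storage pool, network bridge, etc.
incus admin init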

LXD/Incus provides a management and automation layer that really makes things work smoothly - essentially what Proxmox does, but properly done. With Incus you can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes).

Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

I draw your attention to containers (not Docker) - LXC containers - because for most people full virtualization isn't even required. In a small homelab, if you can have containers that behave like full operating systems (minus the kernel), including persistence, VMs might not be required. Either way, LXD/Incus will allow for both and you can easily mix and match and use what you require for each use case. Hell, you can even run Docker inside an LXC container.
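To give you an idea of how unified it feels, here's a quick sketch (instance names are placeholders; it assumes the default images: remote that Incus ships with):

# System container: a full Debian userspace sharing the host kernel
incus launch images:debian/12 nas-ct

# Full VM: same command, just add --vm
incus launch images:debian/12 ha-vm --vm

# Same tooling for both from here on
incus exec nas-ct -- apt install -y samba
incus snapshot create nas-ct before-upgrade
incus export nas-ct nas-backup.tar.gz

# Nesting Docker inside an LXC container is one flag away
incus config set nas-ct security.nesting=true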

For example, I virtualize the official HomeAssistant image with Incus because we all know how hard it is to get that thing running, however my NAS / Samba shares are just an LXD Debian 12 container with Samba4, Nginx and FileBrowser. Same goes for the torrent client, which has its own container. Some other service I've exposed to the internet also runs in a full VM for isolation.

Like Proxmox, LXD/Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt; it is about augmenting them so they become easier to manage at scale and overall more efficient. I can guarantee you that most people running Proxmox today will eventually move to Incus and never look back. It works way better: true open source, no bugs, no delayed security updates, no BS licenses and way less overhead.

Also, let's consider something: why use Proxmox when half of its technology (the container part) was made by the same people who made LXD/Incus? I mean, Incus is free, well funded, can be installed on a clean Debian system with way less overhead and also delivers both containers and VMs.

Yes, there's an optional WebUI for it as well!

https://lemmy.world/pictrs/image/9caa6ea8-17b1-48f6-a8c2-ff3f606f3482.png
https://lemmy.world/pictrs/image/a5a110b2-ed6f-431f-a767-0a21fb337a6b.png

Some documentation for you:

TCB13 ,

C'mon just move to Incus: https://lemmy.world/comment/10896868 :P

TCB13 ,

You need to understand what Proxmox gives you, which primarily is the ability to run/manage/backup/etc. VMs easily

Yeah, and after understanding what it gives you, you move to Incus, because while it might be a bit harder to set up it delivers around 80% of what Proxmox does without the overhead, mangled kernel and licensing issues.

https://cockpit-project.org/ also does VMs and can work for people without cluster needs.

TCB13 ,

I'm glad to know that I could help.

I like that I can switch out my distros underneath Incus instead of being stuck on one weird kernel

This is an interesting take that I never considered before. My experience (be it corporate or at home) is usually around Debian machines running Incus and I never had the need to replace the distro underneath it.

TCB13 ,

Well, I understand your POV... but real software freedom instead of messages asking you to buy a license and a questionable kernel is always a good choice :P

TCB13 ,

You can put Incus on a lot of different systems. Don’t like systemd? Put it on Void. Want a declarative setup? NixOS. Minimalist? Alpine.

This is great, yeah.

TCB13 ,

Typically I just run binaries of the services I use, and I don’t tend to use docker or other things

That's essentially what I do in my NAS with LXD, it's a great use case for it.

Enjoy.

TCB13 ,

Check the bottom of my reply; there’s a link there with my experience over the years.

TCB13 ,

LXC is worse than virtualization as it pins to a single core instead of getting scheduled by the kernel scheduler. It also is quite slow and dated. Either run Podman, Docker or full VMs.

First, what you're saying about the scheduler isn't even what happens by default; that was some crap that Proxmox pulled when they migrated from OpenVZ to LXC. To be fair, they had a bunch of more or less valid reasons to force that configuration, but again it was due to kernel-related issues that were affecting Proxmox more than regular Ubuntu, and those issues were solved around the end of 2021.

Now, Docker and LXC serve different purposes and aren't replacements for each other. Docker is a stateless application container solution, while LXC provides full, persistent containers aimed at running complete operating systems...

Docker and LXC share a bunch of underlying technologies, and in the beginning Docker even used LXC as its backend; they later moved to their own execution environment, libcontainer, because they weren't using all the features that LXC provided and wanted more control over the implementation.

For those who really need full systems, LXC is definitely faster than a VM. Your argument assumes everything can and should be done inside Docker/Podman, when that's very far from the reality. The Docker guys have written a very good article showcasing the differences and optimal use cases for both.

Here are two quotes for you:

LXC is especially beneficial for users who need granular control over their environments and applications that require near-native performance. As an open source project, LXC continues to evolve, shaped by a community of developers committed to enhancing its capabilities and integration with the Linux kernel. LXC remains a powerful tool for developers looking for efficient, scalable, and secure containerization solutions. Efficient access to hardware resources (...) Virtual Desktop Infrastructure (VDI) (...) Close to native performance, suitable for intensive computational tasks.

Docker excels in environments where deployment speed and configuration simplicity are paramount, making it an ideal choice for modern software development. Streamlined deployment (...) Microservices architecture (...) CI/CD pipelines.

Anyways...

It also ships with a newer kernel than Debian although it shouldn’t matter as you are using it for virtualization.

It matters, trust me. Once you start requiring modules it will suddenly matter. Either way, even if they ship a kernel that is newer than Debian's, it is so fucked at that point that you'll be better off with whatever Debian provides out of the box.

TCB13 , (edited )

If I connect it to my computer using a SATA to USB adapter instead of directly to the computer’s SATA, can it somehow affect the result of this scan?

It depends on how much power the disk requires and how much power the USB port can deliver. Also note that USB-A is the worst connector out there when it comes to mechanical reliability - it only takes a finger on the plug to screw up whatever data transfer is going on.

For external disks (both 2.5" and 3.5") I've got a bunch of these powered USB disk enclosures. They've got a good chip, are made of metal and have a USB-B 3 port. You can connect those to any USB-A device and you'll know that only one side might fail... if you've got USB-C, a cable like this tends to be more reliable.

Another good option, if you've got USB-C and want something more portable, is to get a USB-C disk enclosure, as those will be able to deliver more power and be more reliable.

PS: avoid whatever garbage Orico is selling, Inateck is much better.

TCB13 OP ,

Thanks for the answer.

I block internet access to all Amcrest cameras directly but can still access their local IP.

Yes, and what kind of management do they provide in their WebUI? Can the camera be 100% operated using the WebUI, without Blue Iris or any other software?

TCB13 OP ,

IP cameras allow you to access the device via web gui where you can view and configure the camera for your needs. Once I’ve set them up I only ever access them again through frigate.

Thanks for the answer. What kind of management do they provide in their WebUI? Can the camera be 100% operated using the WebUI, standalone, without anything else? I'm just trying to understand how dependent the cameras are on external software (be it their apps, cloud or HA).

TCB13 OP ,

Thanks for the clarification. My idea was to run the cameras on an isolated VLAN, so no issues with calling home. I just wanted to make sure they can be operated fully locally.

TCB13 OP ,

Following your advice I found a video for another model and it seems complete; even motion detection was available in the WebUI. I’m assuming you’re on the exact model I was looking at and it doesn’t have that feature, am I right?

TCB13 ,

WordPress + WooCommerce.

TCB13 , (edited )

WooCommerce powers 38% of the online stores out there...

WordPress’s data structure is not properly suited for an e-commerce site

To be fair, WordPress’ data structure is not properly suited for anything, not even posts and pages, let alone block structures and whatever, but the truth is that it works and delivers results. Same goes for WooCommerce: if you don't want to be held hostage by Shopify and your objective is actually selling shit instead of spending all your time developing store software, then WooCommerce is the way to go.

WooCommerce also has an extensive extension list, integrations with all the payment providers out there and it's easy to get help / support be it free or paid.

and it’s a resource hog.

Did you ever try Magento or PrestaShop? Doesn't seem like you did, as those are store-first solutions and they're all slower and more of a resource hog than WP can ever be.

TCB13 ,

How do I know all of this? Well I happen to work with WordPress professionally as the lead developer for an agency where I manage literally hundreds of WordPress sites and host all of them myself on servers I manage for them (not shared hosting reselling).

I used to have the same role and before that I managed a shared hosting provider. At that job the majority of websites hosted there were WordPress and customers would pay us to develop or fix stuff sometimes.

The vast majority of those “extensions” (plugins) are horribly made and are security nightmares,

Yes, this is true and a problem, but at the same time the WordPress ecosystem, as you know, gets shit done.

I also had some experience with PrestaShop/Magento and they are even worse than WordPress. You still have the performance issues, the poorly developed 3rd-party themes and plugins, and a convoluted API.

TCB13 ,

https://github.com/philpagel/debian-headless

It is possible but I wouldn’t do it. Too much effort for too little result.

Just plug your main monitor / keyboard into the server, run the setup and don’t install a DE. Afterwards, log in, enable SSH, unplug the monitor and do whatever you need over SSH.

Let’s face it, you’ll have to do this procedure once every xyz years; there’s no point in complicating this stuff. Also, depending on your motherboard, you may or may not be able to boot into the installer without a screen / keyboard attached. Another option is to install the OS on another computer and then move the hard drive to the target server - this is all fine until you run into UEFI security or some other detail and it doesn’t boot your OS.
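For reference, the "enable SSH, then go headless" step is just a couple of commands on a stock Debian install (a sketch; package and service names assume Debian/Ubuntu, the user and IP are placeholders):

# Install and enable the SSH server, as root, right after the first boot
apt install openssh-server
systemctl enable --now ssh

# Note the address you'll use from your desk
ip -br addr

# From now on everything can be done remotely
ssh youruser@192.168.1.50   # replace with your user and the server's actual IP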

TCB13 ,

The technology has "been there" for a while and it's trivial to set up what you're asking for; the issue is that games have anti-cheat engines that will get triggered by the virtualization and ban you.

TCB13 ,

@foremanguy92_ ,

Step 1: get a cheap VPS, or even a free one (https://www.oracle.com/cloud/free/)

Step 2: If you've got a static IP at home, great; if you don't, get a dynamic DNS name from https://freedns.afraid.org/ or https://www.duckdns.org/
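As an example, keeping a DuckDNS name pointed at your home IP can be a single cron entry on the home server (the subdomain and token below are placeholders; double-check the update URL against DuckDNS's own install page):

# /etc/cron.d/duckdns - refresh the dynamic DNS record every 5 minutes
*/5 * * * * root curl -fsS "https://www.duckdns.org/update?domains=myhome&token=YOUR-TOKEN&ip=" >/dev/null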

Step 3: Install nginx on the VPS and configure it as a reverse proxy to your home address. Something like this:

server {
    listen 80;
    server_name example.org; # your real domain name you want people to use to access your website
    location / {
        proxy_pass http://home-dynamic-dns.freeprovider... # replace with your home server IP or Dynamic DNS.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }
}

Step 4: Point your A record of example.org to your VPS.

Step 5: there's a potential security issue with this setup (see https://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from) and to get around it you can do the following in the home server's nginx config:

http {
(...)
        real_ip_header    X-Real-IP;
        set_real_ip_from  x.x.x.x; # Replace with the VPS IP address.
}

This will make sure only the VPS is allowed to override the real IP of the client.

Step 6: Once your setup works, you may increase your security by using SSL and disabling plain HTTP. Set up Let's Encrypt on both servers to get valid SSL certificates for the real domain and the dynamic DNS one.

Proceed to disable plain-text / HTTP traffic. To do this, simply remove the entire server { listen 80 section on both servers and replace it with server { listen 443 ssl; so it listens only for HTTPS traffic - something like the sketch below.
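For reference, a minimal sketch of what the VPS server block might look like after the switch to HTTPS; the certificate paths assume certbot's default Let's Encrypt layout and the upstream name is a placeholder:

server {
    listen 443 ssl;
    server_name example.org;

    # paths created by certbot for example.org (assumes the default certbot layout)
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        proxy_pass https://home-dynamic-dns.example; # your home server IP or dynamic DNS name
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}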

Step 7: set your home router to allow incoming traffic on port 443 and forward it to the home server;

Step 8: set the home server's firewall to only accept traffic on port 443 from outside the LAN subnet when it comes from the VPS IP, and drop everything else - roughly the nftables sketch below.
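A rough nftables version of that policy (addresses are placeholders: 192.168.1.0/24 for the LAN and 203.0.113.10 for the VPS; adjust to your network):

# basic input chain that drops by default
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# always allow loopback and already-established traffic
nft add rule inet filter input iifname "lo" accept
nft add rule inet filter input ct state established,related accept

# allow everything from the LAN itself
nft add rule inet filter input ip saddr 192.168.1.0/24 accept

# port 443 only when it comes from the VPS
nft add rule inet filter input ip saddr 203.0.113.10 tcp dport 443 accept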


Another alternative to this is to set up a WireGuard tunnel between your home server and the VPS and have the reverse proxy send the traffic through that tunnel (change proxy_pass to the IP of the home server inside the tunnel, e.g. proxy_pass http://10.0.0.2). This has two advantages: 1) you don't need to set up SSL on your home server, as all the traffic will flow encrypted over the tunnel, and 2) it doesn't require opening a local port for incoming traffic on the home network... however it also has two drawbacks: 1) you'll need a better VPS because WG requires extra processing power, and 2) your home server will have to keep the tunnel connected and working and deal with it whenever it fails. Frankly, I wouldn't bother to set up the tunnel, as your home server will only accept traffic from the VPS IP anyway, so you won't gain much there in terms of security.
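If you do go the tunnel route, a bare-bones WireGuard sketch would look something like this (keys, addresses and the endpoint are placeholders; 10.0.0.1 is the VPS end and 10.0.0.2 the home server):

# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.0.0.2/24
PrivateKey = <home-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.org:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25

Bring both sides up with wg-quick up wg0 and point the VPS's proxy_pass at http://10.0.0.2 as described above.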

TCB13 ,

You aren't wrong, but the things you're mentioning are always an issue, even if he was running the entire website on a VPS.

VPS happily tries to forward 1Gbits, fully saturating your home ISP line. Now you’re knocked offline.

Yeah, but at the same time any VPS provider worth it will have some kind of firewalling in place and block a DDoS like that one. People usually don't ever notice this, but big providers actually have those measures in place and do block DDoS attacks without their customers ever noticing. If they didn't, hackers would just overrun a few IPs, take all the bandwidth the provider has and take all their customers down that way.

I'm not saying anyone should rely only on the VPS provider's ability to block such things, but it's still there.

The OP should obviously take a good look at nftables' rate limiting options and at fail2ban. This should be implemented both on the VPS and on his home server to help mitigate potential DDoS attacks - something along the lines of the sketch below.
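As a rough idea of the nftables side on the VPS, where port 443 is open to the world (the threshold is an arbitrary example and it assumes an inet filter input chain like the one sketched earlier; tune it to the service):

# crude example: drop new connections to port 443 once they exceed 20 per second overall
nft add rule inet filter input tcp dport 443 ct state new limit rate over 20/second drop

# fail2ban bans misbehaving clients based on log patterns
apt install fail2ban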

Say someone abuses a remote code execution bug from the application you’re hosting in order to create a reverse shell to get into your system, this complex stack introduced doesn’t protect that.

It doesn't, and it was never supposed to mitigate that, as the OP only asked for a way to reverse proxy / hide his real IP.

HP bricks ProBook laptops with bad BIOS delivered via automatic updates — many users face black screen after Windows pushes new firmware ( www.tomshardware.com )

On May 26, a user on HP's support forums reported that a forced, automatic BIOS update had bricked their HP ProBook 455 G7 into an unusable state. Subsequently, other users have joined the thread to sound off about experiencing the same issue....

TCB13 ,

The irony here is that if you've got an HP laptop you'll still need to download certain drivers from HP to get things working 100%. For instance, you may get all the hardware working after running Windows Update, but your special brightness or whatever keys won't work unless you go to HP's website and download a thing.

TCB13 ,

User error, should've got an EliteBook instead of that cheaper thing. :P

TCB13 ,

So goddamn sick of communist populists disguised as middle ground socialists. Goes both ways.

TCB13 ,

🌈 Awww how can one not love this communist lemmy 🌈

TCB13 ,

Are you using an Amcrest IP4M-1041B? Can you have a look at this? https://lemmy.world/post/16883143. Thank you.


TCB13 ,

Besides your take anything else against them?

TCB13 ,

Yeah, but aren’t others as well? My idea was to run them on an isolated VLAN without Internet access. What brand would you recommend instead?

TCB13 ,

Well, nothing is reliable over USB type A. If you don't want to DIY you can get a USB JBOD with type-c like this one or that one or this cheaper one. They'll get the job done for a price. :)

However, there are easy ways to get reliable SATA ports from the M.2 slots that your Framework has. NVMe to 6 SATA ports: https://www.aliexpress.com/item/1005004263885851.html

To power the disks you can use ANY standard ATX power supply (get a branded gold-rated one second hand for 20$). To make sure the PSU stays on, just jumper a wire between the green wire and any black wire.

Another option for power is to get a cheap 12V power supply and a step-down DC/DC converter to provide 5V. If you don't have one, a SATA power cable like this is helpful: simply cut the white plug and attach the red wire (5V) to the output of the DC/DC converter and the yellow one (12V) directly to the power supply.

There are also these dual-output power supplies that you can regulate to 12V + 5V, but frankly I would just go for the option above as it will be safer.

Make sure you check every voltage and polarity before plugging anything into your power supply!!

TCB13 ,

Excellent explanation, however, technically it does not constitute an "odd spot." Rather, it represents a "100% acceptable and evident position" as it brings benefits to all stakeholders, from accounting to the CEO. Moreover, it is noteworthy that investing in services or leasing arrangements increases expenditure, resulting in reduced tax liabilities due to lower reported profits. Compounding this, the prevailing high turnover rate among CEOs diminishes incentives for making significant long-term investments.

In certain instances, there is also plain corruption. This occurs when a supplier offering services such as computer and server leasing or software, as well as company car rentals, is owned by a friend or family member of a C-level executive.

TCB13 , (edited )

And then I am the one exaggerating... I'll say it again, Proton is just another company that managed to find clever ways to profit from a group of people who value things such as "privacy".

They're just a very large marketing effort with little to nothing to show, but everyone is convinced they're actually protecting users while they keep pushing proprietary / half-open and non-standard stuff as solutions to problems already solved with truly open tools, standards and protocols.

TCB13 ,

now, imagine if this user were using Gmail instead of Proton.

Now imagine if the user was using Gmail + PGP... same end result. Proton delivered no extra value whatsoever.

Stirling-PDF: Locally hosted web application that allows you to perform various operations on PDF files ( github.com )

This is a robust, locally hosted web-based PDF manipulation tool using Docker. It enables you to carry out various operations on PDF files, including splitting, merging, converting, reorganizing, adding images, rotating, compressing, and more. This locally hosted web application has evolved to encompass a comprehensive set of...

TCB13 ,

This is a very cool project, but it would be cool to see it all in JS / client side instead of depending on a server-side Java powered component.

The Best Secure Email Providers in 2024 ( blog.thenewoil.org )

Like it or not, email is a critical part of our digital lives. It’s how we sign up for accounts, get notifications, and communicate with a wide range of entities online. Critics of email rightfully point out that email suffers from a significant number of flaws that make it less than ideal, but that doesn’t change the...

TCB13 ,

The guys who decided to block GrapheneOS for no reason and don't provide reasonable explanations nor fix the issue.. yeah right.

TCB13 ,

You will never get the same font rendering on Linux as on Windows, because Windows' font rendering (ClearType) is very strange, complicated and covered by patents.

Font rendering is also kind of a subjective thing. To anyone who is used to macOS, Windows' font rendering looks wrong as well. Apple's font rendering renders fonts much closer to how they would look printed out. Windows tries to increase readability by reducing blurriness and aligning everything perfectly with pixels, but it does this at the expense of accuracy.

Linux's font rendering tends to be a bit behind, but is likely to be more similar to macOS than to Windows rendering as time goes on. The fonts themselves are often made available by Microsoft for use on different systems; it's just the rendering that is different.

For me, on my screens, just installing Segoe UI and tweaking the hinting / antialiasing under GNOME settings makes it really close to what Windows delivers. The default Ubuntu font, Cantarell and Sans don't seem to be very good fonts for a great rendering experience.
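For what it's worth, on recent GNOME versions those tweaks can also be done from the command line (a sketch; the font name/size is whatever you actually installed, and the schema keys assume GNOME 41 or newer):

# assumes the Segoe UI font files were already copied to ~/.local/share/fonts
gsettings set org.gnome.desktop.interface font-name 'Segoe UI 10'
gsettings set org.gnome.desktop.interface font-hinting 'slight'
gsettings set org.gnome.desktop.interface font-antialiasing 'rgba'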

The following links may be of interest to you:

TCB13 ,

but never really thought to use it in my home network

Because you don't need it. OPNsense and pfSense may make sense in some cases, however you're running a small network and you most likely don't require them. OpenWRT will provide you with a much cleaner open-source experience and also allow for all the customization you would like. Another great advantage of OpenWRT is the ability to install 3rd-party stuff on your router; you may even use QEMU to virtualize stuff like your Pi-hole on it, or simply run Docker containers.
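If the router has enough flash and RAM, the Docker route is only a couple of packages away (a sketch; package names are from the standard OpenWrt feeds and the Pi-hole bit is just an illustration):

# on the OpenWrt router
opkg update
opkg install dockerd docker luci-app-dockerman

# then run something small, e.g. Pi-hole with its web UI mapped to port 8080
docker run -d --name pihole -p 53:53/udp -p 53:53/tcp -p 8080:80 pihole/pihole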

TCB13 ,

Yes, this mostly works as a managed DNS solution for enterprise networks that actually does what people in large organizations need and solves a ton of issues.

Nextcloud appreciation post

After months of waiting, I finally got myself an instance with Libre Cloud. I was expecting basic file storage with a few goodies but boy, this is soooo much more. I am amaze by how complete this is!!! Apps let me configure my instance to fit everything I need, my workflow is now crazy fast and I can finally say goodbye to...

TCB13 ,

The point is that every single feature they try to add ends up as yet another buggy thing that never gets fixed. They should focus on making the core things work decently instead of adding new features. After all this time they didn't get the sync to be as reliable as Syncthing; why would they venture into webmail and whatnot?

TCB13 ,

They improved it? You can’t even add a bullet list. There's no way to have a full-screen typing experience. It’s slow like no other and basic formatting tools are already hidden. Is that what you call improvements?

TCB13 ,

You clearly never used those extensions.
