lemmyvore

@[email protected]

lemmyvore ,

I have no idea what the people who recommend CPU are smoking. The difference between a GPU with hardware support and doing it on the CPU is huge.

lemmyvore ,

"Sign-up works without non-free JavaScript" is a super weird criterion for selecting an email platform.

lemmyvore ,

I'm still trying to understand what "proprietary JavaScript" means.

Stack Overflow and OpenAI Partner ( files.mastodon.online )

cross-posted from: https://lemmy.ml/post/15315562...

From the official stackoverflow account: We’re thrilled to announce we’re partnering with @OpenAI to bring best in class technical knowledge and the world’s most popular LLM models for AI development together! This groundbreaking partnership with OpenAI will drive our mission to empower the world to develop technology through collective knowledge.
lemmyvore ,

Are we sure this deal is about answering new SO questions with LLM? It's more likely to be a deal where SO sells access to its database to OpenAI so they can use human-generated content for LLM training, and SO gets to use LLM as a more efficient search through its human-generated content.

It's possible they could also choose to delegate the duplicate decision to the LLM but let's be honest, that decision is currently crap anyway.

lemmyvore ,

That's SO's problem going forward. OpenAI already got what they wanted – legal access to SO's database up to this moment, when it's still mostly human.

I didn't say this was a good deal for SO.

lemmyvore ,

They're getting it from the facts. 😄

The question is, where are you getting the "fair" moniker from? Who is it fair for? What makes it so much more fair than the other "models" that it's the only one that deserves to be called that?

lemmyvore ,

It's not an article, it's a propaganda website that tries to say that black is white. Just slapping a "fair" or "open" label on something doesn't make it so. Which brings us back to my questions: if this is what fair looks like, what does that make the software licenses that aren't listed there? Are those "unfair"? To whom?

lemmyvore ,

I love that website. Now I have an easy way to find all the licenses and projects and companies I need to stay away from.

lemmyvore ,

It's GPL, they have to also provide the source. And you benefit from all the rights they do.

"Business" licenses try to prevent competition while still benefiting from free contributions, and pass that off as "fairness". But how is it fair to anybody except that particular company? What about the contributors? If OBS used such a license and reaped all the benefits, would you still contribute to them?

lemmyvore ,

If they don't obey GPL what makes you think they'd obey BSL?

lemmyvore ,

There are often individual apps for various cities and transport organizations.

Traffic has always been a mixed bag. Yeah it's nice to be able to see that street A is more busy than street B. But so can everybody else, and they're all going to use street B now.

lemmyvore ,

The app doesn't control what people do, it just makes recommendations based on busy segments, based on data which is already obsolete by the time it's being used. Ultimately the lemmings will do whatever their lemming brain tells them to.

(That is, assuming the app doesn't actually try to spread people around the various routes. But I doubt that any app maker wants to assume responsibility for that.)

Ultimately traffic apps are mostly useless. You can't "solve" traffic congestion with apps any more than you can make water flow faster through a pipe. Congestion is constrained by available road space and choke points. Google Maps is mostly an excuse for Google to collect location data, with a thin layer of features on top to make it seem worthwhile.

lemmyvore ,

Because it doesn't need anything in GTK 3 and 4. They're either cosmetic changes or UX changes and Gimp has no reason to adopt either.

lemmyvore ,

When the widget toolkit needs explicit and direct support for the graphics server you're doing something very wrong.

lemmyvore ,

It's not a dig at Wayland. You really don't want to have to add specific support for the OS directly in your widget library. There should be an abstraction layer in-between that deals with that. If that layer had been there they wouldn't have to rewrite the whole thing.

lemmyvore ,

Yeah but now people will get right back to spending money on the game. So at the end of the day it's still Sony laughing all the way to the bank.

Shit like this should result in a boycott not in "at least it's not as bad as it could've been".

lemmyvore ,

You don't owe Arrowhead anything. They're not a dog, they're a company who's made bad choices and now has to deal with them.

What use is a good game if you get blocked out or exploited trying to play it? Do you really want to give your money away? Ok, but stop wondering why the industry is going to shit. It's because of gamers with more money than sense.

lemmyvore ,

Uninstall it?

lemmyvore ,

🤦 Then you probably shouldn't uninstall it. When you enter a discussion about an advanced use case people are going to assume you want to manage /etc/resolv.conf and the network interfaces by hand.

lemmyvore ,

Can we see some screenshots? It's hard to work just with someone's idea of "better". Not to mention that font rendering can be tweaked on both Windows and Linux, and we don't know what settings you've changed so far. Oh, and I hope you're comparing the same font, otherwise there isn't much point to the comparison.

lemmyvore ,

There are pros and cons to keeping the proxy on the VPS or at home.

If you keep it at home you get end-to-end encryption from the browser all the way to your home server. Downside: you will not see the IP of the remote client, just the IP of the VPS end of the tunnel, so you won't be able to do IP blocking or diagnostics.

By putting the proxy on the VPS and decrypting HTTPS there you can add remote IPs to connections but you have to keep the TLS certificate on the VPS so in theory someone could mess with it.

A third option is to run a minimal passthrough proxy on the VPS that adds the remote IP to the HTTPS connections without decrypting them. To do this you must use the same proxy at both ends (home and VPS) and both must have the PROXY protocol enabled.
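As a concrete sketch of that third option, the VPS end could be an Nginx "stream" block like this (the home IP and the ports are placeholders, substitute your own):

```
# VPS side: pass TLS through without decrypting it, prepending the client IP
# via the PROXY protocol. The proxy at home must be configured to expect it.
stream {
    server {
        listen 443;
        proxy_pass 198.51.100.20:443;   # placeholder for your home proxy
        proxy_protocol on;
    }
}
```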

I would suggest doing just proxy at home to start with because it's simpler. If you want a GUI use NPM (Nginx Proxy Manager) it's super easy. If you prefer a proxy where you write config by hand use Caddy.
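If you go the Caddy route, a minimal home-proxy config is just one site block per service (the hostnames and backend ports here are examples):

```
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}

nextcloud.example.com {
    reverse_proxy 127.0.0.1:8080
}
```

Caddy will obtain and renew the TLS certificates for those names on its own.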

After you have it working at home you can consider adding the one on the VPS and enabling the PROXY protocol. Although I'm not 100% sure Caddy supports it, look into it. You may have to use Nginx in both places if it doesn't.

You do not need to add subdomains in DNS, not unless you want to. You just need an A/AAAA record pointing the base domain at the VPS public IP, then you can make the subdomains a wildcard CNAME pointing at the base domain. So A/AAAA example.com -> IP, and CNAME *.example.com -> example.com. Or you can put the A record in another domain and point the CNAME at that.

When requesting TLS certificates it's the same thing: you never ask for explicit certificates for each subdomain, you just ask for one wildcard certificate for *.example.com. Aside from the obvious benefit of not having to add and remove certificates every time you add or remove subdomains, there's the less obvious benefit of not letting bots learn your subdomain names (certificate applications are public records).

The subdomains do not need to resolve in DNS for this to work; certbot verifies that you own the domain by using a DNS API key to create a temporary TXT record on example.com. As long as that works it won't care what's actually defined in there.
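In zone-file terms the whole DNS setup described above is just this (IPs are placeholders):

```
example.com.    IN  A      203.0.113.10   ; VPS public IPv4
example.com.    IN  AAAA   2001:db8::10   ; VPS public IPv6, if you have one
*.example.com.  IN  CNAME  example.com.   ; every subdomain follows the base record
```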

lemmyvore ,

I should also add something that lots of beginners miss.

The reverse proxy does not care what the domains that you define in it actually resolve to. It receives the domain name as an HTTP header, which is completely at the whim of the client. As long as that domain name matches one of the domains defined in the proxy, it's all good.

You can successfully connect to a proxy with a domain name defined in the domain owner's DNS, or you can make up your own DNS that says whatever you want, or you can define any domain->IP association you want in your hosts file, or you can simply use curl or wget to connect directly to the proxy IP and lie about the domain in the HTTP headers without having it resolve in any DNS.
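You can demonstrate this in a few lines of Python: a throwaway local server, and a client that connects purely by IP while claiming a made-up domain in the Host header (the hostname is invented, nothing resolves it):

```python
# Toy HTTP server that records whatever Host header clients claim to have used.
import http.server, threading, urllib.request

seen_hosts = []

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        seen_hosts.append(self.headers["Host"])  # taken on faith from the client
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Connect by IP -- no DNS anywhere -- claiming a "private" subdomain.
req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"Host": "secret.local.example.com"})
body = urllib.request.urlopen(req).read()
server.shutdown()

print(seen_hosts[0])  # -> secret.local.example.com
```

A reverse proxy matching on server names sees exactly what this toy server sees, so a name match says nothing about where the request actually came from.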

This means that yes, the proxy will happily serve your "private" *.local.example.com services to someone connecting from outside your LAN. All they have to do is figure out (or guess) your subdomain names. You need to add IP restrictions in the proxy (default deny from all + lan ip mask explicit exception) if you really want those services to be restricted to the LAN.
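As a sketch, that default-deny looks like this in Nginx (the name, LAN mask, and backend port are placeholders):

```
server {
    listen 443 ssl;
    server_name private.local.example.com;

    allow 192.168.1.0/24;   # explicit LAN exception
    deny  all;              # everyone else gets 403

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```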

DNS is not security, it's a public service that maps domains to IPs.

TLS is only security in the sense that it protects the connection from eavesdropping en route; it doesn't restrict access.

lemmyvore ,

No, that's the magic of the reverse proxy. You can transport all HTTP services through just one port. It will route them to the correct service on your server based on the domain (which is passed in the HTTP headers).

It won't work for non-HTTP services, for those you'll have to make a separate ssh tunnel per port.
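The routing decision itself is trivial; stripped of everything else a reverse proxy does, it's just a lookup keyed on the Host header (the hostnames and ports below are made up):

```python
# Name-based routing in a nutshell: the backend is chosen purely from the
# Host header the client sent.
ROUTES = {
    "jellyfin.example.com": ("127.0.0.1", 8096),
    "nextcloud.example.com": ("127.0.0.1", 8080),
}

def pick_backend(host_header: str):
    # Drop any :port suffix and normalize case before the lookup.
    name = host_header.split(":")[0].lower()
    return ROUTES.get(name)

print(pick_backend("Jellyfin.example.com:443"))  # -> ('127.0.0.1', 8096)
```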

lemmyvore ,

Check all the steps individually then:

  • check that the domain resolves to the VPS IP at the location you're testing from
  • set up the tunnel to bypass the proxy (connect it directly to jellyfin)
  • check that jellyfin works directly
  • check the proxy directly: use curl against the proxy IP with the "Host" header set to the domain
  • check that the VPS firewall didn't block port 80
  • normally you wouldn't be able to forward port 80 with a normal ssh user but I see you're logging in as root so it should be working
lemmyvore ,

stable is not the only release that debian offers,

Did you mean to say "branch" rather than "release"? Debian only releases stable. Everything else is part of the process of preparing and supporting stable.

Testing branch may work well or it may not. Its goal is to refine packages for the next stable release, so it has an inherent drive towards quality, but it doesn't have a commitment to "quality now" like stable does, just to "quality eventually".

Testing's quality is highest towards the start of each release cycle when it picks up from the previous stable release and towards the end when it's getting ready to become the next stable. But the cycle is 2 years long.

lemmyvore ,

Interesting, I didn't know they consider testing and unstable to be releases too.

lemmyvore ,

That's like saying that Apple got the home computer competition going.

There's a big difference between bucking trends and skipping steps for the sake of being different, and actually moving the industry forward.

Under a non-sociopathic leadership Tesla battery and engine tech would have been in most Western car brands by now.

Instead, let's look at what Tesla has really brought us:

  • electric tech that's today just one (rather unremarkable) version among many;
  • failing cruising tech (I don't even want to use "self driving" because that's pure marketing drivel);
  • abysmal build quality and customer support;
  • rising car and insurance prices;
  • lots of car tech moved from hardware into software, which means surveillance, lower quality, and worse usability and ergonomics;
  • and last but not least a whole lot of shirking responsibility.

That's the Tesla legacy that the car industry has inherited.

lemmyvore ,

How about when the theming is baked in and impossible to change?

Enshittification doesn't have to be monetary. It's about doing things that go against the interests of the user.

Unfortunately Gnome has taken to heart Saint-Exupéry's maxim ("perfection is not when there's nothing left to add, but when there's nothing left to take away") but has forgotten that it was coined in an era of mechanical devices, and that there's more than one aspect to software. Applying it to functionality is very different from applying it to features and customization. The latter ends up making software feel bland and oppressive.

lemmyvore ,

If they did you'd have one theme that works with Gnome and one that works with Mint. Both of which would be irrelevant to someone using GTK apps on, say, XFCE on Arch.

lemmyvore ,

If they're as incompetent as they sound they'd have to change it manually and assuming you could make them do that it would probably break something in the account. 😄 There's no good way to do this if it was badly put together.

lemmyvore ,

No. You should think in terms of offsetting development cost. When you choose non-copyleft you do it to keep code private, which means you will support all dev costs. It limits how the software can grow because it's basically vertical scalability — not to mention being culturally limited inside the company.

When you choose copyleft you commit to open source and so does everybody who wants a piece of that software, which makes it much easier for everybody interested in it to offset their development through everybody's efforts.

With open source there are documented positive feedback effects. Companies who grow to depend on specific software find it cheaper and more efficient, for their own interests and benefit, to maintain a few permanent developers as far upstream as possible, as opposed to having many occasional developers downstream dealing with stuff as it trickles down.

FOSS creates reliable, diverse and ultimately healthy software ecosystems because everybody competes to improve the software first and foremost.

lemmyvore ,

Back when I did LFS I dealt with this by giving each package its own /opt prefix, symlinking their respective bin/, sbin/, lib/, man/ and so on dirs under a common place, and adding those places to the relevant system integrations (PATH, /etc/ld.so.conf etc.)

I put together a bash script that could manage the symlinks and pack/unpack tarballs, and also added a metadata file and a configure/make "recipe" to each package dir. It worked surprisingly well.

A handful of packages turned out to be hardcoding system paths so they couldn't be prefixed into /opt (without patching) but most things could.
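The layout can be modeled in a few lines (done in a temp dir with a made-up package name, so it's harmless to run anywhere):

```python
# Toy model of the scheme above: each package in its own /opt prefix, with a
# shared bin/ dir (the one you'd put on PATH) holding symlinks into it.
import os, tempfile

root = tempfile.mkdtemp()
pkg_bin = os.path.join(root, "opt", "hello-2.12", "bin")  # per-package prefix
shared_bin = os.path.join(root, "opt", "bin")             # shared, goes on PATH
os.makedirs(pkg_bin)
os.makedirs(shared_bin)

# Stand-in for a binary that "make install" put under the package prefix.
open(os.path.join(pkg_bin, "hello"), "w").close()

# Link every bin/ entry into the shared dir; removing a package later is just
# deleting its prefix dir and the now-dangling links.
for name in os.listdir(pkg_bin):
    os.symlink(os.path.join(pkg_bin, name), os.path.join(shared_bin, name))

print(sorted(os.listdir(shared_bin)))  # -> ['hello']
```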

lemmyvore ,

KDE, Gnome, the kernel, you can compile them without any problems. They're large and complex but they're well organized.

X is weird but it can also be compiled fairly easily.

Mozilla stuff is horrendous. There's no rhyme or reason, it's hard to find build instructions, half the time they don't work, when they do the build fails with obscure errors...

lemmyvore ,

No export either, just Google Backups.

Also the feature roadmap looks bad. They don't plan to add any of the features you'd expect from a standalone 2FA app, they just plan to sync with Bitwarden and eventually integrate completely with Workforce. So it looks like a bait and switch with no way to get your codes out.

lemmyvore ,

Nothing, but this particular alternative is pretty awful. Literally zero features besides TOTP code generation, and they don't plan to make it better. I really don't understand why this app exists.

The only people who would possibly care about it are existing Bitwarden users who want to use it to hold the code for their Bitwarden account independently of that account. But they say they plan to add Bitwarden sync to it, so…?

Honestly it just looks like a super lazy attempt to draw people to Bitwarden (assuming it doesn't turn into a sleazy attempt at holding codes captive with no way to get them out).

lemmyvore ,

Because they use proprietary algorithms, not TOTP.

lemmyvore ,

Microsoft MFA has the option of being set up (by admins) with either standard TOTP or with their proprietary algorithm.

If the admins for the realm you're trying to use have chosen the proprietary one you need to use the Microsoft Authenticator app. Regular TOTP generators will accept the code but the code they make won't work.

Can the regular Bitwarden generator make good codes? If so, it means they figured out (or were told by Microsoft) how the proprietary algorithm works. But since this standalone app is open source they couldn't add that algorithm to it.

lemmyvore ,

How do you turn them off? I mean, do you go through every Google product and dig for the settings? Or do you just mean you revisit the ad settings?

lemmyvore ,

FUTO receives the money and pays people to work on various FOSS projects (listed on their website).

Nobody "owns" Immich. It's still a FOSS project licensed under AGPL and very unlikely to change license ever again (not impossible, but getting harder all the time since copyright is now shared among all contributors and any license change would have to either get permission from all contributors or remove their code).

lemmyvore ,

Seafile is a file platform that's more in line with what you mean. It can do sync but also sharing and collaborative editing.

lemmyvore ,

32 GB of DDR4 RAM is about 70 USD here. But you don't need 32, you can selfhost plenty of stuff on 16 or even 8 GB. Heck I ran mine on an old 4 GB stick for a couple of years when I first started.

lemmyvore ,

A self-hosting server does not necessarily crunch data, and it doesn't have to have loud fans or use lots of power. It can idle in the 15-20 W range with an Intel CPU if you put the HDDs on standby when idle.

lemmyvore ,

Requiring an ID is security theater anyway. If law enforcement wants to know who got a number they can simply look at what phone it's used in, or what card you paid with etc.

lemmyvore ,

Sony still puts them on all their phones.

Their new models are expensive but if you get the last year's or 2 years old model it's a lot cheaper. The differences between years are minimal anyway because they already put everything you could want in them. They mostly just switch to newer components.

lemmyvore ,

Please keep in mind that keeping the charge between 20% and 80% greatly improves battery life.

lemmyvore ,

Then that site is completely wrong. I'm not even sure where the 38k number comes from.

If you go to https://packages.debian.org/stable/, at the bottom of each branch page (selectable from the top list) is a link to a txt list of all packages for that branch.

If you run a quick wc -l through them you get 234k packages for sid (unstable), 130k for testing and 121k for bookworm (stable).

The weird thing is that's also what Repology links to, but I don't understand what they parse to arrive at that number.
