hedgehog

@[email protected]


hedgehog ,

Third, a redirect is obvious

A redirect isn’t necessary if you control the DNS servers. If you control the DNS servers, you can MITM the website for any visitor because you can prove that you own the domain to a certificate authority and generate a new, trusted HTTPS cert. (Depending on specifics this may or may not foil the anti-phishing capabilities of Passkeys / U2F.)

hedgehog ,

They aren’t. From a comment on https://www.reddit.com/r/ublock/comments/32mos6/ublock_vs_ublock_origin/ by u/tehdang:

For people who have stumbled into this thread while googling "ublock vs origin". Take a look at this link:

http://tuxdiary.com/2015/06/14/ublock-origin/

"Chris AlJoudi [current owner of uBlock] is under fire on Reddit due to several actions in recent past:

  • In a Wikipedia edit for uBlock, Chris removed all credits to Raymond [Hill, original author and owner of uBlock Origin] and added his name without any mention of the original author’s contribution.
  • Chris pledged a donation with overblown details on expenses like $25 per week for web hosting.
  • The activities of Chris since he took over the project are more business and advertisement oriented than development driven."

So I would recommend that you go with uBlock Origin and not uBlock. I hope this helps!

Edit: Also got this bit of information from here:

https://www.reddit.com/r/chrome/comments/32ory7/ublock_is_back_under_a_new_name/

TL;DR:

  • gorhill [Raymond Hill] got tired of dozens of "my facebook isnt working plz help" issues.
  • he handed the repository to chrismatic [Chris Aljioudi] while maintaining control of the extension in the Chrome webstore (by forking chrismatic's version back to himself).
  • chrismatic promptly added donate buttons and a "made with love by Chris" note.
  • gorhill took exception to this and asked chrismatic to change the name so people didn't confuse uBlock (the original, now called uBlock Origin) and uBlock (chrismatic's version).
  • Google took down gorhill's extension. Apparently this was because of the naming issue (since technically chrismatic has control of the repo).
  • gorhill renamed and rebranded his version of ublock to uBlock Origin.
hedgehog ,

it's still not a profitable venture

Source? My understanding is that Google doesn’t publish YouTube’s expenses directly, but that YouTube has been responsible for about 10% of Google’s revenue for the past few years (on the order of $31.5 billion in 2023), and that it’s more likely than not profitable when looked at in isolation.

hedgehog ,

It’s more like paying the ticket without ever showing up in court. And at least where I live, I can do that.

hedgehog ,

Have you looked into configuring them directly from your NVR? Or third-party options? I did a quick search and found several that, as far as I can tell, can display Reolink streams (though I haven’t confirmed whether any of them can configure the cameras):

And some proprietary options that have native Linux builds:

hedgehog ,

The dice method is great. https://www.eff.org/dice

hedgehog ,

Being a bit pedantic here, but I doubt this is because they trained their model on the entire internet. More likely they added Reddit and many other sites to an index that can be referenced by the LLM and they don’t have enough safeguards in place. Look up “RAG” (Retrieval-augmented generation) if you want to learn more.
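For anyone curious what that looks like in practice, here’s a minimal sketch of the RAG pattern (illustrative only - the model name and documents are my own stand-ins, and this is not how Google’s system actually works):

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# A tiny stand-in "index" of documents added from the web
documents = [
    "Low-moisture mozzarella helps cheese stick to pizza.",
    "Glue is not food safe and should never be used in recipes.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents)

query = "how do I keep cheese from sliding off pizza?"
best = util.cos_sim(model.encode(query), doc_vectors)[0].argmax()
context = documents[int(best)]

# The LLM then answers from a prompt augmented with the retrieved text -
# so if a joke Reddit post is in the index and there are no safeguards,
# it flows straight into the answer.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```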

hedgehog ,

Sure, and that’s roughly the same amount of entropy as a 13-character randomly generated mixed-case alphanumeric password. I’ve run into more password validation that rejects a 13-character password for being too long than for being too short, and for end-user passwords I can’t recall an instance where 77.5 bits of entropy was insufficient.

But if you disagree - when do you think 77.5 bits of entropy is insufficient for an end-user? And what process for password generation can you name that has higher entropy and is still easily memorized by users?
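For anyone who wants to check the math, entropy here is just length × log2(pool size); a quick sketch:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    # Entropy of a uniformly random selection: length * log2(pool_size)
    return length * math.log2(pool_size)

print(entropy_bits(62, 13))   # 13-char mixed-case alphanumeric: ~77.4 bits
print(entropy_bits(7776, 6))  # 6 dice-method words:             ~77.5 bits
```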

hedgehog ,

Ah, fair enough. I was just giving people interested in that method a resource to learn more about it.

The problem is that your method doesn’t consistently generate memorable passwords with anywhere near 77 bits of entropy.

First, the example you gave ended up being 11 characters long. For a completely random password using alphanumeric characters plus punctuation, that’s 66.5 bits of entropy. Your lower bound was 8 characters, which is even worse (48 bits of entropy). And when you consider that the process will result in some letters being much more probable, particularly in certain positions, the entropy is lower still. I’m not sure by how much, but it would have an impact. And that’s without exploiting the fact that you’re using quotes as part of your process.

The quote selection part is the real problem. If someone knows your quote and your process, game over, as the number of remaining possibilities at that point is quite low - maybe a thousand? That’s worse than just adding a word with the dice method. So quote selection is key.

But how many quotes is a user likely to select from? My guess is that most users would be picking from a set of fewer than 7,776 quotes, but your set and my set would be different. Even so, I doubt that the set an attacker would need to search is larger than 470 billion quotes (the equivalent of three dice method words), and it’s certainly not 28 quintillion quotes (7,776⁵, the equivalent of five dice method words).

If your method were used for a one-off, you could use a poorly known quote and maybe have it not be in that 470 billion quote set, but that won’t remain true at scale. It certainly wouldn’t be feasible to have a set of 28 quintillion quotes, which means that even a 20 character password has less than 77.5 bits of entropy.

Realistically, since the user is choosing a memorable quote, we could probably find a lot of them in a very short list - on the order of thousands at best. Even with 1 million quotes to choose from, that’s at best 30 bits of entropy. And again, user choice is a problem, as user choice doesn’t result in fully random selections.

If you’re randomly selecting from a 60 million quote database, then that’s still only 36 bits of entropy. When the database has 470 billion quotes, that’ll get you to 49 bits of entropy - but good luck ensuring that all 470 billion quotes are memorable.
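(To make the arithmetic behind those figures explicit: my reading is that they combine the quote-set entropy with roughly 10 bits for the “maybe a thousand” per-quote process variations mentioned above. A quick sketch:)

```python
import math

process_bits = math.log2(1000)  # ~10 bits from the per-quote process variations

for quotes in (1_000_000, 60_000_000, 470_000_000_000):
    print(f"{quotes:>15,} quotes -> ~{math.log2(quotes) + process_bits:.0f} bits")
# -> ~30, ~36, and ~49 bits, matching the numbers above
```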

There are also things you can do, at an individual level, to make dice method passwords stronger or more suitable to a purpose. You can modify the word lists, for one, or use the EFF’s other lists. When it comes to password length restrictions, you can use the EFF short list #2 and truncate words after the third character without losing entropy - meaning your 8 word password only needs to be 31 characters long, or 24 if you omit word separators. You can randomly insert or substitute a symbol and a number, sacrificing memorizability for a bit more entropy (mainly useful when there are short password length limits).

The dice method also has baked-in flexibility when it comes to the necessary level of entropy. If you need more than 82 bits of entropy, just add more words. If you’re okay with less, you can generate shorter passwords: a 6 short-word password (which can be truncated to 18 characters) has 62 bits of entropy, and a 4 short-word password - minimum 12 characters - still has 41 bits.

With your method, you could choose longer quotes for applications you want to be more secure or shorter quotes for ones where that’s less important, but that reduces entropy overall by reducing the set of quotes you can choose from. What you’d want to do is to have a larger set of quotes for your more critical passwords. But as we already showed, unless you have an impossibly huge quote database, you can’t generate high entropy passwords with this method anyway. You could select multiple unrelated quotes, sure - two quotes selected from a list of 10 billion gives you 76.4 bits of entropy - but that’s the starting point for the much easier to memorize, much easier to generate, dice method password. You’ve also ended up with a password that’s just as long - up to 40 characters - and much harder to type.

This problem is even worse with the method that the EFF proposes, as it'll output passphrases with an average of 42 characters, all of them alphabetic.

Yes, but as passphrases become more common, sites restricting password length become less common. My point wasn’t that this was a problem, but that many site operators felt it was fine to cap the maximum entropy of their users’ passwords below 77.5 bits - and few applications require more entropy than that. (Those applications, for what it’s worth, generally use randomly generated keys rather than relying on user-generated ones.)

And, as I outlined above, you can use the truncated EFF short list #2 method to generate short but memorable passwords when limited in this way. My general recommendation in this situation is to use a password manager and generate a high-entropy, completely random password, rather than trying to memorize one. But if you’re opposed to password managers for some reason, the dice method is still a great option.

hedgehog ,

Just sharing this link to another comment I made replying to you, since it addresses your calculations regarding entropy: https://ttrpg.network/comment/7142027

hedgehog ,

Why should shadow bans be illegal?

hedgehog ,

Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.

This basically reads as “shadow bans are bad and have no redeeming factors,” but you haven’t explained why you think that.

If you’re a real user and you only have one account (or have multiple legitimate accounts) and you get shadow-banned, it’s a terrible experience. Shadow bans should never be used on “real” users even if they break the ToS, and IME, they generally aren’t. That’s because shadow bans solve a different problem.

In content moderation, if a user posts something that’s unacceptable on your platform, generally speaking, you want to remove it as soon as possible. Depending on how bad the content they posted was, or how frequently they post unacceptable content, you will want to take additional measures. For example, if someone posts child pornography, you will most likely ban them and then (as required by law) report all details you have on them and their problematic posts to the authorities.

Where this gets tricky, though, is with bots and multiple accounts.

If someone is making multiple accounts for your site - whether by hand or with bots - and using them to post unacceptable content, how do you stop that?

Your site has a lot of users, and bad actors aren’t limited to only having one account per real person. A single person - let’s call them a “Bot Overlord” - could run thousands of accounts - and it’s even easier for them to do this if those accounts can only be banned with manual intervention. You want to remove any content the Bot Overlord’s bots post and stop them from posting more as soon as you realize what they’re doing. Scaling up your human moderators isn’t reasonable, because the Bot Overlord can easily outscale you - you need an automated solution.

Suppose you build an algorithm that detects bots with incredible accuracy - 0% false positives and an estimated 1% false negatives. Great! Then, you set your system up to automatically ban detected bots.

A couple days later, your algorithm’s accuracy has dropped - from 1% false negatives to 10%. 10 times as many bots are making it past your algorithm. A few days after that, it gets even worse - first 20%, then 30%, then 50%, and eventually 90% of bots are bypassing your detection algorithm.

You can update your algorithm, but the same thing keeps happening. You’re stuck in an eternal game of cat and mouse - and you’re losing.

What gives? Well, you made a huge mistake when you set the system up to ban bots immediately. In your system, as soon as a bot gets banned, the bot creator knows. Since you’re banning every bot you detect as soon as you detect them, this gives the bot creator real-time data. They can basically reverse engineer your unpublished algorithm and then update their bots so as to avoid detection.

One solution to this is ban waves. Those work by detecting bots (or cheaters, in the context of online games) and then holding off on banning them until you can ban them all at once.

Great! Now the Bot Overlord will have much more trouble reverse-engineering your algorithm. They won’t know specifically when a bot was detected, just that it was detected within a certain window - between its creation and ban date.

But there’s still a problem. You need to minimize the damage the Bot Overlord’s accounts can do between when you detect them and when you ban them.

You could try shortening the time between ban waves, but ban waves are more effective the longer the interval between them is. If you had an hourly ban wave, for example, the Bot Overlord could test a bunch of stuff out and get feedback every hour.

Shadow bans are one natural solution to this problem: you can stop a bot from causing more damage the moment you detect it. The Bot Overlord can’t quickly tell that their account was shadow-banned, so their bots keep functioning, giving you more information about the Bot Overlord’s system and letting you refine your algorithm to be even more effective in the future, rather than the other way around.
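If it helps, here’s a toy sketch of the difference (my own illustration, not any platform’s actual code): detection triggers an immediate shadow ban, while the visible ban waits for the next wave, so the Bot Overlord gets no real-time feedback about when detection happened.

```python
detected: set[str] = set()

def shadow_ban(account_id: str) -> None:
    print(f"{account_id}: posts silently hidden from everyone else")

def hard_ban(account_id: str) -> None:
    print(f"{account_id}: visibly banned")

def on_detection(account_id: str) -> None:
    shadow_ban(account_id)    # damage stops immediately
    detected.add(account_id)  # queued for the next ban wave

def run_ban_wave() -> None:   # run on a slow schedule, e.g. weekly
    for account_id in detected:
        hard_ban(account_id)
    detected.clear()
```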

I’m not aware of another way to effectively manage this issue. Do you have a counter-proposal?

Out of curiosity, do you have any experience working in content moderation for a major social media company? If so, how did that company balance respecting user privacy with effective content moderation without shadow bans, accounting for the factors I talked about above?

hedgehog ,

But major social media companies do exist. If your real point was that they shouldn’t, you should have said that upfront.

hedgehog ,

That's a bit abstract, but saying what others "should" do is both stupid and rude.

Buddy, if anyone’s being stupid and rude in this exchange, it’s not me.

And any true statement is the same as all other true statements in an interconnected world.

It sounds like the interconnected world you’re referring to is entirely in your own head, with logic that you’re not able or willing to share with others.

Even if I accepted that you were right - and I don’t accept that, to be clear - your statements would still be nonsensical given that you’re making them without any effort to clarify why you think them. That makes me think you don’t understand why you think them - and if you don’t understand why you think something, how can you be so confident that you’re correct?

hedgehog ,

No, I don’t think anything you do has any bearing on reality, period.

hedgehog ,

Context?

First, some background on Gina and Vincent, as well as the Streisand Effect (I assume you’re familiar with it, but maybe not):

More background:

Other articles on the same event:

Problems with creating my own instance

I am currently trying to create my own Lemmy instance and am following the join-lemmy.org Docker guide. But unfortunately docker compose up doesn't work with the default config and throws a yaml: line 32: found character that cannot start any token error. Is there something I can do to fix this?...

hedgehog ,

If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

I also recommend you use a reverse proxy and Docker networks rather than exposing the Postgres instance on port 5433, but if you aren’t familiar with Docker networks you can leave it as is for now. If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But since you gain nothing from exposing it (unless you regularly need to connect to the DB directly - as a one-off you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.
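Roughly like this, going from memory of the join-lemmy compose file (service names, image tag, and layout are assumptions - match them to your copy):

```yaml
services:
  lemmy:
    image: dessalines/lemmy:0.19.3  # use the prebuilt image (tag is a guess)...
    # build:                        # ...instead of building from source
    #   context: ./lemmy
  postgres:
    image: postgres:15-alpine
    # ports:
    #   - "5433:5432"  # removed: DB stays reachable only on the internal network
    networks:
      - lemmyinternal

networks:
  lemmyinternal: {}
```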

hedgehog ,

There's an idea that just won't die that Linux is extremely difficult to use/maintain/troubleshoot. It's certainly often a lot easier than windows, so it just gets to me to see that idea propagated.

Pretending it’s all sunshine and rainbows isn’t realistic, either. That said, I had a completely different takeaway - that the issues are mostly kinda random and obscure or nitpicky, and the sorts of things you would encounter in any mature OS.

The issue about PopOS not having a Paint application is actually the most mainstream of them - and it feels very similar to the complaints about iPadOS not including a Calculator app by default. But nobody is concluding that iPads aren’t usable as a result.

Teams having issues is believable and relevant to many users. It doesn’t matter whose fault an issue is if the user is impacted. TBH, I didn’t even know that Teams was available on Linux.

That said, the only people who should care about Teams issues on Linux are the ones who need to use it, and anyone who’s used Microsoft products understands that they’re buggy regardless of the platform. Teams has issues on macOS, too. OneDrive has issues on macOS. On Windows 10, you can’t even use a local account with Office 365.

hedgehog ,

It first showed up on Netflix in mid-2023, in the middle of the Writers Guild strike (meaning there was a dearth of new content) - so, basically, the Netflix effect. It had been on other streaming platforms before - Prime Video and Hulu - but Netflix is still a juggernaut compared to them: it has 5 times as many subscribers as Hulu, for example, and many Prime Video subscribers are incidental and don’t stream as much on average as Netflix users.

I assume Netflix funded off-platform advertising, but the on-platform advertising has a big effect, too. And given that Suits broke a record in the first week it was on Netflix and they have a spinoff coming, it makes sense that they would keep advertising.

hedgehog ,

The funny thing about Lemmy is that the entire Fediverse is basically running a massive copyright violation ring with current copyright law.

Is it, though?

When someone posts a comment to Lemmy, they do so willingly, with the intent for it to be posted and federated. If they change their mind, they can delete it. If they delete it and it remains up somewhere, they can submit a DMCA request; likewise if someone else posts their copyrighted content.

Copyright infringement is the use of a copyrighted work without permission. When you submit a post or a comment, your permission for it to be displayed and federated is implied, because that is how Lemmy works. A license also conveys permission, but it’s not the only way permission can be conveyed.

hedgehog ,

The idea that someone does this willingly implies that the user knows the implications of their choice, which most of the Fediverse doesn't seem to do

The terms of service for lemmy.world, which you must agree to upon sign-up, make reference to federating. If you don’t know what that means, it’s your responsibility to look it up and understand it. I assume other instances have similar sign-up processes. The source code to Lemmy is also available, meaning that a full understanding is available to anyone willing to take the time to read through the code, unlike with most social media companies.

What sorts of implications of the choice to post to Lemmy do you think that people don’t understand, that people who post to Facebook do understand?

If the implied license was enough, Facebook and all the other companies wouldn't put these disclaimers in their terms of service.

It’s not an implied license. It’s implied permission. And if you post content to a website that hosts and displays such content, it’s obvious what’s about to happen with it. Try telling a judge that you didn’t understand what you were doing and that you sued without first trying to delete the content or file a DMCA notice, and see if that judge sides with you.

Many companies have lengthy terms of service with a ton of CYA legalese that does nothing. Even so, an explicit license to your content in the terms of service does do something - but that doesn’t mean that you’re infringing copyright without it. If my artist friend asks me to take her art piece to a copy shop and to get a hundred prints made for her, I’m not infringing copyright then, either, nor is the copy shop. If I did that without permission, on the other hand, I would be. If her lawyer got wind of this and filed a suit against me without checking with her and I showed the judge the text saying “Hey hedgehog, could you do me a favor and…,” what do you think he’d say?

Besides, Facebook does things that Lemmy instances don’t do. Facebook’s codebase isn’t open, and they’d like to reserve the ability to do different things with the content you submit. Facebook wants to be able to do non-obvious things with your content. Facebook is incorporated in California and has a value in the hundreds of billions, but Lemmy instances are located all over the world and I doubt any have a value even in the millions.

hedgehog ,

They don’t call them “mp3 players” anymore - that may be why you can’t find what you need. Look for a “DAP” instead - digital audio player - and you’ll probably have more luck.

For example, the Fiio M7 is $200 and is pretty full-featured. I have the M6 and I think I paid around $100, but I don’t think it’s being sold anymore.

hedgehog ,

You can use YaCy, which can be run as an independent self-hosted index (in “Local” mode), where it will index sites visited as part of web crawls that you initiate, or you can run it as part of a decentralized peer-to-peer network of indexes.

YaCy has its own search UI but you can also set up SearXNG to use it.
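For the SearXNG side, it’s an engine entry in settings.yml - something like the sketch below (the exact field names may differ by version, so check the SearXNG docs, and point base_url at wherever your YaCy instance listens):

```yaml
engines:
  - name: yacy
    engine: yacy
    shortcut: ya
    base_url: http://localhost:8090
    disabled: false
```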

hedgehog ,

there is not a 'Searx Index' which is what this is about.

There’s YaCy, which includes a search index (which can be independent or can join a P2P network of indexes), web crawler, and web ui for searching. It can also be added as a SearXNG engine.

hedgehog ,

Last I checked (around the time Llama 3 was released), the performance of local models on CPU was also pretty bad for most consumer hardware (Apple Silicon excepted) compared to GPU performance, and the consumer GPU RAM situation is even worse. At least when talking about the models with performance anywhere near that of ChatGPT, which was mostly 70B models with a few exceptional 30B models.

My home server has a 3090, so I can use a self-hosted 4-bit (or 5-bit with reduced context) quantized 30B model. If I added another 3090 I’d be able to use a 4-bit quantized 70B model.

There’s some research that suggests that 1.58 bit (ternary) quantization has a lot of potential, and I think it’ll be critical to getting performant models on phones and laptops. At 1.58 bit per parameter, a 30B model could fit into 6 gigs of RAM, and the quality hit is allegedly negligible.
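The memory math is straightforward if you want to sanity-check that claim:

```python
params = 30e9  # 30B parameters

for bits_per_param in (16, 4, 1.58):
    gib = params * bits_per_param / 8 / 1024**3
    print(f"{bits_per_param:>5} bits/param -> {gib:5.1f} GiB for the weights")
# 16 -> ~55.9 GiB, 4 -> ~14.0 GiB, 1.58 -> ~5.5 GiB (plus context/overhead)
```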

hedgehog ,

I haven’t used it and only heard about it while writing this post, but Open WebUI looks really promising. I’m going to check it out the next time I mess with my home server’s AI apps. If you want more options, read on.

Disclaimer: I’ve looked into most of the options below enough to feel comfortable recommending them, but I’ve only personally self hosted the AUTOMATIC1111 webui, the Oobabooga webui, and Kobold.cpp.

If you want just an LLM and an image generator, then:

For the image generator, something that leverages Stable Diffusion models:

And then find models that you like at Civitai.

For the LLM, the best option depends on your hardware. Not knowing anything about your hardware, I recommend a llama.cpp based solution. Check out one of these:

Alternatively, vLLM is allegedly the fastest for multi-user CPU-based inference, though as far as I can tell it doesn’t have its own webui (but it does expose OpenAI-compatible API endpoints).

And then find a model you like at Huggingface. I recommend finding a model quantized by TheBloke.

There are a couple communities not on Lemmy that discuss local LLMs - r/LocalLLaMA and r/LocalLLM for example - so if you’re trying to figure out which model to try, that’s a good place to check.

If you want a multimodal AI, you can use llama.cpp with a model like LLaVA. The options below also have multimodal support.

If you want an AI assistant with expanded capabilities - like searching your documents or the web (RAG), etc. - then I don’t have a ton of experience there, but these seem to do that job:

If you want to use your local model as more than just a chat bot - integrating it into your IDE or a browser extension - then there are options there, and as far as I know every LLM above can be configured to expose an API allowing it to be used by your other tools. Some, like Open WebUI, expose OpenAI compatible APIs and so can be used with tools built to be used with OpenAI. I don't know of many tools like this, though - I was surprisingly not able to find a browser extension that could use your own API, for example. Here are a couple examples:

Also, I found this Medium article listed some of the things I described above as well as several others that I’d never heard of.
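To illustrate the OpenAI-compatible API point above: most of these servers can be queried with the standard OpenAI client just by changing the base URL. The URL, port, and model name below are assumptions - check what your server (llama.cpp’s server, Open WebUI, etc.) actually exposes.

```python
# pip install openai
from openai import OpenAI

# e.g. a local server exposing an OpenAI-compatible endpoint
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # many local servers ignore or loosely match this
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```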

hedgehog ,

I had a pocket TV back in 2007 or so. It had an antenna and everything. It was a bit bulky and not at all power efficient, though. IIRC it went through 8 AA batteries in about 3 hours.

I’m not sure why you’d want that over a smartphone or even just a small tablet, though.

Also, we have flying skateboards; they’re just prohibitively expensive or not yet being sold. Look up the ArcaBoard (was $20k back in 2015, doesn’t seem to be sold anymore), the Lexus Hoverboard, and the Flyboard Air. Unfortunately, if you try to buy a “hoverboard” you’re just gonna end up with an electric scooter.

hedgehog ,

I am trying to avoid having an open port 22

If you’re working locally you don’t need an open port.

If you’re on a different machine but on the same network, you don’t need to expose port 22 via your router’s firewall. If you use key-based auth and disable password-based auth then this is even safer.

If you want access remotely, then you still don’t have to expose port 22 as long as you have a VPN set up.
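Disabling password auth is just a couple of lines in /etc/ssh/sshd_config (a sketch - make sure your key works first, and keep an existing session open while you test):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
# then reload: sudo systemctl reload sshd (or "ssh", depending on distro)
```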

That said, you don’t need to use a terminal to manage your docker containers. I use Portainer to manage all but my core containers - Traefik, Authelia, and Portainer itself - which are all part of a single docker compose file. Portainer stacks accept docker compose files so adding and configuring applications is straightforward.

I’ve configured around 50 apps on my server using Docker Compose with Portainer but have only needed to modify the Dockerfile itself once, and that was because I was trying to do something that the original maintainer didn’t support.

Now, if you’re satisfied with what’s available and with how much you can configure it without using Docker, then it’s fine to avoid it. I’m just trying to say that it’s pretty straightforward if you focus on understanding the important parts - mainly the three below (there’s a minimal example after the list):

  • docker compose
  • docker networks
  • docker volumes
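A minimal compose file touching all three looks something like this (names are placeholders):

```yaml
services:
  app:
    image: nginx:alpine  # whatever app you're running
    networks:
      - backend          # user-defined network instead of exposed ports
    volumes:
      - appdata:/usr/share/nginx/html  # named volume for persistent data

networks:
  backend: {}

volumes:
  appdata: {}
```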

If you decide to go that route, I recommend TechnoTim’s tutorials on YouTube. I personally found them helpful, at least.

hedgehog ,

I haven’t personally used any of these, but looking them over, Tipi looks the most encouraging to me, followed by Yunohost, based largely on the variety of apps available but also because it looks like Tipi lets you customize the configuration much more. FreedomBox doesn’t seem to list the apps in their catalog at all and their site seems basically useless, so I ruled it out on that basis alone.

hedgehog ,

It’s not changing the default behavior, so it still has it.

Per the article, they’re introducing a new opt-in feature that a woman, enbie, or person looking for same-gender matches can set up - basically a prompt that their matches can reply to.

I think Bumble also used to prevent you from sending multiple messages before getting a reply, but maybe that was a different app... If they still do that in combination with this feature, then I could see this feature continuing to accomplish their mission of empowering women in online dating.

hedgehog ,

Terrible article. Even worse advice.

On iOS at least, if you’re concerned about police breaking into your phone, you should be using a high-entropy password, not a numeric PIN, and biometric auth is the best way to keep your convenience (and sanity) intact without compromising your security. This is because there is software that can break into a locked phone (even one that has biometrics disabled) by brute-forcing the PIN, bypassing the 10-attempt limit if set, and without triggering iOS’s brute-force protections, like forced delays between attempts. If your password is sufficiently complex, you’re more likely to be safe against such an attack.

I suspect the same is true on Android.

Such a search is supposed to require a warrant, but the tool itself doesn’t check for one, so you have to trust the individual LEOs in question to follow the law. And given that any 6-digit PIN can be brute forced in about 11 hours (at 40 ms per attempt), this means that if you were arrested (even on a spurious charge) and held overnight, they could search your phone without you knowing.

With a password that has the same entropy as 10 random digits, assuming no further vulnerabilities allowing them to speed up the process, it could take up to 12.7 years to brute force. Make it alphanumeric (and still random) and it’s on the order of a billion years - infeasible within our lifetime - so it’s basically a question of whether another vulnerability is known or discovered that bypasses the password entirely or allows much faster rates of entry.
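The worst-case arithmetic, if you want to check it (40 ms per attempt, full keyspace):

```python
SECONDS_PER_ATTEMPT = 0.04  # 40 ms per guess

hours = 10**6 * SECONDS_PER_ATTEMPT / 3600
years_digits = 10**10 * SECONDS_PER_ATTEMPT / (3600 * 24 * 365)
years_alnum = 62**10 * SECONDS_PER_ATTEMPT / (3600 * 24 * 365)

print(f"6-digit PIN:             {hours:.1f} hours")        # ~11.1
print(f"10 random digits:        {years_digits:.1f} years") # ~12.7
print(f"10 random alphanumeric:  {years_alnum:.2e} years")  # ~1.07e9
```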

If you’re in a situation where you expect to interact with law enforcement, then disable biometrics. Practice ahead of time to make sure you know how to do it on your phone.

hedgehog ,

I’m not addressing anything Gitea has specifically done here (I’m not informed enough on the topic to have an educated opinion yet), but just this specific part of your comment:

And they also demand a CLA from contributors now, which is directly against the idea of FOSS.

Proprietary software is antithetical to FOSS, but CLAs themselves are not, and were endorsed by RMS as far back as 2002:

In contrast, I think it is acceptable to … release under the GPL, but sell alternative licenses permitting proprietary extensions to their code. My understanding is that all the code they release is available as free software, which means they do not develop any proprietary software; that's why their practice is acceptable. The FSF will never do that--we believe our terms should be the same for everyone, and we want to use the GPL to give others an incentive to develop additional free software. But what they do is much better than developing proprietary software.

If contributors allow an entity to relicense their contributions, that enables the entity to write proprietary software that includes those contributions. One way to ensure they have that freedom is to require contributors to sign a CLA that allows relicensing, so clearly CLAs can enable behavior antithetical to FOSS… but they can also enable FOSS development by generating another revenue stream. And many CLAs don’t allow relicensing (e.g., Apache’s).

Many FOSS companies require contributors to sign CLAs. For example, the FSF has required them since 2005 at least, and its CLA allows relicensing. They explain why, but that explanation doesn’t touch on why license reassignment is necessary.

Even if a repo requires contributors sign a CLA, nobody’s four freedoms are violated, and nobody who modifies such software is forced to sign a CLA when they share their changes with the community - they can share their changes on their own repo, or submit them to a fork that doesn’t require a CLA, or only share the code with users who purchase the software from them. All they have to do is adhere to the license that the project was under.

The big issue with CLAs is that they’re asymmetrical (as opposed to DCOs, which serve a similar purpose). That’s understandably controversial, but it’s not inherently a FOSS issue.

Some of the same arguments against the SSPL being considered FOSS (it isn’t, because it’s so copyleft that it’s impractical) could similarly be made in favor of CLAs. Not in favor of signing them as a developer, mind you, but in favor of considering projects that use them to be aligned with FOSS principles.

hedgehog ,

In the US, if you don’t proceed to step 3, step 2 is legal (so long as the CD lacks DRM). You’re permitted a single backup under fair use; you’re also permitted to rip the music for personal use, like loading it onto a music player. You’re not supposed to burn it to a regular CD-R (is it illegal? Idk), but burning it to an Audio CD-R (where there is a tax that is distributed to rights holders like royalties) is endorsed by the RIAA.
