I'm curious, what exactly is the problem with this? Does Apple have copyrights on the whole design or each individual visual element? Where would figma get in trouble if they left it working that way?
I'm not a lawyer, but I could imagine that a copyright claim over a specific app design is viable.
But in this case, it might also just be a case of avoiding bad press and bad blood with Apple.
Could it be a fear of a software patent relating to the design? Back in the day Apple had one for swipe to unlock that prompted Android to use different patterns.
Yeah. All of this makes me think it's crazy for a company like Figma to even try this. If the designs don't steal from well-known brands, people will say they suck. If they do steal, they get booed as well. A losing proposition, it seems to me.
Sounds like the internet needs to have some rules established to keep things under control. Personally, I think 34 rules is a good number, at least at a minimum.
I recognized the name AU10TIX, because I half-joked on Lemmy about a potential mass doxxing of Xitter's most vile users back in September when they announced the partnership. I assumed they'd be a target for ransomware/hackers, not that they'd just leave their admin creds out in the open.
Yeah. Imo their one hope was to make trackers that leverage both networks, as a sort of middle ground device for like a family with 1 android and 1 iPhone. Without that pivot they've seemed dead since the airtag launch.
That's what happens when people who don't understand LLMs try to profit off of LLMs. You can be sure that the actual programmers at Google told corporate that this would be the exact thing that would happen, but corporate pushed ahead anyway. The programmers shrugged, did their job, and brushed up their CVs just in case.
I miss r/Coomer though, it existed to mock and insult people who have a very unhealthy habit of masturbating and sexual addiction (particularly to lolicon). Like, fuck, sex and masturbating isn't bad on its own but it shouldn't be your god damn identity.
I miss r/Coomer though, it existed to mock and insult people who have a very unhealthy habit of masturbating and sexual addiction (particularly to lolicon). Like, fuck, sex and masturbating isn't bad on its own but it shouldn't be your god damn identity.
You missing a community dedicated to mocking and harassing people who probably need actual help with coping really says a lot about you.
Ooooo, I'm so not impressed that your only breathing existence is to dig through people's post history to connect something insulting. Get a life, dude.
@snownyte
Ooooo, I'm so not impressed that your only breathing existence is to dig through people's post history to connect something insulting. Get a life, dude.
The only thing I dug through was the comment I replied to.
But thanks for proving my point! lmao
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren't actually available in the dataset because they had already been removed from the internet.
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.
Interesting article, but in my experience it overstates the problem... at least for Facebook itself (I have zero interaction with Instagram, Threads, or VR).
I've gone back to Facebook for the last few months, and out of what it mentions, I've only seen like half of it, mostly in the comment sections.
Or to be more precise, for 2024 Q2, I'm seeing:
election disinformation - almost none
violent content
child sexual abuse material
hate speech - only in comments
fake news - almost none
crypto scams - a few
phishing - a few
hacking
romance scams - almost none
AI content - almost none
uncanny valley stuff
The article, however, forgot to include:
science deniers - a lot in open comments, very few in groups
religious zealots - in comments
political trolls - a few in comments
state-sponsored propagandists - a few in comments
general trolls - a few in comments
Still interesting how I get close to zero of these in my main feed.
there’s a level of disinvestment in Facebook
Disagree. Facebook has reached a "plateau of stability" where the current moderation tools keep enough people on the platform to make it profitable.
I've been actively reporting+blocking problematic content, and while about 99% of my reports end up in "no action was taken", it works wonders to keep my feed and group comments clean.
I found this article quite interesting, as I deactivated my main Facebook account around the time the article asserts Facebook was still "trying", and only recently created a new account under a generic pseudonym to access all the community and small business information that is still locked entirely to the platform. Because I have basically nothing in my feed on this account, Facebook backfills it with "recommended" posts, and I was pretty shocked at how universally terrible they are. I guess the algorithm uses my location and gender to generate these recommendations, since I've provided very little in the way of alternative information or interaction for it to use. As a result, my default feed is basically just a wall of misogynistic and highly sexualised slop, and even the few genuine recommended posts (like backpackers looking for travelling buddies) are clearly being recommended because they feature young women with a bunch of older men thirsting over them in the comments.
Because I have basically nothing in my feed on this account, Facebook backfills it with "recommended" posts and I was pretty shocked at how universally terrible they are. [...] since I've provided very little in the way of alternative information or interaction for it to use
There is your problem.
When an information-hungry platform like Google or Meta asks you to fill out your preferences "to serve you more relevant content"... they are not lying. I mean, it's also to select ads that will pay more for your attention, but the thing with the content algorithm is, if you don't give it data, then it will ass-u-me that you're statistically most likely to engage with the content that gets the most engagement... from people who have also not provided it any data.
The problem with that cohort is that it not only includes the few people with legitimate security concerns, but also those who have dark secrets to hide, and/or are using "incognito" browser mode to look for porn.
I don't like to give too much info about myself, but I also don't want to get stuff intended for the "average horny fanatics" group, so I try to give enough data for the algorithm to put me into a group that makes more sense to me.
And it works. The strongest signal you can send to the algorithm, is blocking content you don't want to see. It's amazing how quickly modern algorithms learn to avoid showing me most porn, politics, or religious content, and instead show me science and humor. They still send like 1% of trash my way, clearly checking whether I'll maybe engage with it, but report+block works wonders.
It's not a "problem", as such. As I said, I created the account to view the pages and groups of small businesses and organisations that have no other online presence. I don't use it for the doomscroll algorithm. This was just my observation of what kind of content is targeted towards males in my location by default.
But it's not exactly a "default", it's more of a "demographic with little data"... and I bet it's small enough that the algorithm is showing exactly the content most of its members are looking for. It's somewhat of a sad reflection on the state of privacy, when keeping things private becomes a segmenting parameter.