I'm curious whether the increasingly invasive telemetry of modern Windows will have legal implications surrounding patient privacy here in the US. I work IT in the healthcare field, and one of our key missions is HIPAA compliance. What, then, will be the impact if Microsoft starts storing more and more in-depth data offsite? Will keyboard entries into our EHR be tracked and stored in Microsoft's servers? Will we subsequently be held liable if a breach at Microsoft causes this information to leak, or if Microsoft just straight-up starts selling it to advertisers? Windows is our one-and-only option for endpoint devices, so it's not like we can just switch.
I genuinely don't have the answers to these questions right now, but it may become a serious conversation for our department if things continue on their current trajectory. Or maybe I'm just old and paranoid and everything will be okie dokie.
Like most of Microsoft's more odious features, this one can be turned off through GPO/Intune policy across an organization. As such, the liability will mostly fall on the organization to make sure it's off. The privacy and security impacts will be felt by individuals and small businesses.
They claim that the data is only stored locally, so far. We'll see, I guess.
Sadly a lot of the privacy switches are exclusive to enterprise and education users, but our endpoints are running Pro (we have our previous supervisor to thank for that). I guess I'll hope this is one of the ones we can just toggle off without any fuss.
Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up-to-date human conversations about anything and everything.
Reddit CEO Steve Huffman says
But it refuses to pay the users, or at least the moderators, who built Reddit into what it is now. Instead, it pushes more advertisements and sells data to AI companies for millions of dollars.
I've made a note to ask about pay when I see mod postings. Luckily I'm finding more and more mod postings, so there are lots of opportunities to remind mods that they're lining Reddit's pockets for free.
It seems like the consensus of this thread is that the name isn't holding it back. That was my thinking going into it, but the article makes some very valid points such as the name (being related to a sexual and sometimes derogatory word) making it a non-starter in some organizations.
I have it installed on all our computers at work for basic image editing, but we're a small business and never gave it much thought. I can absolutely see it being problematic in a school setting, however. More to the point, Adobe has ably demonstrated: get them hooked on your software in school and you'll dominate the market. Imagine if kids had been learning GIMP instead of Photoshop all these years.
Anyway, I've got no dog in this fight. Just pointing out what I see as a valid point in the article.
Also, I like their original name possibility of IMP much better. The mascot could have been a cute little imp instead of ... whatever it is now.
My very large organisation has Gimp available for basic image manipulation. I've tried to get them to use Paint.NET instead, but nooooo... Apparently we like hitting nails with jackhammers around here
I should make clear I am not an ACAB person by any means. The whole mentality that the police are automatically the enemy makes just as little sense to me as that the police are never the enemy.
But no one in the world should have unaccountable power. Body cams, judicial oversight, warrants, charges when they abuse their power, getting rid of police unions or anything else that makes it difficult for a department to fire an officer who they feel is causing problems. Just as some percentage of non-police people do bad stuff and we need a system to watch them and protect everyone else from them, we need it ten times more for police.
I completely agree with you. They need to be made accountable.
That's the real root of the problem. ACAB/defund/whatever, if they were actually held accountable for their insane actions a lot of the problems would go away.
"You killed three peop- oh you resigned? Nevermind then. Have a good day Officer."
Yeah. The frustrating thing is that the blanket "defund the police" attitude actually makes the problem of department-hopping bad cops, or tolerance for bad behavior by cops, worse a lot of the time, by starving departments of resources which makes it harder to hire as many cops as they need which makes them more desperate for employees and makes it harder to be selective about who they employ.
The way I've understood the "defund the police" movement's point is that they're saying police funding is excessive because a lot of the things cops do should be handled before the cops have to get involved, eg. with higher funding for mental health and social services, housing for homeless people etc. So the point is that you wouldn't need as many cops in the first place if things were handled more humanely "downstream" so to speak, instead of just letting problems fester until things go sideways
Yeah. That part makes perfect sense to me. It's a little different from what you were saying, but someone on Lemmy was telling me about their experience somewhere that something like this had been implemented -- mental health workers going on certain calls instead of cops, with cops assisting on calls that might turn violent -- and it sounds like it works out great from the perspective of everyone involved. The callers are happier because the people who show up are better at handling the problem; the cops are happier because they don't have to deal with calls they're less qualified for; and the mental health workers are happier because they have cops on standby for violent calls, but they also get to handle things right from the jump, instead of arriving after the cops have tackled and cuffed the person and having to walk into the middle of the wreckage.
I know you were talking about things at an even much earlier level than when the 911 call happens; that sounds good to me too. The only part I was objecting to was the vindictive framing of it. Like if you want to fund mental health and homeless services that sounds great, we should do that. Coupling that idea up with punishing the police because they were bad (not saying you're doing that, but definitely some people have that in mind saying "defund the police" I think) I don't think is the way to produce progress though.
The cartoon is excellent but yes the problem is that the phrasing doesn't match the reality. "Fund the nonpolice" isn't catchy though.
Honestly, just properly funding anything that is designed to do benevolent things for the community as a whole is a tough sell with way too many US community politicians
This seems to be a problem with at least conservative politicians everywhere. In Finland where I live we do still have the vestiges of a welfare state (and it really is vestigial at this point), but right wing politicians keep dismantling it and cutting taxes on the rich, and later on leftist politicians find it impossible to roll back any changes due to resistance from the right.
Defund means to essentially get rid of a department or thing. The phrase "defund the police" means "get rid of the police". It doesn't matter if there's some cartoon to re-explain away the phrase or a bunch of other people trying to re-define the English language. English is English, and words have meaning.
The "defund the police" movement failed because us liberals don't fucking understand marketing. Like, at all. I can't count the number of times some liberal movement crops up with a slogan and Republicans turn that slogan against them, because nobody spent ten minutes thinking about how the phrase could be abused. For example, that brief time when the LGBT movement wanted to rename themselves to LGBTQIA2SUVWTFBBQ? Seriously?!?
It's like naming your baby "Assman McAssface" and wondering why he gets bullied in school.
I have a private theory, for which I have absolutely 0 evidence, that the forces of the establishment have some way of sneaking stupid unpopular things or phrases into the left's discourse, which the left then seizes and runs with, much to the establishment's delight. E.g. renaming the Green Party the Green-Rainbow Party, climate activists attacking famous artworks, things like that.
I have 0 evidence for this, as applied to “defund the police” or anything else. Actually I sort of suspect that “defund the police” was an original creation of the ACAB contingent which meant exactly what it sounds like, that got retconned by more sensible but still reform-minded people into meaning “more properly fund everything else” for exactly the reasons we’re discussing. But as a general rule I suspect (again, with 0 evidence) that some of what you’re talking about actually comes from deliberate sabotage.
Some of the more extreme-minded liberals do a good enough job at sabotaging themselves and the movements they are a part of, without the need for malicious saboteurs. Like Atheist+ or progressive stack.
Then again, Russian orgs are out there, trying to influence political movements and sabotage others. GamerGate was an entire sea of political actors, journalists, influencers, and Russian agents, trying to push their own narratives to the point of mass disinformation from both sides, with the general public on either side confused and angry at the other's responses.
But in order to get the money for those programs, especially if their effect is to lower the workload for police, you should get the money from the police budget, otherwise it's just wasted money. Are you just going to keep giving the NYPD a billion dollars a year to do nothing?
That's what ACAB means though. You cannot trust cops, because there's no real accountability for them. Why is there no accountability? Because their colleagues lie for them, their bosses lie for them, the prosecutors decline to prosecute them, judges trust them implicitly, their unions intimidate mayors and lobby politicians for more funding, tougher laws (for non-cops) and less accountability for themselves.
The system is so fucked up that reforming it seems like a waste of time. Actual "good cops" get squeezed out or worse. You might as well assume that ACAB, because the stakes are too high to assume otherwise.
Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place
If this is actually possible then isn't that a huge security vulnerability in Android and/or iOS? I feel if this was the case we'd be hearing about it from security researchers rather than a lawyer.
I'd believe it because I remember the same being true for TikTok.
I don't have the links on me right now, but I remember clearly that when TikTok was new, engineers trying to figure out what data it collected found that the app could recognize when it was being observed, and would "rewrite" itself to evade detection.
They noted that they'd never seen this outside of sophisticated malware, and doubted that a social media company had the resources to write such a program.
doubted that a social media company had the resources to write such a program.
Em... writing a different manifest and asking the OS to reinstall the app is not rocket science. Detecting that it's running in a testing environment and not asking for permission to access some types of data is also quite easy. Downloading different updates or modules depending on which device and environment it gets installed on is basic functionality.
It's still sneaky behavior and a dark pattern, but come on.
There is some irony to be had, in discussing this stuff on a page that starts by asking me to login, then to be good and disable my ad blocker, only to proceed with keeping half the text of the article as images so you can't copy+paste it... and even all the comments!
Using that as a baseline... the CPU type, memory usage, disk space, etc. are some extra data points freely available to all apps.
A developer can distribute an app with multiple versions, some targeting more modern and capable devices, some older and more limited. It's a feature, not a bug!
*Other apps you have installed (I've even seen some I've deleted show up in their analytics payload - maybe using a cached value?)
This is overreaching for an app that has nothing to do with managing other apps. Still, you may want some app with those capabilities... so let's call it "sus".
*Everything network-related (ip, local ip, router mac, your mac, wifi access point name)
Your IP is... well, you're using it to connect, they will see it, duh.
The rest is overreaching and strays into PII-violation territory, but it can be used for geolocation... the OS does the same; that's the data it uses to fine-tune the GPS location.
*Whether or not you're rooted/jailbroken
Typical feature for banking and DRM-protected apps. Nothing to see here.
*Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds - this is enabled by default if you ever location-tag a post IIRC
Best answered by a comment [1] (SEE BELOW).
TL;DR: more DRM stuff.
*They set up a local proxy server on your device for "transcoding media", but that can be abused very easily as it has zero authentication
This is somewhat sus, but a local proxy by itself doesn't mean any sort of risk, or that it could be exploited.
For example, Tor can be accessed using a local proxy (although VPN mode is safer).
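To illustrate why the "zero authentication" part matters, though: anything listening on localhost is reachable by every process on the device, not just the app that started it. A toy sketch in Python (this is an illustration of the general pattern, not TikTok's actual service):

```python
import socket
import threading

def start_local_proxy(port=0):
    """Minimal stand-in for an unauthenticated local helper service:
    any process on the machine can connect; nothing checks who's asking."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))  # port=0 lets the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _addr = srv.accept()
        data = conn.recv(1024)
        conn.sendall(b"OK " + data)  # happily serves whoever connected
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def talk_to_proxy(port, payload):
    """Any local app -- not just the one that started the proxy -- can do this."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        return c.recv(1024)
```

Nothing in that exchange identifies or authorizes the caller, which is exactly the abuse surface being described.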
The scariest part of all of this is that much of the logging they're doing is remotely configurable,
Not exactly. It's also how feature flags and remote testing/debugging work.
and unless you reverse every single one of their native libraries (have fun reading all of that assembly, assuming you can get past their customized fork of OLLVM!!!) and manually inspect every single obfuscated function.
This is worse (why do they use a custom OLLVM fork?), and obfuscation usually means they have something to hide. It's the opposite of security for the user.
They have several different protections in place to prevent you from reversing or debugging the app as well. App behavior changes slightly if they know you're trying to figure out what they're doing.
Not good, but unfortunately allowed. That behavior is shared by both DRM protected software, and malware.
There's also a few snippets of code on the Android version that allows for the downloading of a remote zip file, unzipping it, and executing said binary. There is zero reason a mobile app would need this functionality legitimately.
False.
There are two legitimate reasons: plugins, and DLCs.
It can be used for shady stuff, but is also a "feature, not a bug".
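For what the legitimate version of that looks like, here's a minimal plugin-loader sketch in Python (the names are made up, and a mobile app would do the equivalent with dex or native modules rather than .py files):

```python
import importlib.util
import os
import tempfile
import zipfile

def load_plugin(zip_path, module_name):
    """Extract a downloaded archive and import a module from it --
    the 'plugins/DLC' pattern: download zip, unzip, execute its code."""
    target = tempfile.mkdtemp(prefix="plugins_")
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)
    spec = importlib.util.spec_from_file_location(
        module_name, os.path.join(target, module_name + ".py"))
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # runs the downloaded code
    return mod
```

The same three steps (fetch, unpack, execute) serve both the legitimate use and the shady one; the code alone can't tell you which is intended.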
On top of all of the above, they weren't even using HTTPS for the longest time. They leaked users' email addresses in their HTTP REST API, as well as their secondary emails used for password resets. Don't forget about users' real names and birthdays, too. It was alllll publicly viewable a few months ago if you MITM'd the application.
Well, that's just stupid, there is zero reason to send data unencrypted.
They encrypt all of the analytics requests with an algorithm that changes with every update (at the very least the keys change) just so you can't see what they're doing.
Ehm... this is the correct behavior. See previous point.
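Rotating analytics keys per release is standard practice; a minimal sketch of one way it could be done (purely illustrative -- nobody outside ByteDance knows their actual construction):

```python
import hashlib
import hmac

def analytics_key(master_secret: bytes, app_version: str) -> bytes:
    """Derive a per-release key so every update ships a different one,
    without having to distribute a fresh random key each time."""
    return hmac.new(master_secret, app_version.encode(), hashlib.sha256).digest()
```

Each build derives its own key deterministically from the version string, so old captures can't be decrypted with a new build's key.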
They also made it so you cannot use the app at all if you block communication to their analytics host at the DNS level.
Sus... but see the introductory part of this comment. Should boredpanda also be banned?
“TikTok put a lot of effort into preventing people like me from figuring out how their app works. There’s a ton of obfuscation involved at all levels of the application, from your standard Android variable renaming grossness to them (bytedance) forking and customizing ollvm for their native stuff. They hide functions, prevent debuggers from attaching, and employ quite a few sneaky tricks to make things difficult. Honestly, it’s more complicated and annoying than most games I’ve targeted.”
This is bad, and a reason to use FLOSS apps... but since it's been an accepted behavior for proprietary software, along with DRM... don't blame the player, blame the game.
No, seriously, blame the DMCA and friends. There is no way to at the same time "enforce DRM, keep a copy of all keys at a trusted third party, and keep users secure"... so the current situation is "you get none of those".
[1]
sr71Girthbird:
Not OP but I work at a company providing video infrastructure, and one of our products is an analytics suite. It provides all the data he mentioned and a ton more. Turner, Discovery, New York Times, Hulu, and everyone's favorite company, MindGeek, all use our analytics, among hundreds of other large customers. Specifically where this guy says, "Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds" - that's called a heartbeat. The app or video player within the app has to have a heartbeat so that the player can detect if a viewer is still watching video etc. Our analytics + video player services send a regular heartbeat every 8 seconds. It definitely pulls in your exact location.
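A heartbeat like that is just a small periodic payload; a minimal sketch in Python (the field names and structure here are my guesses, not any vendor's real schema):

```python
import time

def heartbeat(session_id, position_s, lat=None, lon=None):
    """Build one analytics 'heartbeat' tick for a video player.
    A real player would send one of these every N seconds while playing."""
    beat = {
        "session": session_id,
        "ts": int(time.time()),          # wall-clock time of this tick
        "position_s": position_s,        # how far into the video the viewer is
        "still_watching": True,
    }
    if lat is not None and lon is not None:
        beat["geo"] = {"lat": lat, "lon": lon}  # the location piggybacking
    return beat

# Usage: while playing, roughly `send(heartbeat(sid, player.position)); sleep(8)`
```

The point is that "am I still watching" and "where am I" travel in the same tick, which is how a mundane player feature ends up doubling as a location tracker.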
Uh, as someone who does malware analysis, sandbox detection is not easy, and is certainly not something that a non-malware-developer/analyst knows how to do. This isn't 2005 where sandboxes are listing their names in the registry/ system config files.
I haven't done sandbox detection for some years now, but around 2020 it was already "difficult" as in hard to write from scratch... yet already script-kiddie easy, as in "copy+paste" from something that does it already. Surely newer sandboxes take more stuff into account, but at the same time more detection examples get published, simply advancing the starting point.
So maybe TikTok has a few people focused on it, possibly with some CI tests for several sandboxes. I don't think it's particularly hard to do 🤷
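For a flavor of how low that "copy+paste" bar is, here are the kinds of crude heuristics that circulate publicly (Python for illustration; real Android evasions check emulator build properties, sensors, timing, and much more):

```python
import multiprocessing
import os

def looks_like_sandbox() -> bool:
    """Crude, widely-published heuristics for spotting an analysis
    environment. Each check alone is weak; malware stacks many of them."""
    signals = 0
    if multiprocessing.cpu_count() <= 2:   # analysis VMs are often tiny
        signals += 1
    if os.environ.get("USER", "").lower() in {"sandbox", "malware", "vagrant"}:
        signals += 1                       # telltale analyst usernames
    if os.path.exists("/.dockerenv"):      # containerized analysis rig
        signals += 1
    return signals >= 2                    # require multiple hits
```

None of this is sophisticated; the arms race is in the checks modern sandboxes have learned to hide, not in the pattern itself.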
I know it’s officially not cool to like Rick and Morty anymore, but I cannot read a description of a suspicious bad thing called “Project Nimbus” that Google is getting itself involved in, without picturing a weird creepy sexual supervillain riding a giant seashell into a meeting with their executives
It became associated with aggressive internet shutins bleating about Pickle Rick and the szechuan sauce, although it's hard to tell where exactly the line was between "these guys are losers yelling and being obnoxious at McDonalds," and "stop cyberbullying we're all just internet weirdos who like a funny cartoon and that is fine." And then, Justin Roiland had some kind of sex abuse allegations and the worm turned and the world is officially supposed to hate Rick and Morty now, I think.
I mean, allegedly. I'm with you; some of it is still some great funny shit.
Thanks for the recap. Yeah I think I was more or less aware of most of those bits. I guess I didn't get to the conclusion. 😅 Also I think they handled Justin Roiland's issues fairly unambiguously. Assuming he was a bastard, I didn't skip a beat watching the Justin-free season.
Rick and Morty fired Justin Roiland and got two new voice actors to play Rick and Morty. Ian Cardoni and Harry Belden. They're great, they voiced season 7 and it's the best season of Rick and Morty yet.
I hadn’t fully been aware of the changeover, but yeah I saw season 7 and it was definitely more coherent and solid and less wandery and desperate than some of the middle seasons, absolutely there was an uptick in quality to me yes
We aren't naive. We all knew this would happen. But, as it happened, it was better than banning AI in the free world and giving dictators the advantage in AI tech.
Ah, the old "the only way to stop a bad person with a gun is for all the good people to have guns" argument.
Were the dictators even working on their own large language models, or do these tools only exist because OpenAI made one and released it to the public before all the consequences had been considered, thus sparking an arms race where everyone felt the need to jump in on the action? Because as far as I can see, ChatGPT being used to spread disinformation is only a problem because OpenAI were too high on the smell of their own arses to think about whether making ChatGPT publicly available was a good idea.
It really is. I'm also not a huge fan of "everyone needs to have access to their own personal open source AI, otherwise only corporations will be able to use it", like somehow the answer to corporations being shit is to give everyone else a greater ability to be shit too. What the world really needs is even more shit!
Just don't complain when the world becomes even more shit than it already is. Open source AIs that rely on scraping content without paying the creator are just as exploitative of workers as corporate AIs doing the exact same thing.
The reality is, ingesting this huge amount of data was the only way these current AI models could become as powerful as ChatGPT. With a restriction like you fantasize about, AI would have been dominated by bad actors, and the West would not have had a counter-technology for a decade, if not longer.
Regulating the outputs of AIs would be a separate story. But it's still overwhelmingly difficult. OpenAI is actually advanced in this area, in the sense that they have in their pocket the single best technology for politically balancing a chatbot's replies.
AI programs are already dominated by bad actors, and always will be. OpenAI and the other corporations are every bit the bad actors as Russia and China. The difference between Putin and most techbros is as narrow as a sheet of paper. Both put themselves before the planet and everyone else living on it. Both are sociopathic narcissists who take, take, take, and rely on the exploitation of those poorer and weaker than themselves in order to hoard wealth and power they don't deserve.
This is just labeling. You can label everything as bad at will. I'm fine with that, it's called "you're entitled to your opinion". That's not objective though.
Well, let's see about the evidence, shall we? OpenAI scraped a vast quantity of content from the internet without consent or compensation to the people that created the content, and leaving aside any conversations about whether copyright should exist or not, if your company cannot make a profit without relying on labour you haven't paid for, that's exploitation.
And then, even though it was obvious from the very beginning that AI could very easily be used for nefarious purposes, they released it to the general public with guardrails that were incredibly flimsy and easily circumvented.
This is a technology that required being handled with care. Instead, its lead proponents are of the "move fast and break things" mentality, when the list of things that can be broken is vast and includes millions of very real human beings.
You know who else thinks humans are basically disposable as long as he gets what he wants? Putin.
So yeah, the people running OpenAI and all the other AI companies are no better than Putin. None of them care who gets hurt as long as they get what they want.
I already wrote one reply laying out my main point. But whatever argument you come up with, I don't think it will match reality as viewed by AI researchers. If you give me specific, short questions I'd be happy to engage in a discussion, time permitting.
In any case, I won't engage with metaphoric arguments like yours about guns, because metaphoric arguments are very difficult to handle scientifically. Every situation is different. I mean that anybody can always end the discussion by saying "that's apples vs. oranges", and every time that happens you'd have no objective way to counter it.
The metaphoric argument is exactly on point, though: the answer to "bad actors will use it for evil" is not "so everybody should have unrestricted access to this really dangerous thing." Sorry, but in no situation you can possibly devise is giving everyone access to a dangerous tool the correct answer to bad people having access to it.
I can say it's both on point and not. For the not, you can ban the gun in the UK and it will be very difficult to bring one from the continent. Peace. But the same is not true for AI. If the UK government bans AI, Russia can still bring it through the internet.
And then I can still counter-argue that one, and then counter-argue this one also. See what a mess metaphoric arguments bring.
Had OpenAI not released ChatGPT, making it available to everyone (including Russia), there are no indications that Russia would have developed their own ChatGPT. Literally nobody has made any suggestion that Russia was within a hair's breadth of inventing AI and so OpenAI had better do it first. But there have been plenty of people making the entirely valid point that OpenAI rushed to release this thing before it was ready and before the consequences had been considered.
So effectively, what OpenAI have done is start handing out guns to everyone, and is now saying "look, all these bad people have guns! The only solution is everyone who doesn't already have a gun should get one right now, preferably from us!"
it was better than banning AI in the free world and giving dictators advantages in AI tech.
The US doesn't need to ban AI. It just needs to stop publicly deploying it, untested and unregulated, on the masses. And some of these big tech companies need to stop releasing open models that can be easily obtained and abused by bad actors. Dictatorships don't actually like AI internally, because it threatens their control of the narrative within their country. For example, the CCP has been very cautious of it when compared to the US because it is concerned about how it could be employed against the party.
And this whole arms race argument sort of ignores the fact that the US continuing to mass deploy this shit at breakneck speed is already giving the dictators the advantages they need to fuck with democracy. No one needs to have a real war with the US if it starts one with itself.
So you're saying instead of tacking "site:reddit.com" onto my Google search, I can now use ChatGPT to get the same information, except without the original context, and it will often be wrong? Amazing!
And this also means that companies will fill Reddit with fake comments promoting their brand to ensure that their brand gets mentioned in ChatGPT responses, right? Can't wait!
I'm not so sure I'd call myself a "tankie", but I'd like a $12k new car and if it were an EV, even better. I recently paid more for a used car! Cars, like everything else, have gotten so stupidly expensive. It would have been nice to see one thing actually become more affordable because I know wages ain't gonna increase accordingly for a long time.
Kneecapping decarbonization efforts in the name of "jobs" and "the economy" is just straight up Republican policy. I do not care how many jobs are preserved on my rapidly warming planet.
How does everyone buying a brand new car result in decarbonization versus keeping the ones we've already expended carbon building and upgrading them when they break? There are 283 million cars on the road in the US and replacing them all is going to generate a metric fuckton of carbon.
Yeah China has been doing the same with solar panels. Funny you bring it up since my wife used to work at a facility that made the ingots and sliced them up. They shut down several years ago since it was impossible to compete with Chinese prices. Hurray for cheap prices right?
Cool you can act dramatically. Now that the theatrical portion is out of the way, maybe you can defend your position by responding to the topic of my comments.
Is it really a drop in the bucket when we look at the value per vehicle?
If we compare Ford to BYD, for example:
In 2023 Ford sold around 2 million cars and BYD around 3 million.
For Ford, only 72,608 of those 2 million were EVs (3.6%). For BYD, almost 1.6 million were EVs (53.3%).
In 2023 Ford got $9.2 billion from the US government to produce EVs, so around $126,000 per EV sold in 2023.
$126,000 × 1,600,000 ≈ $200 billion! So unless BYD received more than $200 billion from the Chinese government in 2023, each EV sold by Ford is more subsidized than an EV sold by BYD.
This is not a rigorous analysis; I took huge shortcuts and may have made mistakes in the calculations.
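As a quick sanity check, the arithmetic in Python (the inputs are the figures quoted in this thread, which I haven't verified; note the per-EV × 1.6 million product lands near $200 billion):

```python
# Ford's 2023 EV subsidy, spread over its 2023 EV sales
ford_ev_sold = 72_608
ford_subsidy = 9.2e9                      # dollars (figure from the thread)
per_ev = ford_subsidy / ford_ev_sold
print(f"${per_ev:,.0f} per Ford EV")      # prints: $126,708 per Ford EV

# What BYD would have needed to match that per-EV rate on 1.6M EVs
byd_ev_sold = 1_600_000
equivalent = per_ev * byd_ev_sold
print(f"${equivalent / 1e9:,.0f} billion")  # prints: $203 billion
```

So the comparison point is roughly $200 billion, not trillions, but the direction of the argument is unchanged.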
Wages not keeping in step with inflation is exactly why everything seems so expensive. $30k of today's money is the equivalent of less than $10k in the 80's, and cars were more than $10K then except for a few that ended up being examples of "you get what you pay for".
I should probably state that as "wage increases being suppressed".
That's horrifying. Why would a potential life-threatening device be controlled by a smartphone app? What functions could possibly not be handled on the pump itself and need to be offloaded? What FDA crook was paid off to allow such a stupid thing to hit the market?
The problem with this logic is that the manufacturers have no control over the iPhone update. The article didn't go into exactly what happened, but it could have been that the device worked fine at launch, and then Apple released an update which caused an issue in the app. Even if it didn't happen this way, I could definitely see it happening. Using an app for critical life-sustaining medical devices is like playing Russian roulette: an update from Google or Apple can put you in the hospital, or worse.
You need an incredibly robust quality management system to even achieve certification (allowing you to place on the market) when creating systems which include life support function, or functions which potentially could kill a user. All potential changes both within and outside of the manufacturers' control MUST be assessed and constantly monitored so such issues CANNOT arise.
No one should be able to legally place an unsafe app on the market, or legally perform changes to the app without the necessary checks and balances.
Medical device approvals in most countries are definitely not the wild west. Although they are not perfect.
Why does it need a connection to another device in the first place though? Silicon is tiny and cheap; all the logic, sensing, and scheduling could be done inside the pump.
I can see the utility, but there should be at least some critical operability in case the phone or app doesn't work for whatever reason, to help avoid injuries like these
The same reason you don’t carry a camera, a music player, a phone, etc as separate devices in your pocket. Because it’s wildly inconvenient and super frustrating to swap between them. For diabetics in this case, you generally have two separate companies making the pump and the glucose monitor. So at that point you are carrying a phone around, a monitor for your glucose levels, and a controller for your pump. That’s three devices that you need to keep charged and on your person at all times. Not to mention they are generally not slim and sleek and easy to pocket.
The ability to swap between these from a single device and the mental offload that brings can’t be overstated.
That being said, people who use medical services on their phones should not do OS upgrades until the device makers have notified them that the upgrade is verified and working, and any app updates should be heavily tested before going out.