Technically, yes. Practically, it's complicated. It doesn't really exist within the same ecosystem as other Linux distros.
It's not as different as Android (which is also technically a Linux distribution), but running a normal DE and all the programs that come with it is still clearly an advanced-user thing, locked behind knowledge of how bash and virtual environments work.
Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place
If this is actually possible, then isn't that a huge security vulnerability in Android and/or iOS? I feel like if this were the case, we'd be hearing about it from security researchers rather than a lawyer.
I'd believe it because I remember the same being true for TikTok.
I don't have the links on me right now, but I remember clearly that when TikTok was new, engineers trying to figure out what data it collected found that the app could recognize when it was being observed, and would "rewrite" itself to evade detection.
They noted that they'd never seen this outside of sophisticated malware, and doubted that a social media company had the resources to write such a program.
doubted that a social media company had the resources to write such a program.
Em... writing a different manifest and asking the OS to reinstall the app is not rocket science. Detecting that it's running in a testing environment and not asking for permission to access some types of data is also quite easy. Downloading a different update or modules depending on which device and environment it gets installed to is basic functionality.
It's still sneaky behavior and a dark pattern, but come on.
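The "different modules per environment" point above can be sketched server-side. This is a hypothetical illustration (the hardware markers and bundle names are made up, not Temu's or TikTok's actual logic): known emulator hardware strings get a vanilla build, everyone else gets the full one.

```python
# Hypothetical sketch: a server picking which module bundle to serve
# based on the device profile an app reports at install time.
# All names here are illustrative, not from any real app.

def select_update(profile: dict) -> str:
    """Pick a module bundle for a given reported device profile."""
    # Common Android emulator hardware identifiers.
    emulator_markers = {"goldfish", "ranchu", "generic_x86"}
    if profile.get("hardware") in emulator_markers:
        # Analysis environments get the plain build.
        return "baseline.zip"
    if profile.get("sdk", 0) >= 33:
        return "full-modern.zip"
    return "full-legacy.zip"

print(select_update({"hardware": "goldfish"}))              # baseline.zip
print(select_update({"hardware": "qcom", "sdk": 34}))       # full-modern.zip
```

Nothing here is exotic; it's the same mechanism as staged rollouts and device-targeted APK splits.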
There is some irony to be had, in discussing this stuff on a page that starts by asking me to login, then to be good and disable my ad blocker, only to proceed with keeping half the text of the article as images so you can't copy+paste it... and even all the comments!
Using that as a baseline... the CPU type, memory usage, disk space, etc. are some extra data points freely available to all apps.
A developer can distribute an app with multiple versions, some targeting more modern and capable devices, some older and more limited. It's a feature, not a bug!
*Other apps you have installed (I've even seen some I've deleted show up in their analytics payload - maybe used as a cached value?)
This is overreaching for an app that has nothing to do with managing other apps. Still, you may want some app with those capabilities... so let's call it "sus".
*Everything network-related (ip, local ip, router mac, your mac, wifi access point name)
Your IP is... well, you're using it to connect, they will see it, duh.
The rest is overreaching and comes into PI violation terrain, but can be used for geo location... the OS does it, that's the data it uses to fine-tune the GPS's location.
*Whether or not you're rooted/jailbroken
Typical feature for banking and DRM-protected apps. Nothing to see here.
*Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds - this is enabled by default if you ever location-tag a post IIRC
Best answered by a comment [1] (SEE BELOW).
TL;DR: more DRM stuff.
*They set up a local proxy server on your device for "transcoding media", but that can be abused very easily as it has zero authentication
This is somewhat sus, but a local proxy by itself, doesn't mean any sort of risk, or that it could be exploited.
For example, Tor can be accessed using a local proxy (although VPN mode is safer).
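For context on the "zero authentication" concern: anything listening on localhost is reachable by every process on the device, so without a credential check, any other app can use it. A minimal sketch (hypothetical "transcoding" endpoint, not Temu's actual code):

```python
import http.server
import threading
import urllib.request

# Sketch: an unauthenticated local "transcoding" service.
# No auth header is checked, so any local process can call it.

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serves every request unconditionally -- no credential check.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"transcoded")

    def log_message(self, *args):
        pass  # silence per-request logging

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# A completely unrelated local process can use the service freely:
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/media").read()
print(body)  # b'transcoded'
srv.shutdown()
```

Whether that's exploitable depends on what the endpoint can actually do, which is the real question.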
The scariest part of all of this is that much of the logging they're doing is remotely configurable,
Not exactly. It's how feature flags, and remote testing/debugging works too.
and unless you reverse every single one of their native libraries (have fun reading all of that assembly, assuming you can get past their customized fork of OLLVM!!!) and manually inspect every single obfuscated function.
This is worse (why do they use a custom OLLVM fork?), and obfuscation usually means they have something to hide. It's the opposite of security for the user.
They have several different protections in place to prevent you from reversing or debugging the app as well. App behavior changes slightly if they know you're trying to figure out what they're doing.
Not good, but unfortunately allowed. That behavior is shared by both DRM protected software, and malware.
There's also a few snippets of code on the Android version that allows for the downloading of a remote zip file, unzipping it, and executing said binary. There is zero reason a mobile app would need this functionality legitimately.
False.
There are two legitimate reasons: plugins, and DLCs.
It can be used for shady stuff, but is also a "feature, not a bug".
On top of all of the above, they weren't even using HTTPS for the longest time. They leaked users' email addresses in their HTTP REST API, as well as their secondary emails used for password resets. Don't forget about users' real names and birthdays, too. It was alllll publicly viewable a few months ago if you MITM'd the application.
Well, that's just stupid, there is zero reason to send data unencrypted.
They encrypt all of the analytics requests with an algorithm that changes with every update (at the very least the keys change) just so you can't see what they're doing.
Ehm... this is the correct behavior. See previous point.
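Rotating the key per release is cheap to do and is exactly what makes captured traffic unreadable across versions. An illustrative sketch (XOR with a hash-derived keystream, purely for demonstration; this is not their actual scheme, and real apps should use a proper cipher):

```python
import hashlib

# Illustrative only: derive a per-release key and XOR-"encrypt" an
# analytics payload with a SHA-256-based keystream. The point is that
# a key rotated each update invalidates old traffic captures.

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(payload: bytes, release_key: bytes) -> bytes:
    ks = keystream(release_key, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

key_v1 = hashlib.sha256(b"release-1.0").digest()
key_v2 = hashlib.sha256(b"release-2.0").digest()

ct = encrypt(b'{"event":"open"}', key_v1)
# XOR is symmetric, so decrypting with the same key round-trips:
assert encrypt(ct, key_v1) == b'{"event":"open"}'
# A new release key produces different ciphertext for the same payload:
assert ct != encrypt(b'{"event":"open"}', key_v2)
```

Same payload, different release, different bytes on the wire - which is all the quoted observation amounts to.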
They also made it so you cannot use the app at all if you block communication to their analytics host off at the DNS-level.
Sus... but see the introductory part of this comment. Should boredpanda also be banned?
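For what it's worth, a "refuse to start if the analytics host doesn't resolve" check is a few lines. A sketch of what such a hard dependency looks like (the hostname is made up; `.invalid` is a reserved TLD that never resolves, standing in for a DNS-blocked host):

```python
import socket

# Sketch: a client-side hard dependency on an analytics hostname.
# If the name is blocked at the DNS level, startup bails out.
# "analytics.example.invalid" is a placeholder, not a real host.

def analytics_reachable(host: str) -> bool:
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        # NXDOMAIN / no resolver -- what a DNS-level block produces.
        return False

if not analytics_reachable("analytics.example.invalid"):
    print("refusing to start")
```

From the app's side this is indistinguishable from "our backend is down", which is presumably the cover story.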
“TikTok put a lot of effort into preventing people like me from figuring out how their app works. There’s a ton of obfuscation involved at all levels of the application, from your standard Android variable renaming grossness to them (bytedance) forking and customizing ollvm for their native stuff. They hide functions, prevent debuggers from attaching, and employ quite a few sneaky tricks to make things difficult. Honestly, it’s more complicated and annoying than most games I’ve targeted.”
This is bad, and a reason to use FLOSS apps... but since it's been accepted behavior for proprietary software, along with DRM... don't blame the player, blame the game.
No, seriously, blame the DMCA and friends. There is no way to simultaneously "enforce DRM, keep a copy of all keys at a trusted third party, and keep users secure"... so the current situation is "you get none of those".
[1]
sr71Girthbird 39 points 1 day ago
Not OP but I work at a company providing video infrastructure, and one of our products is an analytics suite. It provides all the data he mentioned and a ton more. Turner, Discovery, New York Times, Hulu, and everyone's favorite company, MindGeek, all use our Analytics, among hundreds of other large customers. Specifically where this guy says, "Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds" - that's called a heartbeat. The app or video player within the app has to have a heartbeat so that the player can detect if a viewer is still watching video etc. Our analytics + video player services send a regular heartbeat every 8 seconds. It definitely pulls in your exact location.
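A player heartbeat like that is just a periodic timer on a background thread. A toy sketch (a short interval is used here so the example finishes quickly; the commenter's product uses 8 seconds):

```python
import threading
import time

# Sketch of a video-player "heartbeat": a background loop that reports
# watch state every N seconds until told to stop.

events = []

def heartbeat(interval: float, stop: threading.Event):
    # Event.wait doubles as both the sleep and the stop check.
    while not stop.wait(interval):
        # A real player would POST playback position and session id here;
        # per the comment, location data can ride along with each ping.
        events.append({"t": time.monotonic(), "state": "playing"})

stop = threading.Event()
t = threading.Thread(target=heartbeat, args=(0.05, stop))
t.start()
time.sleep(0.2)   # let a few pings fire
stop.set()
t.join()
print(len(events))  # several heartbeats recorded
```

The privacy question isn't the heartbeat itself but what gets attached to each ping.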
Uh, as someone who does malware analysis, sandbox detection is not easy, and is certainly not something that a non-malware-developer/analyst knows how to do. This isn't 2005, where sandboxes are listing their names in the registry/system config files.
I haven't done sandbox detection for some years now, but around 2020, it was already "difficult" as in hard to write from scratch... yet already skid easy as in "copy+paste" from something that does it already. Surely newer sandboxes take more stuff into account, but at the same time more detection examples get published, simply advancing the starting point.
So maybe TikTok has a few people focused on it, possibly with some CI tests for several sandboxes. I don't think it's particularly hard to do 🤷
I hate Temu, but this (apparently contracted?) Grizzly Research report isn't really all that trust-inspiring, tbh.
Our experts identified a stack of software functions that are completely inappropriate to and dangerous
The stack difference they list relative to the Amazon app:
*Package compile
*Requesting system logs
*Some code obfuscation
*MAC address collection
*Install permission
*Wake lock
Meh. That's just a sliver worse than your regular, off-the-shelf proprietary corporate app. I don't see how they can pull off the promise of being a truly dynamic Android app from that report.
I do believe they hoover up data, but they aren't otherworldly super-hackers. They will probably just ask for the data and the users will hand it over in a second. For most people, it really is that simple.
yeah, Linux is super usable for everyday shit, and even gaming now. I haven't had a significant problem in years, and I'm not a sysadmin or someone who knows the mystic ways of the command line or anything. I'm just a random idiot
So just like the majority of USAian apps out there? I think Temu fits right in. Why are people so concerned about what China is doing with their data, but not the very countries they live in or (more importantly) the dominant online surveillance presence: the USA?
One thing that's obvious here on Lemmy is that whataboutism works only in one direction. If an article is critical of China, Russia, Iran, or other dictatorships, you'd read, "But about U.S./EU/the West". But there are tons of articles here critical of Western countries, and it's accepted. Why is this? Just wumaos?
It's funny that every time someone points out the pot calling the kettle black the training kicks in to shout "whataboutism" and it must be "wumao". It's almost a meme. You don't think an article about Xi Ping's government warning about USAian surveillance would be mocked and ridiculed due to their Great Firewall? That wouldn't be "whataboutism" though, right? It would be a "critical opinion"?
an article about Xi Ping's government warning about USAian surveillance
Not possible. The CCP doesn't "warn"; it orders the app/site/word/photo blocked, and then it never existed. Anyone daring to say that it did, or to warn of stuff the CCP didn't say, gets imprisoned or worse (see: the doctor who dared to warn about COVID, instead of following the CCP's truth).
Yeah, these are the 'tankies' who got banned on Reddit, right? I guess it takes time until they get a minority, but it's good that the community grows steadily.
I'm not sure I understand why this question comes up every time some Chinese app is in a news article.
Anyway, it should not come as a surprise, but "Arkansas Attorney General Tim Griffin", someone who works as AG for a state in the US, presumably is more interested in US interests than Chinese interests, and presumably places more trust in the government and businesses of the country he lives in than in the government (and businesses, for where there's a distinction anyway) of the country of his nation's economic rival.
The difference is, the place where I live has some data privacy regulations which actually get enforced, and I have some legal recourse against organizations which mishandle my data. China does not have such regulations and I do not have any recourse against organizations based there, so my risk from them is significantly higher.
Yes you can, but most people aren't. In real life, by far the most common response I've gotten when talking about privacy is 😴. My colleagues in tech will hotly debate China's surveillance, but happily use Face ID on their iPhones, upload their entire lives to Google or iCloud (including recordings of therapy sessions), send their blood in for a heritage check, use Amazon almost exclusively for shopping, have an Amazon Ring camera at their door, and so much more.
“Temu is designed to make this expansive access undetected, even by sophisticated users,” Griffin’s complaint said. “Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place.”
So just like the majority of USAian apps out there?
Which apps do that? Because I am certain it's NOT the majority, and very skeptical about any other apps doing that.
Lemmy is just a carrier software, its license has nothing to do with comments.
Instances however, each have their own TOS and can enforce license controls.
Ideally, all comments should have a "license" field, so stuff like instances with ads on them, or subscription-only instances, or CC0/CC-BY only instances, could inform other instances of their rights, and avoid comments that don't meet their policies.
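The suggested per-comment "license" field plus instance-side policy filtering could look something like this. Entirely hypothetical: this field does not exist in the actual federation protocol; the structure below is invented for illustration.

```python
# Hypothetical sketch of a "license" field on federated comments,
# with an instance filtering incoming posts against its own policy.
# Neither the field nor the policy set exists in any real protocol.

ALLOWED_LICENSES = {"CC0-1.0", "CC-BY-4.0"}  # e.g. a CC-only instance

incoming = [
    {"author": "alice", "body": "hi", "license": "CC0-1.0"},
    {"author": "bob",   "body": "hi", "license": "All-Rights-Reserved"},
    {"author": "carol", "body": "hi"},  # no license declared
]

# Keep only comments whose declared license meets instance policy;
# undeclared licenses are rejected rather than assumed.
accepted = [c for c in incoming if c.get("license") in ALLOWED_LICENSES]

print([c["author"] for c in accepted])  # ['alice']
```

The interesting design question is the default: rejecting undeclared licenses (as above) is safe for the instance but would drop most of today's comments.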
I mean, I don't like either malware. They are banning tiktok due to security and spying on US citizens, as well as some election interference. These are all things that FAANG can and will do, but since they're American it isn't regulated nearly as much as they should be. All invasions of privacy are bad, it's not whataboutism to call out the holes our regulations have by being overly specific to only "adversarial countries".
🤖 I'm a bot that provides automatic summaries for articles:
Temu—the Chinese shopping app that has rapidly grown so popular in the US that even Amazon is reportedly trying to copy it—is "dangerous malware" that's secretly monetizing a broad swath of unauthorized user data, Arkansas Attorney General Tim Griffin alleged in a lawsuit filed Tuesday.
Griffin fears that Temu is capable of accessing virtually all data on a person's phone, exposing both users and non-users to extreme privacy and security risks.
In their report, Grizzly Research alleged that PDD Holdings is a “fraudulent company” and that “Temu is cleverly hidden spyware that poses an urgent security threat to United States national interests.”
Investigators agreed, the lawsuit said, concluding “we strongly suspect that Temu is already, or intends to, illegally sell stolen data from Western country customers to sustain a business model that is otherwise doomed for failure."
Researchers found that Pinduoduo "was programmed to bypass users’ cell phone security in order to monitor activities on other apps, check notifications, read private messages, and change settings," the lawsuit said.
A Temu spokesperson provided a statement to Ars, discrediting Grizzly Research's investigation and confirming that the company was "surprised and disappointed by the Arkansas Attorney General's Office for filing the lawsuit without any independent fact-finding."
Not at all surprising. ChatGPT 'knows' a course's content insofar as it's memorized the textbook and all the exam questions. Once you start asking it questions it's never seen before (more likely for advanced topics that don't have a billion study guides and tutorials for) it falls short, even for basic questions that'd just require a bit of additional logic.
Mind you, memorizing everything is impressive and can get you a degree, but when tasked with a new problem never seen before ChatGPT is completely inadequate.
Right? Can students use the internet on this test? Because the LLMs have the entire internet to search for the answers, and I guarantee you those textbooks and exam questions are online and searchable.
I wonder how undergrads would do on the same exams given unlimited time and internet access but with LLMs blocked. That's essentially what the LLMs have.
This is incorrect as was shown last year with the Skill-Mix research:
Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
It's probably not blatantly bypassing security and privacy features, what it is PROBABLY doing is using the user to bypass them by simply manipulating them to do it.
Social engineering is way easier than whatever bullshit you would need to do to bypass sandboxing and dynamically recompile, or whatever people are claiming, and my guess would be that this is what they're doing.
If the suit is claiming they are doing what i said, that's probably legal, and not going anywhere, unless tiktok ban bill 2.0. If the suit is claiming what others are claiming, it's still probably wrong and probably going to be tiktok ban bill 2.0.
Unfortunately these things aren't all that exciting at the end of the day.
“Since the rise of large language models like ChatGPT there have been lots of anecdotal reports about students submitting AI-generated work as their exam assignments and getting good grades.
His team created over 30 fake psychology student accounts and used them to submit ChatGPT-4-produced answers to examination questions.
The anecdotal reports were true—the AI use went largely undetected, and, on average, ChatGPT scored better than human students.
Scarfe’s team submitted AI-generated work in five undergraduate modules, covering classes needed during all three years of study for a bachelor’s degree in psychology.
Shorter submissions were prepared simply by copy-pasting the examination questions into ChatGPT-4 along with a prompt to keep the answer under 160 words.
Turnitin’s system, on the other hand, was advertised as detecting 97 percent of ChatGPT and GPT-3 authored writing in a lab with only one false positive in a hundred attempts.