
Z4rK OP ,

I mean, that’s fair; if you don’t believe in his integrity, then this news has very little value to you.

Z4rK OP ,

Care to elaborate?

The suspicious parts to me were that they didn’t show much of the private cloud stuff or how much it would cost, and that they still felt the need to promote ChatGPT.

Z4rK OP ,

How so? Many people want to use AI privately, but it’s currently too hard for most of them to set it up themselves.

Having AI tools at the OS level, so you can use them in almost any app with the guarantee that everything is processed privately on device, will be very useful if done right.

Z4rK OP ,

I mean, that’s fair. I personally use Apple devices specifically because I trust them the most on privacy, but if you don’t trust Apple with privacy, which is a 100% valid take to have, then of course this major selling point of their marketing becomes moot.

Z4rK OP ,

Unless you are designing and creating your own chips for processing, networking, etc., privacy today is about trust, not technology. There’s no escaping it. I know the iPhone and Apple are collecting data about me; I currently trust them the most on how they use it.

Z4rK OP ,

I do agree, but privacy in 2024 is sadly about trust, not technology, unless you yourself can design and create every chip used in your devices and in the network cells you connect to. No “do not allow…” setting on your device has any meaning without trust in the creator.

Z4rK OP ,

Well they just name-grabbed all of AI with their stupid Apple Intelligence branding.

Z4rK OP ,

He sort of invented it, so you have to think he’s commenting on the concept here, not the implementation.

I have tried a lot of medium and small models, and there is just no good replacement for the larger ones when it comes to natural text output. And the larger ones won’t run on device.

Still, fine-tuning smaller models can do wonders, so my guess would be that Apple Intelligence is really 20+ small, fine-tuned models that kick in based on which action you take.
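
Purely to illustrate what I mean by that speculation (none of these types or names are real Apple APIs, it’s just a sketch of the routing idea):

```swift
// Hypothetical task types an OS-level AI layer might distinguish.
enum WritingAction {
    case summarize, proofread, rewrite, reply
}

// Stand-in for a small fine-tuned model; all names here are invented.
protocol SmallModel {
    func run(on text: String) -> String
}

struct SummarizerModel: SmallModel {
    func run(on text: String) -> String {
        // A real adapter would run a small on-device fine-tuned model here.
        return "Summary of: \(text.prefix(40))…"
    }
}

struct ProofreaderModel: SmallModel {
    func run(on text: String) -> String {
        return text // placeholder: would return corrected text
    }
}

// The speculated routing: pick a specialised model per action instead of
// sending everything through one large general-purpose model.
func model(for action: WritingAction) -> SmallModel {
    switch action {
    case .summarize: return SummarizerModel()
    default:         return ProofreaderModel()
    }
}

let output = model(for: .summarize).run(on: "Some long selected text…")
print(output)
```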

Z4rK OP ,

Yes definitely, Apple claimed that their privacy could be independently audited and verified; we will have to wait and see what’s actually behind that claim.

Z4rK OP ,

That’s why it’s at the OS level. For example, for text, it seems to work in any text app that uses the standard text input API, which Apple controls.

The user activates the “AI overlay” in the OS, not in the app; the OS reads the selected text from the app and sends text suggestions back.

The app is (possibly) unaware that AI has been used or activated, and has not received any user information.

Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
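
To make the speculation concrete, here’s a minimal sketch of what “using the standard text input API” could look like from the app’s side. NotesView is an invented example, not anything Apple has shown; the point is just that the app contains no AI code at all:

```swift
import SwiftUI

// A plain SwiftUI text editor. The app ships zero AI code; it only uses
// the standard text input machinery that the OS controls.
struct NotesView: View {
    @State private var draft: String = ""

    var body: some View {
        TextEditor(text: $draft)
            .padding()
        // If an OS-level overlay were to rewrite the selection, the app
        // would only see an ordinary text change through this binding,
        // exactly as if the user had typed it themselves.
    }
}
```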

Z4rK OP ,

macOS and Windows could already be doing this today behind your back regardless of any new AI technology. Don’t use an OS you don’t trust.

Z4rK OP ,

That’s fair, but you are misunderstanding the technology if you’re bashing Apple’s AI for making macOS less secure. Most likely, it will be just as secure as, for example, their password functionality, although we don’t have the details yet. You either trust the OS or you don’t.

Microsoft Recall was designed so badly that there’s no hope for it.

Z4rK OP ,

It goes a tad bit beyond classical conditioning... LLMs provide a much better semantic experience than any previous technology and are great at relating input to meaningful content. Think of it as an improved search engine that gives you more relevant info, actions, tool suggestions, etc. based on where and how you are using it.

Here’s a great article that gives some insight into the knowledge features embedded into a larger model: https://transformer-circuits.pub/2024/scaling-monosemanticity/
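
As a toy illustration of the “relating input to meaningful content” part (the vectors and numbers below are completely made up; real embeddings come from the model itself and have hundreds or thousands of dimensions):

```swift
import Foundation

// Cosine similarity between two embedding vectors of equal length.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0.0) { $0 + $1.0 * $1.1 }
    let magA = sqrt(a.reduce(0.0) { $0 + $1 * $1 })
    let magB = sqrt(b.reduce(0.0) { $0 + $1 * $1 })
    return dot / (magA * magB)
}

// Toy 3-dimensional "embeddings" with invented numbers.
let query        = [0.90, 0.10, 0.30]
let relatedDoc   = [0.88, 0.15, 0.25]  // semantically close to the query
let unrelatedDoc = [0.05, 0.90, 0.40]  // semantically distant

print(cosineSimilarity(query, relatedDoc))   // high score -> more relevant
print(cosineSimilarity(query, unrelatedDoc)) // low score  -> less relevant
```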

Z4rK OP ,

To be honest, I’m not sure what we’re arguing about - we both seem to have a sound understanding of what LLMs are and what they are not.

I’m not trying to defend or market LLMs; I’m just describing how usable the current capabilities of typical LLMs actually are.

Z4rK OP ,

They have designed a very extensive solution for Private Cloud Compute: https://security.apple.com/blog/private-cloud-compute/

Everything I have seen from security researchers reviewing it says that it will probably be one of the best solutions of its kind - they basically do almost everything correctly, and extensively so.

The only critique I’ve seen is that they could have provided even more source code and easier ways for third parties to verify their claims, though it’s understandable that they didn’t.

Z4rK OP ,
  1. Security / privacy on device: Don’t use devices or an OS you don’t trust. I don’t see what difference on-device AI makes here at all. If you don’t trust your device or OS, then no functionality or data is safe.
  2. Security / privacy in the cloud: The take here is that Apple’s proposed implementation is better than 99% of the cloud services out there. AI or not isn’t really the point. If you already don’t trust Apple, then this is moot. Don’t use cloud services from providers you don’t trust.

Security and privacy in 2024 are unfortunately about trust, not technology, unless you are able to isolate yourself completely or design and produce every chip you use yourself.

Content / Image posting restrictions (based on network)

Just wanted to keep everyone in the loop. If you encounter an issue where you cannot post or comment, but voting is functioning correctly, it is most likely because we have implemented a block for VPN & Tor users. If you are using these services, you can still take part in activities such as voting, reporting posts, and...

Z4rK ,

All my devices and networks are constantly on a VPN; it’s just basic obscurity. And it’s such a hassle posting on Lemmy now that my activity is probably down 90%.

Could you please consider allowing basic text posts and/or comments from behind a VPN? You could even disallow all markdown - that’s a decent trade-off for my sake.

But please don’t just block all VPN usage if there are options you have not explored yet.
