
QuadratureSurfer

@QuadratureSurfer@lemmy.world


QuadratureSurfer ,

That GitHub "archive here" link leads to a page where it hasn't been archived... (or was the archive removed??).

QuadratureSurfer ,

I agree with others on here. Looks like a European Starling.

Here's a similar looking one that was identified in North America:

https://www.inaturalist.org/observations/223616886

I would recommend a tool like iNaturalist for trying to identify birds (and plants, insects, other animals, droppings, tracks, etc).

QuadratureSurfer ,

I would avoid used Bolts, especially because of all the issues those have had with going up in flames.

Hopefully they've fixed those issues in the newest models...

https://www.cnbc.com/2021/07/14/gm-warns-some-bolt-ev-owners-dont-park-them-inside-or-charge-them-unattended-overnight.html

QuadratureSurfer ,

They expanded the initial recall; it affects models from 2017 to 2022. And if you did read the article I linked earlier, then you missed the key point: vehicles were still bursting into flames even after the recall.

Expanded recall: https://gmauthority.com/blog/2021/09/gm-asking-chevy-bolt-ev-owners-to-park-50-feet-away-from-other-vehicles/

GM stopped replacing the batteries in the newer models and instead offered a software fix that monitors the battery for issues and allows the vehicle to charge beyond the 80% limit they had imposed because of the fires.
https://electrek.co/2023/06/14/bolt-battery-recall-diagnostics/

But it's worth noting that this software update has failed to prevent some fires, so the problem isn't really "fixed" even with this:
https://electrek.co/2021/07/08/chevy-bolt-ev-catches-on-fire-after-receiving-both-of-gm-software-fixes/

A social app for creatives, Cara grew from 40k to 650k users in a week because artists are fed up with Meta’s AI policies | TechCrunch ( techcrunch.com )

Artists have finally had enough with Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts....

QuadratureSurfer ,

What do you mean by this?:

Cara, bans us from removing malicious source code

Is there obviously malicious source code?
Is there a policy that specifically says we can't remove any source code?
Is this even open source?

QuadratureSurfer ,

What ban?

QuadratureSurfer ,

Well, now's a great time to let them know about Pixelfed, although explosive growth like this will be a strain on any website.

QuadratureSurfer ,

Getting away from Google Maps has been a tough one. There aren't many options: it's basically Google, Apple, Microsoft, or OpenStreetMap.

I've been contributing to OSM for my local area as much as possible to update businesses and their opening hours, website, etc., but it's not a small task.
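For anyone curious, an edit like that usually just means filling in a few standard tags on the business's node or building. The values below are made up, but the keys (name, website, opening_hours, phone) are the usual OSM ones:

 name=Corner Bakery
 website=https://example.com
 opening_hours=Mo-Fr 07:00-18:00; Sa 08:00-14:00
 phone=+1-555-0100

Getting the opening_hours syntax right is the easy part; surveying the actual hours for every business is what takes the time.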

QuadratureSurfer , (edited )

You also have to keep in mind that the more you compress something, the more processing power you're going to need.

Whatever compression algorithm is proposed will also need to be able to handle the data in real time and at low power.

But you are correct that compression beyond 200x is absolutely achievable.

A more visual example of compression could be something like one of the Stable Diffusion AI/ML models. The model may only be a few Gigabytes, but you could generate an insane amount of images that go well beyond that initial model size. And as long as someone else is using the same model/input/seed they can also generate the exact same image as someone else.
So instead of having to transmit the entire 4k image itself, you just have to tell them the prompt, along with a few variables (the seed, the CFG Scale, the # of steps, etc) and they can generate the entire 4k image on their own machine that looks exactly the same as the one you generated on your machine.

So basically, for only about a kilobyte, you can get 20+ MB worth of data transmitted this way. The drawback is that you need a powerful computer and a lot of energy to regenerate those images, which brings us back to the problem of conveying this data in real time while using low power.

Edit:

Tap for some quick napkin math

To transmit the information needed to generate that image, you would need roughly:

 ~1 KB for the prompt (room for 1,000 characters, if you even need that many)
 2 bytes for the height
 2 bytes for the width
 8 bytes for the seed
 2 bytes for the CFG scale and the steps (each fits in less than a byte, but round up)
 a 32 or 64 byte hash at the end, since you'd want something better than a parity bit to verify the message arrived intact

That still only puts us a little over 1 KB (~1,078 bytes with the 64-byte hash).
For generating a 4k image (.PNG file), that's ~24 MB worth of lossless decompression.
24,000,000 bytes gives us a compression factor of roughly 20,000x.
But of course, that's still going to take time to decompress, plus a decent spike in power consumption for about 30-60+ seconds (depending on hardware), which is far from "real-time".
And you could be generating 8k images instead of 4k... I'm not really stressing this idea to its full potential by any means.

So in the end you get compression at a factor of more than 20,000x for using a method like this, but it won't be for low power or anywhere near "real-time".
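If you want to sanity check that estimate, here's a rough Python sketch of the payload. The field layout is just mine for illustration, not any real protocol:

import hashlib
import struct

# Reserve ~1 KB for the prompt, padded with spaces (illustrative values only).
prompt = "a photorealistic mountain lake at sunrise, highly detailed".ljust(1000)
width, height = 3840, 2160   # "4k" dimensions, 2 bytes each
seed = 1234567890123456789   # 8-byte seed
cfg, steps = 7, 30           # 1 byte each

# Pack the generation parameters and append a 64-byte hash for integrity.
body = prompt.encode() + struct.pack("<HHQBB", width, height, seed, cfg, steps)
payload = body + hashlib.sha512(body).digest()

image_size = 24_000_000  # ~24 MB for the decompressed 4k PNG, per the estimate above
print(f"payload size: {len(payload)} bytes")                      # 1078 bytes
print(f"compression factor: ~{image_size / len(payload):,.0f}x")  # ~22,263x

Which lands right around the ~1,078 bytes and ~20,000x figures from the napkin math above.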

QuadratureSurfer ,

Pastéis de Nata (Portuguese custard tarts) at Trader Joe's in the U.S.

They only have them in the spring and they run out fast.

QuadratureSurfer ,

Shout-out to Archive.org for all the awesome work they do to backup what they can from the internet.

(Especially when some Stack Overflow answer to a question is just a link to a website that has since changed or no longer exists.)

QuadratureSurfer ,

Technically, generative AI will always give the same answer when given the same input. But in practice a "seed" is mixed in to randomize things, so it can give a different answer every time even if you ask it the same question.
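A quick sketch of the idea, with a plain seeded RNG standing in for the model's sampler (the token list is obviously made up):

import random

vocab = ["The", "cat", "sat", "on", "the", "mat", "dog", "ran"]

def sample_tokens(seed: int, n: int = 5) -> list[str]:
    # Stand-in for an LLM's sampling step: same seed -> same "output".
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

print(sample_tokens(seed=42))  # run it twice: identical output both times
print(sample_tokens(seed=42))
print(sample_tokens(seed=7))   # change the seed and the "answer" changes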

QuadratureSurfer ,

They still are. Giving a generative AI the same input and the same seed results in the same output every time.

QuadratureSurfer ,

OK, but we're discussing whether computers are "reliable, predictable, idempotent". Statements like that are generally made when discussing the internal workings of a computer among developers, or at even lower levels among computer engineers.

This isn't something you would say at a higher level for end-users because there are any number of reasons why an application can spit out different outputs even when seemingly given the "same input".

And while I could point out that llama.cpp is open source (so you could just go in and test this by forcing the same seed every time...), it doesn't really matter, because your statement effectively boils down to something like this:

"I clicked the button (input) for the random number generator and got a different number (output) every time, thus computers are not reliable or predictable!"

If you wanted to make a better argument about computers not always being reliable/predictable, you're better off pointing at how radiation can flip bits in our electronics (which is one reason why we have implemented checksums and other tools to verify that information hasn't been altered over time or in transit). Take, for instance, the example of what happened to some voting machines in Belgium in 2003:
https://www.businessinsider.com/cosmic-rays-harm-computers-smartphones-2019-7
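To illustrate how cheap it is to catch that kind of corruption, here's a toy Python example with a hash (not how voting machines actually verify anything, just the general idea):

import hashlib

data = bytearray(b"vote_count=4096")
original = hashlib.sha256(data).hexdigest()

data[0] ^= 0x01  # simulate a cosmic-ray bit flip in the first byte
flipped = hashlib.sha256(data).hexdigest()

print(original == flipped)  # False -> the corruption is detected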

Anyway, thanks if you read this far, I enjoy discussing things like this.

QuadratureSurfer ,

If you think that "pretty much everything AI is a scam", then you're either setting your expectations way too high, or you're only looking at startups trying to get the attention of investors.

There are plenty of AI models out there today that are open source and can be used for a number of purposes: Generating images (stable diffusion), transcribing audio (whisper), audio generation, object detection, upscaling, downscaling, etc.

Part of the problem might be with how you define AI... It's way more broad of a term than what I think you're trying to convey.

QuadratureSurfer ,

Sure, but don't let that feed into the sentiment that AI = scams. It's way too broad of a term that covers a ton of different applications (that already work) to be used in that way.

And there are plenty of popular commercial AI products out there that work as well, so trying to say that "pretty much everything that's commercial AI is a scam" is also inaccurate.

We have:
Suno's music generation
NVidia's upscaling
Midjourney's Image Generation
OpenAI's ChatGPT
Etc.

So instead of trying to tear down everything and anything "AI", we should probably just point out that startups using a lot of buzzwords (like "AI") should be treated with a healthy dose of skepticism, until they can prove their product in a live environment.

QuadratureSurfer ,

This sounds like what TOR users run into.
Are you always clearing your browsing data/cookies in Firefox?
A fairly blank slate there will raise red flags in a lot of systems, causing the captchas to be more aggressive.

Otherwise, maybe you just ended up with a bad IP address that was previously being used for suspicious bot looking traffic?

I would definitely submit a ticket to Etsy about this issue, but this is likely an issue with captcha or whatever they're using to detect "suspicious activity".

Edit: I just tried logging in to Etsy and it gave me about 4 captchas for signing in with a "new device".

Looks like Google really needs help training their self-driving cars.

QuadratureSurfer ,

Looks like a separate element that comes after the LLM summary which can be removed by ad blockers. That is, if you're still using Google search...

QuadratureSurfer ,

Actually, if this is the requirement, then this means our data isn't leaving the device at all (for this purpose) since everything is being run locally.

QuadratureSurfer ,

Since everything is being run in a local LLM, most likely this will be some extra RAM usage rather than SSD usage, but that is assuming that they aren't saving these images to file anywhere.

QuadratureSurfer ,

The whole thing is going to be run on a local LLM.
They don't have to upload that data anywhere for this to work (it will work offline). But considering what they already do, Microsoft is going to have to do a lot to prove that they aren't uploading it anyway.

QuadratureSurfer ,

Very true... what I meant to say was:
[...] then this means our data shouldn't need to leave the device at all [...]

QuadratureSurfer ,

Videography
Photography
Downloading Machine Learning Models
Data for Training ML Models
Training ML Models
Gaming (the games themselves or saving replays)
Backing up movies/videos/images etc.
Backing up music
NAS

Take your pick, feel free to mix and match or add on to the list.

QuadratureSurfer ,

I agree, but it's one thing if I post to public places like Lemmy or Reddit and it gets scraped.

It's another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.

Not only that, but the trained model will have internal knowledge of things that are sure to give anxiety to any cybersecurity expert. If users know how to manipulate the model, they could cause it to divulge some of that information.

QuadratureSurfer ,

Feel free to educate us instead of just saying the equivalent of "you're wrong and I hate reading comments like yours".

But I think, in general, the alteration to Section 230 that they are proposing makes sense as a way to keep these companies in check for practices like shadowbanning, especially if those tools are abused for political purposes.

QuadratureSurfer ,

A very useful video that explains what Quantum Internet is... and what it isn't:

https://www.youtube.com/watch?v=u-j8nGvYMA8

TL/DW: A big misconception here has to do with quantum entanglement. Entanglement in a quantum internet doesn't mean you can transfer data faster than light.

It's true that this kind of connection would be "ultra secure", but it would also be very inefficient (slow) and unreliable in a noisy environment. It would probably be most useful for some sort of authentication protocol/key sharing.
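To give a rough feel for the key-sharing part, here's a toy sketch of BB84-style basis sifting in Python (hugely simplified: no noise, no eavesdropper checks):

import secrets

n = 32
# Alice prepares random bits in random bases; Bob measures in random bases.
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# Wrong-basis measurements come out random; matching bases reproduce Alice's bit.
bob_results = [bit if ab == bb else secrets.randbelow(2)
               for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Publicly compare bases (not bits) and keep only the matching positions.
sifted_key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]

print(f"kept {len(sifted_key)} of {n} transmitted qubits:", sifted_key)

On average half the qubits get thrown away just in sifting, which is part of why this works as key exchange rather than as a fast data pipe.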

Qualified experts of Lemmy, do people believe you when you answer questions in your field?

The internet has made a lot of people armchair experts happy to offer their perspective with a degree of certainty, without doing the work to identify gaps in their knowledge. Often the mark of genuine expertise is knowing the limitations of your knowledge....

QuadratureSurfer ,

It really depends on the subject.

If it's programming/hardware in general then there's not much debate.

But when it comes to buzzwords or other hot-topic items (cryptocurrency, AI/ML models), there will be a lot more debate.

QuadratureSurfer ,

Are you saying "No... let's not advance mathematics"?
Or... "No, let's not advance mathematics using AI"?

QuadratureSurfer ,

Just wait till someone creates a manically depressed chatbot and names it Marvin.

QuadratureSurfer ,

This would actually explain a lot of the negative AI sentiment I've seen that's suddenly going around.

Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website.
He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).

QuadratureSurfer ,

I don't think that "fake" is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being "powered by AI" or some other nonsense.

Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn't mean that the AI is "fake".
Getting an AI model to work well takes a lot of man-hours of continual training, improvement, and performance monitoring.

Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn't get you the kind of data that you get when you actually put it into a real world environment.

In the end it looks like there are additional advancements needed before a model like this can be reliable, but even then someone should be asking if AI is really necessary for something like this when there are more reliable methods available.

QuadratureSurfer ,

After reading through that wiki, that doesn't sound like the sort of thing that would work well for what AI is actually able to do in real-time today.

Contrary to your statement, Amazon isn't selling this as a means to "pretend" to do AI work, and there's no evidence of this on the page you linked.

That's not to say that this couldn't be used to fake an AI; it's just not sold that way, and in many applications it wouldn't be able to compete with existing ML models.

Can you link to any examples of companies making wild claims about their product where it's suspected that they are using this service?
(I couldn't find any after a quick Google search... but I didn't spend too much time on it).

I'm wondering if the misunderstanding here comes from the sections related to AI work? The kind of AI work you would do with Turkers is the work needed to prepare data for training a machine learning model: things like labelling images, transcribing words from images, or (to put it in a way that most of us have already experienced) solving captchas asking you to find the traffic lights (so that you can help train their self-driving car model).

QuadratureSurfer ,

It becomes easy to do something like this once we start vilifying others and thinking that they "deserve it".

In this case, according to the man who threw the water, the homeless person had a history of sexual harassment and of being violent towards attendees.

We see this all the time in politics. We're so used to attacking the other side verbally that when one side says something offensive to the other side, physical fights can break out.

Image of apology here:
https://lemmy.world/pictrs/image/73fa415f-28eb-4144-b6fb-6ab543f997f5.jpeg

QuadratureSurfer ,

But what app did you use to access OSM and download the maps for offline use... was it a web browser? OsmAnd? Vespucci?

QuadratureSurfer ,

Do you have a source for those scientists you're referring to?

I know that LLMs can be trained on data output by other LLMs, but you're basically diluting your results unless you do a lot of work to clean up the data.

I wouldn't say it's "impossible" to determine if content was generated by an LLM, but I agree that it will not be reliable.

QuadratureSurfer ,

Says the person using light mode!!

(On a serious note, upvotes and downvotes mean different things to different people. That's just their own opinion and that's okay. But if you are bothered by downvotes I would use a Lemmy instance that hides the downvotes entirely.)

QuadratureSurfer ,

Looks like he instantly got VAC banned with that triple headshot?

Hello GPT-4o ( openai.com )

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds,...

QuadratureSurfer ,

The demo showcasing integration with BeMyEyes looks like an interesting way to help those who are blind.

https://vimeo.com/945587840

QuadratureSurfer ,

I would be careful trusting everything said in this video and taking it at face value.

He touches on a broad range of different AI related news, but doesn't seem to fully grasp the technology himself (I'm basing this statement on his "evidence" from the 8 min mark).

He seems to be running a channel that's heavily centered on stock market related content. And it feels like he's putting his own spin on every topic he touches in this video.

Overall, it's not the worst video, but I would rather base my information on better-informed sources.

What he should have done was to set the baseline by defining what AI actually is and then proceed to compare what these companies are doing with that definition. Instead we have a list of AI news stories covering Amazon Fresh Stores, Gemini, ChatGPT, and Copilot (powered by ChatGPT) and his own take on how those stories mean that everything is faked.

QuadratureSurfer ,

This video would more accurately have been labelled "Things That Make AI Look Bad" rather than an attempt to prove that AI is faked.

QuadratureSurfer ,

Steam doesn't control the region locks.

The publisher (Sony) is the one that makes changes to their store page which affects where it can be sold.

QuadratureSurfer ,

That makes sense, but I haven't seen any official announcement from Steam saying that they did this. Only speculation from random people. Any documentation I can find just seems to point to this being a decision that's made by the company releasing the game (or in this case Sony as the publisher).

Besides, only a few hours ago 3 new countries were added to the restricted list: https://steamdb.info/sub/137730/history/?changeid=23492083

I doubt that Steam is still trying to block additional countries given that Sony has already announced that the PSN account requirement is being withdrawn.

QuadratureSurfer , (edited )

Better/additional info here:
https://www.gamesradar.com/games/third-person-shooter/helldivers-2-community-manager-seemingly-fired-after-encouraging-negative-reviews-over-now-canceled-psn-mandate-i-knew-i-was-taking-a-risk-with-what-i-said/

Spitz:

"Generally it's not a good idea to tell people to refund and leave negative reviews when you're a community manager. TIL," Spitz said. "I appreciate all the support and I appreciate even more that everyone can play the game again without restrictions. I knew I was taking a risk with what I said about refunding and changing reviews. I stand by it. It was my job to represent the community, that's what I did."

They added: "I wanted to work for Arrowhead because they're my all-time favorite studio. I got that chance. I'm thankful for that opportunity. I'd happily continue working for them if I had the choice, but that isn't up to me or anyone else in here. I can walk away happy and I don't want anyone causing trouble on my behalf, especially not to people I still have a lot of care and respect for."

This definitely sounds like Sony wanted them out and Arrowhead wanted them to stay.

QuadratureSurfer ,

Looks like someone setup a petition for Spitz to get rehired.
https://www.change.org/p/re-hire-the-legendary-community-manager-general-spitz

QuadratureSurfer ,

So ray tracing will be supported in iPad apps now...

So far, the M4 has only been announced for the iPad.

QuadratureSurfer , (edited )

Games made by the studios being closed:

Arkane Austin (Tap for list)

 Blade (Marvel game (not?) in development)
 Redfall
 Deathloop
 Prey
 Prey Digital Deluxe
 Prey Mooncrash
 Dishonored
 Dishonored 2
 Dishonored Death of the Outsider
 Dishonored Dunwall City Trials
 Dishonored The Knife of Dunwall
 Dishonored The Brigmore Witches
 Dishonored Void Walker's Arsenal
 Arx Fatalis

Tango Gameworks

 Hi-Fi Rush
 Ghostwire Tokyo
 The Evil Within
 The Evil Within 2

Alpha Dog Games

 Wraithborne (iOS, Android)
 MonstroCity: Rampage (iOS, Android)
 Ninja Golf (iOS, Android)
 Mighty DOOM (iOS, Android)
