
Are you embracing AI? ( viewber.co.uk )

There’s something of a misunderstanding in the UK property industry that agents are luddites, clinging to fax machines and Rolodexes, but quite the opposite is true. Sales and letting agents like nothing more than finding new efficiencies – whether through careful outsourcing, digital signatures or virtual tours, begging the...

sonori , (edited )
@sonori@beehaw.org avatar

“A computer can never be held accountable, therefore a computer must never make a management decision.”

Even more importantly when it comes to assessing property, machine learning, now rebranded as AI, has been continually shown not just to repeat the biases in its training data, but to significantly exaggerate them.

Given how significantly and explicitly race has been used to determine and guide so much property and neighborhood development in the training data, I do not look forward to seeing a system that is not only more racist than a post-war city council choosing where to build new motorways, but which is sold and treated as infallible by the humans operating and litigating it.

Given the deaths and disaster created by the Horizon Post Office scandal, I also very much do not look forward to the widespread adoption of software which is inherently and provably far less accurate, reliable, and auditable than the Horizon software. At least Horizon could only ruin your life if you were a postmaster, rather than any member of the general public not rich enough to have their affairs handled by a human.

But hey, on the bright side, if Horizon set the UK legal precedent that any person or property agent is fully and unequivocally liable for the output of any software they use, then after the first few agents are found guilty of things their procedural text generator wrote, people might decide it's not worth the risk.

Electric Aviation is already better than you think - Volts with David Roberts ( www.volts.wtf )

Electric vehicles that can take off and land vertically, but then fly like a plane, are already being sold and used by hospitals and shipping companies. These vehicles have 5 batteries that give them a range of over 350 miles using current battery technology, though the batteries are intended to be swapped over the life of the...

sonori ,
@sonori@beehaw.org avatar

But we could have an average of 684 9/11s an hour (the actual number of car crashes per hour in the US), driven purely by the piloting skills of the average American driver given command of an aircraft. Who wouldn't want to live in that future?
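For anyone checking the math, that figure is just annual crashes divided by hours in a year (a back-of-the-envelope sketch; the six-million figure is a commonly cited ballpark for US police-reported crashes, not an exact count):

```python
# Back-of-the-envelope check on the crashes-per-hour figure.
crashes_per_year = 6_000_000   # ballpark for US police-reported crashes
hours_per_year = 365 * 24      # 8,760

print(round(crashes_per_year / hours_per_year))  # ~685 crashes per hour
```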

'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement ( www.theatlantic.com )

As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in...

sonori ,
@sonori@beehaw.org avatar

Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on the frequency with which those words appeared in the training data, is a fundamental limitation of the technology.
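As a toy illustration of that mechanic (a minimal sketch, not how any production model is implemented; real LLMs compute the distribution with a transformer over subword tokens, but the output step is still sampling the next token from a probability distribution):

```python
import random
from collections import defaultdict

# Toy next-word generator: count which word follows which in a tiny
# "training corpus", then repeatedly sample the next word from those
# frequencies.
corpus = "the cat sat on the mat and the cat ran after the dog".split()

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1  # how often word b followed word a

def generate(word: str, n: int = 8) -> str:
    out = [word]
    for _ in range(n):
        options = counts.get(out[-1])
        if not options:  # word never seen with a successor: stop
            break
        out.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
```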

So long as a model has no regard for the actual, you know, meaning of a word, it definitionally cannot create a truly meaningful sentence. Instead, in order to get coherent output, the system must be fed training data that closely mirrors the context. This is why groups like OpenAI have had so much success by simplifying the algorithm while progressively scraping more and more of the internet into their systems.

I would argue that a similar inherent technological limitation also applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the regurgitated parts of the work of the tens of thousands of people who do those things effortlessly.

This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits and not just the merits of the material they were created from, and nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing down that path.

Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on with further instruction, but it is important to understand that this is not a pathway to any actual conceptual understanding of the subject.

sonori ,
@sonori@beehaw.org avatar

Like, say, treating a program that shows you the next most likely word to follow the previous ones as if it were capable of understanding a sentence beyond "this is the most likely string of words to follow the given input on the internet." Boy, it sure is a good thing no one would ever do something so brainless in the current wave of hype.

It’s also definitely because autocompletes have made massive progress recently, and not just because we’ve fed simpler and simpler transformers more and more data, to the point that we’ve run out of new text on the internet to feed them. We definitely shouldn’t expect the field as a whole to be valued at what it was back in, say, 2018, when there were about the same number of practical uses and the focus was on better programs instead of just throwing more training data at them and calling that progress, progress that will supposedly continue to grow rapidly even though the amount of said data is very much finite.

sonori ,
@sonori@beehaw.org avatar

To note the obvious, a large language model is, by definition, at its core a mathematical formula and a massive collection of values between zero and one which, when combined, give a weighted estimate of the probability that word B follows word A, crossed with another weighted average over the word cloud given as the input ‘context’.

A neuron in machine learning terms is a matrix (i.e., a table) of numbers between zero and one. By contrast, a single human neuron is a biomechanical machine with literally hundreds of trillions of moving parts that dwarfs any machine humanity has ever built in terms of complexity, and that is just one of the roughly 86 billion neurons in an average human brain.
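To make the comparison concrete, the machine learning "neuron" under discussion is, in its entirety, something like this (a standard textbook artificial neuron; real networks stack millions of them, but each one is just a weighted sum and a squashing function):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A complete artificial 'neuron': a weighted sum squashed into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# That's the whole thing. Example with made-up numbers:
print(artificial_neuron([0.2, 0.9, 0.4], [0.5, -0.3, 0.8], bias=0.1))
```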

LLMs and organic brains are completely different in design, complexity, and function, and to treat them as closely related, much less synonymous, betrays a complete lack of understanding of how one or both of them fundamentally work.

We do not teach a kindergartner how to write by having them read for thousands of years until they recognize the exact mathematical odds that string of letters B comes after string A and is followed by string C some percent of the time. Indeed, humans don’t naturally compose sentences one word at a time starting from the beginning; they start with the key concepts they wish to express and then fill in the phrasing and grammar.

We also would not expect that going from hundreds of years’ worth of reading to thousands would improve things, and the fact that this is the primary way we’ve seen progress in LLMs over the last half decade is yet another example of why animal learning and a word cloud are very different things.

For us, a word actually corresponds to a concept of what that word represents. People might make mistakes and misunderstand which concept a given word maps to in a given language, but we generally expect it to correspond to something. To us, a chair is an object made to sit on, and not just the string of letters that comes after the word ‘the’ in 0.0021798 percent of cases, weighted against the 0.0092814 percent of cases related to the collection of strings being used as the ‘context’.

Do I believe there is something about human thought that is intrinsically impossible for a mathematical program to replicate? Probably not. But this is not that, and it is nowhere close to that on a fundamental level. It’s comparing apples to airplanes and saying that soon this apple will inevitably take anyone it touches to Paris because they’re both objects you can touch.

sonori ,
@sonori@beehaw.org avatar

Generally the term Markov chain is used to describe a model with a few dozen weights, while the ‘large’ in large language model refers to having millions or billions of weights, but the fundamental principle of operation is exactly the same; they just differ in scale.
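To make that concrete (a toy sketch with made-up transition probabilities), a complete Markov chain really is just a small table defining a conditional distribution over the next state:

```python
import random

# A complete two-state Markov chain: P(next state | current state),
# stored as a handful of weights.
transition = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state: str) -> str:
    options = transition[state]
    return random.choices(list(options), weights=list(options.values()))[0]

print(step("sunny"))
# An LLM is, at bottom, the same kind of object -- P(next token | context) --
# except the "current state" is the whole context window and the distribution
# is parameterized by millions-to-billions of weights instead of a small table.
```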

Word embeddings are when you associate a mathematical vector with a word as a way of weighting similar words together. I don’t think anyone would argue that the general public can even solve a matrix equation, much less that they comprehend a stool by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

Subtracting vectors from each other can give you a lot of things, but not the actual meaning of the concept represented by a word.
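For reference, the vector arithmetic in question looks like this (toy 3-dimensional vectors invented for illustration; real embeddings have hundreds of dimensions learned from co-occurrence statistics, but the operations are the same):

```python
# Toy word embeddings (made-up 3-d vectors for illustration only).
emb = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.1, 0.0],
    "queen": [0.9, 0.0, 0.1],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

# The famous "king - man + woman ~ queen" trick is literally just this:
result = [round(v, 3) for v in add(sub(emb["king"], emb["man"]), emb["woman"])]
print(result)        # [0.9, 0.0, 0.1]
print(emb["queen"])  # [0.9, 0.0, 0.1] -- the nearest stored vector is "queen"
```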

sonori ,
@sonori@beehaw.org avatar

No part of a human or animal brain operates by subtracting tables of cleanly defined numbers from each other, so I think it’s pretty safe to say that no matrix calculation over a handful of numbers is performed as part of, much less as the sole means of, our understanding of concepts or objects.

I don’t know exactly how one could tell true understanding from mimicry; far smarter and better-researched people than me have debated that for decades. I’m just pretty sure that what we think of as kindness boils down to something a bit more complex than a high school math problem describing a word cloud.

sonori ,
@sonori@beehaw.org avatar

They are definitely too simple to represent the entirety of a concept’s meaning on their own. Yep, I don’t believe it’s likely that such an incredibly intricate thing as a neuron, much less the idea of conceptual meaning, can be replicated by a high school math problem. Maybe they could be a part of it, but you’re off by at least half a dozen orders of magnitude from where we are now, with love being a matrix with a few hundred numbers in it.

[B.C.] Canada's first commercial electric flight to make history June 14 ( www.nanaimobulletin.com )

On June 14, Sealand's Pipistrel Velis Electro will take flight for an introductory flight lesson. It will be the first time a person can purchase a commercial flight on an electric aircraft in Canada. The student will be allowed to operate the aircraft under the guidance of the flight instructor....

sonori ,
@sonori@beehaw.org avatar

Those little electric Pipistrels have been getting more common for instruction and pattern work down here for a bit too. I’m told they’re great fun if you just need to do a lot of sub-hour flights every day.

sonori ,
@sonori@beehaw.org avatar

The researchers who wrote the paper only mention possibly applying the tech to very small things like wearables and IoT applications, where a large capacitor might be relevant. It’s the journalist summarizing it who makes the wild claims about phones and cars, which don’t tend to use capacitors for bulk storage for a bunch of reasons, not least of which is that capacitors tend to be physically twenty times larger than a battery of the same capacity.

If people are able to deal with batteries anywhere near that large, then I’d imagine most of them would choose twenty times the battery life/range over being able to charge fast enough to overload a wall outlet or a small power plant.

sonori ,
@sonori@beehaw.org avatar

No, it doesn’t affect devices of all sizes, only devices that might use this specific bulky capacitor; all other devices will show exactly zero improvement, because there is no real point in mixing capacitors in with a large battery. Being able to quickly get three minutes of charge per whole hour of battery capacity you replace with capacitors just isn’t that useful, because you might as well stay plugged in for an extra few minutes and get the same charge, plus that extra hour before needing to find a charger at all.
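The three-minutes figure falls straight out of the roughly-twenty-times size difference mentioned earlier (a rough sketch; the 20x ratio is a ballpark, not a measured spec for any particular product):

```python
# Rough arithmetic behind the tradeoff, assuming a supercapacitor takes
# about 20x the volume of a lithium battery storing the same energy.
volume_ratio = 20                  # battery-equivalent volume per unit of capacitor energy
battery_runtime_replaced_min = 60  # give up one hour of battery capacity

# The capacitor that fits in that freed-up volume stores ~1/20 the energy:
capacitor_runtime_min = battery_runtime_replaced_min / volume_ratio
print(capacitor_runtime_min)  # 3.0 -- three minutes of fast-charging buffer
```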

As for EVs, the tradeoff is even more pointless: being able to go half a mile down the street from a charger massive enough to output a small power plant’s worth of electricity is similarly too specialized a use case to be worth the loss of range and the greater degradation of the rest of the battery.

sonori ,
@sonori@beehaw.org avatar

Obviously nearly every electrical circuit board uses capacitors in some respect, especially for filtering and smoothing, but it is extremely rare for them to be used for bulk energy storage outside of things like power factor correction.

Given that we are talking about charging times, which are primarily limited by the battery’s charge-current-versus-degradation curve and not at all by the various small capacitors in the charger’s electronics, there is fundamentally no effect on charge times unless you replace the energy storage medium itself with supercapacitors.
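A minimal sketch of what that limiting curve looks like (illustrative numbers, not any specific cell's datasheet): lithium cells typically accept a high constant current until they near full, after which the acceptable current tapers off to limit degradation.

```python
def max_charge_rate_c(state_of_charge: float) -> float:
    """Acceptable charge rate (in C) at a given state of charge, 0.0-1.0."""
    if state_of_charge < 0.8:
        return 2.0  # bulk phase: fast charging is fine
    # Taper phase: current must fall as the cell fills to avoid damage.
    return 2.0 * (1.0 - state_of_charge) / 0.2

for soc in (0.2, 0.5, 0.8, 0.9, 0.99):
    print(f"{soc:.0%}: {max_charge_rate_c(soc):.2f}C")
# No capacitor in the charger electronics changes this curve; only the
# cell chemistry does.
```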

We can already supply enough DC power to charge an EV battery at its maximum designed curve via DC fast-charging stations, which involve some contactors and shunts but don’t actually involve capacitors of any size in the car itself on the HV side.

sonori ,
@sonori@beehaw.org avatar

While I think in this case they won’t have an effect, because no American company is even trying to compete in the space, I feel like claiming “history says tariffs rarely work” is pretty misleading. The high tariffs that resulted from the US generating nearly all federal income from tariffs in the 1700s and 1800s are, after all, widely credited as the reason the northern US went from a minor agricultural nation dependent entirely on European industrial goods to one of the largest industrialized nations so quickly.

Indeed, that was why the WTO blocking third-world nations from putting tariffs on Western goods was so heavily criticized by the left a few decades ago, before China proved you could industrialize without said tariffs so long as your competitors were greedy enough to outsource their industry to you.

sonori ,
@sonori@beehaw.org avatar

While the paper demonstrated strong diminishing returns from adding more data to modern neural networks in the context of image classifiers, the video host is explaining how the same effect may apply to any neural-network-based system, including modern transformers.
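The diminishing-returns pattern in question is usually modeled as a power law, where each additional order of magnitude of data buys a smaller absolute improvement (a sketch with a made-up exponent; fitted constants vary by task and architecture):

```python
# Illustrative power-law scaling: error falls as dataset_size ** -alpha,
# so each 10x increase in data shaves off less and less error.
alpha = 0.1  # made-up exponent for illustration

def error(n_examples: float) -> float:
    return n_examples ** -alpha

for n in (1e6, 1e7, 1e8, 1e9):
    print(f"{n:.0e} examples -> error {error(n):.3f}")
# 1e6 -> 0.251, 1e7 -> 0.200, 1e8 -> 0.158, 1e9 -> 0.126:
# the absolute improvement shrinks with every order of magnitude.
```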

While there are technically methods of generative AI that don’t use a neural network, they haven’t made much progress in recent decades and aren’t what most people mean when they say generative AI. As such, I would say the title is accurate enough for a video meant for a general audience, though “Is there a fundamental limit to modern neural networks?” might be more technically correct.

sonori ,
@sonori@beehaw.org avatar

But if we don’t feed the entire internet into Siri, China will, and you don’t want China to have an advantage in the autocomplete wars, now do you? /s
