diz

joined 1 year ago
[–] diz@awful.systems 6 points 3 weeks ago

Well, the OP talks about a fridge.

I think if anything it's even worse for tiny things with tiny screws.

What kind of floating hologram is there gonna be that's of any use for something that has no schematic, where the closest thing to a repair manual is some guy filming himself taking apart a related product once?

It looks cool in a movie because it's a 20-second clip in which one connector gets plugged in, and tens of person-hours were spent on it by very talented people who know how to set up a scene that looks good and not just visually noisy.

[–] diz@awful.systems 4 points 3 weeks ago

> but often the video isn’t clear or fine quality enough

Wouldn't it be great if 100x the effort that didn't go into making the video clear or high quality enough instead also didn't go into making relevant floating, see-through overlay decals?

Ultimately, the reason it looks cool is that you're comparing a situation where little effort was put into repair documentation to a movie scenario where 20 person-hours were spent on a 20-second fragment in which one step of a repair gets done.

[–] diz@awful.systems 4 points 3 weeks ago

I'm not sure it's actually being used, beyond C suite wanting something cool to happen and pretending it did happen.

[–] diz@awful.systems 6 points 3 weeks ago (1 children)

Exactly. It goes something like: "Remember when you were fixing a washing machine and you didn't know what some part was, and there was no good guide for fixing it, no schematic, no nothing? Wouldn't it be awesome if 100x the work that wasn't put into making documentation was also not put into making VR overlays?"

[–] diz@awful.systems 3 points 3 weeks ago* (last edited 3 weeks ago)

> Using tools from physics to create something that is popular but unrelated to physics is enough for the nobel prize in physics?

If only, it's not even that! Neither Boltzmann machines nor Hopfield networks led to anything used in the modern spam- and deepfake-generating AI, nor in image recognition AI, or the like. This is the kind of stuff that struggles to get above 60% accuracy on MNIST (handwritten digits).

Hinton went on to do some different stuff based on backpropagation and gradient descent, on newer computers than the people who came up with those methods long before him, and he got the Turing Award for that. It's a wee bit controversial because of the whole "people did it before, on worse computers, and didn't get any award" thing, but at least it is for work on the path leading to modern AI, and not for work on the vast list of things that just didn't work, where it's extremely hard to explain why you would even think they would work in the first place.
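For reference, a classical Hopfield network really is just an associative memory: store a few ±1 patterns with a Hebbian outer-product rule, then recover one from a corrupted cue. A minimal numpy sketch of that technique (an illustration, not anyone's production code, and not a classifier - which is part of why it doesn't scale to tasks like MNIST):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: sum of outer products of the stored +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, cue, max_steps=20):
    """Iterate sign updates until the state stops changing (energy descent)."""
    s = cue.copy()
    for _ in range(max_steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1  # break ties toward +1
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

# Store one 8-unit pattern, then recover it from a cue with one bit flipped.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]
restored = recall(W, noisy)
```

Pattern completion is all it does: there's no notion of labels or classification here, only "snap the state back to the nearest stored pattern".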

[–] diz@awful.systems 3 points 3 weeks ago

Then next year Hopfield and Hinton go back to Sweden, don't tell king of Sweden anything, king of Sweden still gives them the Nobel Prize! King of Sweden now has conditioned reflex!

[–] diz@awful.systems 13 points 4 weeks ago (22 children)

I seriously wonder, do any of the folks with the "AR glasses to assist repair" thing ever actually repair anything, or do they get their ideas of how you repair stuff from computer games?

[–] diz@awful.systems 7 points 4 weeks ago* (last edited 4 weeks ago)

A Nobel Prize in Physics for attempting to use physics in AI, except it didn't really work very well; then one of the guys went on to a better, more purely mathematical approach that actually worked and got him the Turing Award, but that's not what the prize is for, while the other guy did some other work, but that's not what the prize is for either. AI will solve all physics!!!111

[–] diz@awful.systems 7 points 4 weeks ago

Maybe if the potato casserole is exploded in the microwave by another physicist, on his way to start a resonance cascade...

(I'll see myself out.)

[–] diz@awful.systems 31 points 4 months ago

AI peddlers just love any "critique" that presumes the AI is great at something.

Safety concern that LLMs would go Skynet? Say no more, I hear you and I'll bring it up first thing in Congress.

Safety concern that terrorists might use it to make bombs? Say no more! I agree that the AI is so great for making bombs! We'll restrict it to keep people safe!

It sounds too horny, you say? Yeah, good point, I love it. Our technology is better than sex itself! We'll keep it SFW to keep mankind from going extinct due to robosexuality!

[–] diz@awful.systems 23 points 4 months ago* (last edited 4 months ago)

I love the "criti-hype". AI peddlers absolutely love any concerns that imply that the AI is really good at something.

Safety concern that LLMs would go Skynet? Say no more, I hear you and I'll bring it up in Congress!

Safety concern that terrorists might use it to make bombs? Say no more! I agree that the AI is so great for making bombs! We'll restrict it to keep people safe!

Sexual roleplay? Yeah, good point, I love it. Our technology is better than sex itself! We'll restrict it to keep mankind from falling into the sin of robosexuality and going extinct! I mean, of course, you can't restrict something like that, but we'll try, at least until we release a hornybot.

But raise any concern about language modeling being fundamentally the wrong tool for some job (do you want to cite a paper, or do you want to sample from the underlying probability distribution?), and it's "hey, hey, how's about we talk about the Skynet thing instead?"

[–] diz@awful.systems 18 points 4 months ago* (last edited 4 months ago)

It used to mean things like false positives in computer vision, where it is sort of appropriate: the AI is "seeing" something that's not there.

Then the machine translation people started misusing the term when their software mistranslated by adding something that was not present in the original text. They may already have been trying to mislead with this term, because "hallucination" implies the error happens while parsing the input text - which distracts from a very real concern: what was added may have been plagiarized from the training dataset (which carries a risk of IP contamination).

Now, what's happening is that language models are very often simply the wrong tool for the job. When you want to cite a court case as a precedent, you want a court case that actually existed - not a sample from the underlying probability distribution of possible court cases! LLM peddlers don't ever want to admit that an LLM is the wrong tool for that job, so instead they pretend it is the right tool that, alas, sometimes "hallucinates".
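The "sampling, not citing" point can be made concrete with a toy character-level bigram model. The corpus entries below are invented placeholders, not real case citations; the point is just that a model fitted to real-looking strings, when sampled, happily emits plausible strings that were never in the data:

```python
import random
from collections import defaultdict

# Character-level bigram model over a few made-up "case names".
# (Invented placeholders for illustration, not actual citations.)
corpus = ["smith v jones", "smith v brown", "jones v brown"]

model = defaultdict(list)
for name in corpus:
    padded = "^" + name + "$"  # ^ = start token, $ = end token
    for a, b in zip(padded, padded[1:]):
        model[a].append(b)

def sample(rng):
    """Walk the chain from the start token until the end token."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(model[ch])
        if ch == "$":
            return "".join(out)
        out.append(ch)

rng = random.Random(0)
draws = {sample(rng) for _ in range(50)}
# Plausible-looking "cases" that were never in the training data:
novel = draws - set(corpus)
```

Every draw is perfectly well-formed according to the model; whether it corresponds to anything real is simply not a question the model answers - which is all an LLM's "hallucinated" citation is, at a vastly larger scale.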
