this post was submitted on 23 Aug 2024
195 points (100.0% liked)

TechTakes

1539 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MagicShel@programming.dev -5 points 4 months ago* (last edited 4 months ago)

No. Predicting words is barely related to facts. I'll defend AI as an occasionally useful tool, but nothing it ever says should be taken as fact without confirmation. Sometimes that confirmation can be experimental — does this recipe taste good? Sometimes you need expert supervision to say this part was translated wrong or this code won't work because of xyz. Sometimes you have to go out and look it up.

I like AI, but there is a real problem with treating its output as if it means anything. It might give you a direction to look at more closely, but it can never be the endpoint. We'd be better off not trying to censor it and instead understanding that it will bullshit you without blinking.

I summarize all of that by saying AI is a useful tool, but a terrible product.

self@awful.systems 10 points 4 months ago

We’d be better off not trying to censor it

this claim keeps getting brought up and every time it doesn't seem to mean a damn thing, particularly since no, censoring the output of an LLM doesn't do anything to its ability to predict text. censoring its training set would, but seeing as the topic of this thread is a fact an LLM fabricated by being just a dumb text predictor, there's no real way to censor the training set to prevent this. LLMs are just shitty.
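
to make the output-filter vs. training-set distinction concrete, here's a minimal toy sketch (every name here is hypothetical, and a trivial bigram table stands in for an actual LLM): a post-hoc blocklist on what gets shown to the user leaves the model's learned next-word distribution completely untouched.

```python
# Toy stand-in for an LLM: a bigram next-word table built from a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
model = {}
for a, b in zip(corpus, corpus[1:]):
    model.setdefault(a, []).append(b)

def predict(word):
    """The model's raw next-word candidates, untouched."""
    return model.get(word, [])

BLOCKLIST = {"fish"}  # hypothetical "censorship": an output-side filter

def filtered_predict(word):
    """Post-hoc filter: drops blocked words from the *output* only."""
    return [w for w in predict(word) if w not in BLOCKLIST]

print(predict("the"))           # ['cat', 'mat', 'cat', 'fish']
print(filtered_predict("the"))  # ['cat', 'mat', 'cat']
print("fish" in model["the"])   # True: the learned table still contains it
```

the filter changes what the user sees, not what the model predicts; to change the predictions themselves you'd have to change `corpus` (the training data) and rebuild `model`.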

I summarize all of that by saying AI is a useful tool

trying to find a use case for this horseshit has broken your brain into thinking these worthless tools would have value if only they weren’t “being censored” or whatever cope you gleaned from the twitter e/accs
