this post was submitted on 02 Aug 2023
359 points (94.1% liked)

Technology


Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[–] joelthelion@lemmy.world 10 points 1 year ago (6 children)

I don't understand why they don't use a second model to detect falsehoods instead of trying to fix it in the original LLM?
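
A minimal sketch of what that two-pass idea might look like, assuming a hypothetical `generate()` helper that stands in for whatever LLM backend is actually used:

```python
# Illustrative sketch of a two-pass setup: one model answers, a second pass
# checks the answer for unsupported claims. `generate()` is a hypothetical
# placeholder, not a real API.

def generate(prompt: str) -> str:
    """Stand-in for a call to some LLM backend."""
    raise NotImplementedError

def answer_with_verifier(query: str) -> str:
    draft = generate(query)

    verdict = generate(
        "You are a fact-checker. Reply 'OK' if the answer below is fully "
        "supported, otherwise list the claims that look wrong.\n\n"
        f"Question: {query}\nAnswer: {draft}"
    )

    if verdict.strip() == "OK":
        return draft
    # Flag the output rather than silently returning a possibly-hallucinated answer.
    return f"{draft}\n\n[Verifier flagged issues: {verdict}]"
```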

[–] FlyingSquid@lemmy.world 22 points 1 year ago (1 children)

And then they can use a third model to detect falsehoods in the second model and a fourth model to detect falsehoods in the third model and... well, it's LLMs all the way down.

[–] doggle@lemmy.world 7 points 1 year ago (1 children)

AI models are already computationally intensive. This would instantly double the overhead. Also, being able to detect problems doesn't mean you're able to fix them.

[–] kromem@lemmy.world 2 points 1 year ago

More than double, since query size is closely tied to the effective cost of generation, and you'd need to include both the query and the initial response in that second pass.

Then you might need to make an API call to a search engine or knowledge DB to fact-check it.

And you'd include that data as context, along with the query and the initial response, for whatever model decides if it's BS (roughly the flow sketched below).
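
A rough sketch of that flow, where `generate()` and `search_knowledge_db()` are hypothetical stand-ins for the model and the external fact-checking source:

```python
# Rough sketch of the flow described above: answer, retrieve supporting
# material, then ask a second pass to judge the answer against it.
# Both helpers below are hypothetical placeholders.

def generate(prompt: str) -> str:
    """Stand-in for an LLM call."""
    raise NotImplementedError

def search_knowledge_db(query: str) -> str:
    """Stand-in for a search-engine or knowledge-base lookup."""
    raise NotImplementedError

def answer_with_fact_check(query: str) -> dict:
    draft = generate(query)                 # first, expensive pass
    evidence = search_knowledge_db(query)   # external context for checking

    # The second pass sees the query, the draft, and the retrieved evidence,
    # so its token count (and cost) exceeds the original query alone.
    verdict = generate(
        "Given the evidence, does the answer contain unsupported claims? "
        "Reply 'supported' or list the problems.\n\n"
        f"Question: {query}\nAnswer: {draft}\nEvidence: {evidence}"
    )
    return {"answer": draft, "verdict": verdict}
```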

So for a dumb realtime chat application, no one is going to care enough to slow it down and multiply costs just to avoid hallucinations.

But for AI replacing a $120,000 salaried role in writing up a white paper on some raw data analysis, a 10-30x increase over a $0.15 query is more than acceptable.

So you will see this approach used in enterprise scenarios and professional settings, even if we may never see it in chatbots.

[–] kromem@lemmy.world 2 points 1 year ago

2+ times the cost on every query, to fix something that makes less than 5% of responses unusable, isn't a trade-off people are willing to make for chat applications.

This is the same approach used to mitigate jailbreaking.

You absolutely will see this as more business-critical integrations occur; it just still probably won't show up in broad consumer-facing realtime products.

[–] wizardbeard@lemmy.dbzer0.com 2 points 1 year ago

Because then they still need a reliable method to detect falsehoods. That's the issue here.

[–] Sethayy@sh.itjust.works 2 points 1 year ago

Because what are you gonna train the second model on? The same data as the first just recreates it, and any other data is gonna be nice and mucky with all the AI content out there.

[–] dirkgentle@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

If it were easy to detect, it wouldn't happen in the first place. So far, not even OpenAI themselves have succeeded in implementing an AI detector.