[–] _danny@lemmy.world 3 points 1 year ago (1 children)

It's definitely gone downhill recently, but at the launch of GPT-4 it was pretty incredible. It would make logical jumps that a lot of actual people probably wouldn't make. I remember my "wow moment" was asking how many M&M's would fit in a typical glass milk jug; I then measured it myself (by weight) and its answer was about 8% off. It gave measurements and cited actual equations. I couldn't find anything through Google that solved the same problem or had the same answer it could have just copied. It was supposed to be bad at math, but GPT-4 got those types of problems pretty much spot on for me.
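For anyone curious, the usual back-of-the-envelope version of that question is a volume estimate. Here's a minimal sketch; every number below is a placeholder assumption of mine (half-gallon jug, rough M&M volume, a typical random packing fraction), not whatever GPT-4 actually used:

```python
# Toy Fermi estimate: how many M&M's fit in a glass milk jug?
# All figures are assumptions for illustration, not measured values.

JUG_VOLUME_CM3 = 1890      # assumed half-gallon (~1.89 L) jug
MM_VOLUME_CM3 = 0.636      # assumed volume of one milk chocolate M&M
PACKING_FRACTION = 0.64    # assumed random close packing for roundish objects

count = JUG_VOLUME_CM3 * PACKING_FRACTION / MM_VOLUME_CM3
print(f"Estimated M&M's: {count:.0f}")  # ~1900 under these assumptions
```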

I think most people who have tried the latest AI models have had a bad experience because the models' capacity is now spread across far more users.

[–] chaogomu@kbin.social 2 points 1 year ago (2 children)

There's also the issue of model collapse: when an AI is trained on data generated by other AIs, the errors and hallucinations compound until all you have left is gibberish. We're about halfway there.
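You can see the mechanism in toy form: fit a distribution to some data, sample from the fit, refit on the samples, repeat. The tails get eaten. This is just an illustration with made-up parameters, not the actual dynamics of LLM training:

```python
# Toy illustration of model collapse: each generation refits a Gaussian
# to samples drawn from the previous generation's fit.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0           # the "real data" distribution
n_samples = 100                # each generation trains on its parent's output

for generation in range(20):
    data = rng.normal(mu, sigma, n_samples)  # sample from the current model
    mu, sigma = data.mean(), data.std()      # refit on generated data (MLE)
    print(f"gen {generation:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# sigma tends to drift downward across generations: the tails of the
# original distribution are progressively lost.
```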

[–] FaceDeer@kbin.social 3 points 1 year ago

ChatGPT is trained on data with a cutoff in September 2021. It's not training on AI-generated data.

Even if some AI-generated data is included, model collapse can be avoided as long as that data is reasonably curated and mixed with non-AI data.

"Model collapse" is starting to feel like just a keyword for "this AI isn't as good as I wanted."

[–] _danny@lemmy.world 2 points 1 year ago

I feel like you're undereducated on how and when AI models are trained. The GPT models in particular aren't "constantly learning" like some other systems; they're tweaked in discrete increments by developers trying to cover their asses and get the model to say fewer things they could be sued for.

Also, AIs are already training other AIs; that's kind of how modern AIs are made. There's a model that scores how well a given phrase follows another phrase, and that score is used to train the part of the AI you interact with (arguably they're parts of the same whole, depending on how you view the architecture).
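Roughly, the loop looks like this. It's a minimal sketch in the spirit of RLHF-style reward modeling; the scoring function here is a made-up stand-in, not a real model:

```python
# Sketch of one AI training another: a "reward model" scores candidate
# replies from the policy model. Everything here is a toy placeholder.
from typing import Callable, List

def train_step(policy_outputs: List[str],
               reward_model: Callable[[str, str], float],
               prompt: str) -> str:
    """Score candidate continuations with a second model and keep the
    best one. A real setup would update the policy's weights instead
    of just selecting an answer."""
    scores = [(reward_model(prompt, out), out) for out in policy_outputs]
    return max(scores)[1]

# Hypothetical reward model: rates how well a reply follows the prompt.
def toy_reward_model(prompt: str, reply: str) -> float:
    overlap = len(set(prompt.lower().split()) & set(reply.lower().split()))
    return float(overlap)  # stand-in for a learned preference score

best = train_step(
    ["Paris is the capital of France.", "I like turtles."],
    toy_reward_model,
    "What is the capital of France?",
)
print(best)  # the reward model's preferred continuation
```

A real pipeline updates the policy's weights with an algorithm like PPO rather than just picking the top-scoring answer, but the "one AI scoring another" part is the same idea.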

CGP Grey has a good intro video on how bots learn; it's pretty outdated and not really applicable to how LLMs learn, but the general idea is still there.