this post was submitted on 20 Jul 2023
663 points (97.4% liked)


Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology’s abi...::ChatGPT went from answering a simple math correctly 98% of the time to just 2%, over the course of a few months.

[–] CaptainAniki@lemmy.flight-crew.org 23 points 1 year ago* (last edited 1 year ago) (3 children)

I don't think it's that easy. These are vLLMs that feed back on themselves to produce "better" results. These models don't have single point release cycles. It's a constantly evolving blob of memory and storage orchestrated across a vast number of disk arrays and cabinets of hardware.

[e] I was wrong: the models are version-controlled and do have releases.

[–] drspod@lemmy.ml 30 points 1 year ago (1 children)

That's not how these LLMs work. There is a training phase which takes a large amount of compute power, and the training generates a model which is a set of weights and could easily be backed up and version-controlled. The model is then used for inference which is a less compute-intensive process and runs on much smaller hardware than the training phase.

The inference architecture does use feedback mechanisms but the feedback does not modify the model-weights that were generated at training time.
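The separation described above can be sketched in a toy example. This is purely illustrative (the `train`, `checksum`, and `infer` functions are hypothetical stand-ins, not anything from OpenAI's stack): training produces a weights artifact that can be hashed and backed up, and inference only reads it.

```python
import hashlib
import pickle

# Hypothetical toy "model": training is the expensive step that
# produces a fixed set of weights; inference only reads them.
def train():
    return {"layer1": [0.1, -0.3], "layer2": [0.7]}

def checksum(weights):
    # Hash the serialized weights so a snapshot can be verified later.
    return hashlib.sha256(pickle.dumps(weights)).hexdigest()

weights = train()
snapshot = checksum(weights)  # this artifact is what gets backed up / versioned

def infer(weights, x):
    # Inference reads the weights but never writes back into them;
    # any feedback loops (e.g. chat history) live outside this function.
    return sum(w * x for w in weights["layer1"])

y = infer(weights, 2.0)
assert checksum(weights) == snapshot  # weights unchanged by inference
```

The point of the checksum is just to make the claim concrete: you can run inference all day and the hash of the weights file never changes.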

[–] CaptainAniki@lemmy.flight-crew.org 0 points 1 year ago* (last edited 1 year ago) (3 children)

For simple language models, sure, but we're talking about ChatGPT here. OpenAI has made some pretty bold claims...

https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253

100 trillion bytes is 100 terabytes, and if those parameters hold any real amount of data each, the total could easily get into the petabyte range.

[–] drspod@lemmy.ml 13 points 1 year ago

They list the currently available models that users of their API can select here:

https://platform.openai.com/docs/models/overview

They even say that while the main models are continuously updated (read: re-trained), there are snapshots of previous models that will remain static.

So yes, they are storing and snapshotting the models and they have many different models available with which to perform inference at the same time.

[–] hedgehog@ttrpg.network 4 points 1 year ago

Each parameter corresponds to a single number, so if it's using 16-bit numbers that's 200 TB. They might be using 32-bit numbers (400 TB), but wouldn't be using anything larger.
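The arithmetic in this comment is easy to check. Taking the (speculative) 100-trillion-parameter figure from the linked article at face value, storage is just parameters times bytes per number:

```python
# Back-of-the-envelope storage for a hypothetical 100-trillion-parameter
# model, assuming one number per parameter at common precisions.
params = 100e12  # 100 trillion

for bits, label in [(16, "fp16"), (32, "fp32")]:
    terabytes = params * (bits / 8) / 1e12
    print(f"{label}: {terabytes:.0f} TB")
# fp16: 200 TB, fp32: 400 TB
```

Note this only covers the weights themselves, not training data, optimizer state, or replicas.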

[–] Lukecis@lemmy.world 1 points 1 year ago

Makes me wonder how exactly they curate that data. It's such an insane amount that even teams of thousands of human programmers sifting through it 24/7 wouldn't be able to fact-check or assess all of it for years. Presumably they use AI to go over the data scraped and fed into the model, since I can't imagine any human being able to curate it all.

I've heard from various videos on the topic that many of the developers have little to no clue what's going on inside the LLM once it's assembled and set to work on training, and I'm inclined to believe them. The human programmers simply set up the parameters and the system; then the system eats all the data loaded into it and effectively becomes a black box, where nobody knows exactly what's happening inside to produce the output it does.

[–] Lazylazycat@lemmy.world 6 points 1 year ago

Exactly this, that's why Loab exists forever now.

[–] agent_flounder@lemmy.one 2 points 1 year ago

Even so, surely they can take snapshots. If they were that clueless about rudimentary IT operations practices, it would only be a matter of time before an outage wiped everything. I find it hard to believe nobody considered backups, rollbacks, or any of that.
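The snapshot/rollback practice being described is ordinary versioned backups. A minimal sketch (the `Snapshots` class is a made-up illustration, not how any real model store works):

```python
import copy

# Minimal snapshot/rollback sketch for a mutable blob of state:
# save a deep copy before risky changes, restore it after a failure.
class Snapshots:
    def __init__(self):
        self._history = []

    def save(self, state):
        """Store an immutable copy; return its version id."""
        self._history.append(copy.deepcopy(state))
        return len(self._history) - 1

    def rollback(self, version):
        """Return a fresh copy of a previously saved version."""
        return copy.deepcopy(self._history[version])

store = Snapshots()
state = {"weights": [1.0, 2.0]}
v0 = store.save(state)

state["weights"][0] = 999.0   # an "outage" corrupts the live state
state = store.rollback(v0)    # restore the backed-up version
assert state["weights"] == [1.0, 2.0]
```

Real deployments would snapshot weight files to durable storage rather than keep copies in memory, but the rollback principle is the same.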