this post was submitted on 03 Jun 2024
1474 points (97.9% liked)
People Twitter
You need an absolutely insane amount of data to train an LLM: hundreds of billions to tens of trillions of tokens. (A token isn't the same as a word, but at numbers this massive the distinction doesn't matter for the point.)
Wikipedia just doesn't have enough data to train an LLM from, and even if you could do it and get okay results, it'd only know how to write text in the style of Wikipedia. While it might be able to tell you all about how different cultures most commonly cook eggs, I doubt you'd get any recipe out of it that makes sense.
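To put rough numbers on it, a back-of-envelope sketch (the word count and tokens-per-word ratio here are ballpark public estimates, not exact figures, and the pretraining budget is just an order-of-magnitude placeholder):

```python
# Back-of-envelope: is Wikipedia big enough to pretrain an LLM?
# All figures below are rough approximations for illustration.
WIKIPEDIA_WORDS = 4.5e9    # English Wikipedia, ~4.5 billion words (ballpark)
TOKENS_PER_WORD = 1.3      # common rule of thumb for English BPE tokenizers

wikipedia_tokens = WIKIPEDIA_WORDS * TOKENS_PER_WORD  # roughly 6 billion tokens

# Modern pretraining runs use trillions of tokens; 10T is a placeholder
# in the "tens of trillions" range mentioned above.
PRETRAIN_TOKENS = 10e12

shortfall = PRETRAIN_TOKENS / wikipedia_tokens
print(f"Wikipedia: ~{wikipedia_tokens:.1e} tokens")
print(f"Shortfall: ~{shortfall:.0f}x too small")
```

So even generously counted, Wikipedia comes up a few orders of magnitude short of a modern pretraining corpus.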
If you were to take some base model (such as Llama or GPT) and fine-tune it on Wikipedia data, you'd probably get a "Llama in the style of Wikipedia" result. That may be what you want, but more likely it isn't.