top 9 comments
[-] AbouBenAdhem@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

I wonder if this is actually comparable to the way our brains store long-term memory?

[-] abhi9u@lemmy.world 4 points 1 year ago

Interesting. I'm just thinking aloud to understand this.

In this case, the models look at a short sequence of bytes in their context and predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So, do you mean our brain uses the concept to predict a token which gets decoded in the form of a memory?
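
To make that concrete, here is a tiny sketch of the compression side (my own illustration, not the paper's code): the better the model's probability for the byte that actually comes next, the fewer bits an entropy coder has to spend on it.

```python
import math

# Hypothetical next-byte distribution from a predictive model, given
# the context "The cat sat on the ". The numbers are made up purely
# for illustration.
next_byte_probs = {"m": 0.60, "f": 0.15, "c": 0.10, "other": 0.15}

# An ideal entropy coder spends about -log2(p) bits on a symbol the
# model assigns probability p.
for byte, p in next_byte_probs.items():
    print(f"{byte!r}: p={p:.2f} -> ~{-math.log2(p):.2f} bits")

# A confident, correct prediction ("m" for "mat") costs ~0.74 bits,
# far less than the 8 bits of the raw byte.
```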

[-] AbouBenAdhem@lemmy.world 6 points 1 year ago

Firstly—maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.

But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.

[-] GenderNeutralBro@lemmy.sdf.org 3 points 1 year ago

This seems likely to me. The common saying is "you hear what you want to hear", but I think it's more accurately "you remember what has meaning to you". Recently there was a study showing that even visual memory is tightly integrated with spoken language: https://www.science.org/doi/10.1126/sciadv.adh0064

However, there's a lot of variation in memory among humans. See: The Mind of a Mnemonist.

[-] abhi9u@lemmy.world 2 points 1 year ago

Yes, that makes much more sense.

[-] InvertedParallax@lemm.ee 1 points 1 year ago

No, because our brains also use hierarchical activation for association, which is why, if we're talking about bugs and I say "I got a B", you assume it's a stinging insect, not a passing grade.

If it were simple word2vec, we wouldn't have that additional means of noise suppression.

[-] drre@feddit.de 3 points 1 year ago

does anyone know whether these results were obtained while taking the size of the dictionary into account?

[-] abhi9u@lemmy.world 3 points 1 year ago

Do you mean the number of tokens in the LLM's tokenizer, or the dictionary size of the compression algorithm?

The vocab size of the pretrained models is not mentioned anywhere in the paper. They did, however, conduct an experiment where they measured compression performance while using tokenizers of different vocabulary sizes.

If you meant the dictionary size of the compression algorithm, then there was no dictionary: they used arithmetic coding to do the compression, which doesn't use a dictionary.
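
To illustrate what "no dictionary" means here, below is a toy arithmetic-coding sketch (my own example, not the paper's implementation). The coder only narrows an interval using whatever probabilities the model supplies; unlike LZ-style compressors, it never stores a table of previously seen strings.

```python
import math

# Assumed static symbol probabilities; an LLM-based coder would instead
# recompute these from the context before every symbol.
probs = {"a": 0.5, "b": 0.3, "c": 0.2}

def cumulative(probs):
    # Give each symbol a sub-interval of [0, 1) proportional to its probability.
    lo, ranges = 0.0, {}
    for sym, p in probs.items():
        ranges[sym] = (lo, lo + p)
        lo += p
    return ranges

def encode(message, probs):
    # Repeatedly narrow [low, high) to the current symbol's slot.
    ranges = cumulative(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return low, high  # any number in [low, high) identifies the message

low, high = encode("abac", probs)
print(f"interval: [{low:.6f}, {high:.6f})")
print(f"~{math.ceil(-math.log2(high - low))} bits to pick a point in it")
```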

[-] AbouBenAdhem@lemmy.world 3 points 1 year ago

It looks like they did it both ways (“raw rate” vs “adjusted rate”):

In the case of the adjusted compression rate, the model's size is also added to the compressed size, i.e., it becomes (compressed size + number of model parameters) / raw size. This metric allows us to see the impact of model parameters on the compression performance. A very large model might be able to compress the data better compared to a smaller model, but when its size is taken into account, the smaller model might be doing better. This metric allows us to see that.
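
With some made-up numbers (not figures from the paper), the difference between the two rates looks like this: the larger model can win on the raw rate and still lose once its own size is counted.

```python
def raw_rate(compressed, raw):
    return compressed / raw

def adjusted_rate(compressed, model_size, raw):
    # (compressed size + model size) / raw size, per the quoted definition.
    return (compressed + model_size) / raw

raw = 1_000_000_000  # 1 GB of data to compress (hypothetical)

# Hypothetical results: the big model compresses better in raw terms...
big   = {"compressed": 150_000_000, "model_size": 2_000_000_000}
small = {"compressed": 250_000_000, "model_size": 100_000_000}

for name, r in (("big", big), ("small", small)):
    print(name,
          f"raw={raw_rate(r['compressed'], raw):.2f}",
          f"adjusted={adjusted_rate(r['compressed'], r['model_size'], raw):.2f}")
# ...but once its parameters are added to the compressed size, the
# smaller model comes out ahead.
```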
