[-] ZickZack@kbin.social 26 points 1 year ago

That's not what lossless data compression schemes do:
In lossless compression, the general idea is to create a codebook of commonly occurring patterns and use those as shorthand.
For example, one of the simplest and now ancient algorithms LZW does the following:

  • Initialize the dictionary to contain all strings of length one.
  • Find the longest string W in the dictionary that matches the current input.
  • Emit the dictionary index for W to output and remove W from the input.
  • Add W followed by the next symbol in the input to the dictionary.
  • repeat

Basically, instead of rewriting long sequences, it just writes down the index into an existing dictionary of already-seen sequences.
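The steps above fit in a few lines of Python. This is a toy sketch, not a production codec: I'm assuming byte values 0–255 as the initial single-character dictionary, and the output is just a list of dictionary indices (a real coder would still have to pack those into bits).

```python
def lzw_compress(data: str) -> list[int]:
    """Toy LZW: turn a string into a list of dictionary indices."""
    # Initialize the dictionary to contain all strings of length one.
    dictionary = {chr(i): i for i in range(256)}
    result = []
    w = ""
    for c in data:
        # Find the longest string W in the dictionary matching the input.
        if w + c in dictionary:
            w += c
        else:
            # Emit the dictionary index for W...
            result.append(dictionary[w])
            # ...and add W followed by the next symbol as a new entry.
            dictionary[w + c] = len(dictionary)
            w = c
    if w:
        result.append(dictionary[w])
    return result
```

For example, `lzw_compress("abababab")` emits `[97, 98, 256, 258, 98]`: after seeing "ab" once, the repeats get replaced by the new dictionary indices 256 ("ab") and 258 ("aba").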

However, once this is done, you now need to find an encoding that takes your character set (the original characters + the new dictionary references) and turns it into bits.
It turns out that we can do this optimally: Using an algorithm called Arithmetic coding we can align the length of a bitstring to the amount of information it contains.
"Information" here meaning the statistical concept of information, which depends on the inverse likelihood a certain character is observed.
Logically this makes sense:
Let's say you have a system that measures earthquakes. As one would expect, most of the time, let's say 99% of the time, you will see "no earthquake", while in 1% of the cases you will observe "earthquake".
Since "no earthquake" is a lot more common, the information gain is relatively small (if I told you "the system said no earthquake", you could have guessed that with 99% confidence: not very surprising).
However if I tell you "there is an earthquake" this is much more important and therefore is worth more information.

From information theory (a branch of mathematics), we know that if we want to maximize the efficiency of our codec, we have to match the length of every character to its information content. Arithmetic coding now gives us a general way of doing this.
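You can compute that "surprise" directly: the information content of an event with probability p is -log2(p) bits, which is also the ideal codeword length. A quick sketch of the earthquake example (the 99%/1% split is taken from above):

```python
import math

def information_bits(p: float) -> float:
    """Shannon information content of an event with probability p, in bits."""
    return -math.log2(p)

# The common reading carries almost no information; the rare one carries a lot.
print(information_bits(0.99))  # "no earthquake": ~0.0145 bits
print(information_bits(0.01))  # "earthquake":    ~6.64 bits

# Average bits per reading if codeword lengths match information content:
entropy = 0.99 * information_bits(0.99) + 0.01 * information_bits(0.01)
print(entropy)  # ~0.081 bits per reading, vs. 1 bit for a naive 0/1 code
```

Arithmetic coding is what lets you actually hit these fractional codeword lengths in practice, since it doesn't force every symbol onto a whole number of bits.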

However, we can do even better:
Instead of just considering individual characters, we can also add in character pairs!
Of course, it doesn't make sense to add in every possible character pair, but for some of them it makes a ton of sense:
For example, if we want to compress english text, we could give a separate codebook entry to the entire sequence "the" and save a ton of bits!
To do this for pairs of characters in the english alphabet, we have to consider 26*26=676 combinations.
We can still handle that: just count how often each of those ~600 pairs occurs in the text.
With 3-character combinations it becomes a lot harder: 26*26*26=17576 combinations.
But with 4 characters it's practically impossible: you already have almost half a million combinations!
In reality, this is even worse, since you have way more than 26 characters: you have things like ", . ? ! and your codebook ids which blow up the size even more!
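Counting which combinations actually occur is a one-pass job with a hash map; the blow-up is in how many *candidates* exist as n grows. A quick sketch (the sample sentence is made up):

```python
from collections import Counter

def top_ngrams(text: str, n: int, k: int = 3):
    """Count every n-character substring in one pass, return the k most common."""
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return counts.most_common(k)

sample = "the theme of the thesis"
print(top_ngrams(sample, 2))
print(top_ngrams(sample, 3))  # "the" dominates, so it earns a codebook entry
```

For n=2 or 3 the counter stays small; for longer combinations over a realistic alphabet the candidate space (and the memory to track it) explodes, which is exactly the problem described above.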

So, how are we supposed to figure out which character pairs to combine and how many bits we should give them?
We can try to predict it!
This technique, called PPM (prediction by partial matching), is already very old (~1980s), but is still used in many compression algorithms.
The important trick is that with deep learning, we can train even more efficient estimators without losing the lossless property:
Remember, we only predict which things we want to combine and how many bits we want to assign to them!
The worst-case scenario is that your compression gets worse because the model predicts nonsensical character-combinations to store, but that never changes the actual information you store, just how close you can get to the optimal compression.
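A minimal sketch of the idea, using an order-1 context model (far simpler than real PPM; the +1 smoothing and the 256-symbol alphabet are my assumptions, not part of any specific codec):

```python
import math
from collections import Counter, defaultdict

class Order1Model:
    """Tiny context model in the spirit of PPM: predict the next character
    from the single preceding character. The predictions only steer how many
    bits the coder assigns; a bad model wastes bits but never corrupts the
    data, which is why the scheme stays lossless."""

    def __init__(self, training_text: str):
        self.contexts = defaultdict(Counter)
        for prev, cur in zip(training_text, training_text[1:]):
            self.contexts[prev][cur] += 1

    def bits_for(self, prev: str, cur: str) -> float:
        """Ideal code length for cur given context prev (+1 smoothing)."""
        counts = self.contexts[prev]
        p = (counts[cur] + 1) / (sum(counts.values()) + 256)
        return -math.log2(p)

model = Order1Model("abracadabra abracadabra")
# After 'b', an 'r' was always observed, so it gets fewer bits than
# a never-seen character like 'z':
print(model.bits_for("b", "r"), model.bits_for("b", "z"))
```

Swap this frequency table for a neural network and you get the modern, learned version of the same idea: the model only changes the probability estimates, never the decoded output.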

The state of the art in text compression has already used this for a long time (see the Hutter Prize); it's just now getting to a stage where systems become fast and accurate enough to also make the compression useful for other domains / general-purpose compression.

[-] ZickZack@kbin.social 12 points 1 year ago

No, it's built into the protocol: think of it as if every HTTP request forced you to attach a tiny additional box containing the solution to a math puzzle.

The twist is that you want the math puzzle to be easy to create and verify, but hard to compute. The harder the puzzle you solve, the more you get prioritized by the service that sent you the puzzle.

If your puzzle is cheaper to create than hosting your service is, then it's much harder to DDoS you, since attackers get stuck at the puzzle rather than reaching your expensive service.
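A hashcash-style sketch of such a puzzle (SHA-256 with a leading-zeros difficulty target is my choice for illustration, not any particular protocol's scheme):

```python
import hashlib

def solve_puzzle(request: bytes, difficulty: int) -> int:
    """Brute-force a nonce whose hash over the request starts with
    `difficulty` zero hex digits. Expensive to solve, cheap to verify."""
    nonce = 0
    while True:
        digest = hashlib.sha256(request + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(request: bytes, nonce: int, difficulty: int) -> bool:
    """One hash to check the client's work."""
    digest = hashlib.sha256(request + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_puzzle(b"GET /index.html", difficulty=4)
print(verify(b"GET /index.html", nonce, 4))  # True
```

Each extra zero digit multiplies the client's expected work by 16, while the server's verification cost stays a single hash, which is exactly the asymmetry that makes flooding expensive for the attacker.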

[-] ZickZack@kbin.social 10 points 1 year ago

Standard lossless compression (without further assumptions) is already very close to optimal: at some point you hit the raw entropy of these huge datasets, and that simply cannot be squeezed any further.

The most likely savior in this case would be procedural rendering (i.e. instead of storing textures and meshes, you store a function that deterministically generates the meshes and textures). These already are starting to become popular due to better engine support, but pose a huge challenge from a design POV (the nice e.g. blender-esque interfaces don't really translate well to this kind of process).
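A toy illustration of the idea (a sine pattern standing in for the noise functions and node graphs real procedural systems use):

```python
import math

def procedural_texture(seed: int, size: int = 8):
    """Instead of storing size*size pixel values, store only this function
    and the seed: the texture is regenerated deterministically on demand."""
    return [
        [int(127 * (1 + math.sin(seed + x * 0.7 + y * 1.3))) for x in range(size)]
        for y in range(size)
    ]

# Same seed, same texture, every time -- the storage cost is a few bytes
# of parameters instead of a full grid of pixels.
assert procedural_texture(42) == procedural_texture(42)
```

The catch, as noted above, is authoring: artists paint pixels and sculpt meshes, and turning that workflow into "write a deterministic generator function" is a hard UX problem.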

[-] ZickZack@kbin.social 15 points 1 year ago

It's a different paper (e.g. https://www.nature.com/articles/s41586-022-05294-9) from a different researcher (specifically Ranga Dias). This is not connected to the recent non-peer reviewed https://arxiv.org/abs/2307.12008

[-] ZickZack@kbin.social 9 points 1 year ago

It's $\mathbb{X}$ or unicode 𝕏 (U+1D54F)
Maybe he really likes metric spaces??

[-] ZickZack@kbin.social 26 points 1 year ago

They will make it open source, just tremendously complicated and expensive to comply with.
In general, if you see a group proposing regulations, it's usually to cement their own positions: e.g. openai is a frontrunner in ML for the masses, but doesn't really have a technical edge against anyone else, therefore they run to congress to "please regulate us".
Regulatory compliance is always expensive and difficult, which means it favors people that already have money and systems running right now.

There are so many ways this can be broken, intentionally or unintentionally. It's also a great way to identify, say, government critics in order to shut them down (e.g. if you are Chinese and everything is uniquely tagged to you: would you write about Tiananmen Square?), or to get monopolies on (dis)information.
This is not literally trying to force everyone to get a license for producing creative or factual work, but it's very close, since you can easily discriminate against any creative or factual sources you find unwanted.

In short, even if this is an absolutely flawless, perfect implementation of what they want to do, it will have catastrophic consequences.

[-] ZickZack@kbin.social 8 points 1 year ago

That paper makes a bunch of (implicit) assumptions that make it pretty unrealistic: basically, it assumes that once we have decently working models, we would still continue to do normal "brain-off" web scraping.
In practice you can use even relatively simple models to start filtering and creating more training data:
Think about it like the original LLM being a huge trashcan in which you try to compress terabytes of mostly garbage web data.
Then, you use fine-tuning (like the instruction tuning used in the assistant models) to increase the likelihood of deriving non-trash from the model (or to accurately classify trash vs non-trash).
In general this will produce a dataset that is of significantly higher quality, simply because you got rid of all the low-quality stuff.
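The filtering step itself boils down to something like this (the scorer below is a made-up stand-in for a real trained quality classifier; everything here is illustrative, not from any of the cited papers):

```python
def filter_dataset(raw_examples, quality_score, threshold=0.5):
    """Keep only examples a quality model rates highly -- the
    'trash vs non-trash' classification step described above."""
    return [ex for ex in raw_examples if quality_score(ex) >= threshold]

# Toy scorer standing in for a learned classifier: "does it look like code?"
score = lambda text: 1.0 if "def " in text else 0.0

scraped = ["def add(a, b): return a + b", "CLICK HERE FOR FREE STUFF"]
print(filter_dataset(scraped, score))  # only the code snippet survives
```

The point is that even a mediocre model is good enough to throw away the obvious garbage, and training on what remains beats training on the raw scrape.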

This is not even a theoretical construction: Phi-1 (https://arxiv.org/abs/2306.11644) does exactly that to train a state-of-the-art language model on a tiny amount of high quality data (the model is also tiny: only half a percent the size of gpt-3).
Previously, TinyStories (https://arxiv.org/abs/2305.07759) showed something similar: you can build high quality models with very little data, if you have good data (in the case of TinyStories, they generate simple stories to train small language models).

In general LLM people seem to re-discover that good data is actually good and you don't really need these "shotgun approach" web scrape datasets.

[-] ZickZack@kbin.social 31 points 1 year ago

Everything using the ActivityPub standard has open likes (see https://www.w3.org/TR/2018/REC-activitypub-20180123/ for the standard), and logically it makes sense to do this to allow for verification of "likes":
If you didn't have that, a malicious instance could much more easily shove a bunch of likes onto another instance's posts, whereas with "like authors" it's much easier to do like moderation.
Effectively ActivityPub treats all interactions like comments, where you have a "from" and "to" field just like email does (just imagine you could send messages without having an originator: email would have unusable levels of spam and harassment).
Specifically, here is an example of a simple activity:

POST /outbox/ HTTP/1.1
Host: dustycloud.org
Authorization: Bearer XXXXXXXXXXX
Content-Type: application/ld+json; profile="https://www.w3.org/ns/activitystreams"

{
  "@context": ["https://www.w3.org/ns/activitystreams",
               {"@language": "en"}],
  "type": "Like",
  "actor": "https://dustycloud.org/chris/",
  "name": "Chris liked 'Minimal ActivityPub update client'",
  "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
  "to": ["https://rhiaro.co.uk/#amy",
         "https://dustycloud.org/followers",
         "https://rhiaro.co.uk/followers/"],
  "cc": "https://e14n.com/evan"
}

As you can see this has a very "email like" structure with a sender, receiver, and content. The difference is mostly that you can also publish a "type" that allows for more complex interactions (e.g. if type is comment, then lemmy knows to put it into the comments, if type is like it knows to put it to the likes, etc...).
The actual protocol is a little more complex, but if you replace "ActivityPub" with "typed email" you are correct 99% of the time.

The different services, like lemmy, kbin, mastodon, or peertube, are now just specific instantiations of this standard. E.g. a "like" might have slightly different effects on different services (hence also the confusion with "boosting" vs "liking" on kbin).

[-] ZickZack@kbin.social 13 points 1 year ago

It really depends on what you want: I really like obsidian which is cross-platform and uses basically vanilla markdown which makes it easy to switch should this project go down in flames (there are also plugins that add additional syntax which may not be portable, but that's as expected).

There's also logseq which has much more bespoke syntax (major extensions to markdown), but is also OSS meaning there's no real danger of it suddenly vanishing from one day to the next.
Specifically Logseq is much heavier than obsidian both in the app itself and the features it adds to markdown, while obsidian is much more "markdown++" with a significant part of the "++" coming from plugins.

In my experience logseq is really nice for short-term note taking (e.g. lists, reminders, etc) and obsidian is much nicer for long-term notes.

Some people also like notion, but I never got into that: it requires much more structure ahead of time and is very locked down (it also obviously isn't self-hosted). I can see notion being really nice for people that want less general note-taking and more custom "forms" to fill out (e.g. traveling checklists, production planning, etc..).

Personally, I would always go with obsidian, just for the peace of mind that the markdown plays well with other markdown editors, which is important for me if I want a long-running knowledge base.
Unfortunately I cannot tell you anything with regard to collaboration, since I do not use that feature in any note-taking system.

[-] ZickZack@kbin.social 13 points 1 year ago

They chose to do this. Daedalic has historically been a point-and-click developer, but they wanted to diversify, especially since their previous title "Pillars of the Earth" flopped. They first tried their hand at RTS with "A Year of Rain", which is simply not that good, and then looked into Gollum.
You also can't really make the argument that the project was rushed out the door, considering the game was supposed to release in 2021 (two years ago).

They tried something they had no experience in, not through coercion but because they wanted to, and produced a game of shockingly low quality. Since this wasn't the first flop, but just the latest in a huge series of flops (though it was the most expensive and highest-profile one), the studio closed.

[-] ZickZack@kbin.social 13 points 1 year ago

And don't forget that even after that you still have to watch baked-in "This video is sponsored by <insert shady company here>" ads, since the actual revenue that youtube passes to creators is so low that, to keep the ship afloat, they have to look for additional revenue streams.

[-] ZickZack@kbin.social 12 points 1 year ago

While the inability to source is a huge problem, you also have to keep in mind that complaining about AI serves other objectives beyond the obvious "AI bad".

  • it's marketing: "Our thing is so powerful it could irreparably change someone's life" is still advertising even if that irreparable change is bad. Saying "AI so powerful it's dangerous" just sounds less advertis-y than "AI so powerful you cannot not invest in it" despite both leading to similar conclusions (you can look back at the "fearvertising" done during the original AI boom: same paint, different color)
  • it's begging for regulatory barriers to be put into place: Everyone with a couple of million can build an LLM from scratch. That might sound like a lot, but it's only getting cheaper, and it doesn't need highly intricate systems to replicate. Specifically, the ability to finetune a large model with few datapoints allows even open-source non-profits like OpenAssistant to compete against the likes of google and openai: Google has made that very explicit in their leaked We have no moat memo. This is why you see people like Sam Altman talking to congress about the dangers of AI: He has no serious competitive advantage and hopes that with sufficient fear-mongering he can get the government to give him one.

Complaining about AI is as much about the AI as it is about the economical incentives behind AI.

