scruiser

joined 1 year ago
[–] scruiser@awful.systems 10 points 4 months ago

I mean, if you play up the doom to hype yourself, dealing with employees who take that doom seriously feels like a deserved outcome.

[–] scruiser@awful.systems 11 points 5 months ago* (last edited 5 months ago)

I saw people making fun of this on (the normally absurdly credulous) /r/singularity of all places. I guess even hopeful techno-rapture believers have limits to their suspension of disbelief.

[–] scruiser@awful.systems 17 points 5 months ago

> First of all. You could make facts a token value in an LLM if you had some pre-calculated truth value for your data set.

An extra bit of labeling on your training data set really doesn't help you that much. LLMs already make up plausible-looking citations and website links (and other data types) that are complete garbage, even though their training data contains valid citations and website links (and other data types). Labeling things as "fact" and forcing the LLM to output stuff with that "fact" label will get you output that looks (in terms of statistical structure) like validly labeled "facts" but has absolutely no guarantee of being true.
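To make the problem concrete, here is a minimal sketch of the control-token idea as I understand the proposal (the token names and data are hypothetical, purely for illustration):

```python
# Hypothetical "truth label" scheme: prepend a control token to each
# training example so the model can condition on it at inference time.
# Token names and examples are made up for illustration.

FACT_TOKEN = "<fact>"
OPINION_TOKEN = "<opinion>"

def label_example(text: str, is_fact: bool) -> str:
    """Attach the pre-calculated truth label as a prefix token."""
    return f"{FACT_TOKEN if is_fact else OPINION_TOKEN} {text}"

training_data = [
    label_example("Water boils at 100 C at sea level.", is_fact=True),
    label_example("Pineapple belongs on pizza.", is_fact=False),
]

# At inference, you prompt with the control token to "request" facts:
prompt = f"{FACT_TOKEN} The capital of Australia is"

# The catch: next-token prediction only learns what text following
# FACT_TOKEN statistically looks like. Nothing in the training
# objective checks the continuation against reality, so the model
# will emit fluent, fact-shaped output either way.
```

The point being: the label changes which distribution the model samples from, not whether the samples drawn from it are true.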

[–] scruiser@awful.systems 9 points 6 months ago

Broadly? There was a gradual transition where Eliezer started paying attention to deep neural network approaches and commenting on them, as opposed to dismissing the entire DNN paradigm. The "watch the loss function" gaffe and similar ones came toward the middle of this period. The AI Dungeon panic/hype marks the beginning, iirc?

[–] scruiser@awful.systems 12 points 6 months ago (4 children)

It is even worse than I remembered. Eliezer concludes that because GPT-3 can't balance parentheses, it must be deliberately sandbagging to appear dumber: https://www.reddit.com/r/SneerClub/comments/hwenc4/big_yud_copes_with_gpt3s_inability_to_figure_out/ And here he concludes that GPT-style approaches can learn to break hashes: https://www.reddit.com/r/SneerClub/comments/10mjcye/if_ai_can_finish_your_sentences_ai_can_finish_the/
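For context on why the sandbagging theory is so funny: balancing parentheses is about the most trivially mechanizable task there is. A sketch of the classic one-pass counter check (my own illustration, not from the linked thread):

```python
def is_balanced(s: str) -> bool:
    """Classic one-pass counter check for a single bracket type."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a closing paren with nothing open
                return False
    return depth == 0

print(is_balanced("(())"))  # True
print(is_balanced("(()"))   # False
```

A few lines of code versus "the model is hiding its power level": Occam's razor was not consulted.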

[–] scruiser@awful.systems 9 points 6 months ago (7 children)

> iirc the LW people had bet against LLMs creating the paperclypse, but they've now done a 180 on this and really fear it going rogue

Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever with questions to stop them from pattern matching).

You are correct that, going back far enough, Eliezer really underestimated neural networks. Mid- and late-2000s Sequences posts and comments treat neural network approaches to AI as cargo-cult, voodoo computer science: blindly imitating the brain, sympathetic-magic style, in the hope of magically capturing intelligence (well, this is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid-2010s, Eliezer was focusing MIRI's efforts on abstractions like AIXI instead of more practical things like neural network interpretability.

[–] scruiser@awful.systems 8 points 6 months ago (2 children)

I unironically kinda want to read that.

Luckily LLMs are getting better at churning out bullshit, so pretty soon I can read wacky premises like that without a human having to degrade themselves to write it! I found a new use case for LLMs!

[–] scruiser@awful.systems 13 points 6 months ago

Sneerclub tried to warn them (well, not really, but some of our mockery could be interpreted as a warning) that the tech bros were just using their fear-mongering as a vector for hype. Even as far back as the OG mid-2000s LessWrong, a savvy observer could note that much of the funding they received was a way of accumulating influence for people like Peter Thiel.