this post was submitted on 18 Jul 2023

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

the orange site decides that the reason the output isn't very interesting is that the AI isn't a robot:

What we see from AI is what you get when you remove the "muscle module", and directly apply the representations onto the paper. There's no considering of how to fill in a pixel; there's just a filling of the pixel directly from the latent space.

It's intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. anyone claiming otherwise is a paid conspirator in the pocket of Big Dick Blick

Of course, the AIs can't wake up if we use that analogy. They are not capable of anything more than this state right now.

But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

only 10x lucid dreamers wake up after falling asleep

we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)

I wonder if there's a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

I always find it interesting how a hero dose of LSD gives similar visuals to what these image AI's do to achieve a coherent image.

[more nonsense]

I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free-association words into ChatGPT

top 2 comments
dgerard@awful.systems 1 year ago

gawd we could fill this sub with nothing but Wolfram

been looking into Digital Physics recently, which has been Wolfram's big idea for about 40 years now, and how it suffers just the minor problem that it contradicts quantum theory and experiment

blakestacey@awful.systems 1 year ago

The best I can say about Wolfram's "Digital Physics" is that it's not Eric Weinstein's "Geometric Unity".