this post was submitted on 05 Jan 2024
8 points (100.0% liked)
TechTakes
1490 readers
32 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
Adversarial attacks on training data for LLMs is in fact a real issue. You can very very effectively punch up with regards to the proportion of effect on trained system with even small samples of carefully crafter adversarial inputs. There are things that can counter act this, but all of those things increase costs, and LLMs are very sensitive to economics.
Think of it this way. One, reason why humans don't just learn everything is because we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It's not perfect, again it changes economics, and at some point being wrong but consistent with our environment is still more important.
I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment. The environment of their existence being our digital waste, so long as that waste contains the correct shapes.
Long term I see regulation plus new economic realities wrt to digital data, not just to be nice or ethical, but because it's the only way future systems can reach reliable and economical online learning. Maybe the right things happen for the wrong reasons.
It's funny to me just how much AI ends up demonstrating non equilibrium ecology at scale. Maybe we'll have that self introspective moment and see our own relationship with our ecosystems reflect back on us. Or maybe we'll ignore that and focus on reductive world views again.