the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERF/SWERFs Not Welcome
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments or posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: If you post ironic rage bait, I'm going to make a personal visit to your house to make sure you never make this mistake again.
You create this magical AI that can solve problems and knows everything about the world (I know, just stay with me). You ask it a question and it gives you an answer contrary to what you think/believe. Isn't that the point? Isn't it supposed to think in a way different from a human? Isn't it supposed to come up with answers you wouldn't think of?
"Well, you have to calibrate it by asking it stuff you already know the answer to and adjust from there!" they will say. But that can't work for everything. You're not going to fact-check this thing that's supposed to automate fact-checking and then suddenly stop when it gives you an answer to a question about something you don't know. You're going to continue being skeptical, except now you won't be able to confirm the validity of the answer. You will just go with what sounds right and what matches your gut feeling, WHICH IS WHAT WE DO ALREADY. You haven't invented anything new. You've created yet another thing that's in our lives, that we have to be told to think critically about, but that doesn't actually change the landscape of human learning.
We already react that way with news and school and everything else. We've always been on a vibes-based system here. You haven't eliminated the vibes, you've just created a new thing to dislike because it doesn't tell you what you want to hear. That is unless you force it to tell you what you want to hear. Then you're just back at social media bubbles.
The thing they're training AI to do is just tell the person talking to it whatever that person already believes, and always accept correction with grace: the ultimate pleasure sub.