this post was submitted on 16 Mar 2024
182 points (100.0% liked)

the_dunk_tank


I know the leftist in me is supposed to have sympathy for these people and get them to unionize. But only after I stop laughing and enjoying this moment. For years these fucks told the rest of us to “learn to code” and pretended like studying anything else at uni was a fucking waste of time.

GUESS WHAT FUCKERS. SO WAS CODING. Looks like we’ll be baristas together, only I’ll have three years of experience!!!

[–] Tabitha@hexbear.net 9 points 8 months ago (1 children)

Never heard of Devin, but ChatGPT (the current leader) barely follows instructions well, yet it's usually faster at getting the information I need than Googling questions/topics, for a wide variety of tasks. It's pretty much only good at things that had solid Stack Overflow answers, including combining two questions or answering one for any language. Any time you add complexity or details, earlier details eventually fall off at random. Sometimes it just can't combine two key details; sometimes you hit a lot of dead ends.

This mostly makes up for Google not being as good as it was 10 years ago, but my productivity boost is probably less than 10%. With Google, you click into 2-10 websites, almost all of them missing a key keyword (you know, a keyword that was key...). Some of Google's quick-information boxes copy information from a website and phrase it as a response to your question, but sometimes that specific data point had nothing to do with your question. For example, Google's LLM once picked a random date from a webpage to answer my question, and the date I was actually looking for wasn't even on the page.

ChatGPT usually does well if you ask it 101 stuff, questions about well-documented non-obscure facts, starting-from-scratch stuff, or "write a function that does X". Anything else, you can almost safely assume a miss on the first attempt, and very likely a dead end (no amount of further conversing reaches a viable solution).

[–] flan@hexbear.net 12 points 8 months ago* (last edited 8 months ago)

Devin is afaik built on ChatGPT, but it takes things a little further and iterates on the code ChatGPT generates by attempting to build and run the program, taking screenshots, and so on along the way. I'm a little skeptical that this brute-force method will work well, but it may end up giving us more shit-tier websites and apps that barely function and have random bugs that aren't 100% reproducible.

My skepticism about this being the thing to replace coders is really about scale. If we've really scraped every morsel of information off the internet and come up with GPT-4 and Claude 3 and Gemini 1.5, I don't know where we go with this technique. It is incredibly expensive to build, train, and run these things. ChatGPT-4 is capped at 40 requests per 3 hours for $20/month, so if you use it as efficiently as possible, each request costs about two tenths of a cent. Datacenters are now putting pressure on the US electrical system, and I haven't heard much about making transformers (the core layer of these things) more efficient.
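The per-request figure is easy to sanity-check; here's the back-of-envelope arithmetic, assuming you hit the 40-messages-per-3-hours cap around the clock for a 30-day month:

```python
# Back-of-envelope floor on cost per request from the subscription figures.
PRICE_PER_MONTH = 20.00      # dollars, ChatGPT Plus
REQUESTS_PER_WINDOW = 40     # GPT-4 message cap
WINDOW_HOURS = 3
HOURS_PER_MONTH = 30 * 24    # 720

windows = HOURS_PER_MONTH / WINDOW_HOURS            # 240 cap windows
max_requests = windows * REQUESTS_PER_WINDOW        # 9600 requests
cost_per_request = PRICE_PER_MONTH / max_requests   # ~$0.00208
print(f"{cost_per_request * 100:.2f} cents per request")
```

That works out to roughly 0.21 cents per request, i.e. about two tenths of a cent, and only if you max out the cap every single window; real usage would cost more per request.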

Anyway, those are kind of disorganized thoughts, but in summary: unless something really transformative happens in the ML space, I don't know if we can possibly reach the scale of power, computation, and memory we would need for human-level reasoning. Let alone the fact that we apparently need to suck up all information in existence just to get basic human-like text generation.