this post was submitted on 24 May 2024
605 points (97.2% liked)

Technology

[–] Boozilla@lemmy.world 54 points 5 months ago* (last edited 5 months ago) (4 children)

It's been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.

It's also been helpful at work with some random database type stuff.

But it definitely gets stuff wrong. A lot of stuff.

The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It's more an iterative process of refinement than a single prompt producing the final answer.
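That iterative loop can be sketched in the chat-message format most LLM APIs share: each correction is just another turn appended to the history the model sees. The message contents here are made up for illustration, and no real API call is made.

```python
# Sketch of iterative refinement as a growing message history.
# Contents are placeholders, not a real conversation.

def refine(history, correction):
    """Append a user correction so the next attempt sees the full context."""
    history.append({"role": "user", "content": correction})
    return history

history = [
    {"role": "user", "content": "Write a function that parses this log format."},
    {"role": "assistant", "content": "def parse(line): ..."},  # first (wrong) attempt
]

# Pointing out the mistake becomes the next turn in the conversation:
refine(history, "That breaks on lines with no timestamp. Handle that case.")
```

Each round of "that's wrong, because X" adds context, which is why later attempts tend to improve.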

[–] Downcount@lemmy.world 31 points 5 months ago (4 children)

The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

Or it gets stuck in an endless loop, alternating between two different but equally wrong solutions.

Me: This is my system, version x. I want to achieve this.

ChatGPT: Here's the solution.

Me: But this only works with version y of the given system, not x.

ChatGPT: Try this.

Me: This is using a method that never existed in the framework.

ChatGPT:

[–] mozz@mbin.grits.dev 14 points 5 months ago
  1. "Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn't work)"
  2. Goto 1
[–] UberMentch@lemmy.world 8 points 5 months ago (1 children)

I used to have this issue more often as well. I've had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT's response and saying "do not include y."
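The difference between the two strategies is visible if you write both out as message histories. Editing the original prompt keeps the failed attempt out of the model's context entirely, so it can't anchor on its own mistake. The contents below are invented placeholders.

```python
# Two ways to fix a bad response, expressed as chat histories.

# Strategy 1: reply with a correction. The model re-reads its own failure.
reply_with_correction = [
    {"role": "user", "content": "Write the report generator."},
    {"role": "assistant", "content": "def report(): ...  # includes y"},
    {"role": "user", "content": "Do not include y."},
]

# Strategy 2: edit the first prompt. The failed attempt never enters context.
edited_first_prompt = [
    {"role": "user", "content": "Write the report generator. Do not include y."},
]

# The edited history is shorter and never shows the model its mistake.
assert len(edited_first_prompt) < len(reply_with_correction)
```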

[–] brbposting@sh.itjust.works 5 points 5 months ago

Agreed, I send my first prompt, review the output, smack my head “obviously it couldn’t read my mind on that missing requirement”, and go back and edit the first prompt as if I really was a competent and clear communicator all along.

It’s actually not a bad strategy, because it can make some adept assumptions about details you might otherwise have had to spell out. So instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.

*[ad] free Ecco Dictate on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper, godly accuracy. btw TypingMind is great - stick in GPT-4o & Claude 3 Opus API keys and boom

[–] Boozilla@lemmy.world 4 points 5 months ago (1 children)

Ha! That definitely happens sometimes, too.

[–] FaceDeer@fedia.io 1 points 5 months ago

But only sometimes. Not often enough that I don't still find it more useful than not.

[–] BrianTheeBiscuiteer@lemmy.world 2 points 5 months ago

While explaining BTRFS, I've seen ChatGPT contradict itself in the middle of a paragraph. When I call it out, it apologizes and then contradicts itself again with slightly different verbiage.

[–] mozz@mbin.grits.dev 19 points 5 months ago (2 children)

It’s incredibly useful for learning. ChatGPT was what taught me to unlearn, essentially, writing C in every language, and how to write idiomatic Python and JavaScript.
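"Writing C in every language" versus idiomatic Python looks roughly like this: the same task done with manual index bookkeeping, then with a comprehension. A toy example, not taken from the comment's actual code.

```python
# The kind of "unlearning C" described above, on a trivial task.

data = [3, 1, 4, 1, 5]

# C accent: manual index loop and accumulator
squares_c_style = []
i = 0
while i < len(data):
    squares_c_style.append(data[i] * data[i])
    i += 1

# Idiomatic Python: a comprehension states the intent directly
squares_idiomatic = [x * x for x in data]

assert squares_c_style == squares_idiomatic
```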

It is very good for boilerplate code or fleshing out a big module without you having to do the typing. My experience was just like yours; once you’re past a certain (not real high) level of complexity you’re looking at multiple rounds of improvement or else just doing it yourself.

[–] Boozilla@lemmy.world 6 points 5 months ago

Exactly. And for me, being in middle age, it's a big help with recalling syntax. I generally know how to do stuff, but need a little refresher on the spelling, parameters, etc.

[–] CeeBee@lemmy.world 3 points 5 months ago

It is very good for boilerplate code

Personally I find all LLMs in general not that great at writing larger blocks of code. It's fine for smaller stuff, but the more you expect out of it the more it'll get wrong.

I find they work best with existing stuff that you provide. Like "make this block of code more efficient" or "rewrite this function to do X".
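A toy version of that "rewrite this function to be more efficient" prompt: you hand the model small, working-but-slow code and expect a drop-in replacement. Both functions here are illustrative, not from the commenter.

```python
# What you paste in: works, but O(n*m) due to list membership in a loop.
def common_items_slow(a, b):
    return [x for x in a if x in b]

# What a good rewrite should hand back: O(n+m) using a set,
# same results for hashable items, preserving order and duplicates from a.
def common_items_fast(a, b):
    b_set = set(b)
    return [x for x in a if x in b_set]

assert common_items_slow([1, 2, 3], [2, 3, 4]) == common_items_fast([1, 2, 3], [2, 3, 4])
```

Small, self-contained functions like this are where LLM rewrites are most reliable, because the model can see the entire contract at once.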

[–] tristan@aussie.zone 11 points 5 months ago* (last edited 5 months ago)

I was recently asked to make a small Android app using flutter, which I had never touched before

I used ChatGPT at first and it was painful to get correct answers, but then I made an agent (or whatever it's called) where I gave it instructions saying it was a Flutter dev, along with a bunch of specifics about what I was working on
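The "agent" described here boils down to a system message that pins the model's role and the project specifics before any question is asked. The details below are placeholders, not the commenter's actual setup.

```python
# A role-pinning system message, sent once at the start of the conversation.
# Project details are hypothetical examples.

messages = [
    {
        "role": "system",
        "content": (
            "You are an experienced Flutter developer. "
            "The project targets Android, uses Flutter 3.x, and follows "
            "the BLoC pattern. Point out errors directly and show the fix."
        ),
    },
    {"role": "user", "content": "Why does this widget rebuild constantly? <code here>"},
]

assert messages[0]["role"] == "system"
```

Because the system message persists across turns, every later question is answered in that context instead of a generic one.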

Suddenly it became really useful. I could throw it chunks of code and it would straight away tell me where the error was and what I needed to change

I could ask it to write me an example method for something that I could then easily adapt for my use

One thing I would do is ask it to write a method to do X while I was writing the part that would use that method.

This wasn't a big project and the whole thing took less than 40 hours, but picking up a new language, setting up the development environment, and making a working app for a specific task in 40 hours was a huge deal to me... I think without ChatGPT, just learning the basics and debugging would have taken more than 40 hours on its own

[–] WalnutLum@lemmy.ml 7 points 5 months ago (1 children)

This is because all LLMs function primarily based on the token context you feed them.

The best way to use any LLM is to fill up its history with relevant context, then ask your question.
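"Fill up its history with relevant context" in practice means packing as much pertinent material as fits, most relevant first, before the question. A minimal sketch, using a character budget for simplicity (real APIs count tokens, and the function name and snippets are made up):

```python
# Greedy context packing: include snippets (assumed pre-sorted by
# relevance) until the budget runs out, then append the question.

def pack_context(snippets, question, budget=200):
    context, used = [], 0
    for s in snippets:
        if used + len(s) > budget:
            break  # stop before overflowing the context budget
        context.append(s)
        used += len(s)
    return "\n".join(context) + "\n\nQuestion: " + question

prompt = pack_context(
    ["# schema: users(id, name)", "# error log: ...", "x" * 500],
    "Why does the join return duplicates?",
)

# The oversized third snippet is dropped; the relevant ones survive.
assert "schema" in prompt and "x" * 500 not in prompt
```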

[–] Boozilla@lemmy.world 4 points 5 months ago

I worked on a creative writing thing with it, and the more I added, the better its responses got. And 4 is a noticeable improvement over 3.5.