It's the most ok'est coder, with the attention span of a five-year-old.
The LLM can type the code, but you need to know what you want and how you want to solve it.
For repetitive tasks, it can take a first template you write by hand and almost automatically extrapolate it into multiple variations.
Beyond that… not really. Anything beyond single-line completion quickly devolves into something messy, something non-working, or, worse, something working but not as intended. For extremely common cases it will work fine; but extremely common cases are either moved out into shared code, or take less time to write than to "generate" and check.
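To make the repetitive-template case concrete, here's a minimal sketch (the Config class and its accessors are invented for illustration, not from the thread): once the first accessor is typed by hand, single-line completion can usually extrapolate the remaining variations because the shape repeats.

```python
class Config:
    def __init__(self):
        self._host = "localhost"
        self._port = 8080
        self._timeout = 30

    # Written by hand as the template:
    def get_host(self):
        return self._host

    # Variations the completion model typically fills in on its own:
    def get_port(self):
        return self._port

    def get_timeout(self):
        return self._timeout
```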
I've been using code completion/suggestion regularly, and there have been times when I was pleasantly surprised by what it produced, but even then I had to go over the output and fix some things. And while I can't quantify how often it happens, there are plenty of times when it produces convincing gibberish.
I've also had some decent luck, when using a new or unfamiliar language, asking it to make the code I wrote more idiomatic.
It's been a nice way to learn some tricks I probably wouldn't have bothered with before.
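As a minimal sketch of that workflow (the snippet is hypothetical, not from the thread), this is the kind of rewrite such a request typically produces:

```python
# Before: correct but non-idiomatic Python, as someone new to the language might write it.
squares = []
for i in range(10):
    if i % 2 == 0:
        squares.append(i * i)

# After: the more idiomatic list comprehension an LLM usually suggests.
squares = [i * i for i in range(10) if i % 2 == 0]
```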
Absolutely, but they need a lot of guidance. GitHub Copilot often writes cleaner code than I do. I'll write the code and then ask it to clean it up for me and DRYify it.
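A hypothetical before/after of that kind of DRY cleanup (the parsing functions are invented for illustration):

```python
# Before: the same parsing logic pasted into two functions.
def parse_user(line):
    name, age = line.strip().split(",")
    return {"name": name, "age": int(age)}

def parse_admin(line):
    name, age = line.strip().split(",")
    return {"name": name, "age": int(age), "admin": True}

# After: DRYed up by extracting the shared part into one parameterized function.
def parse_person(line, admin=False):
    name, age = line.strip().split(",")
    person = {"name": name, "age": int(age)}
    if admin:
        person["admin"] = True
    return person
```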
Yes and no. GPT usually gives me clever solutions I wouldn't have thought of. But very often GPT also screws up, and I need to fine-tune variable names, function parameters, and such.
I think the best thing about GPT is that it knows the documentation of every function, so I can ask technical questions. For example: can this function really handle dataframes, or will it internally convert the variable into a matrix and then spit out a dataframe as if nothing happened? Such conversions tend to screw up the data, which explains some strange errors I bump into. You could read all of the documentation to find out, or you could just ask GPT about it. Alternatively, you could show it how badly the data got screwed up after a particular function, and GPT would tell you that it's because the function uses matrices internally, even though it looks like it works with dataframes.
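A minimal sketch of the conversion pitfall being described, in Python with pandas (the center helper is hypothetical): round-tripping a DataFrame through a NumPy array silently coerces mixed column dtypes to a single common dtype.

```python
import numpy as np
import pandas as pd

def center(df):
    # Looks DataFrame-friendly, but internally round-trips through a NumPy array,
    # which collapses mixed column dtypes into one common dtype.
    arr = np.asarray(df)
    return pd.DataFrame(arr - arr.mean(axis=0), columns=df.columns, index=df.index)

df = pd.DataFrame({"count": [1, 2, 3], "price": [1.5, 2.5, 3.5]})
print(df.dtypes.tolist())          # [dtype('int64'), dtype('float64')]
print(center(df).dtypes.tolist())  # [dtype('float64'), dtype('float64')] -- the int column silently changed
```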
I think of GPT as an assistant painter some famous artists had. The artist tells the assistant to paint the boring trees in the background and the rough shape of the main subject. Once that’s done, the artist can work on the fine details, sign the painting, send it to the local king and charge a thousand gold coins.
My dad uses LLM Python code generation quite routinely; he says the output's mostly fine.
For snippets, yes. Ask him to tell it to make a complete terminal service and see what happens.
No. To specify exactly what you want the computer to do for you, you'd need some kind of logic-based language that both you and the computer mutually understand. Imagine if you had a spec you could reference to know what the key words and syntax in that language actually mean to the computer.
AI is excellent at completing low-effort, AI-generated Pearson programming homework while I spend all the time I saved on real projects that actually matter. My Hugging Face model is probably trained on the same dataset as their bot. It gets it correct about half the time, and another 25% of the time I just have to change a few numbers or brackets around. It takes me longer to read the instructions than it takes the AI bot to spit out the correct answer.
None of it is "good" code but it enables me to have time to write good code somewhere else.
Writing code is probably one of the few things LLMs actually excel at. Few people want to program something nobody has ever done before. Most people are just reimplementing the same things over and over with small modifications for their use case. If imports of generic code someone else wrote make up 90% of your project, what's the difference in having an LLM write 90% of your code?
I see where you're coming from; it's sort of like the phrase "don't reinvent the wheel". Ethically, though, that doesn't sound far off from plagiarism.
IMO this perspective that we're all just "reimplementing basic CRUD applications" is the reason why so many software projects fail.