this post was submitted on 19 Nov 2024
1066 points (97.7% liked)

People Twitter

5396 readers

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
top 50 comments
[–] 1stTime4MeInMCU@mander.xyz 86 points 1 month ago (1 children)

I’m convinced people who can’t tell when a chat bot is hallucinating are also bad at telling whether anything else they’re reading is true. What are you reading online that you’re not fact-checking anyway? If you’re writing a report, you don’t pull the first fact you find and call it good; you find a couple of citations for it. If you’re writing code, you don’t just write the program and assume it’s correct; you test it. It’s just a tool, and I think most people are coping because they’re bad at using it.
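
Even a trivial check goes a long way here. A made-up sketch of what I mean, where `median` just stands in for whatever function the bot handed you:

```python
# Hypothetical bot-suggested function: verify it, don't trust it.
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Test against cases you already know the answer to.
assert median([1, 3, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
print("checks passed")
```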

[–] BluesF@lemmy.world 9 points 1 month ago (1 children)

Yeah. GPT models are in a good place for coding tbh. I use it every day alongside my usual practice, and it definitely speeds things up. It's particularly good for things like identifying niche python packages and providing example use cases, so I don't have to learn shitloads of syntax that I'll never use again.
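
For example, the kind of answer I mean, sketched here with more-itertools (an arbitrary pick to illustrate, not verbatim GPT output):

```python
# "Is there a package that chunks or windows an iterable for me?"
# A useful answer points at more-itertools (pip install more-itertools).
from more_itertools import chunked, windowed

data = list(range(10))
print(list(chunked(data, 3)))   # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
print(list(windowed(data, 3)))  # (0, 1, 2), (1, 2, 3), ... sliding windows
```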

[–] Aceticon@lemmy.world 34 points 1 month ago (5 children)

In other words, it's the new version of copying code from Stack Overflow without going to the trouble of properly understanding what it does.

[–] Rekorse@sh.itjust.works 7 points 1 month ago

Pft, you must have read that wrong; it's clearly turning them into master programmers one query at a time.

[–] BluesF@lemmy.world 5 points 1 month ago

I know how to write a tree traversal, but I don't need to, because there's a python module that does it. That was already the case before LLMs. Honestly, I hardly ever need to do a tree traversal, and I don't particularly want to go to the trouble of learning how some particular python module wants me to format the input for the one time this year I've needed one. I'd rather just have something made for me so I can move on to my primary focus, which is not tree traversals.

It's not about avoiding understanding, it's about avoiding unnecessary extra work. And I'm not talking about saving the years of work it takes to learn how to code; I'm talking about the 30 minutes it would take me to learn a module I might never use again. If I do use it again, or if there's a problem, I'll do it properly the second time. But why do it now, when there's a tool that can do it for me with minimal fuss?
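
To make it concrete, here's the kind of thing I mean, with networkx picked arbitrarily as "a module that does it":

```python
# The traversal I know how to write...
def dfs_preorder(tree, node):
    yield node
    for child in tree.get(node, []):
        yield from dfs_preorder(tree, child)

tree = {"a": ["b", "c"], "b": ["d"]}
print(list(dfs_preorder(tree, "a")))  # ['a', 'b', 'd', 'c']

# ...and the module that means I don't have to, once I've
# learned the input format it expects.
import networkx as nx

g = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d")])
print(list(nx.dfs_preorder_nodes(g, "a")))  # ['a', 'b', 'd', 'c']
```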

[–] sp3tr4l@lemmy.zip 71 points 1 month ago* (last edited 1 month ago) (11 children)

I just tried out Gemini.

I asked it several questions in the form of 'are there any things of category x which also are in category y?' type questions.

It would often confidently reply 'No, here's a summary of things that meet all your conditions to fall into category x, but sadly none also fall into category y'.

Then I would reply, 'wait, you don't know about thing gamma, which does fall into both x and y?'

To which it would reply 'Wow, you're right! It turns out gamma does fall into x and y' and then give a bit of a description of how/why that is the case.

After that, I would say '... so you... lied to me. ok. well anyway, please further describe thing gamma that you previously said you did not know about, but now say that you do know about.'

And that is where it gets ... fun?

It always starts with an apology template.

Then, if it's some kind of topic it has almost certainly been manually dissuaded from talking about, it lies again and says 'actually, I do not know about thing gamma', even though it just told me that it did.

If it is not a topic that it has been manually dissuaded from talking about, it does the apology template and then also further summarizes thing gamma.

...

I asked it 'do you write code?' and it gave a moderately lengthy explanation of how it is composed of code, but does not write its own code.

Cool, not really what I asked. So then I gave it the command 'write an implementation of bogo sort in python 3.'

... and then it does that.
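
(For the record, bogo sort looks roughly like this; my own sketch, not Gemini's verbatim output:)

```python
import random

# Bogo sort: shuffle until sorted. Deliberately awful --
# expected O((n+1)!) shuffles.
def bogo_sort(items):
    while any(items[i] > items[i + 1] for i in range(len(items) - 1)):
        random.shuffle(items)
    return items

print(bogo_sort([3, 1, 2]))  # [1, 2, 3] ... eventually
```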

...

Awesome. Hooray. Billions and billions of dollars for a shitty way to re-form web search results into a conversational format, which is very often confidently wrong and misleading.

[–] archomrade@midwest.social 14 points 1 month ago (5 children)

Idk why we have to keep re-hashing this debate about whether AI is a trustworthy source or summarizer of information when it's clear that it isn't - at least not often enough to justify this level of attention.

It's not as valuable as the marketing suggests, but it does have some applications where it may be helpful, especially if given a conscious effort to direct it well. It's better understood as a mild curiosity and a proof of concept for transformer-based machine learning that might eventually lead to something more profound down the road but certainly not as it exists now.

What is really un-compelling, though, is the constant stream of anecdotes about how easy it is to fool into errors. It's like listening to an adult brag about tricking a kid into thinking chocolate milk comes from brown cows. It makes it seem like there's some marketing battle being fought over public perception of its value as a product that's completely detached from how anyone actually uses or understands it as a novel piece of software.

[–] sp3tr4l@lemmy.zip 22 points 1 month ago* (last edited 1 month ago) (5 children)

Probably it keeps getting rehashed because people who actually understand how computers work are extremely angry and horrified that basically every idiot executive believes the hype and then asks their underlings to implement it, and will then blame those underlings for doing what they were asked when it turns out the idea was really, unimaginably stupid. The idiot executive gets a golden parachute; the software person gets fired.

That, and/or the widespread proliferation of this bullshit is making stupid people more stupid, and just making more people stupid in general.

Or how all the money and energy spent on this is actively murdering the environment and dooming the vast majority of our species, when it could be put toward building affordable housing or renovating crumbling infrastructure.

Don't worry, if we keep throwing exponentially increasing amounts of effort at the thing with exponentially diminishing returns, eventually it'll become God!

[–] taladar@sh.itjust.works 11 points 1 month ago

And then more money spent on adding that additional garbage filter to the beginning and the end of the process which certainly won't improve the results.

[–] pyre@lemmy.world 7 points 1 month ago (1 children)

copilot did the same with basic math. just to test it I said "let's say I have a 10x6 rectangle. what number would I have to divide width and height by, in order to end up with a rectangle that's half the area?"

it said "in order to make it half, you should divide them by 2. so [pointlessly lengthy steps explaining the divisions]"

I said "but that would make the area 5x3 = 15 units which is not half the area of 60"

it said "you're right! in order to ... [fixing the answer to √2 using approximation"

I don't know if I said it then, or after some other fucking nonsense but when I said "you're useless" it had the fucking audacity to take offense and end the conversation!

like fuck off, you don't get to have fake pride when you don't even have the basic fake intelligence you put in your own description.

[–] sp3tr4l@lemmy.zip 8 points 1 month ago* (last edited 1 month ago)

It's a perfect encapsulation of the corpo mindset:

Whatever I do is profound, meaningful, with endless possibilities for future greatness...

... even though I'm just talking out of my ass 99% of the time...

... and if you have the audacity, the nerve, to have a completely normal reaction when you determine that that is what I am doing, pshaw, how uncouth, I won't stand for your abuse!

...

They've done it. They've made a talking (not thinking) machine in their own image.

And it was not good.

You start a conversation, you can't even finish it
You're talkin' a lot, but you're not sayin' anything
When I have nothing to say, my lips are sealed
Say something once, why say it again?

Psycho Killer
Qu'est-ce que c'est

[–] tired_n_bored@lemmy.world 32 points 1 month ago (1 children)

I beg someone to help me. There's this new guy at my workplace, officially a developer, who can't write code at all. He pasted an entire project I did into ChatGPT with "optimize this" and opened a pull request with the result. I swear.

[–] wizardbeard@lemmy.dbzer0.com 17 points 1 month ago (1 children)

Report up the chain, if it's safe to do so and they are likely to understand.

Also, check what your company's rules regarding data security and LLM use are. My understanding is that at many places putting private company or customer data into an outside LLM is seen as shouting company secrets out to the open internet. At least that's the policy where I'm at. Pasting an entire project in would definitely violate things for my workplace.

In general that's rude as hell. New guy comes in, grabs an entire project they have no background with, and just chucks it at an LLM? No actual review of it themselves, just an assumption that your code is so shit that a general use text generator will do better? Doesn't sound like a "team player" to me (management eats that kind of talk up).

Maybe couch it as "I want to make sure that as a team, we're utilizing the tools available to us in the best way possible to multiply our strengths. That said, I'm concerned the approach that [LLM idiot] is using will only result in more work for the team. Using chatGPT as he has is an explosive approach, when I feel that a more scalpel-like approach to address specific areas for improvement would be the best method moving forward. We should be using these tools to address specific concerns, not chucking everything at the wall in some never ending chase of an undefined idea of 'more optimized'."

Perhaps frame it in terms of man-hours? The immediacy of 5 minutes in ChatGPT can cost the team multiple workdays in reviewing the output, whereas more focused code review up front can reduce the man-hour cost significantly.

There's also a bunch of articles out there online about how overuse of LLMs is leading to a measurable decrease in code quality and increase in security issues in code bases.

[–] tired_n_bored@lemmy.world 5 points 1 month ago

Such a great answer, thank you lots!

[–] JackbyDev@programming.dev 28 points 1 month ago

Because even if I haven't found anyone asking the same question on a search index, ChatGPT won't tell me to just use Google or close my question as a duplicate when it's not a duplicate.

[–] WalnutLum@lemmy.ml 20 points 1 month ago* (last edited 1 month ago) (1 children)

Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You're not having a conversation with it; it's "completing" the chat history you're providing it, by randomly(!) choosing the next text tokens that seem like they best fit the text provided.

If you don't directly provide, in the chat history and/or the text completion prompt, the information you're trying to retrieve, you're essentially fishing for text in a sea of random text tokens that seems like it fits the question.

It will always complete the text. Even if the tokens it chooses only minimally fit the context, it picks the best text it can, but it will always complete the text.

This is how they work, and anything else is usually the company putting in a bunch of guide bumpers that reformat prompts to coax the models into responding in a "smarter" way (see GPT-4o and "chain of thought" reasoning).
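
The core mechanic is roughly this (a toy sketch with made-up numbers, not any real model's internals):

```python
import math
import random

# Toy version of next-token choice: softmax over scores, then a weighted
# random draw. Real models do this over ~100k candidate tokens per step.
def sample_next(token_scores, temperature=0.8):
    tokens, scores = zip(*token_scores.items())
    weights = [math.exp(s / temperature) for s in scores]
    return random.choices(tokens, weights=weights)[0]

# Made-up scores for completing "The capital of France is ..."
scores = {"Paris": 9.1, "Lyon": 5.3, "purple": 2.0}
print(sample_next(scores))  # usually "Paris", but "purple" is never impossible
```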

[–] HackerJoe@sh.itjust.works 7 points 1 month ago

They were trained on reddit. How much would you trust a chatbot whose brain consists of the entirety of reddit put in a blender?

I am amazed it works as well as it does. Gemini only occasionally tells people to kill themselves.

[–] Nurse_Robot@lemmy.world 17 points 1 month ago (7 children)

sigh people do talk about this, they complain about it non-stop. These same people probably aren't using it as intended, or are deliberately trying to farm a "gotcha" response. AI is a very neat tool which can do a lot of things well, but it's important to recognize its limitations. I don't use it for things I don't understand because I won't recognize if it's spitting out nonsense, but for topics I do understand it's hard to overstate how efficient and time saving it is.

[–] ByteOnBikes@slrpnk.net 16 points 1 month ago (2 children)

The FuckAI people have valid concerns.

Unfortunately, their anger seems to be constantly misdirected at the weirdest things instead of the root issues.

[–] taladar@sh.itjust.works 6 points 1 month ago

Oh, there is plenty of hate for the hype cycle in general which is about as close to the root of the issue as you can get.

[–] zarkanian@sh.itjust.works 6 points 1 month ago

"Give me a vegan recipe using " has been flawless. The recipes are decent, although they tend to use the same spices over and over.

[–] Paradigm_shift@sh.itjust.works 4 points 1 month ago

I sometimes use it to "convert" preexisting bulletpoints or informal notes into a professional sounding business email. I already know all the information so proofreading the final product doesn't take a lot of time.

I think a lot of people who shit on AI forget that some people struggle with putting their thoughts into words. Especially if they aren't writing in their native language.

[–] hoshikarakitaridia@lemmy.world 17 points 1 month ago (14 children)

Because in a lot of applications you can bypass hallucinations.

  • getting sources for something
  • as a jump off point for a topic
  • to get a second opinion
  • to help argue for or against your position on a topic
  • get information in a specific format

In all these applications you can bypass hallucinations, because either the task is non-factual, or it's verifiable while prompting, or you will be able to verify it in any of the subsequent tasks.

Just because it makes shit up sometimes doesn't mean it's useless. Like an idiot friend, you can still ask it for opinions or something and it will definitely start you off somewhere helpful.

[–] ms_lane@lemmy.world 25 points 1 month ago (1 children)

Also just searching the web in general.

Google is useless for searching the web today.

[–] WalnutLum@lemmy.ml 22 points 1 month ago (2 children)

All LLMs are text completion engines, no matter what fancy bells they tack on.

If your task is some kind of text completion or repetition of text provided in the prompt context LLMs perform wonderfully.

For everything else, you're wading into territory you could probably handle more easily with other methods.

[–] surph_ninja@lemmy.world 15 points 1 month ago

Depending on the task, it’s quicker to verify the AI response than work through the blank page phase.

[–] spankmonkey@lemmy.world 15 points 1 month ago (6 children)

Because most people are too lazy to bother with making sure the results are accurate when they sound plausible. They want to believe the hype, and lack critical thinking.

[–] TrickDacy@lemmy.world 15 points 1 month ago

Probably because they're not checking them

[–] bl_r@lemmy.dbzer0.com 14 points 1 month ago

My job uses a data science platform that has a special AI assistant trained on its own docs.

The first time I tried using it, it used the wrong language. The second time, it was hallucinating its own functions, but after looking up the docs I told it which function to use, and it gave me code that worked.

I have not used it a third time. I don’t think I will.

[–] Snowclone@lemmy.world 12 points 1 month ago* (last edited 1 month ago) (2 children)

I only use it for complex searches with results I can usually parse myself, like "list 30 typical household items without descriptions or explanations, with no repeating items" kind of thing.

[–] ohwhatfollyisman@lemmy.world 10 points 1 month ago

great value for all that energy it expends, indeed!

[–] Kushan@lemmy.world 11 points 1 month ago (1 children)

They don't give you the answer, they give you a rough idea of where to look for the answer.

I've used them to generate chunks of boilerplate code that was 80% of what I needed, because I knew what I needed and wanted to save time.

[–] BakerBagel@midwest.social 10 points 1 month ago (1 children)

There are ways of doing that which don't require burning an acre of rainforest.

[–] callcc@lemmy.world 8 points 1 month ago

It's usually good for ecosystems with loads of good docs. Whenever docs are scarce, the results get shitty. To me it's mostly a more targeted search engine without the crap (for now).

[–] ugjka@lemmy.world 7 points 1 month ago (1 children)

The only reason i use ChatGPT for some quick stuff is just that search engines suck so bad.

[–] RedditWanderer@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

Big businesses know; they even ask people like me to add extra measures in place. I like to call it the Concorde effect: you're trying to make a plane that can shove air out of the way faster than it wants to move, and that takes an enormous amount of energy that isn't worth the time saved, or the cost. Even if you have higher airspeed when it works, if your plane doesn't make it to the destination, it isn't "faster".

We hear a lot about the downsides of AI, except that doesn't fit the big-corpo narrative, and people don't really care enough. If you're just a consumer who has no idea how this really works, the investments companies make into shoving it everywhere make it seem like it's not a problem, and it looks like there's only AI hype and no party poopers.

[–] fossilesque@mander.xyz 6 points 1 month ago* (last edited 1 month ago)

Treat it like a janitor rather than an answer machine and you'll have a better time. I call it my bitch bot.

[–] Sam_Bass@lemmy.world 6 points 1 month ago

They're trying not to lose money on the developments

[–] bunchberry@lemmy.world 5 points 1 month ago* (last edited 1 month ago) (3 children)

It depends on what you use ChatGPT for and whether you know how to use it productively. For example, if I ask ChatGPT coding questions, it is often very helpful. If I ask it history questions, it constantly makes things up. And you do need to know how to use it: people who claim ChatGPT is not helpful for coding, when you ask how they use it, basically just ask ChatGPT to do their whole project for them, and when it fails they call it useless. That's not the productive way to use it. The productive way is as a replacement for Stack Overflow, or to get examples of how to use some library, and so on, not to do your whole project for you. Of course, people often use it incorrectly, so it's probably not a good idea to allow its use in the workplace, but for individual use it can be very helpful.

[–] orcrist@lemm.ee 5 points 1 month ago (4 children)

What are you talking about? We mention this on a daily basis. That's the #1 complaint about ChatGPT when it's used for factual purposes.
