this post was submitted on 06 Aug 2023
1776 points (98.6% liked)

Programmer Humor

32472 readers

Post funny things about programming here! (Or just rant about your favourite programming language.)

founded 5 years ago
[–] namnnumbr@lemmy.ml 93 points 1 year ago (2 children)

It's not just every tech company, it's every company. And it's terrifying - it's like handing a 1000 hp motorcycle to people who don't know how to ride a bike! The industry has no guardrails in place, and the public mindset of "ChatGPT can do it" without any thought of checking the output is horrifying.

[–] szczuroarturo@programming.dev 9 points 1 year ago

Basically the internet.

[–] Blackmist@feddit.uk 68 points 1 year ago (2 children)
[–] h_a_r_u_k_i@programming.dev 16 points 1 year ago (1 children)

It's sad to see it spit out text from the training set without any actual knowledge of the date and time. It would be more impressive if it could call time.Now(), but that'll be a different story.
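
A sketch of what "calling time.Now()" could look like in practice: modern chat systems support tool calling, where the model emits a request for a named tool and the client runs real code instead of letting the model guess. Everything below (the registry, the reply shape, the tool name) is illustrative, not any specific vendor's API:

```python
from datetime import datetime, timezone

# Hypothetical tool registry: the model asks for a tool by name,
# and the client executes real code rather than recalling training data.
TOOLS = {
    "current_time": lambda: datetime.now(timezone.utc).isoformat(),
}

def answer(model_reply: dict) -> str:
    """Resolve a (mocked) model reply that may request a tool call."""
    if model_reply.get("tool_call"):
        name = model_reply["tool_call"]
        result = TOOLS[name]()  # a real clock, not a memorized date
        return f"The current UTC time is {result}."
    return model_reply["text"]

# Mocked model output requesting the clock tool
print(answer({"tool_call": "current_time"}))
```

The point is that the date comes from the runtime, not from the model's weights.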

[–] Blackmist@feddit.uk 39 points 1 year ago (15 children)

If you ask it today's date, it actually does that.

It just doesn't have any actual knowledge of what it's saying. I asked it a programming question as well, and each time it would make up a class that doesn't exist. I'd tell it the class doesn't exist, and it would go, "You are correct, that class was deprecated in {old version}." It wasn't. I checked. It knows what excuses look like in the training data, and just apes them.

It spouts convincing sounding bullshit and hopes you don't call it out. It's actually surprisingly human in that regard.

[–] tjaden@lemmy.sdf.org 19 points 1 year ago (1 children)

It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

Oh great, Silicon Valley's AI is just an overconfident intern!

[–] scarabic@lemmy.world 9 points 1 year ago (1 children)

It’s super weird that it would attempt to give a time duration at all, and then get it wrong.

[–] dan@upvote.au 11 points 1 year ago (3 children)

It doesn't know what it's doing. It doesn't understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.
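
The "fits well together" idea can be sketched as a toy next-word predictor: count which word follows which in a corpus, then always emit the most frequent successor. Real LLMs use neural networks over subword tokens rather than a bigram table, but the training objective has the same shape. A minimal sketch with a made-up corpus:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Build a bigram table: for each word, count the words seen after it.
next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def predict(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return next_words[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Nothing here models time, truth, or the world; it only models which sequences are statistically likely.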

[–] kbity@kbin.social 63 points 1 year ago (11 children)

There's even rumours that the next version of Windows is going to inject a bunch of AI buzzword stuff into the operating system. Like, how is that going to make the user experience any more intuitive? Sounds like you're just going to have to fight an overconfident ChatGPT wannabe that thinks it knows what you want to do better than you do, every time you try opening a program or saving a document.

[–] Taleya@aussie.zone 56 points 1 year ago

This is what pisses me off about the whole endeavour. We can't even get a fucking search algo right any more, so why the fuck do I want a machine blithely failing to do what it's told as it stumbles off a cliff?

[–] DragonAce@lemmy.world 40 points 1 year ago

It'll be like they brought Clippy back, only this time he's even more of an asshole and he can fuck up your OS too.

[–] sigh@lemmy.world 12 points 1 year ago (2 children)

There’s even rumours

Like, I know we all love to hate Microsoft here but can we stop with the random nonsense? That's not what's happening, at all.

[–] whosdadog@sh.itjust.works 8 points 1 year ago (1 children)

Windows Co-pilot just popped up on my Windows 11 machine. Its disclaimer said it could provide surprising results. I asked it what kind of surprising results I could expect, it responded that it wasn't comfortable talking about that subject and ended the conversation.

[–] DrownedRats@lemmy.world 52 points 1 year ago (1 children)

My cousin got a new TV and I was helping him set it up. During setup, it had an option to enable AI-enhanced audio and visuals. Turning the AI audio on turned the decent, if slightly subpar, audio into an absolute garbage shitshow. It sounded like the audio was being passed through an "underwater" filter and then transmitted over a tin-can-and-string telephone. Idk who decided this feature was ready for consumer products, but it was absolutely moronic.

[–] Whitebrow@lemmy.world 52 points 1 year ago

Coupled with laying off a few thousand employees

[–] Poob@lemmy.ca 48 points 1 year ago (19 children)

None of it is even AI. Predicting desired text output isn't intelligence.

[–] lukas@lemmy.haigner.me 28 points 1 year ago (3 children)

You hold artificial intelligence to the standard of artificial general intelligence, which doesn't even exist yet. Even dumb decision trees are considered AI. You have to lower your expectations. Calling the best AIs we have dumb is unhelpful at best.
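
For context on "even dumb decision trees are considered AI": a decision tree is just nested if/else rules over features, learned from data or written by hand, yet it sits squarely inside the classical AI/ML umbrella. An illustrative hand-written one (the features and labels are made up for the example):

```python
def classify_fruit(color: str, diameter_cm: float) -> str:
    """A tiny hand-written decision tree: nested rules over two features."""
    if diameter_cm < 4.0:
        return "grape" if color == "purple" else "cherry"
    if color == "yellow":
        return "banana"
    return "apple"

print(classify_fruit("purple", 2.0))   # grape
print(classify_fruit("yellow", 15.0))  # banana
```

By the textbook definition this qualifies as AI, which shows how wide the term has always been.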

[–] freeman@lemmy.pub 27 points 1 year ago (4 children)

At this point I just interpret "AI" to mean "we have lots of SELECT statements and inner joins."

[–] drekly@lemmy.world 15 points 1 year ago (2 children)

I do agree, but on the other hand...

What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?

[–] fidodo@lemm.ee 15 points 1 year ago

I've seen this argument so many times and it makes zero sense to me. I don't think by predicting the next word, I think by imagining things both physical and metaphysical, basically running a world simulation in my head. I don't think "I just said predicting, what's the next likely word to come after it". That's not even remotely similar to how I think at all.

[–] Noughmad@programming.dev 10 points 1 year ago (1 children)

AI is whatever machines can't do yet.

Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn't AI anymore. Then it was Go, then classifying images, then holding a conversation, but whenever each of these was achieved, it stopped being AI and became "machine learning" or "a model".

[–] drdabbles@lemmy.world 47 points 1 year ago* (last edited 1 year ago) (5 children)

Before this it was blockchain, and before that it was "AI", and before that...

[–] adeoxymus@lemmy.world 19 points 1 year ago (1 children)

Before that it was self-driving cars, before that "big data", before that 3D printing, before that internet TV, before that "cloud computing", before that Web 2.0, before that WAP maybe, the internet in general?

Some of those things did turn out to be game changers, others not at all or not so much. It's hard to predict the future.

[–] EmilieEvans@lemmy.ml 10 points 1 year ago

IoT? Don't worry. Edge AI is now AIoT (AI + IoT).

[–] 0Xero0@lemmy.world 35 points 1 year ago (1 children)

If it ain't broke, we'll break it!

[–] niktemadur@lemmy.world 8 points 1 year ago

We'll make it broken!

[–] cloudy1999@sh.itjust.works 28 points 1 year ago (1 children)

This is refreshing to see. I thought I was the only one who felt this way.

[–] 1984@lemmy.today 27 points 1 year ago (1 children)

It's all so stupid. The entire stock market basically took off because Nvidia's CEO mentioned AI like 50 times, and now everyone thinks the company is worth 200 times its yearly profit.

We don't even have AI, we have language models that dig through text and create answers from that.

[–] Anticorp@lemmy.ml 8 points 1 year ago (7 children)

That's a massive oversimplification. We do have AI. We don't have AGI.

[–] ICastFist@programming.dev 28 points 1 year ago (3 children)

Unlike the previous bullshit they threw everywhere (3D screens, NFTs, metaverse), AI bullshit seems very likely to stay, as it is actually proving useful, if with questionable results... Or rather, questionable everything.

[–] WheelcharArtist@lemmy.world 11 points 1 year ago (14 children)

If only it were AI, and not just LLMs, machine learning, or plain old algorithms. But yeah, let's call everything AI from here on. NFTs could be useful as proof of ownership instead of expensive pictures, etc.

[–] drdabbles@lemmy.world 9 points 1 year ago

As a counter to your example, this is my career's third AI hype cycle.

[–] MrMamiya@feddit.de 27 points 1 year ago

God it’s exhausting. Okay, I’ll buy a 3d television if that’s what I have to do, let’s bring that back instead. Please?

[–] denemdenem@lemmy.world 23 points 1 year ago (1 children)

If you take out the AI part it still holds true. 2023 is full of bullshit.

[–] ExtraMedicated@lemmy.world 18 points 1 year ago* (last edited 1 year ago) (1 children)

I'm bookmarking this for the next time my supervisor plugs ChatGPT.

[–] ApathyTree@lemmy.dbzer0.com 8 points 1 year ago

I had a manager tell me some stuff was being scanned by AI for one of my projects.

No, you are having it scanned by a regular program to generate keyword clouds that can be used to pull it up when humans type their stupidly-worded questions into our search. It’s not fucking AI. Stop saying everything that happens on a computer that you don’t understand is fucking AI!

I’m just so over it. But at least they aren’t trying to convince us chatGPT is useful (it definitely wouldn’t be for what they would try to use it for)

[–] KIM_JONG_JUICEBOX@lemmy.ml 18 points 1 year ago (3 children)

What companies are you people working for?

We are being asked not to use AI.

[–] fluxion@lemmy.world 17 points 1 year ago (3 children)

Ain't gotta use it to sell it or slap AI stickers on top of whatever products you're selling

[–] baked_tea@sh.itjust.works 9 points 1 year ago

Not surprising for North Korea

[–] RagingRobot@lemmy.world 8 points 1 year ago

Larger companies have been working fast to sandbox the models used by their employees. Once they're safe from spilling data, they go all in. I'm currently on a platform team enabling generative AI capabilities at my company.

[–] taanegl@lemmy.ml 17 points 1 year ago

It raises the question: what's the boardroom random-bullshit timeline?

When was it "random cloud bullshit, go", and when was it "random blockchain bullshit, go"? What other buzzwords all but guaranteed that Silicon Valley tech bros would toss money in your face, and when was each of them in fashion?

[–] Protoflare@lemmy.world 16 points 1 year ago (1 children)

Snapchat AI. My friends don't want it, they can't block it, and it's proven to lie about certain things, like when asked whether it has your location.

[–] jimmydoreisalefty@lemmus.org 13 points 1 year ago

More ads and tracking systems, now with AI!

Commercial...
