This post was submitted on 05 Feb 2025
293 points (82.9% liked)

[–] whynot_1@lemmy.world 35 points 1 day ago (2 children)

I think I have seen this exact post word for word fifty times in the last year.

[–] pulsewidth@lemmy.world 1 points 19 hours ago* (last edited 19 hours ago)

And yet they apparently still can't get an accurate result with such a basic query.

Meanwhile... https://futurism.com/openai-signs-deal-us-government-nuclear-weapon-security

[–] clay_pidgin@sh.itjust.works 14 points 1 day ago (6 children)

Has the number of "r"s changed over that time?

[–] Tgo_up@lemm.ee 15 points 1 day ago (2 children)

This is a bad example. If I asked a friend "is strawberry spelled with one or two r's?" they would think I'm asking about the last part of the word.

The question seems specifically designed to trip up LLMs. I've never heard anyone ask how many of a certain letter are in a word. I have heard people ask how you spell a word and whether it's with one or two of a specific letter, though.

If you think of LLMs as something with actual intelligence you're going to be very unimpressed. It's just a model to predict the next word.

[–] renegadespork@lemmy.jelliefrontier.net 25 points 1 day ago (2 children)

> If you think of LLMs as something with actual intelligence you're going to be very unimpressed. It's just a model to predict the next word.

This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.

[–] Tgo_up@lemm.ee 1 points 18 hours ago (1 children)

What situations are you thinking of that require reasoning?

I've used LLMs to create software I needed but couldn't find online.

[–] sugar_in_your_tea@sh.itjust.works 5 points 1 day ago (1 children)

Maybe if you focus on pro- or anti-AI sources, but if you talk to actual professionals or hobbyists solving real problems, you'll see very different applications. If you go into it looking for problems, you'll find them; likewise, if you go into it looking for use cases, you'll find them.

[–] renegadespork@lemmy.jelliefrontier.net 1 points 1 day ago (1 children)

Personally I have yet to find a use case. Every single time I try to use an LLM for a task (even ones they are supposedly good at), I find the results so lacking that I spend more time fixing its mistakes than I would have just doing it myself.

[–] Scubus@sh.itjust.works 2 points 20 hours ago (1 children)

So you've never used it as a starting point to learn about a new topic? You've never used it to look up a song when you can only remember a small section of the lyrics? What about when you want to write a block of code that is simple but monotonous to write yourself? Or to suggest plans for how to create simple structures/inventions?

Anything with a verifiable answer that you'd ask on a forum can generally be answered by an LLM, because they're largely trained on forums and there's a decent chance the training data included someone asking the question you are currently asking.

Hell, ask ChatGPT what use cases it would recommend for itself, I'm sure it'll have something interesting.

[–] Grandwolf319@sh.itjust.works 4 points 1 day ago (4 children)

> If you think of LLMs as something with actual intelligence you're going to be very unimpressed

Artificial sugar is still sugar.

Artificial intelligence implies there is intelligence in some shape or form.

[–] Tgo_up@lemm.ee 1 points 18 hours ago

Exactly. The naming of the technology would make you assume it's intelligent. It's not.

[–] corsicanguppy@lemmy.ca 3 points 1 day ago (1 children)

> Artificial sugar is still sugar.

Because it contains sucrose, fructose or glucose? Because it metabolises the same and matches the glycemic index of sugar?

Because those are all wrong. What are your criteria?

[–] JohnEdwa@sopuli.xyz 3 points 1 day ago* (last edited 1 day ago)

Something that pretends to be or looks like intelligence but actually isn't at all is a perfectly valid interpretation of the word artificial - fake intelligence.

[–] FourPacketsOfPeanuts@lemmy.world 19 points 1 day ago (1 children)

It's predictive text on speed. The LLMs currently in vogue hardly qualify as AI, tbh.

[–] TeamAssimilation@infosec.pub 10 points 1 day ago

Still, it's kinda insane how two years ago we didn't imagine we would be instructing programs with prompts like "be helpful but avoid sensitive topics".

That was definitely a big step in AI.
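
For anyone who hasn't seen it in practice, here's a minimal sketch of what that kind of natural-language instruction looks like as a system prompt, using the OpenAI Python client; the model name and prompt text here are just placeholders, not anything from this thread:

```python
# Minimal sketch: steering a chat model with a plain-English system prompt.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The "instruction" is just ordinary English sent as a system message.
        {"role": "system", "content": "Be helpful but avoid sensitive topics."},
        {"role": "user", "content": "How many r's are in 'strawberry'?"},
    ],
)

print(response.choices[0].message.content)
```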

[–] dan1101@lemm.ee 13 points 1 day ago (1 children)

It's like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.

[–] artificialfish@programming.dev 7 points 1 day ago (2 children)

This is literally just a tokenization artifact. If I asked you how many r’s are in /0x5273/0x7183 you’d be confused too.
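
To make the tokenization point concrete, here's a minimal sketch using the tiktoken library (assumed installed; the exact split varies by encoding and model) showing that the model works on token IDs rather than individual letters:

```python
# Minimal sketch: what "strawberry" looks like after tokenization.
# Assumes the tiktoken package is installed; token splits differ by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several GPT models
token_ids = enc.encode("strawberry")

print(token_ids)  # a short list of integer IDs, not ten separate letters
for tid in token_ids:
    # Show the byte chunk each token ID stands for.
    print(tid, enc.decode_single_token_bytes(tid))
```

Whatever the exact split, the model never sees the ten individual characters, which is why letter-counting is an awkward task for it.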

[–] autonomoususer@lemmy.world 2 points 1 day ago

Skill issue

[–] Fubarberry@sopuli.xyz 6 points 1 day ago (1 children)

I asked Mistral/Brave AI and got this response:

How Many Rs in Strawberry

The word "strawberry" contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
