this post was submitted on 25 Feb 2024
203 points (83.7% liked)

Technology

60123 readers
3615 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path

Don’t learn to code, advises Jensen Huang of Nvidia. Thanks to AI, everybody will soon become a capable programmer simply by using human language.

[–] Wooki@lemmy.world 53 points 10 months ago* (last edited 10 months ago) (2 children)

This overglorified snake oil salesman is scared.

Anyone who understands how these models work can see plain as day that we have reached peak LLM. It's enshittifying on itself, and we are watching its decline in real time in the quality of generated content. Don't believe me? Go follow some senior engineers.

[–] Michal@programming.dev 20 points 10 months ago (2 children)

Any recommendations on whom to follow? On Mastodon?

[–] thirteene@lemmy.world 16 points 10 months ago (1 children)

There is a reason they didn't offer specific examples. LLMs can still scale in size, logical optimization, training optimization, and, more importantly, integration. The current implementation is reaching its limits, but the pace of growth is also very fast. AI reduces workload, but it will likely still require designers and validators for a long time.

[–] Wooki@lemmy.world 3 points 10 months ago* (last edited 10 months ago) (1 children)

For sure, evidence is mounting that increasing model size is not returning the expected gains in quality. It has also had the larger net effect of enshittifying itself through negative feedback loops running from training data to humans and back into training, quantified as a large declining trend in quality. It can only get worse as privacy, IP laws, and other regulations come into force. The growth this hype master is selling is pure fiction.
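The feedback loop described above (often called "model collapse" in the research literature) can be illustrated with a toy sketch of my own, not taken from the thread: a "model" that is repeatedly refit on a finite sample of its own output steadily loses diversity, because each finite-sample estimate of spread is noisy and biased low.

```python
# Toy illustration of a training-on-your-own-output feedback loop:
# a Gaussian "model" refit on finite samples of its own output
# tends to lose variance (diversity) over generations.
import numpy as np

def collapse(generations: int = 500, n_samples: int = 20, seed: int = 0):
    """Return the model's standard deviation at each generation."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0            # generation 0: the "real data" distribution
    history = [sigma]
    for _ in range(generations):
        # "Train" the next model on a finite sample of the current model's output.
        samples = rng.normal(mu, sigma, n_samples)
        # The sample std (ddof=0) is a biased-low estimate of sigma,
        # so the spread ratchets downward generation after generation.
        mu, sigma = samples.mean(), samples.std()
        history.append(sigma)
    return history

if __name__ == "__main__":
    hist = collapse()
    print(f"std at generation 0:   {hist[0]:.3f}")
    print(f"std at generation 500: {hist[-1]:.3f}")
```

This is only a caricature (real LLM training is nothing like fitting a Gaussian), but it captures the claimed mechanism: once generated content feeds back into training data, estimation noise compounds instead of averaging out.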

[–] msage@programming.dev 2 points 10 months ago (1 children)

But he has a lot of product to sell.

And companies will gobble it all up.

On an unrelated note, I will never own a new graphics card.

[–] Wooki@lemmy.world 0 points 10 months ago (1 children)

Secondhand is better value; the cost of new cards right now is nothing short of price fixing. You only need to look at the reduction in memory size since the A100 was released to know what's happening to GPUs.

We need serious competition. Hopefully Intel is able to provide it, but foreign competition would be best.

[–] msage@programming.dev 1 points 10 months ago (1 children)

I doubt that any serious competitor will bring change to this space. Why would it? Everyone will scream "shut up and take my money" regardless.

[–] Wooki@lemmy.world 1 points 10 months ago

I think you answered your own question: money

[–] Wooki@lemmy.world 0 points 10 months ago* (last edited 10 months ago)

The Fediverse is sadly not as popular as we would like, so I can't help there. That said, I follow some researchers' blogs, and a quick search should land you some good sources depending on your field of interest.

[–] Animated_beans@lemmy.world 12 points 10 months ago (1 children)

Why do you think we've reached peak LLM? There are so many areas with room for improvement

[–] Wooki@lemmy.world 0 points 10 months ago* (last edited 10 months ago)

You asked a question that has already been answered. Pick your platform and you will find plenty of public research on the topic, especially for programming.