[–] schizo@forum.uncomfortable.business 1 points 3 months ago (1 children)

Because there's not enough public domain (PD) content out there to train AI on.

Copyright law generally (yes, I know this varies country by country) gives the creator immediate ownership without any further requirements, which means every doodle, shitpost, and hot take online is the property of its owner UNLESS they choose to license it in a way that would allow use.

Nobody does, so the data the AI needs simply doesn't exist as PD content, which leaves someone training a model with only two choices: steal everything, or don't do it.

You can see what choice has been universally made.

[–] tjsauce@lemmy.world 2 points 3 months ago

People were also a lot more open to their data being used for machine learning when it went toward universally appreciated tasks like image classification or image upscaling: tasks no human would want to do manually and which threaten nobody.

The difference today is not the data used, but the threat from the use case. Or, more accurately, people don't mind their data being used if they know the outcome is of universal benefit.