Sure, the LLMs are more than welcome to make use of my memes and random comments. Anything on the internet is public for all to use, after all.
I don't really care, to be honest. If something's public on social media, it's public, and it's no longer up to you to decide how it will be used. I really like the Stack Exchange policy that all posts are published under a Creative Commons license. Though they seem hell-bent on killing that, too.
Yeah, I think a Creative Commons-style license makes sense, and that was always my intent when posting things. However, when you post Creative Commons content, you do get to choose the restrictions (e.g. commercial vs. noncommercial).
I think it's currently an open question how this applies to generative AI and LLMs. Perhaps the output of generative AI should retain the license of the training data? Or perhaps that is overly restrictive? There are those who believe that training commercial generative AI on data under permissive licenses is a problem.
https://www.theregister.com/2023/05/12/github_microsoft_openai_copilot/
I am not really sure where I stand on the overall issue. But the worst-case scenario, in my opinion, is one where open source generative AI is hobbled by regulation, paving the way for corporate control. My biggest fear about the Reddit API changes is that they will prevent anyone except Google, Facebook, Microsoft, Amazon, etc. from using user comments as a training set.
I don't know either. I'll agree with you, though, that restricting AI so that only big tech companies with lots of lawyers can research it (and not release it) is the worst-case scenario. And I fear it's either that or no regulation at all. OpenAI etc. just have too much money for lobbying, and this is all happening in the US, which seems quite susceptible to monetary influence in politics, so I doubt any laws are gonna be passed to restrict them. Besides, there's the national interest in not letting China take the lead.
My personal opinion is that high API usage fees hurt open source LLMs (e.g. GPT4All). I would rather not see this new technology monopolized by those who can pay API fees.
I think it's a good idea to monetize the API for that, but not to the extreme Reddit has gone. When your API is so expensive that it kills off everything except data-scraping bots, you've messed up.
I don't care if people train models off my posts. I released the content into the wild; I don't much care what happens to it after that. Attribution of direct quotes is nice to have, but twiddling some weights in a language model is far too abstruse for me to care about.
And sure, if OpenAI is inhaling all of Reddit, it's reasonable to charge for that.
But shutting down third-party apps was never about that.
Bullshit. This assumes the people training LLMs are the same ones building the datasets. Once a dataset is created, it can be used to train multiple models, meaning that there's no further impact on API usage.
Certainly the archived Reddit posts will be used for that for years to come regardless. What I'm curious about is how you feel about your posts contributing to the output of an LLM (independent of API usage costs).
LLMs can be specialized for particular tasks by training them further on a curated set of data. For example, an LLM fine-tuned specifically on your posts will sound more like you than it did before the training. Does it bother you that someone may use your posts for this purpose?
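To make the "curated set of data" part concrete, here's a minimal sketch of the preprocessing step: turning one user's scraped comments into the prompt/completion JSONL format that many fine-tuning pipelines accept. The field names, the `build_finetune_dataset` helper, and the input schema (`parent_text`/`body` keys) are all my own illustrative assumptions, not any particular API's format.

```python
import json

def build_finetune_dataset(comments, username):
    """Pair each comment with the text it replied to, as training examples.

    comments: list of dicts with 'parent_text' (what the user replied to)
    and 'body' (what the user wrote). Both keys are assumed field names.
    """
    records = []
    for c in comments:
        records.append({
            "prompt": c.get("parent_text", ""),   # model input
            "completion": c["body"],              # the user's own words
            "author": username,                   # metadata for curation
        })
    return records

def write_jsonl(records, path):
    """Write one JSON object per line, the common fine-tuning file format."""
    with open(path, "w", encoding="utf-8") as f:
        for r in records:
            f.write(json.dumps(r, ensure_ascii=False) + "\n")
```

A base model further trained on enough such pairs tends to pick up the author's phrasing, which is exactly the "sounds more like you" effect described above; the actual training step would then be run with whatever framework the dataset format targets.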
Well, these AIs are already being trained on public figures, and there isn't much those figures can do unless the AI impersonates them somewhere visible, like a livestream, giving them a chance to identify who is behind it. How would an ordinary person even find out that an LLM out there speaks just like them? It's similar to fine-tuning AIs on artists to create art that mimics their style: it can be frustrating, but there isn't much anyone can do short of installing surveillance software on every computer. In summary, I don't mind, because I won't even find out.