this post was submitted on 05 Feb 2025
425 points (97.5% liked)
Technology
when the data used to train the AI is copyrighted, how do you make it open source? it's a valid question.
one thing is the model, or the code that trains the AI. the other thing is the data that produces the weights, which determine how the model predicts
of course, the obligatory fuck meta and the zuck and all that, but there is a legal conundrum here that we need to address, and it doesn't fit into our current IP legal framework
my preferred solution is just to eliminate IP entirely
The OSI's definition actually tackles this pretty well:
Providing sufficient information about the source of the data, so that one could potentially go out, retrieve it, and recreate the model, is enough to fall within the OSAI definition.
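To make that concrete, here's a minimal sketch of the kind of data-provenance manifest that reasoning implies: a release lists where each training dataset can be retrieved, and a simple check passes only if every dataset names a source. All names, URLs, and fields here are illustrative assumptions, not part of any actual OSI tooling.

```python
# Hypothetical data-provenance manifest for a model release.
# The OSAI-style idea: enough source information that a third
# party could go retrieve the data and recreate the model.
manifest = {
    "model": "example-7b",
    "datasets": [
        {"name": "common-crawl-subset", "source": "https://commoncrawl.org/"},
        {"name": "wikipedia-en", "source": "https://dumps.wikimedia.org/"},
    ],
}

def has_retrievable_sources(m):
    """Simplified check: every dataset must name a retrievable source.
    This is an illustration of the principle, not a legal test."""
    return all(d.get("source") for d in m["datasets"])
```

Under this framing, a release that lists "internal licensed corpus" with no retrievable source would fail the check, which is exactly the situation the copyrighted-data question is about.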
When part of my code base belongs to someone else, how do I make it open source? By open sourcing the parts that belong to me, while clarifying that it's only partially open source.
This is essentially what Llama does, no? The reason they are attempting a clarification is because they would be subject to different regulations depending on whether or not it's open source.
If they open source everything they legally can, then do they qualify as "open source" for legal purposes? The difference can be tens of millions if not hundreds of millions of dollars in the EU according to Meta.
So a clarification on this issue, I think, is not asking for much. Hate Facebook as much as the next guy, but this is like 5-minute-hate material
No, definitely not! Open source is a binary attribute. If your product is partially open source, it's not open source, only the parts you open sourced.
So Llama is not open source, even if some parts are.
I agree with you. What I'm saying is that perhaps the law can differentiate between "not open source", "partially open source", and "fully open source"
right now it's just a binary yes/no, which again determines whether or not millions of people will have access to something that could be useful to them
i'm not saying change the definition of open source. i'm saying for legal purposes, in the EU, there should be some clarification in the law. if there is a financial benefit to having an open source product available, then there should be something for having a partially open source product available
especially a product that is as open source as it could possibly legally be without violating copyright
Open source isn't defined legally, only through the OSI. The benefit is only from a marketing perspective as far as I'm aware.
Which is also why it's important that "open source" doesn't get mixed up with "partially open source", otherwise companies will get the benefits of "open source" without doing the actual work.
It is defined legally in the EU
https://artificialintelligenceact.eu/
https://artificialintelligenceact.eu/high-level-summary/
There are different requirements if the provider falls under "Free and open licence GPAI model providers"
Which is legally defined in that piece of legislation
Meta has done a lot for Open source, to their credit. React Native is my preferred framework for mobile development, for example.
Again, I fully acknowledge they are a large evil megacorp, but without large evil megacorps we would not have Open Source as we know it today. There are certain realities we need to accept based on the system we live in. Open Source only exists because corporations benefit from this shared infrastructure.
Our laws should encourage this type of behavior, not restrict it. By limiting the scope, the law gives Meta less incentive to open source the code behind their AI models. We want the opposite: we want to incentivize it.
I mean, you can have open source weights, training data, and code/model architecture. If you've done all three it's an open model, otherwise you state open "component". Seems pretty straightforward to me.
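The three-component framing above can be sketched as a tiny labeling function: a release counts as an "open model" only when weights, training data, and code/architecture are all open; otherwise you just name the open components. This is an illustrative sketch of the commenter's framing, not any official classification scheme.

```python
def openness_label(weights_open, data_open, code_open):
    """Label a model release by which of its three components are open:
    weights, training data, and code/model architecture (illustrative)."""
    if weights_open and data_open and code_open:
        return "open model"
    opened = [name for name, is_open in
              (("weights", weights_open), ("data", data_open), ("code", code_open))
              if is_open]
    return "open " + "/".join(opened) if opened else "closed"
```

Under this scheme a Llama-style release (open weights and code, proprietary training data) would simply be labeled by its open components rather than claiming the full "open model" label.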
Yes, but that model would never compete with the models that use copyrighted data.
There is an unfathomably large ocean of copyrighted data that goes into modern LLMs: scraped internet text, transcripts of movies and TV shows, tens of thousands of novels, etc.
That's the reason they are useful. If it weren't for that data, it would be a novelty.
So do we want public access to AI or not? How do we want to do it? Zuck's quote from the article, "our legal framework isn't equipped for this new generation of AI," I think has truth to it
I mean using proprietary data has been an issue with models as long as I've worked in the space. It's always been a mixture of open weights, open data, open architecture.
I admit it became more obvious when images/videos/audio became more accessible, but everything from facial recognition to pose estimation has used proprietary datasets to build models.
So this isn't a new issue, and from my perspective not an issue at all. We just need to acknowledge that not all elements of a model may be open.
This is more or less what Zuckerberg is asking of the EU: to acknowledge that parts of a model cannot be opened, but that because the code is open, it should qualify for certain benefits that open source products qualify for.