> In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources, it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.

magic_lobster_party@kbin.social 3 points 1 year ago (last edited 1 year ago)

Here’s a basic description of how (a part of) LLMs work: https://huggingface.co/learn/nlp-course/chapter1/6

LLMs generate text word by word (or token by token, if you want to be pedantic). That’s why ChatGPT streams its response to you word by word instead of giving you the entire answer at once.
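To make that concrete, here’s a minimal sketch of that word-by-word loop using the Hugging Face transformers library. GPT-2 and greedy decoding are just stand-ins I picked so the example runs; ChatGPT’s actual model and decoding setup aren’t public.

```python
# Minimal autoregressive generation loop: the model predicts one token at a
# time, and each prediction is appended to the input for the next step.
# Assumes `pip install torch transformers`; GPT-2 is a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits                              # (1, seq_len, vocab)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)    # greedy pick
        input_ids = torch.cat([input_ids, next_token], dim=-1)        # feed it back in

print(tokenizer.decode(input_ids[0]))
```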

The same applies during training: the model gets a piece of text plus the word it’s supposed to predict next, and its weights are tuned to improve its chances of predicting that word from the text it’s given.
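The training side, sketched under the same assumptions: with transformers, passing labels=input_ids makes the library shift the labels internally, so the loss measures how well the model predicted each token from the tokens before it.

```python
# Next-token-prediction training step (sketch). The cross-entropy loss at
# position i scores how well the model predicted token i+1 from tokens 0..i.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")

outputs = model(input_ids=batch.input_ids, labels=batch.input_ids)
outputs.loss.backward()   # gradient of the next-token cross-entropy
optimizer.step()
optimizer.zero_grad()
```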

Ideally the model makes its predictions by learning the patterns of the language, but that’s not always what happens. Sometimes it just memorizes the answer instead of learning the underlying pattern (much like a child can memorize the multiplication table without understanding multiplication). This is formally known as overfitting, a machine learning 101 concept.
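The multiplication-table analogy is easy to reproduce in miniature: a high-capacity model can score perfectly on data it has memorized while learning nothing that transfers. A toy illustration with scikit-learn (nothing LLM-specific, just the general phenomenon):

```python
# Toy overfitting demo: a deep decision tree "memorizes" completely random
# labels (perfect training accuracy) but generalizes no better than chance.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_test, y_test = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)

tree = DecisionTreeClassifier().fit(X_train, y_train)   # unlimited depth
print("train accuracy:", tree.score(X_train, y_train))  # 1.0 -- pure memorization
print("test accuracy:", tree.score(X_test, y_test))     # ~0.5 -- chance level
```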

There are ways to mitigate overfitting, but there’s no silver bullet. Sometimes the model can’t help but memorize parts of its training data.
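A few of the textbook mitigations, sketched in PyTorch: dropout, weight decay, and early stopping on a held-out validation set. The data here is random noise purely so the sketch runs; which mitigations OpenAI actually uses, and how, isn’t public.

```python
# Common overfitting mitigations (sketch): dropout, weight decay, early stopping.
import torch
import torch.nn as nn

torch.manual_seed(0)
X_train, y_train = torch.randn(200, 10), torch.randint(0, 2, (200,))
X_val, y_val = torch.randn(100, 10), torch.randint(0, 2, (100,))

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Dropout(p=0.1),                            # dropout: randomly zero activations
    nn.Linear(64, 2),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3,
                              weight_decay=0.01)  # weight decay: penalize large weights
loss_fn = nn.CrossEntropyLoss()

best_val, bad_epochs, patience = float("inf"), 0, 5
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                # early stopping: quit once the
            break                                 # validation loss stops improving
```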

When GitHub Copilot was new, people quickly figured out that it could generate the fast inverse square root implementation from Quake III word for word, including the famous “what the fuck” comment. It had memorized it completely.
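One crude way to probe for this kind of memorization: feed the model a prefix of text that was likely in its training data, decode greedily, and check whether it reproduces the real continuation verbatim. A sketch (GPT-2 and the example prefix are my choices; results will vary by model):

```python
# Crude memorization probe (sketch): give the model a well-known prefix and
# check whether greedy decoding reproduces the original continuation verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "We hold these truths to be self-evident, that all men"
expected = " are created equal"  # the real continuation of the source text

input_ids = tokenizer(prefix, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=10, do_sample=False)
continuation = tokenizer.decode(output_ids[0][input_ids.shape[1]:])

print(continuation)
print("verbatim match:", continuation.startswith(expected))
```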

I’m not sure how much OpenAI has done to mitigate this issue. But it’s a thing that can happen. It’s not imaginary.