I still don't see how adding one tiny line to robots.txt means OpenAI can no longer scrape data. It seems like they could still go in and harvest the info manually, right? And that's not exactly a large list of companies.
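For reference, the "tiny line" being discussed is presumably the entry OpenAI published for its GPTBot crawler, which as far as I know looks like this:

```
User-agent: GPTBot
Disallow: /
```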
That's an honor system, for sure. OpenAI has promised that its bots will honor this line in robots.txt, but unless these companies implement some detect-and-block method of their own, there is nothing physically stopping the bots from gathering the data anyway.
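To illustrate the honor-system part, here's a minimal sketch (using Python's standard-library robots.txt parser and a made-up site URL) of the check a well-behaved crawler runs before fetching a page. Nothing in the protocol forces a crawler to perform it:

```python
# Sketch of a compliant crawler checking robots.txt before fetching a page.
# The site URL is illustrative; GPTBot is the user-agent token OpenAI documents.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# A compliant bot checks before fetching; a non-compliant one simply skips this step.
if rp.can_fetch("GPTBot", "https://example.com/some-article"):
    print("robots.txt permits GPTBot to fetch this page")
else:
    print("robots.txt asks GPTBot not to fetch this page")
```

Actually blocking a non-compliant bot would mean something like filtering requests by user agent or IP range on the server side, which is exactly the extra work most sites haven't done.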
Exactly, so this is purely performative on their part. Businesses shouldn't virtue signal; it's a little pathetic.
Companies do honour robots.txt. Maybe not the small project made by some guy somewhere. But large companies do.
That's not really virtue signaling unless the org is using it for PR reasons; it's just asking others to respect your wishes in a cooperative, community sense rather than making a legal demand. This is the more technical side of things, apart from the politics everyone injects.
I'm a proponent of these LLaMA systems; they're really just the next iteration of search. Just like search engines, they do consume traffic and server time with their queries, and it's good manners for everyone to follow each site's robots.txt limits, but under an open internet a third party is still free to read a site for whatever reason. If you don't want to take part in the open, community side of the internet, you don't have to expose anything to public access that can be scraped.
I rarely read paywalled news sites because they opt not to be part of the open community of information sharing that our open internet represents.
Except it doesn't credit the source or direct any traffic to it. So... almost an entirely different beast.
That depends entirely on the implementation. Bing does give you sources, but ChatGPT generates "original content" based on all the shit it's scraped.
A bit of a tangent, but I've recently shifted my focus to reading content behind paywalls and have noticed a significant improvement in the quality of information compared to freely accessible sources. The open internet does offer valuable content, but there's often a notable difference in journalistic rigor when a subscription fee is involved. I suspect that this disparity might contribute to the public's vulnerability to disinformation, although I haven't fully explored that theory.