Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an upvote. If it's something that's widely accepted, give it a downvote.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's keep this community about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation away from this community to touch grass, lasting one or more days. Repeat offenses will result in a permanent ban.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
Maybe work on proving "AI" is actually a technological advancement instead of an overhyped plagiarism machine first.
LLMs' real power isn't generating fresh content; it's their ability to understand language.
Using one to summarise articles gives incredibly good results.
I use Bing enterprise every day at work as a programmer. It makes information gathering and learning so much easier.
It's decent at writing code, but that's not the main selling point, in my opinion.
Plus, these are general-purpose models meant to show off the capabilities. Once the tech is more advanced, you can train models for specific purposes.
It seems obvious that an AI built to do both creative writing and coding wouldn't be as good at either as a specialized model would be.
These are generation 0. There'll be a lot of advances coming.
Also, LLMs are a very specific type of machine learning, and any advances there will help the rest of the field. AI is already widely used in many fields.
LLMs don't "understand" anything. They're just very good at making it look like they sort of do.
They also tend to have difficulty giving the answer "I don't know" and will confidently assert something completely incorrect.
And this is not generation 0. The field of AI has been around for a long time. It's just now becoming widespread and used where the average person can see it.
If they're very good at it, then is there functionally any difference? I think the definition of "understand" people use when railing against AI must include some special pleading that gates off anything that isn't actually intelligent. When it comes to artificial intelligence, all I care about is if it can accurately fulfill a prompt or answer a question, and in the cases where it does do that accurately I don't understand why I shouldn't say that it seems to have "understood the question/prompt."
I agree that they should be more capable of saying "I don't know," but if you understand the limits of LLMs, then they're still really useful. I can ask one to explain math concepts in simple terms, and it makes it a lot easier and faster to learn whatever I want. I can easily verify what it said either with a calculator or with other sources, and it's never failed me on that front. Or if I'm curious about a religion, or about what any particular holy text says or doesn't say, it does a remarkable job giving me relevant results and details that are easily verifiable.
But I'm not going to ask GPT3.5 to play chess with me because I know it's going to give me blatantly incoherent and illegal moves. Because, while it does understand chess notation, it doesn't understand how to keep track of the pieces like GPT4 does.
If you can easily validate any of the answers (and you have to, in order to know whether they're actually correct), wouldn't it make more sense to skip the prompt and just do whatever you'd do to validate in the first place?
I think LLMs have a place, but I don't think it's as broad as people seem to think. They make a lot of sense for boilerplate, for example, since that just saves mindless typing. But you still need enough knowledge to validate the output.
If I'm doing something like coding or trying to figure out the math behind some code I want to write, it's a lot easier to just test what it gave me than it is to go see if anyone on the internet claims it'll do what I think it does.
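That "just test it" workflow can be sketched in a few lines. Everything here is hypothetical: `slugify` stands in for whatever helper an LLM might hand you, and the asserts are the quick sanity checks you can only write if you already know what correct output looks like.

```python
import re

# Hypothetical snippet an LLM might produce: turn a title into a URL slug.
def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and trim any leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The validation step: cheap, direct checks instead of hunting for
# someone on the internet who claims the code works.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces   everywhere  ") == "spaces-everywhere"
assert slugify("Already-sluggified") == "already-sluggified"
```

The point isn't the slug function; it's that running three asserts is faster than researching whether the generated code is correct.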
And when it comes to finding stuff in texts, a lot of the time that involves me going to the source for context anyway, so it's hard not to validate what it gave me. And even if it was wrong, the stakes for being wrong about a book are zero, so... It's not like I'm out here using it to make college presentations, or asking it for medical advice.