Current AI is not smarter than humans. It needs supervised training and then acts according to that, which is inherently incompatible with novelty and genuinely new exploration.
It's not even real AI lol, there's no thought, just text transformation.
This problem seems like the sort of thing machine learning could be good at, though. You have input binary code that doesn't run, you want output that does, and you have training data of inputs paired with their correct outputs.
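A minimal sketch of what that framing could look like, assuming PyTorch and treating it as a byte-level sequence-to-sequence task. The `ByteSeq2Seq` model and the byte pairs below are made-up placeholders, not real training data or a real cracking pipeline; a real system would need vastly more data and a much bigger model.

```python
# Hypothetical sketch: "broken bytes in, working bytes out" as supervised seq2seq.
import torch
import torch.nn as nn

VOCAB = 258            # 256 byte values + <pad>=256 + <start>=257
PAD, START = 256, 257

class ByteSeq2Seq(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim, padding_idx=PAD)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, VOCAB)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.embed(src))        # summarize the broken bytes
        dec, _ = self.decoder(self.embed(tgt_in), h)
        return self.out(dec)                        # logits over the next byte

def to_tensor(bs):
    # bytes -> LongTensor of token ids, batch size 1
    return torch.tensor([list(bs)], dtype=torch.long)

# Placeholder "training data": pairs of (broken bytes, fixed bytes).
pairs = [(b"\x01\x02\x03\x04", b"\x01\xff\x03\x04"),
         (b"\x10\x02\x20\x30", b"\x10\xff\x20\x30")]

model = ByteSeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for epoch in range(200):                            # tiny supervised training loop
    for broken, fixed in pairs:
        src, tgt = to_tensor(broken), to_tensor(fixed)
        # teacher forcing: decoder sees <start> plus the target shifted right
        tgt_in = torch.cat([torch.tensor([[START]]), tgt[:, :-1]], dim=1)
        logits = model(src, tgt_in)
        loss = loss_fn(logits.transpose(1, 2), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
```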
AI is good at doing complex things but bad at doing easy things. Supervision is required at first for learning, of course; there's no AI that works out of the box.
That assessment entirely depends on what you consider "complex" and "easy".
What do you mean by it being bad at easy things but good at complex things? I don't see how something complex would work better than something easy.
In short:
Look at what AI does well right now, like finding complex solutions to mathematical problems a human couldn't, calculating very fast, replicating natural language, etc.
Then look at what AI struggles with at the moment, like drawing hands, recognizing objects, or driving a car.
This statement is only valid for the current state of things, as AI is advancing faster than most people can keep up with. Most people have yet to understand LLMs or generative AI models.
That's what I'm talking about. If you look at the process required to crack Denuvo, you'll notice that a lot of guesswork is involved, something AI is good at if trained properly. The number of people who know how to crack Denuvo and are willing to spend the time is shrinking by the day, while the amount of DRM-protected software is rising every day. We need automation soon.
AI will soon be mandatory for software security, as malicious actors will use AI to find zero-day exploits, and you'll want an AI to protect you from those threats in real time. Antivirus software already works somewhat in that direction, but there's still a lot of room to grow.