The keyword you need to look for is "temporally stable".
Ask Burggit!
As in using AI? It's probably already possible.
I'm sure it will happen; these studios tend to draw this material completely uncensored and only censor it for release. I could see more companies like Fakku popping up that license the original uncensored hentai episodes for Western distribution.
Failing that, the rise of AI and AI-assisted tooling will probably help greatly as well.
There's DeepCreamPy, but it requires you to color in the censored parts manually: https://github.com/Deepshift/DeepCreamPy
It's already achievable with current workflows. The only missing piece is a censorship detection/marking neural network.
As far as I know, no one has worked on this, but the network wouldn't need to be particularly large or costly to train. The real problem is the lack of a readily available dataset to train it on.
You could build this sort of dataset by gathering a few hundred censored and uncensored releases of a given JAV / h-anime title, then running a difference filter between the censored and uncensored versions. Censored/uncensored pairs seem to be far more available for JAV, but I can see how a model trained exclusively on JAV could have issues when applied to h-anime.
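The difference-filter step above can be sketched with plain NumPy: align a censored and an uncensored frame, take the per-pixel absolute difference, and threshold it into a binary mask that labels the censored region. The threshold value and the toy frames below are illustrative assumptions, not part of any existing tool.

```python
import numpy as np

def censorship_mask(censored: np.ndarray, uncensored: np.ndarray,
                    threshold: int = 25) -> np.ndarray:
    """Return a binary mask marking pixels that differ between two
    aligned frames (1 = likely censored region, 0 = unchanged)."""
    # Per-pixel absolute difference, collapsed across colour channels.
    diff = np.abs(censored.astype(np.int16) - uncensored.astype(np.int16))
    per_pixel = diff.max(axis=-1)
    return (per_pixel > threshold).astype(np.uint8)

# Tiny synthetic example: an 8x8 "frame" with a mosaic patch in one corner.
clean = np.zeros((8, 8, 3), dtype=np.uint8)
censored = clean.copy()
censored[0:4, 0:4] = 128  # pretend this 4x4 block was pixelated
mask = censorship_mask(censored, clean)
```

In practice the frames would first need spatial alignment (the two releases rarely match pixel-for-pixel), and the resulting masks become the training labels for the detection network.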
After this network is trained, the rest is using it to predict which regions of a given frame have been censored, then using that prediction as an inpainting mask for an existing image generation model.
The inpainting approach would also let the inpainting network be swapped out as temporally stable image generation improves.