Trying to better understand options for faceswapping.

Talk about AI and Deep Learning


Locked
SpooglyWoogly
Posts: 3
Joined: Fri Nov 12, 2021 3:23 pm

Trying to better understand options for faceswapping.

Post by SpooglyWoogly »

New to the scene, and Faceswap is a great tool! I just started using it: after about 7 hours of training and 120,000 or so iterations, my results weren't great, but I'm sure that if I improved my data set and trained longer I could get better results (and learn some other tricks from reading the guides on here too).

But I'm curious about other options. I don't know where else to talk about this kind of stuff, so if there is another, more generic forum, please point me to it!

I believe there are solutions out there that take a single image and slap it onto any other face in a video. I'm not sure how good that method can be, but I'm wondering if it is at least faster than the current Faceswap process? What's deterring me is the amount of time it takes to collect good data and process it.

Using "One Shot Faceswap" as a search lead, I've come across some GitHub repos which seem promising, but I'm still figuring out how to use all this source, and the documentation seems like it was only useful to the person who created it.

I've found these:

  1. https://github.com/zyainfal/One-Shot-Fa ... Megapixels

and

  2. https://github.com/AliaksandrSiarohin/m ... gmentation

The second one I managed to get up and running with the demo, but again the documentation is lacking, so my results are really bad. It does run super fast, though! It seems to let people pick one source image and one video.

The first one I can't get running; it depends on another repo, and I'm not sure what the repo from that link is even changing/adding, or how to integrate it with StyleGAN as it indicates is required.

So in short, I'm looking for something faster and easier that gives pretty good results, something like Reflect.tech used to do (I actually never used it, but from reading online it seems like it was really good and worked off just a single image?).

More importantly, if the research path I'm on is a lost cause, I'd appreciate a sanity check here, or pointers to more modern/better leads. I'm sure that if what I'm asking for existed, everyone would be on it; but if it exists and it's just a matter of figuring out how to execute, I'd like to keep digging. I'm also a C++/C# developer, so GitHub isn't unfamiliar to me; just Python and machine learning are new territory.

bryanlyon
Site Admin
Posts: 793
Joined: Fri Jul 12, 2019 12:49 am
Answers: 44
Location: San Francisco
Has thanked: 4 times
Been thanked: 218 times

Re: Trying to better understand options for faceswapping.

Post by bryanlyon »

One-shot models do exist and can run a swap without requiring training. They've come and gone over time, and most of them are not really built for deployment; they're just academic projects. To that end, they can be difficult to get running reliably and rarely have any kind of support.

Even setting aside the typical lack of support in academic projects, one general problem with the one-shot technique is that, since it works off a single image, it has to get all the information for the swap from that one image. That prevents you from getting really good movement and generally leads to a lower quality swap. It tends to work better for photo swaps than for video, since a single image doesn't contain all the information needed to reproduce a high quality video swap.
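To make that tradeoff concrete, here's a minimal Python sketch of the general shape of a one-shot pipeline. Every function body is a placeholder (there is no real model here, and this is not any specific repo's API); the point is only that the identity is extracted once from a single image, which is what makes these swaps fast but information-starved.

```python
import numpy as np

def extract_identity(source_img):
    """Stand-in for a one-shot identity encoder: in real models this is a
    trained network that distills the source face from a SINGLE image."""
    return source_img.mean(axis=(0, 1))  # toy 3-vector "identity code"

def render_swap(frame, identity_code):
    """Stand-in for the generator that re-renders a target frame with the
    source identity. Real models warp/synthesize; this stub just returns
    the frame unchanged."""
    return frame

# One source image, many target frames: the identity is computed once,
# so the whole swap is fast -- but every expression and angle must be
# reconstructed from that single image, which is why quality suffers.
source = np.zeros((256, 256, 3), dtype=np.float32)
video_frames = [np.ones((256, 256, 3), dtype=np.float32) for _ in range(4)]

identity = extract_identity(source)           # done once, no training loop
swapped = [render_swap(f, identity) for f in video_frames]
```

Contrast this with Faceswap's approach, where a model trains on many images of both faces and so has far more information about how each face moves.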

SpooglyWoogly
Posts: 3
Joined: Fri Nov 12, 2021 3:23 pm

Re: Trying to better understand options for faceswapping.

Post by SpooglyWoogly »

bryanlyon wrote: Fri Nov 12, 2021 4:04 pm

One-shot models do exist and can run a swap without requiring training. They've come and gone over time, and most of them are not really built for deployment; they're just academic projects. To that end, they can be difficult to get running reliably and rarely have any kind of support.

Even setting aside the typical lack of support in academic projects, one general problem with the one-shot technique is that, since it works off a single image, it has to get all the information for the swap from that one image. That prevents you from getting really good movement and generally leads to a lower quality swap. It tends to work better for photo swaps than for video, since a single image doesn't contain all the information needed to reproduce a high quality video swap.

I'm sure you have a better understanding of the pros/cons of each. Honestly, though, the results from Reface (not that I know what paper they used) are still very, very good considering the data provided versus the result given.

Sounds as though the leads I have are pretty much as good as they're going to get until someone puts it all together and decides to make a repo with proper documentation.

I can spend some time trying to hash it out and see what I learn along the way. While researching, I'll keep playing with Faceswap to get a better understanding of the time-commitment-to-quality ratio it provides.

As a sanity check, though: are the One-shot-face-Swapping-on-Megapixels repo and its instructions enough to get it working?
I get lost when they start depending on other datasets, undocumented commands, and other repos. Part of learning is, of course, trying to learn as much as I can from others more experienced than me. I hope I'm asking the right questions too, of course.
