
Reface analogue

Posted: Tue Nov 03, 2020 1:57 pm
by sintanial

I'm trying to create an app like Reface App https://reface.ai/.

But I don't understand how their face swapping works so quickly. For a faceswap you need to train a model for more than 24 hours, yet Reface does the face change "on the fly"!


Re: Reface analogue

Posted: Tue Nov 03, 2020 5:53 pm
by torzdf

They use a different technique, which is a lot quicker, but a lot more limited in what it can do.


Re: Reface analogue

Posted: Thu Jan 21, 2021 3:38 pm
by lineage

I realize this is old and probably a dead end ... but yeah, same thought here.

Judging by your comment about the different-but-limited technique ... I'm curious if you have any more insight to share?

It just seems there is a clear and decisive difference in approach: a mere few seconds to get acceptable quality with any face photo applied to a selected 15 seconds of video.

I'm no expert, but it's clearly cloud processing, with what looks like a 3D head object per frame for the selected videos, and the face photo applied as a skin on top of that 3D head object (I'm sure there's more to it; just my neophyte observation).

The eye blinks, smiles, and facial gestures get really interesting ... but I imagine it's similar to the motion-tracking approach used by big studios. The software has more points to track for the movement, but the selected 15 seconds would be "simple" to crunch. In fact, I suspect it is based on the same core tech used in movies (except purely software-based, with no actual sensors on an actor's face) ... hence the many Hollywood vids made available for users to swap with.

Curious if any Faceswap member has done any work towards a Reface-style approach :)

And of course the biggest difference is they have money ... Hollywood money ... nuff said? :P


Re: Reface analogue

Posted: Thu Jan 21, 2021 3:54 pm
by bryanlyon

Pretty sure that Reface doesn't do anything in 3D at all. They use a morphing/latent-space search for their face swapping. Basically, they take the features of each face, morph them together, and put them back.
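Reface's code isn't public, so the following is only a toy numpy sketch of the latent-space morphing idea described above. The random matrices stand in for a pretrained encoder/decoder; the point is that the swap itself is a single forward pass with no per-pair training, which is why it can run in seconds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a pretrained encoder/decoder (here: random linear maps).
# A real system would train a deep network once, offline, on many faces.
W_enc = rng.standard_normal((64, 256))   # face features -> latent code
W_dec = rng.standard_normal((256, 64))   # latent code  -> face features

def encode(face):
    return W_enc @ face

def decode(latent):
    return W_dec @ latent

source_face = rng.standard_normal(256)   # features of the uploaded photo
target_face = rng.standard_normal(256)   # features of one video frame

# The swap: blend the two latent codes and decode the result.
alpha = 0.7  # how much of the source identity to keep
blended = alpha * encode(source_face) + (1 - alpha) * encode(target_face)
swapped = decode(blended)

print(swapped.shape)  # (256,)
```

Because nothing is trained at swap time, this is fast for any input photo — but, as noted above, it is limited to whatever the pretrained encoder/decoder already knows about faces.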


Re: Reface analogue

Posted: Thu Jan 21, 2021 5:12 pm
by lineage

That kinda sounds like what Faceswap is doing, but within seconds and at acceptable quality ...

Tbh, I was hoping that was the case here ... i.e. train like crazy on a source vid, clean up the face extracts/maps/landmarks ... then you'd be able to use any static photo and put that face onto the trained source vid.

Or am I missing something? Again, neophyte talking here ... appreciate the discussion :)


Re: Reface analogue

Posted: Thu Jan 21, 2021 6:48 pm
by bryanlyon

This is nothing like what Faceswap is doing.

Faceswap is "learning" the faces and an encoding that corresponds to the input pose, expression, and lighting (though still technically not as a 3D object). That's what the training is: building an understanding of what each face "is" and what it looks like in the various contexts. Because of this intense training, Faceswap is capable of much more than just a surface-level copy of facial features onto the original video. It's able to match things like HOW an actor might "blink" or do other actions. That's how you're able to get tics and actions that one person does into a video.

If all you're looking for is surface facial features copied, then yes, Reface is one of the tools out there that does that, but its techniques are radically different from how Faceswap works and what Faceswap tries to accomplish.
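To make the contrast concrete, here is a toy numpy sketch of the training scheme Faceswap-style tools use (NOT Faceswap's real code): one shared encoder produces a pose/expression/lighting code, and a separate decoder per identity learns to paint its own face back from that code. The swap then routes person A through person B's decoder:

```python
import numpy as np

rng = np.random.default_rng(1)

DIM_FACE, DIM_LATENT = 128, 32
W_enc = rng.standard_normal((DIM_LATENT, DIM_FACE)) * 0.1   # shared encoder
W_dec_a = np.zeros((DIM_FACE, DIM_LATENT))                  # decoder: person A
W_dec_b = np.zeros((DIM_FACE, DIM_LATENT))                  # decoder: person B

def train_step(W_dec, face, lr=0.01):
    """One gradient step on the squared reconstruction error.
    (Encoder kept fixed here for brevity; real training updates it too.)"""
    latent = W_enc @ face
    err = W_dec @ latent - face
    W_dec -= lr * np.outer(err, latent)
    return float(err @ err)

face_a = rng.standard_normal(DIM_FACE)   # a frame of person A
face_b = rng.standard_normal(DIM_FACE)   # a frame of person B

# The long, expensive phase: each decoder only ever sees its own person.
for _ in range(500):
    train_step(W_dec_a, face_a)
    train_step(W_dec_b, face_b)

# The actual swap is cheap: encode person A, decode with person B's decoder.
swapped = W_dec_b @ (W_enc @ face_a)
print(swapped.shape)  # (128,)
```

Because the encoder is shared, the latent code mostly carries pose/expression, while identity lives in each decoder — which is why decoder B repaints A's expression as B's face, and why the hours of training buy behaviors (blinks, tics) that a one-shot latent morph can't reproduce.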


Re: Reface analogue

Posted: Fri Nov 12, 2021 3:47 pm
by SpooglyWoogly

I was actually digging into this myself. I made a post, but seeing how this thread exists, it probably won't be shown, so I'll add some comments here.
I'm new here and pretty much know the most popular easy-to-use tools out there (I think?).

But I'm a C# and C++ developer, and I'm looking for the cutting-edge solutions.
Faceswap deters me because of the tediousness of collecting data (proper data) and the length of time it takes to process said data.

Reface is great, but you have to pay. Mirror.tech was free but is gone (I also never used it, but heard it was great).

My research led me to look into First Order Models (no idea what that is yet) and single-shot or one-shot face swapping.

This led me to two repos on GitHub which seem promising, but I'm not an ML expert or a Python developer, and the guides look like they were written for the creators themselves ...

I've found these:

  1. https://github.com/zyainfal/One-Shot-Fa ... Megapixels
  2. https://github.com/AliaksandrSiarohin/m ... gmentation

The second one I managed to get up and running with the demo, but again the documentation is lacking, so my results are really bad — it does run super fast, though! It seems to let people pick one source image and one video.

The first one I can't get running; it depends on another repo, and I'm not sure what the repo from that link is even changing/adding, or how to integrate it with StyleGAN as it indicates is required.

Can anyone shed light on getting either of those to run? I hope this is the right place to talk about this kind of stuff. If there is another area, or even a site, that goes into this kind of stuff more than here (in case this forum is just Faceswap-app-related only), please let me know.