Prioritize Eyes?


Prioritize Eyes?

Post by Tekniklee »

Despite attempting to gather source data that has eyes looking in different directions, I still get a lot of problems with eyes that either look strange or don't quite match the angle of the original video. It's a bit like trying to look at someone with a lazy eye. Will you be implementing some of the new DFL model settings, like Prioritize Eyes? Thanks.


Re: Prioritize Eyes?

Post by cosmico »

When I had a problem similar to this, it turned out to be a combination of not having enough images for both A and B (we're talking extremely low numbers here) and the source material I was deepfaking the face onto being pretty low res. The deepfake came out great and realistic in every other way, but the face was completely motionless and dead, including the eyes not moving.


Re: Prioritize Eyes?

Post by Tekniklee »

Yes, and I've gone back and done some things like that. I've learned a lot since first messing with swapping a few months back, and one of those things is to keep all of the images where only the eye positions change. I had previously gotten rid of anything where the faces were basically identical, not realizing that the eyes have it. Even so, I still get that "dead face" effect, alien eyes, eyes that point in the wrong direction, or (worse) in different directions.

I haven't found much info on what the new DFL options actually do, though. Does anyone here know about them? Also, do the models get updated before the interface does? In other words, is there a way to enable features like this in Faceswap before the GUI supports them, say by passing a command-line parameter, or are the models and the interface updated at the same time?


Re: Prioritize Eyes?

Post by torzdf »

To answer your questions in reverse order.....

The interface is updated at the same time.

I will review feature adds when I have a bit more time.

My word is final


Re: Prioritize Eyes?

Post by Nightwatch »

I'm really new to this, but I'm wondering.....
Since the program will already attempt to do a swap onto a person whose head has gone halfway out of shot....
For example, stood up quickly and you're getting a few seconds with only half a face....tip of a nose, mouth & chin...
The program still does a swap with varying results...I've noticed it sometimes still nails it.

I'm wondering if sort of the opposite applies in relation to fixing the eyes looking in the wrong directions thing....
What happens if you crop a face video so it's ONLY the eyes & extract it into the face folder that's already got all your extracted Full Face images in it?

Only reason I'm asking & not doing is that I got my computer from a brain-damaged snail who was upgrading to a faster model.
It takes ages to experiment & if I do too much it gets its period, goes off & starts eating chocolate & asking me if it looks pretty.


Re: Prioritize Eyes?

Post by Tekniklee »

I think the problem you note is not really a problem. When a model is swapped, only the parts of the face inside the frame are output, so anything falling outside (even half an eyeball) is okay. Ditto with training: anything outside of the frame is ignored. So the program doesn't attempt to either train on or swap areas that aren't in frame. As long as the alignments are able to recognize the part of the face that is in frame, everything should be good. That's not always the case, though, and if the alignment is hosed, the swap will be hosed.
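In loss terms, "ignored" presumably just means the out-of-frame pixels are masked so they never contribute to training. Here's a toy numpy sketch of that principle (my guess at the general mechanism, not Faceswap's actual code):

```python
import numpy as np

def masked_l2_loss(pred, target, in_frame_mask):
    """L2 reconstruction loss computed over in-frame pixels only.

    pred, target:  (H, W, C) float arrays.
    in_frame_mask: (H, W, 1) float array, 1.0 where the source frame
                   actually had pixels, 0.0 where the face ran out
                   of shot.
    """
    # Out-of-frame pixels are zeroed, so they add nothing to the loss.
    sq_err = ((pred - target) ** 2) * in_frame_mask
    # Normalise by the number of valid pixels so a half-out-of-frame
    # face isn't scored as artificially "easy".
    n_valid = in_frame_mask.sum() * pred.shape[-1]
    return sq_err.sum() / max(n_valid, 1e-8)
```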

I can tell you from experience that raw eye variation is critical to proper tracking. Many new users (i.e. me, several months ago) concentrate on just getting facial expressions and consider a face that is static except for the eye position to be unneeded. No. While it is true that nearly identical faces provide little training value, it turns out that eyes can provide value just by being in a different position. For eyes to track well you need lots of different eye positions, and this can be a real problem when most of your images are "face on" with the eyes staring into the camera. So you usually need to specifically hunt down training images with eye positions other than looking at the camera. In this regard, even bad (blurry, etc.) eye variation is better than too little eye variation.

As far as I can tell, when asked for output the trained model, being unable to find an exact match, produces the closest thing it has learned (someone please correct me if I'm wrong on this). So if you have too little variation during training, you get eyes looking in the wrong direction (i.e. different from the source), or, even worse, "lazy eye", where one eye may look okay but the other is off by a little or a lot.
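If you want to check how much gaze variety a faceset actually has, one crude heuristic is to find the darkest spot inside each eye box (roughly the pupil) and see where it sits horizontally. A rough sketch of that idea; it assumes you already have aligned grayscale crops and 68-point eye landmarks (e.g. from your alignments file; loading them is left out), and it's only a proxy, not real gaze estimation:

```python
import cv2
import numpy as np

def horizontal_gaze_proxy(gray_face, eye_points):
    """Crude gaze proxy: where the darkest spot (roughly the pupil)
    sits horizontally inside the eye box, from 0.0 (left edge of the
    eye) to 1.0 (right edge).

    gray_face:  2D uint8 grayscale face crop.
    eye_points: (N, 2) array of eye-contour landmarks in pixel
                coordinates, e.g. points 36-41 or 42-47 of the
                common 68-point layout.
    """
    x, y, w, h = cv2.boundingRect(eye_points.astype(np.int32))
    eye = gray_face[y:y + h, x:x + w]
    if eye.size == 0:
        return None
    eye = cv2.GaussianBlur(eye, (5, 5), 0)  # damp specular highlights
    _, _, min_loc, _ = cv2.minMaxLoc(eye)   # darkest pixel ~ pupil
    return min_loc[0] / max(w - 1, 1)
```

Run that over every face in the set and histogram the values: one tight spike near the middle means nearly every image is staring straight into the camera, which is exactly the too-little-variation case.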

What would be nice is for the training model to do a bit more work specifically on the eyes to reduce or eliminate this problem. The eyes comprise a very small area, but they are high value because they are critical to social engagement. We instinctively track where a person's eyes are looking (gaze tracking), and we use that to add context about the image and the environment. If the model could take an eye (the iris, actually, and assuming the eye is not closed) and move it horizontally under the eyelid opening during training, that would fill in these critical features completely. I think this is what the new "Prioritize Eyes" feature in DFL does, but I'm not really sure, since there doesn't appear to be any info on it yet.
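If I had to guess at the mechanism, "prioritizing" the eyes would simply mean upweighting the eye region in the reconstruction loss, rather than literally moving irises around. A Keras-style sketch of that guess (not DFL's actual code; eye_mask would be rasterized from the eye landmarks, and the multiplier value is made up):

```python
import tensorflow as tf

def eye_weighted_mse(y_true, y_pred, eye_mask, eye_multiplier=3.0):
    """Reconstruction loss that punishes errors in the eye region harder.

    y_true, y_pred: (batch, H, W, C) image tensors.
    eye_mask:       (batch, H, W, 1) tensor, 1.0 inside the eye area
                    (e.g. rasterised from the eye landmarks), 0.0 outside.
    eye_multiplier: how many times more an eye-region error counts.
    """
    sq_err = tf.square(y_true - y_pred)
    # Weight map is 1.0 everywhere and eye_multiplier inside the eyes,
    # so the optimiser spends extra effort getting the eyes right.
    weights = 1.0 + (eye_multiplier - 1.0) * eye_mask
    return tf.reduce_mean(sq_err * weights)
```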
