
Better A > B Swap than B > A on different trainer models (Newbie)

Posted: Mon Mar 09, 2020 11:55 pm
by MetaUserName

I've recently begun working with the Faceswap program and have had trouble getting good results when swapping face B onto A, while getting great results swapping A onto B. I have tried Dlight, DFL-H128, Original, and Lightweight and am getting the same results with each. In general, the results are:

  1. Face B swapped onto face A gives poorer results than face A swapped onto B.
  2. The loss for A is always greater than the loss for B.

The computer I am using is fairly low-end (Nvidia GeForce GTX 1050), so I am using both memory saving gradients and optimizer savings, as well as keeping batches no larger than 30.

Is there some trick I am missing during conversion/training, or is it common for A to have a greater loss than B?


Re: Better A > B Swap than B > A on different trainer models (Newbie)

Posted: Tue Mar 10, 2020 4:33 pm
by bryanlyon

For Dlight, this is an intended "feature" of the model. It saves memory on the A decoder in order to maximize the quality of the B side. This also applies to Realface and Unbalanced.

All the other models you mentioned are symmetrical: the A and B sides are identical, so you can swap in either direction. If you're seeing higher A loss on these models, it's probably due to your data.
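
To make the asymmetry concrete, here's a rough sketch of the idea in Keras. This is not the actual Dlight/Faceswap code; the resolution, filter counts, and layer names are made up purely for illustration. One shared encoder feeds both decoders, but the A decoder gets far fewer filters than the B decoder, so the B side keeps most of the reconstruction capacity.

# Illustrative sketch only -- NOT the real Faceswap/Dlight implementation.
# Shows the general shape of an asymmetric dual-decoder autoencoder:
# one shared encoder, a lightweight decoder for A and a larger one for B.
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)  # toy resolution chosen for the example


def build_encoder():
    # Shared encoder: both faces are mapped into the same latent space.
    inp = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    return Model(inp, x, name="shared_encoder")


def build_decoder(name, base_filters):
    # base_filters controls capacity: the A decoder is given fewer filters
    # than the B decoder, mirroring the "save memory on A" trade-off.
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * base_filters, activation="relu")(inp)
    x = layers.Reshape((16, 16, base_filters))(x)
    x = layers.Conv2DTranspose(base_filters, 5, strides=2, padding="same",
                               activation="relu")(x)
    x = layers.Conv2DTranspose(base_filters // 2, 5, strides=2, padding="same",
                               activation="relu")(x)
    out = layers.Conv2D(3, 5, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)


encoder = build_encoder()
decoder_a = build_decoder("decoder_a", base_filters=32)   # lightweight A side
decoder_b = build_decoder("decoder_b", base_filters=128)  # full-capacity B side

face_in = layers.Input(shape=IMG_SHAPE)
autoencoder_a = Model(face_in, decoder_a(encoder(face_in)), name="model_a")
autoencoder_b = Model(face_in, decoder_b(encoder(face_in)), name="model_b")

autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

autoencoder_a.summary()
autoencoder_b.summary()

With a split like that, the A reconstructions will always lag behind B regardless of how clean your data is, which is part of why the A/B loss gap on Dlight is expected.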


Re: Better A > B Swap than B > A on different trainer models (Newbie)

Posted: Tue Mar 10, 2020 8:43 pm
by MetaUserName
bryanlyon wrote: Tue Mar 10, 2020 4:33 pm

For Dlight, this is an intended "feature" of the model. It saves memory on the A decoder in order to maximize the quality of the B side.

I appreciate you mentioning this; the description for Dlight does not seem to indicate that it behaves this way.

I have continued to run my project and I am slowly getting better results using the DFL-H128 model. Face A still has a higher loss than face B, but face B has better lighting.


Re: Better A > B Swap than B > A on different trainer models (Newbie)

Posted: Sat Apr 25, 2020 5:20 pm
by Grassone

Hmmm, wait... wait... wait... this is kind of confusing.

The trainer window looks like this (at least, this is what I see in the pop-up balloons):

Does this mean that I am doing things the wrong way (if I want to keep good quality)?


Re: Better A > B Swap than B > A on different trainer models (Newbie)

Posted: Sun Apr 26, 2020 10:50 am
by torzdf

Your annotation is correct.


Re: Better A > B Swap than B > A on different trainer models (Newbie)

Posted: Sun Apr 26, 2020 12:22 pm
by Grassone

Sheeesssss...

This explains a lot of things... including, probably, the fact that faces on the "weak" side randomly come out looking like Marty Feldman...

Anyway, this had a good side effect: now I am cleaning my input data a lot more carefully.