
Loading weights and freezing layers on Phaze-A with split fc

Posted: Sun Jun 20, 2021 6:00 am
by swapration

When training a new Phaze-A model that keeps Face B but replaces Face A, with split fc enabled, should Fc B also be selected under load weights (in addition to the encoder), while Fc A is left unselected, since only Face B is kept the same?
Also, still keeping only Face B the same but using a shared G-Block, should G-Block be left unselected under weights, since it will need to learn the new Face A? Roughly, I mean something like the sketch below.
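
In case that isn't clear, here is my own shorthand for the pieces I mean (these names are mine, not the real Phaze-A option names) along with my current guess at the load-weights settings:

# My own shorthand for the branches (not the actual Phaze-A options):
#
#   face image -> encoder -> fc_a -> shared g_block -> ... -> Face A out
#   face image -> encoder -> fc_b -> shared g_block -> ... -> Face B out
#
# What I am unsure about is which of these to tick under "load weights"
# when only Face B is reused and Face A is replaced.  Current guess:
load_weights_guess = {
    "encoder": True,   # Face B is kept, so reuse the old encoder
    "fc_b": True,      # Face B side stays the same, so load it too?
    "fc_a": False,     # Face A is new, so start this branch fresh?
    "g_block": False,  # shared, but it has to learn the new Face A, so leave it?
}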


Re: Loading weights and freezing layers on Phaze-A with split fc

Posted: Sun Jun 27, 2021 10:32 am
by torzdf

I honestly don't know the answer to this question. Every time I try to give one, I see an argument for doing the opposite. The best I can offer is "experiment and see what happens".

I wish I could give you something more definitive.


Re: Loading weights and freezing layers on Phaze-A with split fc

Posted: Sun Jun 27, 2021 4:18 pm
by swapration

Ok, I'll experiment with it.


Re: Loading weights and freezing layers on Phaze-A with split fc

Posted: Sun Jun 27, 2021 11:03 pm
by swapration

Oh, one last question.
Why is it that when you reuse the encoder there's no identity leakage (as stated in the '[Guide] Training best practices...')?
Say you load and freeze the encoder weights from a face a1 + b1 model into a face a2 + b1 model; shouldn't face a1 data leak over?


Re: Loading weights and freezing layers on Phaze-A with split fc

Posted: Mon Jun 28, 2021 12:51 am
by torzdf

This is quite hard to explain, so I will keep it brief and point to some resources which should help you understand it better.

Long story short though, the encoder is responsible for creating a vector that describes the face it has seen, which the decoder then tries to create a face out of. It doesn't really contain any detail that you would recognize as a face. This is the reason why, when you run a swap and feed it Face A data, you get Face B faces out of it. If the identity were stored in the encoder, you would get Face A out the other end.
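
If it helps, here is a toy version of that idea in Keras. The layer sizes are made up and this is not the actual Phaze-A code, just the shared-encoder / two-decoder shape of the thing:

# Toy shared-encoder / two-decoder sketch (made-up sizes, not Phaze-A itself).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_encoder():
    inp = keras.Input(shape=(64, 64, 3))
    x = layers.Conv2D(32, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    # The "description vector": just numbers, nothing you would recognize as a face.
    encoding = layers.Dense(256)(x)
    return keras.Model(inp, encoding, name="encoder")

def build_decoder(name):
    inp = keras.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 64, activation="relu")(inp)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(32, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return keras.Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")
decoder_b = build_decoder("decoder_b")

# Training: each decoder only ever reconstructs its own identity.
# Swapping: push Face A images through the *B* decoder.  The identity comes
# out of the decoder, not the encoder.
face_a_batch = np.random.rand(4, 64, 64, 3).astype("float32")
swapped = decoder_b(encoder(face_a_batch))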

The other thing to bear in mind is that freezing the encoder is a method for "kickstarting" a model: it's a way to speed up training rather than starting from scratch. You don't keep the encoder frozen; you just use the encodings from a previous model to kickstart training the decoders. This is called transfer learning.

You can read more about Transfer Learning here (you can ignore the code stuff, but the introduction and description of the premise is useful): https://keras.io/guides/transfer_learning/
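
And here is a minimal sketch of that kickstart pattern, reusing the toy model above. This is the general Keras recipe rather than how Faceswap wires it up internally, and the weights file name is made up:

# Pretend this came from the old a1 + b1 model (file name is hypothetical).
encoder.save_weights("old_encoder.weights.h5")

# New model: fresh Face A decoder, but the encoder starts from the old weights...
new_encoder = build_encoder()
new_encoder.load_weights("old_encoder.weights.h5")
new_decoder_a = build_decoder("decoder_a2")

# ...and stays frozen while the brand-new decoder catches up.
new_encoder.trainable = False

inp = keras.Input(shape=(64, 64, 3))
autoencoder_a2 = keras.Model(inp, new_decoder_a(new_encoder(inp)))
autoencoder_a2.compile(optimizer="adam", loss="mae")
# ... train for a while ...

# Later, unfreeze and recompile so the whole model trains end to end.
new_encoder.trainable = True
autoencoder_a2.compile(optimizer=keras.optimizers.Adam(1e-5), loss="mae")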

Also, this video does a pretty good job of explaining what the encoder does within the wider scope of Faceswap (and they use my software too, which is nice :) )


Re: Loading weights and freezing layers on Phaze-A with split fc

Posted: Mon Jun 28, 2021 3:08 am
by swapration

Thank you for that! All the Phaze-A settings make so much more sense now. Although now other questions have come to mind... so much to test out.