How do you reuse training/data from B source face for a new A Video/face?

Posted: Tue Nov 08, 2022 6:38 pm
by CProdDigital

If I wanted to repeatedly use the same B face for new A videos, is there a way to reuse training/progress to speed up future uses?


Re: How do you reuse training/data from B source face for a new A Video/face?

Posted: Wed Nov 09, 2022 7:24 am
by billubakra
CProdDigital wrote: Tue Nov 08, 2022 6:38 pm

If I wanted to repeatedly use the same B face for new A videos, is there a way to reuse training/progress to speed up future uses?

Same question


Re: How do you reuse training/data from B source face for a new A Video/face?

Posted: Wed Nov 09, 2022 12:17 pm
by torzdf

Look into Freezing/Loading weights.

Covered in the training guide


Re: How do you reuse training/data from B source face for a new A Video/face?

Posted: Fri Jan 13, 2023 1:52 pm
by curiosisegreti
torzdf wrote: Wed Nov 09, 2022 12:17 pm

Look into Freezing/Loading weights.

Covered in the training guide

Two questions:
1) Could it be good practice to train an A/B model for a very high number of iterations to obtain a well-trained "B" face, and then reuse that B model via the "load weights" function for other swap projects? (C/B) (D/B) (..X/B)

2) If the first A/B is trained with one model (Original, Dlight, RealFace, etc.), must the C/B training with load/freeze weights applied use the same training model?

Sorry for my poor English, and thanks :)


Re: How do you reuse training/data from B source face for a new A Video/face?

Posted: Fri Jan 13, 2023 2:14 pm
by torzdf
curiosisegreti wrote: Fri Jan 13, 2023 1:52 pm

1) Could it be good practice to train an A/B model for a very high number of iterations to obtain a well-trained "B" face, and then reuse that B model via the "load weights" function for other swap projects? (C/B) (D/B) (..X/B)

Not something I have tried, but yes, theoretically that would be possible (load + freeze everything except the A decoder, train the A decoder for a length of time, then unfreeze the whole model).
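
In plain Keras terms, the idea looks roughly like this. This is only a sketch, not Faceswap's internal code; the file name and the "decoder_a" layer-name prefix are assumptions for illustration:

Code: Select all

# Minimal Keras sketch of the freeze -> train -> unfreeze idea above.
# NOT Faceswap's internal code; "ab_model.h5" and the "decoder_a" name
# prefix are illustrative assumptions.
from tensorflow import keras

model = keras.models.load_model("ab_model.h5")

# Phase 1: freeze everything except the A decoder so only it adapts to the new faces.
for layer in model.layers:
    layer.trainable = layer.name.startswith("decoder_a")
model.compile(optimizer=keras.optimizers.Adam(5e-5), loss="mae")
# ... train on the new C/B data for a while ...

# Phase 2: unfreeze the whole model and continue training end to end.
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=keras.optimizers.Adam(5e-5), loss="mae")  # recompile so the change takes effect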

2) If the first A/B is trained with one model (Original, Dlight, RealFace, etc.), must the C/B training with load/freeze weights applied use the same training model?

Theoretically, no: you could reuse weights from a different model, to a certain extent. In practice, though, it would need to be the same model with the same settings, as that is all Faceswap allows (the complexity of developing an automated solution for cross-model weight transfer is too high). So unless you are prepared to do some model surgery yourself, outside of Faceswap, stick to the same model.
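
For the curious, "model surgery" would mean something like copying weights layer by layer wherever the shapes happen to line up. A rough sketch (hypothetical file names, and no guarantee the result trains well):

Code: Select all

# Rough sketch of cross-model weight transfer outside Faceswap: copy weights
# only where layer shapes match. File names are hypothetical.
from tensorflow import keras

src = keras.models.load_model("original_ab.h5")
dst = keras.models.load_model("dlight_cb.h5")

for s, d in zip(src.layers, dst.layers):
    sw, dw = s.get_weights(), d.get_weights()
    if len(sw) == len(dw) and all(a.shape == b.shape for a, b in zip(sw, dw)):
        d.set_weights(sw)  # transfer compatible layers; skip the rest

dst.save("dlight_cb_transplanted.h5")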

It's worth noting that only Phaze-A allows for freezing/loading weights for different parts of the model. In Faceswap, the other models only allow for freezing/loading the encoder.


Re: How do you reuse training/data from B source face for a new A Video/face?

Posted: Fri Jan 13, 2023 5:17 pm
by curiosisegreti

Not something I have tried, but yes, theoretically that would be possible (load + freeze everything except the A decoder, train the A decoder for a length of time, then unfreeze the whole model).

Thanks for the clarification, but something is still not clear: how does Faceswap know that, in my new C/B project, I want the B face from the long-trained A/B model?
There are two faces in that model, of course, and in the "load weights" feature I just choose an .h5 file without any other parameter.


Re: How do you reuse training/data from B source face for a new A Video/face?

Posted: Fri Jan 13, 2023 10:29 pm
by couleurs
curiosisegreti wrote: Fri Jan 13, 2023 5:17 pm

how does Faceswap know that, in my new C/B project, I want the B face from the long-trained A/B model?

Look in the "load weights"/"freeze weights" section in the Phaze-A options.

Which parts to load and freeze depends on your model. For an Fc (both) + Decoder A / Decoder B architecture like SYM384, you would:

- Load and optionally freeze the Encoder (the common encoder between the two models)
- Optionally load, but don't freeze, Decoder A (this is the new decoder if going A/B -> C/B like you are; you would invert this and Decoder B if going A/B -> A/C)
- Load and freeze Decoder B (the common decoder between the two models)
- Load and freeze Fc (both) (the common Fc layer)
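
In plain Keras terms, that recipe looks roughly like the sketch below. It is only an illustration: the sub-model names ("encoder", "fc_both", "decoder_a", "decoder_b") and the file name are assumptions, not Faceswap's actual identifiers.

Code: Select all

# Sketch of the load/freeze plan above, assuming the model is built from
# four named sub-models. Names and file paths are illustrative, not Faceswap's.
from tensorflow import keras

old = keras.models.load_model("ab_model.h5")   # the long-trained A/B model
new = keras.models.clone_model(old)            # same architecture, fresh weights

# True = load and freeze (shared parts); False = load but keep trainable
# (the new A-side decoder that must learn the C face).
plan = {"encoder": True, "fc_both": True, "decoder_b": True, "decoder_a": False}
for name, freeze in plan.items():
    new.get_layer(name).set_weights(old.get_layer(name).get_weights())
    new.get_layer(name).trainable = not freeze

new.compile(optimizer=keras.optimizers.Adam(5e-5), loss="mae")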