How do you reuse training/data from B source face for a new A Video/face?
If I wanted to repeatedly use the same B face for new A videos, is there a way to reuse training/progress to speed up future uses?
- CProdDigital
- billubakra
Re: How do you reuse training/data from B source face for a new A Video/face?
CProdDigital wrote: ↑ Tue Nov 08, 2022 6:38 pm
If I wanted to repeatedly use the same B face for new A videos, is there a way to reuse training/progress to speed up future uses?
Same question
Re: How do you reuse training/data from B source face for a new A Video/face?
Look into Freezing/Loading weights.
Covered in the training guide
My word is final
- curiosisegreti
Re: How do you reuse training/data from B source face for a new A Video/face?
torzdf wrote: ↑ Wed Nov 09, 2022 12:17 pm
Look into Freezing/Loading weights.
Covered in the training guide
Two questions:
1) Would it be good practice to train an A/B model for a very high number of iterations to obtain a well-trained "B" face, and then reuse that B model via the "load weights" function for other swap projects? (C/B, D/B, … X/B)
2) If the first A/B pair is trained with one model (Original, DLight, RealFace, etc.), must the C/B training that loads/freezes those weights use the same training model?
Thanks
Re: How do you reuse training/data from B source face for a new A Video/face?
curiosisegreti wrote: ↑ Fri Jan 13, 2023 1:52 pm
1) Would it be good practice to train an A/B model for a very high number of iterations to obtain a well-trained "B" face, and then reuse that B model via the "load weights" function for other swap projects? (C/B, D/B, … X/B)
Not something I have tried, but yes, theoretically that would be possible (load + freeze everything except the A decoder, train the A decoder for a length of time, then unfreeze the whole model).
2) If the first A/B is trained with a model (original, dlight, realface etc) the C/ B training with load/freeze weights applied, must have the same training model?
Theoretically, no: you could use weights from different models, to a certain extent. In reality, though, it would need to be the same model with the same settings, as this is all that Faceswap allows (the complexity of developing an automated solution for cross-model weight transfer is too high). So unless you are prepared to do some model surgery yourself, outside of Faceswap, stick to the same model.
It's worth noting that only Phaze-A allows for freezing/loading weights for different parts of the model. In Faceswap, the other models only allow for freezing/loading the encoder.
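To illustrate the general idea of per-part loading and freezing (this is a toy sketch, not Faceswap's actual code; the part names and the `build_plan` helper below are hypothetical), a model split into named sub-parts can be given a plan saying which parts get donor weights and which stay trainable:

```python
# Toy sketch of selective weight loading/freezing, assuming the model
# is stored as named sub-parts. Names ("encoder", "decoder_a",
# "decoder_b") and the helper are illustrative, not Faceswap internals.

def build_plan(parts, load, freeze):
    """Return {part: (loaded, trainable)} for the new model."""
    return {part: (part in load, part not in freeze) for part in parts}

# Reusing the B side of an old A/B model for a new C/B swap:
parts = ["encoder", "decoder_a", "decoder_b"]
plan = build_plan(parts,
                  load={"encoder", "decoder_b"},    # reuse shared parts
                  freeze={"encoder", "decoder_b"})  # keep them fixed
# decoder_a is neither loaded nor frozen, so it trains from scratch on C
```

The key point is that each part is handled independently, which is what Phaze-A exposes and the other models do not.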
My word is final
- curiosisegreti
Re: How do you reuse training/data from B source face for a new A Video/face?
torzdf wrote: Not something I have tried, but yes, theoretically that would be possible (load + freeze everything except the A decoder, train the A decoder for a length of time, then unfreeze the whole model).
Thanks for the clarification, but something is still not clear: how does Faceswap know that, for my new C/B project, I want the B face from the long-trained A/B model?
There are two faces in that model, of course, and I just choose an .h5 file in the "load weights" feature without any other parameter.
Re: How do you reuse training/data from B source face for a new A Video/face?
curiosisegreti wrote: ↑ Fri Jan 13, 2023 5:17 pm
How does Faceswap know that, for my new C/B project, I want the B face from the long-trained A/B model?
Look in the "load weights"/"freeze weights" section in the Phaze-A options.
Which parts to load + freeze depends on your model. For an Fc (both) + Decoder A / Decoder B architecture like SYM384, you would:
- Load and optionally freeze the Encoder (the encoder shared between the two models)
- Optionally load, but don't freeze, Decoder A (this is the new decoder when going A/B -> C/B like you are; you would invert this and Decoder B if going A/B -> A/C)
- Load and freeze Decoder B (the decoder shared between the two models)
- Load and freeze Fc (both) (the shared Fc layer)
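The steps above can be sketched as a small plan table (a conceptual illustration only; the part names are not actual Faceswap layer identifiers):

```python
# Sketch of the load/freeze plan described above for an
# Fc (both) + Decoder A / Decoder B model (e.g. SYM384),
# reusing B from an old A/B model in a new C/B project.
# Part names are illustrative, not Faceswap identifiers.

AB_TO_CB = {
    # part:      (load,  freeze)
    "encoder":   (True,  True),   # shared encoder (freeze is optional)
    "fc_both":   (True,  True),   # shared Fc layer
    "decoder_a": (False, False),  # new face (C): train from scratch
    "decoder_b": (True,  True),   # reused face (B): keep as-is
}

def swap_sides(plan):
    """Invert the decoder handling for an A/B -> A/C project instead."""
    flipped = dict(plan)
    flipped["decoder_a"], flipped["decoder_b"] = (
        plan["decoder_b"], plan["decoder_a"])
    return flipped
```

The `swap_sides` helper just mirrors the note above: going A/B -> A/C instead of A/B -> C/B swaps which decoder is reused and which is trained fresh.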