My question might be silly, but is there a way to disable B>A training and just keep A>B?
The idea behind this is to speed up training.
I use an NVIDIA GTX 1660 Ti 6GB and mostly train with Original (batch 64) & DFaker (batch 16).
No, this is not possible.
The reason is that the model isn't really training B->A or even A->B at all; it only trains A->A and B->B through a shared encoder, and the swap is achieved later by routing faces through the other side's decoder.
The model must learn both A and B in order to learn how to swap between the two.
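To illustrate the idea, here is a minimal, hypothetical Keras sketch of the shared-encoder / dual-decoder setup described above. It is not Faceswap's actual model code: the 64x64 resolution, layer sizes, and names such as build_encoder and decoder_a are placeholders chosen just to show the structure.

[code]
# Minimal sketch of the shared-encoder / dual-decoder idea (not Faceswap's real code).
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (64, 64, 3)  # toy resolution for illustration only

def build_encoder():
    inp = keras.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(32, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(128, activation="relu")(x)
    return keras.Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    inp = keras.Input(shape=(128,))
    x = layers.Dense(16 * 16 * 64, activation="relu")(inp)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(32, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return keras.Model(inp, out, name=name)

encoder = build_encoder()            # one encoder shared by both sides
decoder_a = build_decoder("decoder_a")
decoder_b = build_decoder("decoder_b")

inp = keras.Input(shape=IMG_SHAPE)

# Training is A->A and B->B: each autoencoder reconstructs its own faces,
# but both pass through the same encoder weights.
autoencoder_a = keras.Model(inp, decoder_a(encoder(inp)), name="a_to_a")
autoencoder_b = keras.Model(inp, decoder_b(encoder(inp)), name="b_to_b")
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")
# autoencoder_a.fit(faces_a, faces_a, ...)  # reconstruct A faces
# autoencoder_b.fit(faces_b, faces_b, ...)  # reconstruct B faces

# At swap time the decoders are exchanged: B faces are encoded with the
# shared encoder and decoded with A's decoder. This only produces a usable
# result because the encoder has learned both identities.
swap_b_to_a = keras.Model(inp, decoder_a(encoder(inp)), name="swap_b_to_a")
[/code]

Because the shared encoder is what makes the two decoders interchangeable, dropping one side of the training would leave nothing for the other side's decoder to work from, which is why there is no option to train only A>B.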