Can someone explain why there is an option to adjust the convert batch size in the global training settings?
Does this have to do with the convert that is shown in the preview?
Honestly, it was added a long time ago, but it makes very little difference to conversion speed (during convert the bottleneck is the CPU, not the GPU). The setting just dictates how many faces are fed through the model at once to get the swap on each call.
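To make that concrete, here's a minimal sketch of what a convert batch size controls: faces are grouped into chunks of at most `batchsize` and each chunk goes through the model in a single call. The function and variable names here are purely illustrative, not Faceswap's actual API.

```python
def batch(items, batchsize):
    """Yield successive chunks of at most `batchsize` items."""
    for i in range(0, len(items), batchsize):
        yield items[i:i + batchsize]

def run_model(face_batch):
    # Stand-in for the model's forward pass that swaps a batch of faces.
    return [f"swapped-{face}" for face in face_batch]

faces = [f"face{i}" for i in range(10)]
swapped = []
for chunk in batch(faces, batchsize=4):  # 3 calls: chunk sizes 4, 4, 2
    swapped.extend(run_model(chunk))

print(len(swapped))  # → 10
```

A larger batch size means fewer model calls per frame batch, but since the CPU-side work (detecting, aligning, and writing frames) dominates during convert, raising it rarely speeds things up.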