Also, I noticed this the other day:
Distributed training with a batch size of 14, versus only GPU 1 with a batch size of 7.
Shouldn't the distributed run at batch 14 show roughly 2x the EGs/s of the single GPU at batch 7?
You get a speed-up by increasing your batch size.
The same batch size on a single GPU or across multiple GPUs is likely to run at about the same speed, or maybe slightly slower.
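A minimal sketch of that point (not Faceswap code; every step time below is an assumed, illustrative number): EGs/s is just the global batch size divided by the wall-clock time of one training step, and the claim above is that a given batch size takes roughly the same step time whether it runs on one GPU or is split across two, perhaps slightly longer with the gradient-sync cost of a second card.

```python
def egs_per_sec(batch_size, step_time_s):
    """Faces processed per second = examples per step / seconds per step."""
    return batch_size / step_time_s

# Assumed step times, for illustration only:
scenarios = {
    "1 GPU,  batch 7":  (7,  0.20),   # hypothetical baseline
    "1 GPU,  batch 14": (14, 0.36),   # bigger batch -> longer step
    "2 GPUs, batch 14": (14, 0.38),   # about the same, plus a little sync cost
}

for name, (batch, step_time) in scenarios.items():
    print(f"{name}: {egs_per_sec(batch, step_time):.1f} EGs/s")
```

With these made-up numbers the distributed batch-14 run comes out faster than the single-GPU batch-7 run, but nowhere near 2x, which is the point being made: the gain comes from being able to run a larger batch, not from the second GPU by itself.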
I misread your message. I don't have a multi-GPU setup, so I can't compare and don't know where the "sweet spot" is. Hopefully someone who does have a multi-GPU setup can offer some insight.