So, I bought a used 3090 to complement my 3080 Ti (so I can do other things while messing around with machine learning). The idea was to use the 3090 for training and switch the 3080 Ti off, so I can use it for gaming or whatever.
Faceswap reads that there's 36GB of memory, and I can go from a batch size of 1 to a batch size of 3 on the Default strategy, but when I switch to Mirrored it actually seems to slow down. Is that some kind of placebo effect, or is it really slowing down? And if it's left on Default, is it still using both GPUs? It seems like it is, and I thought it was only supposed to use one.
Also, I have my 3080 Ti plugged into PCIe slot 1 and the 3090 in slot 2 (for cooling purposes). Will that matter to how Faceswap uses the GPUs?
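(In case it helps anyone answering: I know that outside of Faceswap's own GPU-exclusion settings, CUDA apps can be restricted to one card via environment variables. This is a minimal sketch, assuming the framework is imported after the variables are set; the device index `1` is an assumption about my slot ordering, not something I've verified, and `nvidia-smi` should be checked for the real mapping.)

```python
import os

# Hypothetical setup: expose only one GPU to the process before any
# deep-learning framework (e.g. TensorFlow) is imported.
# CUDA_DEVICE_ORDER=PCI_BUS_ID makes CUDA enumerate devices by PCIe slot,
# matching what nvidia-smi shows, instead of by "fastest first".
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

# "1" here is an assumed index for the 3090 in the second slot --
# check nvidia-smi on your own machine before relying on it.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

With only one device visible, a framework's default (non-mirrored) strategy has nothing else to spill onto, which would answer the "is Default still using both GPUs?" question for that process.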