Hi!
I've been working on face swap projects recently, and I have some questions about using multiple GPUs.
First, what exactly do I need to do to run my face swap project on several GPUs? I haven't purchased them yet (I will in a few weeks), so I can't try it myself right now. Is it enough to set something like "export CUDA_VISIBLE_DEVICES=0,1,2" and have it run successfully, or do I need to change something in the code?
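To be clear about what I mean, this is roughly what I had in mind (just my guess, since I can't test it yet). As far as I know, CUDA_VISIBLE_DEVICES only controls which GPUs the process can see, and it has to be set before anything initialises CUDA:

import os

# Expose only GPUs 0, 1 and 2 to this process. This must happen before
# TensorFlow/PyTorch (or anything else that initialises CUDA) is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2"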
Second, if I want to run one face swap project across multiple devices/computers to get a big speedup, is there a way to do that? Maybe by sending data over the network? Also, how can I tell how the work is assigned to the different GPUs? Can someone explain how the GPUs are chosen and how the original code handles this?
Thank you!
(First time asking, so I'm not sure if this is the proper place. If not, sorry~)
For training there is a "GPUs" option. Just select the number of GPUs you want to use there, and make sure the batch size you set for training is divisible by the number of GPUs available, since each GPU processes an equal share of every batch (see the sketch below).
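If it helps, here is a rough sketch of the general data-parallel pattern that makes the divisibility requirement concrete. This is not the project's actual training code; the toy model, shapes, and batch numbers are made up purely for illustration, using Keras's tf.distribute.MirroredStrategy as one example of the approach:

import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU and
# splits each global batch evenly between the replicas.
strategy = tf.distribute.MirroredStrategy()
num_replicas = strategy.num_replicas_in_sync

with strategy.scope():
    # Toy stand-in network; a real face swap model is far larger.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(128),
    ])
    model.compile(optimizer="adam", loss="mse")

# Each replica receives batch_size / num_replicas samples per step,
# which is why the global batch size must divide evenly by the GPU count.
per_gpu_batch = 16
batch_size = per_gpu_batch * num_replicas

x = np.random.rand(batch_size * 10, 128).astype("float32")
y = np.random.rand(batch_size * 10, 128).astype("float32")
model.fit(x, y, batch_size=batch_size, epochs=1)

The same idea applies whichever framework a project uses: the work is split per batch, every GPU runs an identical copy of the model, and the gradients are combined before the weights are updated.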
There is currently no way to do distributed training across multiple machines, but we do have multi-GPU improvements in the pipeline.