So I'm totally new to ML and faceswapping, but I wanted to give it a go and have no access to a decent GPU. I decided to try using Google Cloud GPUs. It worked, so hopefully I can get some feedback on improving my process from some more knowledgeable tweakers. Hopefully someone knowledgeable in hardware like bryanlyon can provide an opinion.
For my first try I provisioned a GCP VM with an NVIDIA Tesla P100 GPU. GCP also offers the K80, P4, and T4. I don't know the differences and I wanted something quick, so I just picked the preconfigured "MXNet 1 Python 3.6 NVidia GPU Production" image from Jetware in the marketplace. It's a one-click install with CUDA and Python already set up. (Here's the link: https://console.cloud.google.com/market ... are-public) If anyone can recommend a better preconfigured VM, I'd appreciate it.
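If you'd rather skip the marketplace and provision the GPU VM straight from the CLI, something like the following should work. This is just a sketch: the instance name, zone, machine type, and disk size are my own assumptions, and the image family shown is one of Google's Deep Learning VM images rather than the Jetware one, so adjust to whatever your project and GPU quota allow.

```shell
# Hypothetical example: create a P100 VM from the gcloud CLI.
# Zone/machine-type/image-family are assumptions -- check your own quota.
gcloud compute instances create faceswap-trainer \
  --zone=us-central1-c \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-p100,count=1 \
  --maintenance-policy=TERMINATE \
  --boot-disk-size=100GB \
  --image-family=common-cu110 \
  --image-project=deeplearning-platform-release \
  --metadata="install-nvidia-driver=True"
```

Note that GPU instances must use `--maintenance-policy=TERMINATE` (they can't live-migrate), and the `install-nvidia-driver=True` metadata key tells the Deep Learning VM image to install the driver on first boot.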
I left it training overnight, for a total of 12 hours, and it "cost me" $12. Not bad, especially since Google gives you $300 in free credit for each account you have.
I learned quickly that data transfer time would be a killer. Luckily, you can provision a Cloud Storage bucket (GCP's equivalent of S3) for pennies per GB, upload all your images to it ahead of time from somewhere with a strong connection (work?), and simply mount the bucket as a drive on your GPU-enabled VM...voila, instant training the moment you provision the VM, and very cheap.
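In case it helps anyone, here's roughly what that bucket workflow looks like on the command line. The bucket name and mount point are made up; the tools themselves (`gsutil` for upload, `gcsfuse` for mounting) are standard GCP utilities, though `gcsfuse` may need to be installed on the VM first.

```shell
# 1) One-time, from a machine with a fast connection:
#    create a Cloud Storage bucket and upload the training images.
gsutil mb gs://my-faceswap-data
gsutil -m cp -r ./faces gs://my-faceswap-data/

# 2) On the GPU VM: mount the bucket as a local directory with gcsfuse,
#    so the training script sees the images as ordinary files.
mkdir -p ~/faces
gcsfuse my-faceswap-data ~/faces
```

The `-m` flag runs the upload in parallel, which matters a lot with thousands of small face images. One caveat worth knowing: reads through gcsfuse are slower than local disk, so for longer training runs it may be faster to `gsutil cp` the data onto the VM's boot disk instead of mounting.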
I'm getting decent results, though I'm only doing some very basic short clips at fairly low res until I build up experience. Ideas on improving this workflow?
One downside of the GCP option for newbies like me is that you can only use the command line, so you can't really tell visually how the training is going, nor use the GUI. For a cloud GUI-enabled option, perhaps you could provision an AWS Graphics Pro WorkSpace (NVIDIA Tesla M60 GPU)...it's more expensive and doesn't fall within the free tier ($66/month + $11.62/hour, or $999/month flat). If anyone's tried it, let me know.