Despite attempting to run Dfaker with a batch size of 1, I am unable to use the model without the trainer crashing due to VRAM limitations. My computer has an NVIDIA GeForce GTX 1050 (not the Ti, so only 2 GB of VRAM). I know I'm running on low-end hardware, but is there any way I can get the model to run?
Settings I am using:
Batch size 1
Warp to landmarks
Allow Growth
Memory saving gradients
Optimizer savings
I have tried it with and without ping-pong training; same result.
If it is just not possible for me to use the Dfaker model on my computer, what is the recommended alternative?
P.S. It's probably a dumb question, but how does the unbalanced trainer work? Is it used to swap face A onto B, or B onto A? The language is a little confusing to me.