I tried VGG because BiSeNet wasn't working quite the way I wanted, but VGG wasn't really better - I should have just redone BiSeNet.
I forgot you can just do the missing ones! Will remember for next time.
Yup, I had to change it back to admin to train, because otherwise it dies after a while with "file in use" errors. And when I run it as admin, it only uses the CPU - no idea why that makes a difference.
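For what it's worth, if the crash really is a transient Windows lock on the model file, retrying the write after a short delay sometimes works around it. This is a minimal sketch of that idea, assuming the failure surfaces as a `PermissionError`; `save_with_retry` is a hypothetical helper for illustration, not faceswap's actual save code:

```python
import time


def save_with_retry(path, data, retries=5, delay=0.5):
    """Write `data` to `path`, retrying if the file is temporarily
    locked (Windows "file in use" shows up as PermissionError).
    Hypothetical sketch - not faceswap's real save routine."""
    for _attempt in range(retries):
        try:
            with open(path, "wb") as handle:
                handle.write(data)
            return True
        except PermissionError:
            time.sleep(delay)  # back off and let the other process release the lock
    return False  # gave up after all retries
```

Running as admin probably just masks whatever is holding the lock (antivirus scanners are a common culprit), so excluding the model folder from real-time scanning might be worth a try too.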
Oof. Ok, I'll look closer next time
It doesn't say that exactly, but the FAQ says it's supposed to download and install the update, but that's never worked for me. Every time I want the new version, I've had to install from scratch. What am I doing wrong?
For whatever reason, either setting Python and faceswap to run as admin or setting it to save every 1000 iterations seems to have helped.
It only made it about 100 more iterations before crashing again.
Again at 2001 its.
scipy-cgk40_w2 @ 3001 its
scipy-5ekc3fh2 @ 3252
And so on.
Fair enough. So I need more source material with that angle. You're right that the eye thing is model-dependent, though. I've only ever seen it with DLight.
Would it help if I used more high-resolution photos instead of medium-res video frames?
What do I do about the blur though? I'm pretty sure I'm good with 512x512 source faces that are clear for the most part. Is there a better model for sharpness? Or do I need to push upwards of 400k on training for that?
Cool! I'll try it, thanks!
EDIT: That seems to work! Thank you!
I hit it today on an extraction. For settings, the file dump would be good enough wouldn't it? As for source, how would I provide that? Google Drive or something?