Unfortunately, no, at this moment in time there is no quick way to mask obstructions. It is a laborious face-by-face task....
It is most important for convert, although it can help training too.
It is something on my road map to address.
Honestly? I'd need to investigate why that hard coded value was put in in the first place. My preference would just be to remove it altogether, as I am not particularly a fan of setting environment variables inside applications.
The FAN model file has become corrupted. Most likely during download....
Delete the file:
faceswap/plugins/extract/align/.cache/face-alignment-network_2d4_keras_v2.h5
and try again
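If you're on Linux/macOS, the deletion from the steps above can be done in one command (path is relative to the faceswap repository root; `-f` just suppresses the error if the file is already gone):

```shell
# Remove the cached FAN model so Faceswap re-downloads it on the next run
rm -f faceswap/plugins/extract/align/.cache/face-alignment-network_2d4_keras_v2.h5
```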
I would say that editing the code directly is rarely the best idea, as it can prevent Faceswap from updating.
Go into Training Settings and try reducing the Convert Batchsize
Honestly? It's entirely up to you: whichever works best for your workflow.
Personally I tend to stick to separate sources, but there is no reason not to compile all your sources together.
Sure it can, but it's going to require work on your part, and is not a feature we are likely to ever build in.
Ultimately, it is a masking problem. You have access to the 68 point face landmarks, so you can create "sub-masks" for any area of the face from that, and use those for conversion.
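A minimal sketch of the sub-mask idea, not Faceswap's own masking code: it rasterises one landmark region into a binary mask with a plain ray-casting point-in-polygon test. The region slice follows the standard 68-point layout (outer mouth at points 48-59); the toy landmark coordinates and image size are made up for illustration.

```python
import numpy as np

# Outer-mouth indices in the standard 68-point landmark layout
MOUTH = slice(48, 60)

def polygon_mask(points, height, width):
    """Rasterise a polygon (Nx2 array of x,y points) into a binary mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    xs, ys = points[:, 0], points[:, 1]
    n = len(points)
    for y in range(height):
        for x in range(width):
            inside = False
            j = n - 1
            for i in range(n):
                # Classic ray-casting test: count edge crossings to the left
                if ((ys[i] > y) != (ys[j] > y)) and \
                   (x < (xs[j] - xs[i]) * (y - ys[i]) / (ys[j] - ys[i]) + xs[i]):
                    inside = not inside
                j = i
            mask[y, x] = 1 if inside else 0
    return mask

# Toy landmarks: only the "mouth" points are filled in, as a rough ring
landmarks = np.zeros((68, 2))
landmarks[MOUTH] = [(2, 2), (4, 2), (6, 2), (7, 4), (7, 6), (6, 7),
                    (4, 7), (2, 7), (1, 6), (1, 4), (1, 3), (2, 2)]
mask = polygon_mask(landmarks[MOUTH].astype(int), 10, 10)
```

In practice you would build one such mask per region you want to keep or exclude, then combine them and feed the result into convert in place of the full face mask.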
Yeah. It's a GUI bug. Doesn't impact training, just stops the graph updating for that session.
I'll fix it one day.
That has nothing to do with merging alignments.
Merging alignments files was part of an old training process. You're trying to convert.
In the first instance, run Faceswap in VERBOSE mode and you should get a message from Tensorflow telling you why the GPU is not being used.
You can set an env variable by doing:
VARIABLE=value python faceswap.py ...
The training previews are generated in exactly the same way as the converted image.
Preview image is tiny, convert can be huge.
Save a copy of the preview image during training and zoom into it. It should give you a better idea of why you are seeing the differences you are.
You can't delete the "right eye". All 68 Landmarks need to be present. Therefore for profile shots the hidden landmarks will tend to "stack" onto the side of the face.
As long as the face image looks aligned, this is fine.
Most likely you are using an alignments file that had an "Extract Every N" number set to higher than 1.
Re-check the convert guide for how to do it right.
Ok. So most likely there is a "NaN" in your model. I'm merging this topic with a similar discussion....
Does this happen immediately, or several faces into your convert?