This should be fixed in the latest update.
I have not dug into it in any detail, but there is no evidence to suggest that multi-scale output helps in any way.
Yes, you can do it however you want. The built-in version is just a convenience tool.
I don't recommend outputting to JPEG though (use PNGs).
Go to the faceswap folder, and look for a file called (something like) gui_win_launcher.bat.
Open the file in notepad and copy the contents.
Open up a command prompt, paste the contents of the file, and press Enter. You should now be able to see the issue without the window closing.
Dlight is notoriously unstable. Try lowering the learning rate and starting again.
Dlight is notoriously unstable. Try lowering the learning rate.
Tell a lie. This was an easy bug to fix, so it's been fixed. Please update.
That model looks corrupted, for whatever reason. Which model are you using?
Don't use on-the-fly conversion. It's not good, and I'm not going to prioritize looking at it.
Thanks for the heads up. This should be fixed in latest commit.
If you're using the GUI, go to Help > Update Faceswap.
If you are using the CLI, run python update_deps.py
Sometimes OpenCV doesn't pull in during install for unknown reasons.
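If you want to confirm whether OpenCV actually made it into your environment, a quick sanity check (assuming the standard opencv-python package name) is:

```python
def check_opencv() -> str:
    """Return the installed OpenCV version, or a hint if it is missing."""
    try:
        import cv2  # provided by the opencv-python package
        return cv2.__version__
    except ImportError:
        return "OpenCV missing - try: pip install opencv-python"

print(check_opencv())
```

If this prints the missing-package hint, reinstalling into the same environment usually resolves it.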
If you cannot provide the model, then please at least provide the state.json file from your model folder. This works fine for me.
Please provide the crash report from your Faceswap folder (if one was created).
No, they are not. The extracted face is "aligned" based on the position of the face landmarks. Ideally the landmarks will appear in the correct position, as if the face were fully visible.
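To illustrate what "aligned" means here, the following is a minimal sketch of landmark-based similarity alignment (Umeyama's method); Faceswap's actual implementation differs in detail, and the function name is my own:

```python
import numpy as np

def umeyama_similarity(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate a 2x3 similarity transform mapping src landmarks to dst.

    src, dst: (N, 2) arrays of landmark coordinates. The returned matrix
    encodes the rotation, uniform scale and translation that best align
    the detected landmarks (src) to a canonical template (dst).
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean
    # Cross-covariance between the centred point sets
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against reflections: force a proper rotation
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

The resulting 2x3 matrix can then be fed to an affine warp (e.g. cv2.warpAffine) to rotate, scale and translate the face crop into the aligned frame.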
This isn't how Faceswap works. You would need to be looking at a different solution for this, more along the lines of First Order Model: https://github.com/AliaksandrSiarohin/first-order-model
It's unlikely to be the model file; more likely it's something to do with a recent update... I think I have fixed this now, so rather than going backwards, let's go forwards.
Please update faceswap and see if the problem persists.
ianstephens wrote: ↑ Sun May 16, 2021 6:28 pm
The only difference I can think of with this particular training project is that we are saving every 1000 iterations as opposed to our usual 500. That's the only difference.
No. It's more likely due to enabling the pointless "multi-scale output" feature in dfl-sae.
A mask is a mask. I'll be frank and say none of the existing masking solutions are optimal, and all fail in certain instances.
You will want to edit masks with the manual tool for convert, so really you want to use the mask which gives you the least amount of editing to do.
Generally, if the mask overspills the face, then you want to remove the overspill.
Not so important for training, but in convert, anything which is "red" will be swapped.
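As a rough sketch of what that means at convert time (simplified, and not Faceswap's actual compositing code), the mask simply gates which pixels come from the swapped face:

```python
import numpy as np

def composite(frame: np.ndarray, swap: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend the swapped face into the frame wherever the mask is set.

    frame, swap: HxWx3 images. mask: HxW, 1 where the face should be
    swapped (the "red" area in the mask editor), 0 elsewhere.
    """
    m = mask.astype(np.float32)[..., None]  # HxW -> HxWx1 for broadcasting
    return (m * swap + (1.0 - m) * frame).astype(frame.dtype)
```

This is why overspill matters for convert: any stray masked pixel outside the face pulls swapped content into the background.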
Nothing should have changed there.
Which version of Tensorflow are you currently running?
Are you able to zip up your model folder and provide it to me for analysis?