My experiences so far


Post by abigflea »

I've been at this a couple of months now. I'm enjoying the technology and getting a handle on how it works.
I made a couple of really good swaps at around 70K iterations.

I made several other models just trying to teach myself, but I got a little dismayed: still blurry, or somehow not quite right.

So I set up a really good set of tests and ran them with different models. I wanted to see the shape of teeth and the freckles come through, the little micro-expressions, the mouth shapes. The results kept coming out kind of blurry, and sometimes the face was just 'off'. I kept asking myself, why isn't this working?

To those like me who are trying it out and feeling a bit frustrated, here is a message:
Listen to the advice in the guides. Read them again.
Get high-quality images. Crisp ones. Especially of the person you want to swap to. It's fine if they are not all perfect, but do get lots of good ones.
Get your alignments right! You don't have to over-correct them, just fix the wacky ones (if you're looking, you'll see some).
Let it train for a long while. Have patience.

When it hit 200K iterations at batch size 6, you could really see it coming through.
A long week at work to let my computer churn away helps.
I ran it on Xubuntu, because Windows constantly steals a bit of CPU and VRAM for other things.

I know numbers make you feel good, so this is what I'm running:
GTX 1070 with 8 GB VRAM
Model: DFL-SAE
Conv aware init: True
Loss function: SSIM
Mask type: Extended
Reflect padding: True
Mask blur kernel: 5
Input size: 192
Clipnorm: True
Architecture: DF
Autoencoder dims: 384 (had to set it a little lower than 512 for my VRAM limits)
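For anyone wondering where these live: settings like these end up in Faceswap's ini-style training configuration (editable via the GUI's Settings panel). The snippet below is only an illustrative sketch of how a run like mine might look on disk; the section and key names here are my assumptions, so check the actual train.ini on your own install rather than copying this verbatim.

```ini
; Hypothetical sketch of the settings above in an ini-style training
; config -- section and key names are illustrative, not exact.

[global]
; SSIM loss with an extended, slightly blurred mask, as in this run
loss_function = ssim
mask_type = extended
mask_blur_kernel = 5
reflect_padding = True

[model.dfl_sae]
; 192px input, DF architecture; autoencoder dims dropped from 512
; to 384 to fit in 8 GB of VRAM
input_size = 192
architecture = df
autoencoder_dims = 384
clipnorm = True
```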

Currently at 205,000 iterations. I'll probably let it hit 250K before I do a convert and see, but it's starting to look really crisp.

I'm going to run the same input images through a couple of other models to see how they compare, and I expect it will take a few days each to find out.

Sure, you can do things with less. I used 1060 6 GB mining cards very successfully, but it was much slower and I had to adjust the input size (down to 128) and other settings to make it fit.

:o I dunno what I'm doing :shock:
2X RTX 3090 : RTX 3080 : RTX: 2060 : 2x RTX 2080 Super : Ghetto 1060
