Unbalanced, the way to go.

tokafondo
Posts: 32
Joined: Mon Dec 16, 2019 1:43 pm
Has thanked: 10 times
Been thanked: 5 times


Post by tokafondo »

I wanted to share that I'm settling on the Unbalanced model. I've also tried DFL-SAE, DFL-H128 and Villain.

I let that last one train for more than 600K iterations with decent-quality sources, 128px resolution and carefully checked alignments, and up to that point the fine details had still not surfaced. Convert tests always came out blurred, with no improvement at all from one 100K-iteration checkpoint to the next.
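
For anyone who wants to run the same kind of check: what I call a convert test is just a normal convert pointed at a small folder of frames, then inspected up close. Roughly this, with placeholder paths, and bearing in mind the exact flag names can differ between Faceswap versions:

python faceswap.py convert -i /path/to/test_frames -o /path/to/converted -m /path/to/villain_model

If the swapped faces in the output still look soft after several hundred thousand iterations, you are seeing the same thing I did.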

I got tired of waiting and started again with Unbalanced at 160px input resolution. If I understood it right, that decodes to 256px output, doesn't it? I did a fit-training run of 50K iterations, I'm now training with the definitive sources, and I'm happy with what I'm getting after more than 56K iterations.
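
In case it helps anyone trying the same switch, this is roughly the setup from the command line. Paths are placeholders, and the option and setting names below are from my install, so double-check them against your own Faceswap version:

# model resolution is set in the training config before starting
# (config/train.ini, [model.unbalanced] section on my install):
#   input_size = 160

python faceswap.py train -A /path/to/faces_A -B /path/to/faces_B -m /path/to/unbalanced_model -t unbalanced

The fit training is just the same command run first against the rough fit sources for about 50K iterations, then stopped and pointed at the definitive sources with the same model folder so it carries on from where it left off.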

I have a GTX 1070 and I don't mind waiting, but seeing no real progress is frustrating.

So for me, Unbalanced is the way to go.

Thanks for reading.
