
is this good?

Posted: Tue Feb 18, 2020 9:18 pm
by siliconunit

Hello,

I was wondering if this looks like it's converging correctly... I'm using villain, around 40k iterations... is the blurriness reasonable or it should be 100% sharp? Thanks!

faceswp.jpg

Re: is this good?

Posted: Tue Feb 18, 2020 9:20 pm
by bryanlyon

Blurriness at 40k is perfectly normal. Especially with Villain. You'll want to keep training. My only suggestion is that due to one side being black and white, and the other side being partial, you're going to see some color leakage. Not a problem if you're aware of it. You could always turn the whole B set B&W if you want to keep it black and white.
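
If you do go that route, something like this would do it (a rough sketch, not a faceswap feature; it assumes your extracted B faces are .png files in a folder called faces_b that you're happy to overwrite):

# Convert every extracted B face to grayscale in place.
# "faces_b" is a placeholder path: point it at your own B faces folder.
from pathlib import Path
from PIL import Image

for img_path in Path("faces_b").glob("*.png"):
    gray = Image.open(img_path).convert("L")    # drop the color information
    gray.convert("RGB").save(img_path)          # save back as 3-channel so training still accepts it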


Re: is this good?

Posted: Tue Feb 18, 2020 9:58 pm
by siliconunit

Thanks for the info! Do you reckon Villain is the way to go for maximum quality? And roughly how many iterations?

SU


Re: is this good?

Posted: Tue Feb 18, 2020 9:59 pm
by torzdf

I would say so.

It took me 10 days to train a Villain model to convergence on a GTX 1080.


Re: is this good?

Posted: Tue Feb 18, 2020 10:00 pm
by bryanlyon

Depending on your data and needs, Villain starts to converge somewhere around 300k-500k.


Re: is this good?

Posted: Tue Feb 18, 2020 10:02 pm
by siliconunit
faceswp2.jpg

Any anomalies?


Re: is this good?

Posted: Tue Feb 18, 2020 10:03 pm
by bryanlyon

Nope. All good.


Re: is this good?

Posted: Tue Feb 18, 2020 10:04 pm
by siliconunit

Oh, I see! So it's a multi-day thing :) Fair enough!


Re: is this good?

Posted: Tue Feb 18, 2020 10:15 pm
by siliconunit

One thing I noticed is that when I start training I get a red message:
Setting Faceswap backend to NVIDIA
02/18/2020 22:11:56 INFO Log level set to: INFO
Using TensorFlow backend. -> in red.
My EGs/s are around 6...
I've got a GTX 1070.


Re: is this good?

Posted: Sun Feb 23, 2020 12:32 pm
by SwarmTogether

I've read that Villain favors B, so:
during training I exchange the faces (Input A and Alignment A for Input B and Alignment B), and then
during conversion I set all inputs to my intended goal, but scroll down and select the Swap Model checkbox.

I can start Villain at a batch size of 10, but after a few hundred iterations I bump the batch size up to 14.

I start training with an extended mask at 100% Face Coverage for about 50k iterations. If I like the previews, I then bump the Face Coverage slider down to 95% for another 50k iterations, then to 90% for 50k.

After that I might decrease the optimizer learning rate by 0.5e-5 once or twice, to 4.5e-5 and later to 4e-5, as I also continue to tighten the Face Coverage percentage.

I get good enough results from YouTube videos for hobbyist fun!


Re: is this good?

Posted: Sun Feb 23, 2020 1:00 pm
by torzdf
siliconunit wrote: Tue Feb 18, 2020 10:15 pm

One thing I noticed is that when I start training I get a red message:
Setting Faceswap backend to NVIDIA
02/18/2020 22:11:56 INFO Log level set to: INFO
Using TensorFlow backend. -> in red.
My EGs/s are around 6...
I've got a GTX 1070.

That red text is fine. It's just because Keras outputs the backend information to stderr, which the GUI automatically formats as red.
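
You can see the same behaviour with any script: whatever a process writes to stderr rather than stdout is what gets picked up and formatted as red, whether or not it is actually an error. A quick illustration in Python (nothing faceswap-specific):

import sys

print("normal status message")                         # stdout -> shown as plain text
print("Using TensorFlow backend.", file=sys.stderr)    # stderr -> shown in red, even though it isn't an error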


Re: is this good?

Posted: Sun Feb 23, 2020 1:03 pm
by torzdf
SwarmTogether wrote: Sun Feb 23, 2020 12:32 pm

I've read that Villain favors B, so:
during training I exchange the faces (Input A and Alignment A for Input B and Alignment B), and then
during conversion I set all inputs to my intended goal, but scroll down and select the Swap Model checkbox.

Villain is a balanced model, so it has no bias towards A or B. These steps are not necessary.

SwarmTogether wrote: Sun Feb 23, 2020 12:32 pm

I start training with an extended mask at 100% Face Coverage for about 50k iterations. If I like the previews, I then bump the Face Coverage slider down to 95% for another 50k iterations, then to 90% for 50k.

If you aren't deleting your model and starting again, then this will have no effect. Coverage is fixed when the model is created.


Re: is this good?

Posted: Sun Feb 23, 2020 3:49 pm
by SwarmTogether
torzdf wrote: Sun Feb 23, 2020 1:03 pm
SwarmTogether wrote: Sun Feb 23, 2020 12:32 pm

I've read that Villain favors B, so:
during training I exchange the faces (Input A and Alignment A for Input B and Alignment B), and then
during conversion I set all inputs to my intended goal, but scroll down and select the Swap Model checkbox.

Villain is a balanced model, so it has no bias towards A or B. These steps are not necessary.

SwarmTogether wrote: Sun Feb 23, 2020 12:32 pm

I start training with an extended mask at 100% Face Coverage for about 50k iterations. If I like the previews, I then bump the Face Coverage slider down to 95% for another 50k iterations, then to 90% for 50k.

If you aren't deleting your model and starting again, then this will have no effect. Coverage is fixed when the model is created.

Thank you! I don't know where I read about those pointless practices; thanks for the correction!