A couple small questions

A couple small questions

Post by cosmico »

I was training and rotating my data, like it's recommended for better training (you know, letting the model see something new after a while), and I've noticed these odd valleys showing up as a result. I've seen this happen many times and thought it was weird, but never really thought much about it. Just now, though, I think I made a connection I hadn't realized before.
Is this huge drop in loss the evidence that "rotating your data often improves training"? Is this right here exactly why you suggest people do it?
[Image: loss graph showing drops in loss after each data rotation]
If the answer to that question is yes, then why doesn't this same "evidence" show up with my B data? You can clearly see that the B loss is still higher than where it presumably would have been if I hadn't rotated the data around.
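
(In case it helps to be concrete about what I mean by "rotating": I keep only a subset of my extracted faces in the folder the trainer reads from, and swap that subset out for a different one every so often. Something like this rough Python sketch; the folder names, subset size, and symlink trick are just my own illustration, not anything from Faceswap itself.)

    import random
    from pathlib import Path

    # Rough illustration of "rotating" training data: keep only a subset
    # of the extracted faces visible to the trainer, and swap in a
    # different subset every so often so the model sees something new.
    ALL_FACES = Path("faces_A_all")    # full extracted set (made-up name)
    ACTIVE = Path("faces_A_active")    # folder the trainer actually reads
    SUBSET_SIZE = 2000                 # images per rotation (arbitrary)

    def rotate_subset():
        """Clear the active folder and link in a fresh random subset."""
        ACTIVE.mkdir(exist_ok=True)
        for old in ACTIVE.iterdir():
            old.unlink()               # drop the previous subset's links
        subset = random.sample(sorted(ALL_FACES.iterdir()), SUBSET_SIZE)
        for src in subset:
            (ACTIVE / src.name).symlink_to(src.resolve())  # cheap "copy"

    # Stop training every so often, call rotate_subset(), then resume
    # with the trainer pointed at ACTIVE; that's when the valleys appear.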

Also, a question on an unrelated note: I know that running dual or triple monitors often affects my gaming performance, and when a game is demanding I'll unplug the extra monitors. Does the same concept apply to Faceswap? If I let Faceswap run without the extra monitors hooked up, can I squeeze out 2-3% better training times?


Re: A couple small questions

Post by bryanlyon »

Most likely, the data you added for the B side had more variety than the data for A. Low-variety data is easier to recreate, and so it gives lower loss.
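
You can see that effect with a toy example that has nothing to do with Faceswap: fit the same small linear "autoencoder" (plain PCA standing in for the real model) to a low-variety dataset and to a high-variety one, then compare reconstruction loss. The sizes below are arbitrary illustration values.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 1000, 50, 8            # samples, input dims, bottleneck size

    # Low-variety data: everything lies near a 3-dimensional subspace.
    basis = rng.normal(size=(3, d))
    low = rng.normal(size=(n, 3)) @ basis + 0.01 * rng.normal(size=(n, d))

    # High-variety data: full-rank noise with no shared structure.
    high = rng.normal(size=(n, d))

    def recon_loss(x, k):
        """Reconstruction MSE of the best rank-k encoder/decoder (PCA)."""
        x = x - x.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        proj = vt[:k]                # top-k principal directions
        recon = (x @ proj.T) @ proj  # encode, then decode
        return float(np.mean((x - recon) ** 2))

    print("low-variety loss :", recon_loss(low, k))   # close to zero
    print("high-variety loss:", recon_loss(high, k))  # much higher

Same bottleneck and same number of samples; the only thing that changed is how much variety the data has, and the loss follows it.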
