I think I extracted too many B faces (9,000+ at 512px), and most of the faces are similar to each other. Is that bad for training?
I am using an RTX 3090.
Would removing the more similar faces help training?
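One common way to thin out near-duplicate faces is perceptual hashing: hash each image, then drop any image whose hash is within a small Hamming distance of one already kept. Real pipelines would use a library such as `imagehash` on the extracted PNGs; the sketch below is a minimal, self-contained stand-in where "images" are plain 2D grayscale lists, just to show the idea.

```python
# Minimal average-hash (aHash) sketch for spotting near-duplicate faces.
# Illustration only: real extracted faces would be loaded from disk and
# hashed with a library such as imagehash; here they are tiny 2D lists.

def average_hash(pixels):
    """Return a tuple of bits: 1 where a pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def dedupe(images, max_distance=2):
    """Keep only images whose hash differs from every kept hash."""
    kept, hashes = [], []
    for name, pixels in images:
        h = average_hash(pixels)
        if all(hamming(h, k) > max_distance for k in hashes):
            kept.append(name)
            hashes.append(h)
    return kept

# Three tiny "faces": two nearly identical, one distinct.
faces = [
    ("face_a", [[10, 200], [10, 200]]),
    ("face_b", [[12, 198], [11, 201]]),   # near-duplicate of face_a
    ("face_c", [[200, 10], [200, 10]]),   # distinct
]
print(dedupe(faces))  # near-duplicate dropped: ['face_a', 'face_c']
```

The `max_distance` threshold controls how aggressive the pruning is; a larger value discards more borderline-similar faces.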
Q1: If I remove 6,000 of the 9,000 B faces, can I save more VRAM?
Q2: If I reduce the faces from 512px to 256px, can I save more VRAM?
Q3: I want a higher-resolution converted result, but 128x128 is too low. How can I get higher resolution? I need at least 256x256.
No
Depends on the model, but generally yes
This is a long and complicated question which I will save for someone else. However, it is not as simple as "doubling resolution". The model structure needs to be adjusted to handle different resolutions, and increasing resolution is not the same as improving detail.
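The "No" and "generally yes" answers follow from where training memory actually goes: the dataset lives on disk and only one batch at a time sits in VRAM, so the total face count barely matters, while per-image memory grows roughly with pixel count, i.e. with resolution squared. A back-of-envelope sketch, assuming only that quadratic pixel scaling:

```python
# Rough relative-VRAM arithmetic. Per-image activation memory scales
# roughly with pixel count (resolution squared); the dataset itself is
# streamed from disk, so deleting faces frees no GPU memory.

def pixels(res):
    """Pixel count of a square res x res face image."""
    return res * res

def relative_vram(res_a, res_b):
    """Approximate per-image memory ratio of res_a vs res_b."""
    return pixels(res_a) / pixels(res_b)

print(relative_vram(512, 256))  # 4.0: halving resolution ~quarters per-image memory
print(relative_vram(256, 128))  # 4.0
# Removing 6,000 of 9,000 faces changes neither number: batch size and
# resolution drive VRAM use, not total dataset size.
```

In practice the exact saving depends on the model architecture, which is why the answer above is hedged with "depends on the model".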
Unless you're converting 2K video or beyond, you might find you don't actually need a 256x256 model. I was in a similar place, wondering what the smallest resolution I could get away with was. Obviously you want the highest resolution possible so there can potentially be the highest detail. And when Faceswap converts the model's output onto a face bigger than the model's resolution, it stretches it, which is noticeable. So I created a little reference image, with labels, to help visualize models at their resolutions.
On the left is a hypothetical perfectly trained model at various resolutions, and on the right is what the conversion would look like when the model's output is stretched to various sizes. Right-click and open in a new tab to zoom in and see how much detail there is in each picture. It's a 1822x3993 image.
Basically, the conclusion I came to is that a perfect model can convert to a face about 2.5x its size and still look real enough that it's impossible to tell it has been faceswapped. You can see this by comparing Trump's 128px stretched image to the base 325px, or Pokimane's 176px stretched image to the base 500px. However, if your model isn't perfect (and nobody's is), that 2.5x number goes down.
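Taking that ~2.5x rule of thumb at face value, you can back out the minimum model resolution needed for a given on-screen face size. Both `min_model_res` and the 2.5 factor below are just that empirical estimate, not anything built into Faceswap:

```python
import math

# Back-of-envelope from the ~2.5x stretch rule above (an empirical
# observation, not a Faceswap setting): a well-trained model's output can
# be stretched about 2.5x before the swap becomes noticeable.

def min_model_res(target_face_px, stretch_factor=2.5):
    """Smallest model resolution for a face target_face_px pixels tall."""
    return math.ceil(target_face_px / stretch_factor)

print(min_model_res(320))  # 128: matches the Trump 128px -> ~325px example
print(min_model_res(500))  # 200: roughly the Pokimane 176px -> 500px case
```

For an imperfect model you would shrink `stretch_factor` below 2.5 and accept a larger (and more VRAM-hungry) model resolution.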