Matching extraction "size" configuration with selected model type


Post by artisan »

Per the extraction guide:

Size: This is the size of the image holding the extracted face. Generally, 512px will be fine for most models. This is the size of the full head extracted image. When training, a 'sub-crop' of this image will be used, depending on the centering you have chosen. For 'face' centering, the image size available to the model will be 384px.

E.g., the Original model is noted as having a 64px input and a 64px output.

I understand this to be the portion of the extracted image fed to the model, based on the selected options for face 'Centering' and 'Coverage'.

[Newbie question] Does it make sense to extract at a size no larger than what this basic formula gives:

Extraction Size = Model Input / Coverage (%)
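
To sanity-check that formula with concrete numbers, here is a minimal sketch (the 87.5% coverage value is purely illustrative, not a recommendation from the guide):

    # Sanity check of the proposed formula (illustrative values, not Faceswap defaults)
    model_input = 64       # px, e.g. the Original model's input
    coverage = 0.875       # 87.5% coverage, chosen only as an example
    extraction_size = model_input / coverage
    print(f"Extraction size by that formula: {extraction_size:.0f}px")  # ~73px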

Or have I missed the relationship between the extraction size, the model input/output, and the quality/efficiency of processing completely? :)


Re: Matching extraction "size" configuration with selected model type

Post by artisan »

Re: viewtopic.php?p=5888#p5888

I'm adding some info I found on the topic, which is likely the best answer to my question:

The input resolution to the model is separate from the size of the extracted faces.

Try not to think of the extracted faces as the training images; rather, they contain the training images. Whether you use 'face' or 'legacy' centering, the actual images fed to the model will be a sub-crop of the extracted faces. Therefore, the extracted faces should always be of a higher resolution than the model input.

This gives a visualization of the sub-crops:
https://github.com/deepfakes/faceswap/pull/1095

Bear in mind that this is with 100% coverage; the sub-crop will shrink further with tighter centering.
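
For anyone who wants to play with these numbers, here is a minimal sketch of the sub-crop arithmetic. The 0.75 ratio for 'face' centering follows from the guide's figures (384px available from a 512px extract); treating coverage as a simple multiplier is an assumption for illustration, not a description of Faceswap's internals:

    # Minimal sketch of the sub-crop arithmetic (assumptions noted above)
    def sub_crop_size(extract_size: int, centering_ratio: float = 0.75,
                      coverage: float = 1.0) -> int:
        """Approximate pixels actually available to the model."""
        return int(extract_size * centering_ratio * coverage)

    print(sub_crop_size(512))                  # 384px: 'face' centering, 100% coverage
    print(sub_crop_size(512, coverage=0.875))  # 336px at 87.5% coverage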


Re: Matching extraction "size" configuration with selected model type

Post by MaxHunter »

So, I was going to post what I think is the same question. Is it better to train at 512px even if your source face material is 64px? My understanding is that it is fine (if not preferable). Or will that make the swap too grainy, like you cranked the sharpen button up to 11?

Thoughts?


Re: Matching extraction "size" configuration with selected model type

Post by torzdf »

I'm not sure I fully understand the question. Training a model at a significantly higher resolution than any of your dataset is unlikely to end with great results.

My word is final


Re: Matching extraction "size" configuration with selected model type

Post by artisan »

Is there any such thing as a variable-input (face size) model?

I know some models have configurable input sizes. However, is there any model that can handle differently sized inputs (without rescaling to a fixed input size)?


Re: Matching extraction "size" configuration with selected model type

Post by MaxHunter »

torzdf wrote: Wed Sep 14, 2022 11:08 am

I'm not sure I fully understand the question. Training a model at a significantly higher resolution than any of your dataset is unlikely to end with great results.

I think you answered my question.

To further explain:

I recently chose to build a DNY512 model. Most of the faces I've been training with are not even close to 512px; there are a few, but most are probably half that size or less.

If I were to use this DNY512 model on a video with a resolution of 480p, will the faces be overly sharp? Or will the program detect this and downscale the face?


Re: Matching extraction "size" configuration with selected model type

Post by torzdf »

I highly doubt the image will be overly sharp, and I wouldn't imagine anything detrimental would come from this approach; it's just that you wouldn't necessarily be using the model to its full capacity.

My word is final
