Can somebody help me in using Efficientnet in Disney(DNY) models?

Want to understand the training process better? Got tips for which model to use and when? This is the place for you


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for discussing tips and understanding the process involved with Training a Faceswap model.

If you have found a bug or are having issues with the Training process not working, then you should post in the Training Support forum.

Please mark any answers that fixed your problems so others can find the solutions.


Can somebody help me in using Efficientnet in Disney(DNY) models?

Post by vichitra5587 »

I don't understand the deep learning fundamentals, so whenever I open the Phaze A model I get blown away by the number of options.

I have read that EfficientNet v1/v2 are great trainers with high accuracy, faster training & a lower model size.
Has anybody here used EfficientNet v1/v2 with the Disney (DNY) models?
If so, can you share your settings? When I only change the encoder of the Disney (DNY) 256 model from FS to EfficientNetV2_s, my model shows only colourful blocks & no faces.

@torzdf @ianstephens any suggestions on using EfficientNet v1/v2 with the Disney (DNY) 256 model?

Please help as I am already in deep love with the Disney(DNY) 256 model because of its low GPU/VRAM usage & incredible results.

Ex-Dunning Kruger-ian


Re: Can somebody help me in using Efficientnet in Disney(DNY) models?

Post by torzdf »

Not tested, but you can try:

Encoder / Enc Scaling:
EfficientNetV2_s: 67
EfficientNetB4: 67

If neither works, delete the model folder, lower the learning rate to 1e-3.5, and try again.

(You can test other EfficientNet encoders. To get the encoder scaling, divide 256 (the DNY input size) by the encoder's output size, multiply by 100 and round up.)

e.g. EfficientNetV2_s has an output size of 384, so:
256 / 384 = 0.6667
0.6667 * 100 = 66.67
Rounded up = 67%
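The arithmetic above can be sketched in a few lines of Python (the function name is just for illustration; it is not part of the Faceswap code):

```python
import math

def enc_scaling(model_input_size: int, encoder_output_size: int) -> int:
    """Encoder scaling %: model input size over encoder output size,
    times 100, rounded up to the next whole percent."""
    return math.ceil(model_input_size / encoder_output_size * 100)

# DNY 256 input (256px) with EfficientNetV2_s output (384px):
print(enc_scaling(256, 384))  # -> 67
```

Plug the result into the Phaze A "Enc Scaling" slider for the encoder you are testing.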

My word is final


Re: Can somebody help me in using Efficientnet in Disney(DNY) models?

Post by vichitra5587 »

Thanks a million for explaining this math.
I am sure this post will help many people who are trying to experiment with Efficientnet encoder.

I applied your EfficientNetV2_s scaling settings on the DNY 256 model & it worked straight away.
I always train all my models at a learning rate of 3e, & it worked fine with this scaling setting too.

From the initial 6k iterations, I think the DNY 256 default model is winning over EfficientNetV2_s.
I say this based on the iteration speed, the speed at which it is learning the facial details, GPU usage, temperature & power consumption.
Disney 256's default settings iterate faster & learn facial details faster, with less GPU usage & power consumption.

However, 6k iterations are not enough to draw a conclusion, so I will let the DNY 256 model train with both the default & EfficientNetV2_s settings
for at least 500k-600k iterations & then update the results here with some high-resolution face-swap examples.

Ex-Dunning Kruger-ian


Re: Can somebody help me in using Efficientnet in Disney(DNY) models?

Post by MaxHunter »

I want to piggy-back off this post:
@torzdf
What are the ramifications of using the DNY512 with EfficientNetV2, even though higher resolutions aren't supported? Will it just be slower because of the higher resolution? It's been spotty when I've tried, but maybe I'm just not experienced enough to know how to make it work.
