LPIPS_Alex vs LPIPS_VGG16

Want to understand the training process better? Got tips for which model to use and when? This is the place for you.


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for discussing tips and understanding the process involved with Training a Faceswap model.

If you have found a bug or are having issues with the Training process not working, then you should post in the Training Support forum.

Please mark any answers that fixed your problems so others can find the solutions.

d3x
Posts: 9
Joined: Tue Mar 12, 2024 1:12 pm
Has thanked: 2 times
Been thanked: 2 times

LPIPS_Alex vs LPIPS_VGG16

Post by d3x »

I thought I would get better results from VGG16, but it feels like each has its pros & cons

  • With VGG16 the moiré pattern is a lot less noticeable when training at high weights (like 50%), and sometimes it even gets completely trained out at lower weights (even when zooming in to 200%)

  • With Alex I get a lot more sharpness, but the pattern stays visible when zooming in, even at very low weights (also much faster training, of course)

  • I tried squeeze too, but didn't let it run for very long because it just seemed way worse than both alex & vgg16 in the early stages

This is with SSIM as the main loss function, btw, with no other extra loss functions

What are other people's experiences with these loss functions?

What I've started doing on my latest model is using a combination of alex & vgg16, and I'm really liking the results so far (100 SSIM / 5 lpips_alex / 30 lpips_vgg16 / 90 FFL)
--> it feels like adding 5% alex really improves the sharpness compared to just using vgg16 - not sure if it's worth the heavier GPU load though, might be better to tweak my phaze-a settings a bit to bring out more detail instead of combining 2 lpips loss functions 😅
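For anyone wondering how those weight values play out, here's a minimal sketch of a weighted loss sum where each value acts as a percentage applied to its loss term. This is just an illustration of the arithmetic, not Faceswap's actual implementation, and the function name and loss values are made up:

```python
# Hypothetical sketch: combining per-loss weights expressed as
# percentages (100 = full strength), mirroring a setting like
# 100 SSIM / 5 lpips_alex / 30 lpips_vgg16 / 90 FFL.
def combine_losses(loss_values, weights):
    """Return the total loss as a percentage-weighted sum.

    loss_values and weights are dicts keyed by loss name.
    """
    return sum(loss_values[name] * weights[name] / 100.0
               for name in weights)

weights = {"ssim": 100, "lpips_alex": 5, "lpips_vgg16": 30, "ffl": 90}
losses = {"ssim": 0.2, "lpips_alex": 0.5, "lpips_vgg16": 0.4, "ffl": 0.1}
total = combine_losses(losses, weights)
# 0.2*1.0 + 0.5*0.05 + 0.4*0.30 + 0.1*0.90 = 0.435
```

So a 5% alex term contributes very little to the total gradient on its own, which fits the observation that it nudges sharpness without dominating the smoother vgg16 term.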

Last edited by d3x on Fri Apr 05, 2024 7:22 pm, edited 8 times in total.
graciousskip
Posts: 1
Joined: Thu Aug 29, 2024 1:15 am

Re: LPIPS_Alex vs LPIPS_VGG16

Post by graciousskip »

You're seeing VGG16 give smoother results with fewer moiré patterns but less sharpness than AlexNet, which gives sharper images but retains more artifacts. SqueezeNet is lightweight but less effective early on. By combining AlexNet and VGG16 with SSIM, LPIPS, and FFL, you're balancing sharpness and artifact reduction, though it adds GPU load. Tweaking phaze-a settings could enhance detail without needing both LPIPS models. Your current combination seems effective, but fine-tuning the weights or settings may optimize performance further.

upstagetower
Posts: 1
Joined: Mon Sep 09, 2024 6:40 am

Re: LPIPS_Alex vs LPIPS_VGG16

Post by upstagetower »

d3x wrote: Fri Apr 05, 2024 7:01 pm

I thought I would get better results from VGG16, but it feels like each has its pros & cons

  • With VGG16 the moiré pattern is a lot less noticeable when training at high weights (like 50%), and sometimes it even gets completely trained out at lower weights (even when zooming in to 200%)

  • With Alex I get a lot more sharpness, but the pattern stays visible when zooming in, even at very low weights (also much faster training, of course)

  • I tried squeeze too, but didn't let it run for very long because it just seemed way worse than both alex & vgg16 in the early stages

This is with SSIM as the main loss function, btw, with no other extra loss functions

What are other people's experiences with these loss functions?

What I've started doing on my latest model is using a combination of alex & vgg16, and I'm really liking the results so far (100 SSIM / 5 lpips_alex / 30 lpips_vgg16 / 90 FFL)
--> it feels like adding 5% alex really improves the sharpness compared to just using vgg16 - not sure if it's worth the heavier GPU load though, might be better to tweak my phaze-a settings a bit to bring out more detail instead of combining 2 lpips loss functions 😅

VGG16 vs. AlexNet
VGG16: This is a deeper network than AlexNet, known for capturing more nuanced features, which might be why you see a reduction in moiré patterns. However, its output can look less sharp because it may prioritize overall texture and pattern consistency over edge sharpness.

AlexNet: Being a shallower network, AlexNet tends to emphasize sharper edges and clearer features. That could explain why your results appear sharper but also why the moiré patterns are more persistent: AlexNet may not be as effective at learning away or smoothing out those fine patterns.
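To make the depth point concrete, here's a conceptual sketch of what LPIPS-style distances do: compare unit-normalized feature maps from several backbone layers and aggregate the squared differences. This is not the real LPIPS implementation (which applies learned per-channel weights to actual AlexNet/VGG16 features); the feature maps below are random stand-ins just to show the mechanics:

```python
import numpy as np

def lpips_like_distance(feats_a, feats_b):
    """Toy LPIPS-style distance between two lists of (C, H, W) feature maps.

    Each list holds one feature map per backbone layer (AlexNet taps 5
    shallow layers; VGG16 taps 5 deeper ones, comparing more abstract
    features - one intuition for why VGG16 smooths moiré while AlexNet
    stays edge-sensitive).
    """
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        # Unit-normalize each spatial position's feature vector
        # along the channel axis, as LPIPS does before comparing
        fa = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        fb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        # Mean squared difference over channels and spatial positions
        total += np.mean((fa - fb) ** 2)
    return total / len(feats_a)

rng = np.random.default_rng(0)
layers_a = [rng.standard_normal((8, 4, 4)) for _ in range(5)]
layers_b = [rng.standard_normal((8, 4, 4)) for _ in range(5)]
identical = lpips_like_distance(layers_a, layers_a)  # 0.0 for identical inputs
different = lpips_like_distance(layers_a, layers_b)  # > 0 for differing inputs
```

The key design point is that the distance lives in feature space rather than pixel space, so which backbone supplies the features directly shapes what kinds of differences (fine edges vs. broad textures) the loss penalizes.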
