Re: [Guide] Introducing - Phaze-A

Posted: Tue Feb 07, 2023 7:29 pm
by MaxHunter

I can't upload the .JSON.

I call this Phaze-A setting "Max-512" (because it's about as far as you can go on a 24GB card), with Mixed Precision turned on.

Size: 23.25GB (according to System Output)

Batch: 1

Recommended Learning Rate: 4.625e-6/-7 (with MS-SSIM@100; Logcosh@50; FFL@100; LPIPS_VGG16@25); EGs/s: ~2.3

Alt Learning Rate: 6.25e-6/-7 (with MS-SSIM@100; Logcosh@50)

Code: Select all


{
  "output_size": 512,
  "shared_fc": "none",
  "enable_gblock": true,
  "split_fc": true,
  "split_gblock": false,
  "split_decoders": true,
  "enc_architecture": "efficientnet_v2_l",
  "enc_scaling": 100,
  "enc_load_weights": true,
  "bottleneck_type": "dense",
  "bottleneck_norm": "none",
  "bottleneck_size": 512,
  "bottleneck_in_encoder": true,
  "fc_depth": 1,
  "fc_min_filters": 1280,
  "fc_max_filters": 1280,
  "fc_dimensions": 8,
  "fc_filter_slope": -0.5,
  "fc_dropout": 0.0,
  "fc_upsampler": "upscale_hybrid",
  "fc_upsamples": 1,
  "fc_upsample_filters": 512,
  "fc_gblock_depth": 3,
  "fc_gblock_min_nodes": 512,
  "fc_gblock_max_nodes": 512,
  "fc_gblock_filter_slope": -0.5,
  "fc_gblock_dropout": 0.0,
  "dec_upscale_method": "upscale_hybrid",
  "dec_upscales_in_fc": 0,
  "dec_norm": "none",
  "dec_min_filters": 160,
  "dec_max_filters": 640,
  "dec_slope_mode": "full",
  "dec_filter_slope": -0.33,
  "dec_res_blocks": 1,
  "dec_output_kernel": 3,
  "dec_gaussian": true,
  "dec_skip_last_residual": false,
  "freeze_layers": "keras_encoder",
  "load_layers": "encoder",
  "fs_original_depth": 4,
  "fs_original_min_filters": 128,
  "fs_original_max_filters": 1024,
  "fs_original_use_alt": false,
  "mobilenet_width": 1.0,
  "mobilenet_depth": 1,
  "mobilenet_dropout": 0.001,
  "mobilenet_minimalistic": false,
  "__filetype": "faceswap_preset",
  "__section": "train|model|phaze_a"
}

Explanation and a few thoughts:

This was based on the STOJO setting with some of the @Icarus modifications, plus modifications of my own. It uses EfficientNetV2-L @100, and instead of the Subpixel upscaler that @Icarus likes to use, I used Upscale Hybrid to save some VRAM.

The learning rate was based on a ratio formula suggested by @couleurs and @torzdf's original 5e-5. If you would like to use it as a basis for your own learning rates, it looks like this: √(batch size) ÷ 8 × 5 (the square root of your batch size, divided by 8, times 5). So, in this instance, √1 ÷ 8 × 5 = 0.625, i.e. 0.625e-5, or 6.25e-6.
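The ratio formula above can be sketched as a few lines of Python (the function name is my own; the thread only gives the formula in prose, scaling @torzdf's 5e-5 base rate):

```python
import math

def suggested_lr(batch_size, base_lr=5e-5):
    """Scale the base learning rate by sqrt(batch size) / 8."""
    return math.sqrt(batch_size) / 8 * base_lr

# Batch of 1: sqrt(1) / 8 * 5e-5 = 0.625e-5 = 6.25e-6
print(suggested_lr(1))
```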

I don't know if I'd have this peer-reviewed :lol:, but it worked for me. For further reading see viewtopic.php?t=2083&start=20.

After finding your base learning rate, you then adjust it by the percent difference in your EGs/s when adding different losses.

For example, when I added FFL and LPIPS_VGG16 (to my MS-SSIM & Logcosh learning rate) there was roughly a 26% difference in EGs/s, so I subtracted 26% from 6.25e-6, which is how I arrived at 4.625e-6. I'm not saying this is ideal, just that it's stable on my RTX 3090. It's possible you can raise this learning rate. (Please report back if you've found better learning rates so everyone can benefit. πŸ™‚)
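That adjustment step can be sketched the same way (again, the helper name and the example EG rates are my own; the thread only reports a rough 26% slowdown):

```python
def adjust_lr(base_lr, egs_before, egs_after):
    """Scale the learning rate down by the fractional drop in EGs/s
    observed after adding extra loss functions."""
    return base_lr * (egs_after / egs_before)

# A 26% slowdown (e.g. 100 -> 74 EGs/s): 6.25e-6 * 0.74 = 4.625e-6
print(adjust_lr(6.25e-6, 100.0, 74.0))
```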

Again, I'm not sure this formula is something I'd bring to a PhD in computer science, but it worked for this mathematically challenged person. Maybe it will help you.

I had G-Block split originally but turned it off due to "Googley Eyes". Let me know if you have the same problem, and/or how you fixed it.

As a comparison, it took me around 2.1 million iterations (over 9 million EGs) with a slightly modified DNY512 w/fs original, at a (struggling) LR of 1e-5, to reach losses of face_a: 0.05342 / face_b: 0.03503. (The last 100K had no warp.)

This setting, with efficientnet_v2_l @100, took roughly 700K iterations (1.75 million EGs) to reach the same losses, with (IMHO) better visual fidelity (the last 100K had no warp), and the run had no warnings, OOMs, or other problems.

If anyone has any suggestions on the settings or learning rate, please post a reply. Nothing is set in stone; we are all learning, and we're all building off each other's suggestions. πŸ™‚ What may seem obvious and silly to you will save hours for newbies and others.


Re: [Guide] Introducing - Phaze-A

Posted: Thu Feb 09, 2023 11:57 am
by torzdf

You can save a preset in the Phaze-A config settings and upload it here, if you want.


Re: [Guide] Introducing - Phaze-A

Posted: Tue Feb 14, 2023 6:55 pm
by bryanlyon

Also, a note: you can "save draft" at the bottom of a post, which lets you edit it before you decide it's ready to post ;) .


Re: [Guide] Introducing - Phaze-A

Posted: Tue Feb 14, 2023 7:09 pm
by MaxHunter

I couldn't upload the JSON, so I just re-edited the above post with my final thoughts and deleted my last post, as it was duplicated in the new edit. :)


Re: [Guide] Introducing - Phaze-A

Posted: Tue Feb 14, 2023 7:18 pm
by MaxHunter

@bryanlyon
Yeah, I know, but I was writing the post on my phone and was going to insert the JSON from my computer in another room. I thought it would be just a quick edit, but it turned into a SNAFU of sorts. LOL faceslap Sorry, story of my life. LOL


Re: [Guide] Introducing - Phaze-A

Posted: Tue Feb 14, 2023 7:26 pm
by bryanlyon

Not a problem, just trying to help you avoid edits on your posts.


Re: [Guide] Introducing - Phaze-A

Posted: Tue Feb 14, 2023 11:39 pm
by torzdf

Pro tip: Use a code block and mark it as JSON....

I have to put this in a code block so it doesn't render, but you do it like this:

Code: Select all

```json

{"test": "json"}```

This will render as

Code: Select all

{"test": "json"}

People can then just press the copy button on the code block.


Re: [Guide] Introducing - Phaze-A

Posted: Wed Feb 15, 2023 6:13 pm
by MaxHunter

@torzdf
Thanks. It took me a few tries to do it, and find the "code box" edit feature, but looks like we got it now. 😁


Re: [Guide] Introducing - Phaze-A

Posted: Sun Sep 17, 2023 3:33 pm
by Hotel85

Hi folks,

Today, I tried out the DFL-SAEHD-DF preset from Phaze A. It's really impressive with an input size of 192 and a batch size of 16 (110 EGs/sec). I was expecting it to be much slower.

And here comes the newbie question:
Why is it that the original DFL-SAE-DF is much slower (80 EGs/sec)?

Thank you


Re: [Guide] Introducing - Phaze-A

Posted: Sun Sep 17, 2023 6:53 pm
by torzdf

Without having looked at the actual layouts of the models (I don't use those presets myself), I would guess that the latter is a 'deeper' model. That is, it has more parameters to train.

You can check this yourself by initiating each with the "summary" option checked and looking at the model structures.