Possible to turn Fc off?


Scrapemist
Posts: 16
Joined: Sat Nov 05, 2022 2:26 pm
Has thanked: 8 times

Possible to turn Fc off?

Post by Scrapemist »

Is it possible to have only the G-Block as an Fc and turn the regular Fc off?
I noticed that if Shared Fc is set to None, a Split Fc is created by default.

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: Possible to turn Fc off?

Post by torzdf »

Probably not... I can't remember how I built it, so I would need to look into the code to see whether it is possible. For now, though, I would say no: the fully connected layers cannot be disabled entirely.

However, you may be able to simulate disabling the fully connected layers by setting them to a depth of "0". Again, I could be wrong; this is not tested.

My word is final

Scrapemist
Posts: 16
Joined: Sat Nov 05, 2022 2:26 pm
Has thanked: 8 times

Re: Possible to turn Fc off?

Post by Scrapemist »

The settings are a bit confusing, to be honest.
If I set Shared Fc to None but leave Split Fc unchecked, do I end up with a Shared Fc or not?

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: Possible to turn Fc off?

Post by torzdf »

OK, I just did a quick test generating a model. I have not tried training it, but it builds....

I don't know your model setup, so I based this on the default preset.

  • G-Block enabled
  • For the hidden layers, set the following (a settings-file sketch follows this list):
    • FC Depth to 0
    • FC Min/Max Filters to the same as your bottleneck size (1024 for the default preset)
    • FC Dimensions to 1
    • FC Upsample to 0
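
If you would rather edit the settings file directly than use the GUI, the equivalent entries would look roughly like the excerpt below. Treat this as a sketch only: I am writing the section and key names from memory of the GUI labels, so they are an assumption rather than something verified against the code. Check your own training settings file for the exact names.

Code:

[model.phaze_a]
# Assumed key names, mirroring the GUI labels above -- not verified
enable_gblock = True      # keep the G-Block
fc_depth = 0              # "FC Depth": no dense layers
fc_min_filters = 1024     # "FC Min Filters": match the bottleneck size
fc_max_filters = 1024     # "FC Max Filters": match the bottleneck size
fc_dimensions = 1         # "FC Dimensions": 1x1 spatial output
fc_upsamples = 0          # "FC Upsample": no upsampling in the fc block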

This will still create the Fully Connected layer, but all it will do is reshape the bottleneck output into something the decoder can take (i.e. in the example above, it will turn the output from 1024 to 1x1x1024 so that the decoder can accept it as an input).

As I say, I have not tested training, only that the model will generate.
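
For reference, here is a minimal standalone Keras sketch of what the zero-depth fc block boils down to. This is my own reconstruction from the summaries below, not Faceswap's actual code, and the layer names are illustrative: it is just a reshape plus the activation, with zero trainable parameters.

Code:

import numpy as np
from tensorflow.keras import Input, Model, layers

# The encoder bottleneck emits a flat 1024-vector per face
bottleneck = Input(shape=(1024,))

# Reshape 1024 -> 1x1x1024 so the convolutional G-Block/decoder can consume it
out = layers.Reshape((1, 1, 1024))(bottleneck)
out = layers.LeakyReLU()(out)

fc = Model(bottleneck, out, name="fc_b")
fc.summary()  # 0 trainable params, matching the fc_b summary below

print(fc(np.zeros((1, 1024))).shape)  # (1, 1, 1, 1024)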

Model summary:

Code:

____________________________________________________________________________________________________
Model: "phaze_a"
____________________________________________________________________________________________________
 Layer (type)                    Output Shape          Param #     Connected to
====================================================================================================
 face_in_a (InputLayer)          [(None, 64, 64, 3)]   0           []

 face_in_b (InputLayer)          [(None, 64, 64, 3)]   0           []

 encoder (Functional)            (None, 1024)          33992960    ['face_in_a[0][0]',
                                                                    'face_in_b[0][0]']

 fc_a (Functional)               (None, 1, 1, 1024)    0           ['encoder[0][0]']

 fc_gblock (Functional)          (None, 512)           1050112     ['encoder[0][0]',
                                                                    'encoder[1][0]']

 fc_b (Functional)               (None, 1, 1, 1024)    0           ['encoder[1][0]']

 g_block_both (Functional)       (None, 1, 1, 1024)    23864832    ['fc_a[0][0]',
                                                                    'fc_gblock[0][0]',
                                                                    'fc_b[0][0]',
                                                                    'fc_gblock[1][0]']

 decoder_both (Functional)       (None, 128, 128, 3)   40944739    ['g_block_both[0][0]',
                                                                    'g_block_both[1][0]']

====================================================================================================
Total params: 99,852,643
Trainable params: 99,852,643
Non-trainable params: 0

FC Layers:

Code:

____________________________________________________________________________________________________
Model: "fc_b"
____________________________________________________________________________________________________
 Layer (type)                                Output Shape                            Param #
====================================================================================================
 input_3 (InputLayer)                        [(None, 1024)]                          0

 reshape_1 (Reshape)                         (None, 1, 1, 1024)                      0

 leaky_re_lu_1 (LeakyReLU)                   (None, 1, 1, 1024)                      0

====================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0

My word is final
