
Possible to turn Fc off?

Posted: Mon Dec 12, 2022 9:46 am
by Scrapemist

Is it possible to only have the G-Block as an Fc and turn the regular Fc off?
I noticed that if Shared Fc is set to None, a Split Fc is created by default.


Re: Possible to turn Fc off?

Posted: Mon Dec 12, 2022 11:06 am
by torzdf

Probably not... I can't remember exactly how I built it, so I would need to look into the code to see whether it is possible. For now, though, I would say no: I don't think the fully connected layers can be disabled entirely.

However, you may be able to simulate disabling the fully connected layers by setting their depth to "0". Again, I could be wrong; not tested.


Re: Possible to turn Fc off?

Posted: Mon Dec 12, 2022 1:27 pm
by Scrapemist

The settings are a bit confusing, tbh.
If I set Shared Fc to None but leave Split Fc unchecked, do I have a Shared Fc or not?


Re: Possible to turn Fc off?

Posted: Wed Dec 14, 2022 2:55 pm
by torzdf

Ok, I just did a quick test generating a model. I haven't tried training it, but it works...

I don't know your model setup, so I just did this based off the default preset:

  • G-Block enabled
  • For the hidden layers set:
    • FC Depth to 0
    • FC Min/Max Filters to the same as your bottleneck size (1024 for the default preset)
    • FC Dimensions to 1
    • FC Upsample to 0

This will still create the Fully Connected layer, but all it will do is reshape the bottleneck output into something the decoder can take (i.e. in the example above, it will turn the output from 1024 to 1x1x1024 so that the decoder can accept it as an input).
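To make the "pass-through" behaviour concrete, here is a minimal NumPy sketch of what the fc_a/fc_b blocks reduce to with these settings (a parameter-free reshape followed by a leaky ReLU, matching the fc_b summary further down). The 0.1 slope and variable names are illustrative assumptions, not Faceswap's actual internals:

```python
import numpy as np

bottleneck_size = 1024  # matches the default preset's bottleneck

# Stand-in for the encoder output: a batch of 2 bottleneck vectors
batch = np.random.randn(2, bottleneck_size).astype("float32")

# FC Depth 0 / FC Dimensions 1: no dense layers, just a reshape
# from (batch, 1024) to (batch, 1, 1, 1024) -- zero trainable params
reshaped = batch.reshape(-1, 1, 1, bottleneck_size)

# Leaky ReLU activation (slope 0.1 assumed here), also parameter-free
out = np.where(reshaped > 0, reshaped, 0.1 * reshaped)

print(out.shape)  # (2, 1, 1, 1024)
```

Because the filter count equals the bottleneck size and the spatial dimension is 1, no values are mixed or discarded; the decoder just sees the same 1024 features in a 1x1 spatial layout.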

Like I say, I haven't tested training, just that the model will generate.

Model summary:

Code:

____________________________________________________________________________________________________
Model: "phaze_a"
____________________________________________________________________________________________________
 Layer (type)                    Output Shape          Param #     Connected to
====================================================================================================
 face_in_a (InputLayer)          [(None, 64, 64, 3)]   0           []

 face_in_b (InputLayer)          [(None, 64, 64, 3)]   0           []

 encoder (Functional)            (None, 1024)          33992960    ['face_in_a[0][0]',
                                                                    'face_in_b[0][0]']

 fc_a (Functional)               (None, 1, 1, 1024)    0           ['encoder[0][0]']

 fc_gblock (Functional)          (None, 512)           1050112     ['encoder[0][0]',
                                                                    'encoder[1][0]']

 fc_b (Functional)               (None, 1, 1, 1024)    0           ['encoder[1][0]']

 g_block_both (Functional)       (None, 1, 1, 1024)    23864832    ['fc_a[0][0]',
                                                                    'fc_gblock[0][0]',
                                                                    'fc_b[0][0]',
                                                                    'fc_gblock[1][0]']

 decoder_both (Functional)       (None, 128, 128, 3)   40944739    ['g_block_both[0][0]',
                                                                    'g_block_both[1][0]']

====================================================================================================
Total params: 99,852,643
Trainable params: 99,852,643
Non-trainable params: 0

FC Layers:

Code:

____________________________________________________________________________________________________
Model: "fc_b"
____________________________________________________________________________________________________
 Layer (type)                                Output Shape                            Param #
====================================================================================================
 input_3 (InputLayer)                        [(None, 1024)]                          0

 reshape_1 (Reshape)                         (None, 1, 1, 1024)                      0

 leaky_re_lu_1 (LeakyReLU)                   (None, 1, 1, 1024)                      0

====================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0