After the first training I get a strange error

bismarckvier
Posts: 2
Joined: Thu Jun 22, 2023 5:59 pm
Has thanked: 1 time

After the first training I get a strange error

Post by bismarckvier »

Hi, I just want to ask about this error.
Everything was fine the first time I finished training, but when I started my second training session it just popped up with this message:

Code:

Model: "encoder"
____________________________________________________________________________________________________
 Layer (type)                                Output Shape                            Param #
====================================================================================================
 input_1 (InputLayer)                        [(None, 64, 64, 3)]                     0

 conv_128_0_conv2d (Conv2D)                  (None, 32, 32, 128)                     9728

 conv_128_0_leakyrelu (LeakyReLU)            (None, 32, 32, 128)                     0

 conv_256_0_conv2d (Conv2D)                  (None, 16, 16, 256)                     819456

 conv_256_0_leakyrelu (LeakyReLU)            (None, 16, 16, 256)                     0

 conv_512_0_conv2d (Conv2D)                  (None, 8, 8, 512)                       3277312

 conv_512_0_leakyrelu (LeakyReLU)            (None, 8, 8, 512)                       0

 conv_1024_0_conv2d (Conv2D)                 (None, 4, 4, 1024)                      13108224

 conv_1024_0_leakyrelu (LeakyReLU)           (None, 4, 4, 1024)                      0

 flatten (Flatten)                           (None, 16384)                           0

 dense (Dense)                               (None, 1024)                            16778240

 dense_1 (Dense)                             (None, 16384)                           16793600

 reshape (Reshape)                           (None, 4, 4, 1024)                      0

 upscale_512_0_conv2d_conv2d (Conv2D)        (None, 4, 4, 2048)                      18876416

 upscale_512_0_conv2d_leakyrelu (LeakyReLU)  (None, 4, 4, 2048)                      0

 upscale_512_0_pixelshuffler (PixelShuffler)  (None, 8, 8, 512)                      0

====================================================================================================
Total params: 69,662,976
Trainable params: 69,662,976
Non-trainable params: 0
____________________________________________________________________________________________________
Model: "decoder_a"
____________________________________________________________________________________________________
 Layer (type)                                Output Shape                            Param #
====================================================================================================
 input_2 (InputLayer)                        [(None, 8, 8, 512)]                     0

 upscale_256_0_conv2d_conv2d (Conv2D)        (None, 8, 8, 1024)                      4719616

 upscale_256_0_conv2d_leakyrelu (LeakyReLU)  (None, 8, 8, 1024)                      0

 upscale_256_0_pixelshuffler (PixelShuffler)  (None, 16, 16, 256)                    0

 upscale_128_0_conv2d_conv2d (Conv2D)        (None, 16, 16, 512)                     1180160

 upscale_128_0_conv2d_leakyrelu (LeakyReLU)  (None, 16, 16, 512)                     0

 upscale_128_0_pixelshuffler (PixelShuffler)  (None, 32, 32, 128)                    0

 upscale_64_0_conv2d_conv2d (Conv2D)         (None, 32, 32, 256)                     295168

 upscale_64_0_conv2d_leakyrelu (LeakyReLU)   (None, 32, 32, 256)                     0

 upscale_64_0_pixelshuffler (PixelShuffler)  (None, 64, 64, 64)                      0

 face_out_a_conv2d (Conv2D)                  (None, 64, 64, 3)                       4803

 face_out_a (Activation)                     (None, 64, 64, 3)                       0

====================================================================================================
Total params: 6,199,747
Trainable params: 6,199,747
Non-trainable params: 0
____________________________________________________________________________________________________
Model: "decoder_b"
____________________________________________________________________________________________________
 Layer (type)                                Output Shape                            Param #
====================================================================================================
 input_3 (InputLayer)                        [(None, 8, 8, 512)]                     0

 upscale_256_1_conv2d_conv2d (Conv2D)        (None, 8, 8, 1024)                      4719616

 upscale_256_1_conv2d_leakyrelu (LeakyReLU)  (None, 8, 8, 1024)                      0

 upscale_256_1_pixelshuffler (PixelShuffler)  (None, 16, 16, 256)                    0

 upscale_128_1_conv2d_conv2d (Conv2D)        (None, 16, 16, 512)                     1180160

 upscale_128_1_conv2d_leakyrelu (LeakyReLU)  (None, 16, 16, 512)                     0

 upscale_128_1_pixelshuffler (PixelShuffler)  (None, 32, 32, 128)                    0

 upscale_64_1_conv2d_conv2d (Conv2D)         (None, 32, 32, 256)                     295168

 upscale_64_1_conv2d_leakyrelu (LeakyReLU)   (None, 32, 32, 256)                     0

 upscale_64_1_pixelshuffler (PixelShuffler)  (None, 64, 64, 64)                      0

 face_out_b_conv2d (Conv2D)                  (None, 64, 64, 3)                       4803

 face_out_b (Activation)                     (None, 64, 64, 3)                       0

====================================================================================================
Total params: 6,199,747
Trainable params: 6,199,747
Non-trainable params: 0
____________________________________________________________________________________________________
Model: "original"
____________________________________________________________________________________________________
 Layer (type)                    Output Shape          Param #     Connected to
====================================================================================================
 face_in_a (InputLayer)          [(None, 64, 64, 3)]   0           []

 face_in_b (InputLayer)          [(None, 64, 64, 3)]   0           []

 encoder (Functional)            (None, 8, 8, 512)     69662976    ['face_in_a[0][0]',
                                                                    'face_in_b[0][0]']

 decoder_a (Functional)          (None, 64, 64, 3)     6199747     ['encoder[0][0]']

 decoder_b (Functional)          (None, 64, 64, 3)     6199747     ['encoder[1][0]']

====================================================================================================
Total params: 82,062,470
Trainable params: 82,062,470
Non-trainable params: 0
____________________________________________________________________________________________________
Process exited.
torzdf
Posts: 2687
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 135 times
Been thanked: 628 times

Re: After the first training I get a strange error

Post by torzdf »

You've probably got the "summary" option checked. Uncheck it and you'll be good to go.
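
For anyone who hits the same thing: the tables above are a normal Keras model summary, not an error. A trainer that exposes a summary option typically just prints the architecture and exits instead of training. The sketch below illustrates that pattern with a toy model; the --summary flag name and the model here are assumptions for illustration only, not Faceswap's actual code.

Code:

# Minimal sketch of a "summary"-style option in a Keras-based trainer.
# Flag name and model are illustrative assumptions, not Faceswap's code.
import argparse
from tensorflow import keras


def build_toy_model():
    # Stand-in for the real encoder/decoder networks.
    inputs = keras.Input(shape=(64, 64, 3))
    x = keras.layers.Conv2D(128, 5, strides=2, padding="same")(inputs)
    x = keras.layers.Flatten()(x)
    outputs = keras.layers.Dense(10)(x)
    return keras.Model(inputs, outputs, name="encoder")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--summary", action="store_true",
                        help="Print the model summary and exit")
    args = parser.parse_args()

    model = build_toy_model()
    if args.summary:
        model.summary(line_length=100)  # prints tables like the ones above
        return                          # stops here; no training happens
    # ... actual training loop would run here ...


if __name__ == "__main__":
    main()

With an option like that enabled, "Process exited." right after the summary tables is the expected result rather than a crash.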

My word is final

bismarckvier
Posts: 2
Joined: Thu Jun 22, 2023 5:59 pm
Has thanked: 1 time

Re: After the first training I get a strange error

Post by bismarckvier »

torzdf wrote: Fri Jun 23, 2023 1:59 am

You've probably got the "summary" option checked. Uncheck it and you'll be good to go.

Oh, I have solved it. Thank you very much.
