Train doesn't start

If training fails to start and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Locked
nasd123
Posts: 1
Joined: Fri Apr 30, 2021 4:42 pm

Train doesn't start

Post by nasd123 »

Hi, I am trying to train a new model, but the process ends without any error message.

Here's the log:

Loading...
Setting Faceswap backend to NVIDIA
05/04/2021 22:26:08 INFO Log level set to: INFO
05/04/2021 22:26:10 INFO Loading Model from Dfl_H128 plugin...
05/04/2021 22:26:10 INFO No existing state file found. Generating.
05/04/2021 22:26:10 INFO Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
05/04/2021 22:26:10 INFO Enabling Mixed Precision Training.
05/04/2021 22:26:10 INFO Mixed precision compatibility check (mixed_float16): OK
Your GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: NVIDIA GeForce RTX 3070, compute capability 8.6
05/04/2021 22:26:10 INFO Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
05/04/2021 22:26:10 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 3, 128)
05/04/2021 22:26:10 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 128, 256)
05/04/2021 22:26:10 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 256, 512)
05/04/2021 22:26:11 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 512, 1024)
05/04/2021 22:26:13 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
05/04/2021 22:26:14 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
05/04/2021 22:26:14 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 256]
05/04/2021 22:26:14 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 256, 128]
05/04/2021 22:26:15 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 128, 3)
05/04/2021 22:26:15 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
05/04/2021 22:26:15 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 256]
05/04/2021 22:26:15 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 256, 128]
05/04/2021 22:26:15 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 128, 1)
05/04/2021 22:26:16 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
05/04/2021 22:26:17 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 256]
05/04/2021 22:26:17 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 256, 128]
05/04/2021 22:26:17 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 128, 3)
05/04/2021 22:26:17 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
05/04/2021 22:26:18 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 256]
05/04/2021 22:26:18 INFO Calculating Convolution Aware Initializer for shape: [3, 3, 256, 128]
05/04/2021 22:26:18 INFO Calculating Convolution Aware Initializer for shape: (5, 5, 128, 1)
Model: "dfl_h128"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
face_in_a (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
face_in_b (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
encoder (Functional) (None, 16, 16, 512) 77018880 face_in_a[0][0]
face_in_b[0][0]
__________________________________________________________________________________________________
decoder_a (Functional) [(None, 128, 128, 3) 30690820 encoder[0][0]
__________________________________________________________________________________________________
decoder_b (Functional) [(None, 128, 128, 3) 30690820 encoder[1][0]
==================================================================================================
Total params: 138,400,520
Trainable params: 138,400,520
Non-trainable params: 0
__________________________________________________________________________________________________
Model: "encoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 3)] 0
_________________________________________________________________
conv_128_0_conv2d (Conv2D) (None, 64, 64, 128) 9728
_________________________________________________________________
conv_128_0_leakyrelu (LeakyR (None, 64, 64, 128) 0
_________________________________________________________________
conv_256_0_conv2d (Conv2D) (None, 32, 32, 256) 819456
_________________________________________________________________
conv_256_0_leakyrelu (LeakyR (None, 32, 32, 256) 0
_________________________________________________________________
conv_512_0_conv2d (Conv2D) (None, 16, 16, 512) 3277312
_________________________________________________________________
conv_512_0_leakyrelu (LeakyR (None, 16, 16, 512) 0
_________________________________________________________________
conv_1024_0_conv2d (Conv2D) (None, 8, 8, 1024) 13108224
_________________________________________________________________
conv_1024_0_leakyrelu (Leaky (None, 8, 8, 1024) 0
_________________________________________________________________
flatten (Flatten) (None, 65536) 0
_________________________________________________________________
dense (Dense) (None, 512) 33554944
_________________________________________________________________
dense_1 (Dense) (None, 32768) 16809984
_________________________________________________________________
reshape (Reshape) (None, 8, 8, 512) 0
_________________________________________________________________
upscale_512_0_conv2d_conv2d (None, 8, 8, 2048) 9439232
_________________________________________________________________
upscale_512_0_conv2d_leakyre (None, 8, 8, 2048) 0
_________________________________________________________________
upscale_512_0_pixelshuffler (None, 16, 16, 512) 0
=================================================================
Total params: 77,018,880
Trainable params: 77,018,880
Non-trainable params: 0
_________________________________________________________________
Model: "decoder_a"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, 16, 16, 512) 0
__________________________________________________________________________________________________
upscale_512_1_conv2d_conv2d (Co (None, 16, 16, 2048) 9439232 input_2[0][0]
__________________________________________________________________________________________________
upscale_512_2_conv2d_conv2d (Co (None, 16, 16, 2048) 9439232 input_2[0][0]
__________________________________________________________________________________________________
upscale_512_1_conv2d_leakyrelu (None, 16, 16, 2048) 0 upscale_512_1_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_512_2_conv2d_leakyrelu (None, 16, 16, 2048) 0 upscale_512_2_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_512_1_pixelshuffler (Pi (None, 32, 32, 512) 0 upscale_512_1_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_512_2_pixelshuffler (Pi (None, 32, 32, 512) 0 upscale_512_2_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_256_0_conv2d_conv2d (Co (None, 32, 32, 1024) 4719616 upscale_512_1_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_256_1_conv2d_conv2d (Co (None, 32, 32, 1024) 4719616 upscale_512_2_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_256_0_conv2d_leakyrelu (None, 32, 32, 1024) 0 upscale_256_0_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_256_1_conv2d_leakyrelu (None, 32, 32, 1024) 0 upscale_256_1_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_256_0_pixelshuffler (Pi (None, 64, 64, 256) 0 upscale_256_0_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_256_1_pixelshuffler (Pi (None, 64, 64, 256) 0 upscale_256_1_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_128_0_conv2d_conv2d (Co (None, 64, 64, 512) 1180160 upscale_256_0_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_128_1_conv2d_conv2d (Co (None, 64, 64, 512) 1180160 upscale_256_1_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_128_0_conv2d_leakyrelu (None, 64, 64, 512) 0 upscale_128_0_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_128_1_conv2d_leakyrelu (None, 64, 64, 512) 0 upscale_128_1_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_128_0_pixelshuffler (Pi (None, 128, 128, 128 0 upscale_128_0_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_128_1_pixelshuffler (Pi (None, 128, 128, 128 0 upscale_128_1_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
face_out_a_conv2d (Conv2D) (None, 128, 128, 3) 9603 upscale_128_0_pixelshuffler[0][0]
__________________________________________________________________________________________________
mask_out_a_conv2d (Conv2D) (None, 128, 128, 1) 3201 upscale_128_1_pixelshuffler[0][0]
__________________________________________________________________________________________________
face_out_a (Activation) (None, 128, 128, 3) 0 face_out_a_conv2d[0][0]
__________________________________________________________________________________________________
mask_out_a (Activation) (None, 128, 128, 1) 0 mask_out_a_conv2d[0][0]
==================================================================================================
Total params: 30,690,820
Trainable params: 30,690,820
Non-trainable params: 0
__________________________________________________________________________________________________
Model: "decoder_b"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) [(None, 16, 16, 512) 0
__________________________________________________________________________________________________
upscale_512_3_conv2d_conv2d (Co (None, 16, 16, 2048) 9439232 input_3[0][0]
__________________________________________________________________________________________________
upscale_512_4_conv2d_conv2d (Co (None, 16, 16, 2048) 9439232 input_3[0][0]
__________________________________________________________________________________________________
upscale_512_3_conv2d_leakyrelu (None, 16, 16, 2048) 0 upscale_512_3_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_512_4_conv2d_leakyrelu (None, 16, 16, 2048) 0 upscale_512_4_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_512_3_pixelshuffler (Pi (None, 32, 32, 512) 0 upscale_512_3_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_512_4_pixelshuffler (Pi (None, 32, 32, 512) 0 upscale_512_4_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_256_2_conv2d_conv2d (Co (None, 32, 32, 1024) 4719616 upscale_512_3_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_256_3_conv2d_conv2d (Co (None, 32, 32, 1024) 4719616 upscale_512_4_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_256_2_conv2d_leakyrelu (None, 32, 32, 1024) 0 upscale_256_2_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_256_3_conv2d_leakyrelu (None, 32, 32, 1024) 0 upscale_256_3_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_256_2_pixelshuffler (Pi (None, 64, 64, 256) 0 upscale_256_2_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_256_3_pixelshuffler (Pi (None, 64, 64, 256) 0 upscale_256_3_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_128_2_conv2d_conv2d (Co (None, 64, 64, 512) 1180160 upscale_256_2_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_128_3_conv2d_conv2d (Co (None, 64, 64, 512) 1180160 upscale_256_3_pixelshuffler[0][0]
__________________________________________________________________________________________________
upscale_128_2_conv2d_leakyrelu (None, 64, 64, 512) 0 upscale_128_2_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_128_3_conv2d_leakyrelu (None, 64, 64, 512) 0 upscale_128_3_conv2d_conv2d[0][0]
__________________________________________________________________________________________________
upscale_128_2_pixelshuffler (Pi (None, 128, 128, 128 0 upscale_128_2_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
upscale_128_3_pixelshuffler (Pi (None, 128, 128, 128 0 upscale_128_3_conv2d_leakyrelu[0]
__________________________________________________________________________________________________
face_out_b_conv2d (Conv2D) (None, 128, 128, 3) 9603 upscale_128_2_pixelshuffler[0][0]
__________________________________________________________________________________________________
mask_out_b_conv2d (Conv2D) (None, 128, 128, 1) 3201 upscale_128_3_pixelshuffler[0][0]
__________________________________________________________________________________________________
face_out_b (Activation) (None, 128, 128, 3) 0 face_out_b_conv2d[0][0]
__________________________________________________________________________________________________
mask_out_b (Activation) (None, 128, 128, 1) 0 mask_out_b_conv2d[0][0]
==================================================================================================
Total params: 30,690,820
Trainable params: 30,690,820
Non-trainable params: 0
__________________________________________________________________________________________________
Process exited.


torzdf
Posts: 1495
Joined: Fri Jul 12, 2019 12:53 am
Answers: 127
Has thanked: 51 times
Been thanked: 287 times

Re: Train doesn't start

Post by torzdf »

Disable the "summary" option. When it is enabled, Faceswap only prints the model summary (which is exactly what your log shows) and then exits without training.
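
For anyone launching from the command line rather than the GUI, the same option is the `-su` / `--summary` flag. A minimal sketch of a train command with the flag left off (the paths, trainer name, and batch size here are placeholders for illustration, not taken from the original post):

```shell
# Train normally: note there is NO -su / --summary flag on this line.
# Passing -su would print the model graph and exit, which looks like
# "training ends without errors" in the log.
python faceswap.py train \
    -A /path/to/faces_a \
    -B /path/to/faces_b \
    -m /path/to/model_dir \
    -t dfl-h128 \
    -b 8
```

In the GUI, the equivalent is the Summary checkbox on the Train tab; untick it and start training again.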

My word is final

