problem starting dlight model

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Locked
Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

problem starting dlight model

Post by Linhaohoward »

Hi, I just want to double-check something, as I think I might have read somewhere that dlight can't run with optimizer savings and multi-GPU. Is that the case? I can't seem to get my dlight model to start.

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: problem starting dlight model

Post by torzdf »

I don't believe that to be the case. As far as I know, there is nothing intrinsic to dlight that stops it behaving like any other model. What is the specific issue you are having?

My word is final

Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

Re: problem starting dlight model

Post by Linhaohoward »

Code: Select all

Loading...
Setting Faceswap backend to NVIDIA
02/05/2020 21:36:55 INFO     Log level set to: INFO
Using TensorFlow backend.
02/05/2020 21:36:57 INFO     Model A Directory: D:\Faceswap Images\A_src\A_aligned384x384
02/05/2020 21:36:57 INFO     Model B Directory: D:\Faceswap Images\fan_jilamika_src\combined_faceset
02/05/2020 21:36:57 INFO     Training data directory: D:\Faceswap Images\jilamikariona_dlightmod
02/05/2020 21:36:57 INFO     ===================================================
02/05/2020 21:36:57 INFO       Starting
02/05/2020 21:36:57 INFO       Press 'Stop' to save and quit
02/05/2020 21:36:57 INFO     ===================================================
02/05/2020 21:36:58 INFO     Loading data, this may take a while...
02/05/2020 21:36:58 INFO     Loading Model from Dlight plugin...
02/05/2020 21:36:58 INFO     Using Optimizer Savings
02/05/2020 21:36:58 INFO     No existing state file found. Generating.
02/05/2020 21:36:58 INFO     Using Convolutional Aware Initialization. Model generation will take a few minutes...
02/05/2020 21:36:58 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 1024, 512)
02/05/2020 21:36:59 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 1024, 128)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 256, 256)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 256, 64)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 32)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 64)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 16)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 32, 3)
02/05/2020 21:37:00 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 1024, 256)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 1024, 64)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 32)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 64)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 16)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 32, 32)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 32, 8)
02/05/2020 21:37:01 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 16, 1)
02/05/2020 21:37:04 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 1024, 9216)
02/05/2020 21:37:30 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 1024, 256)
02/05/2020 21:37:31 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:32 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:32 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:33 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:34 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:34 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:35 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 1024)
02/05/2020 21:37:37 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 256)
02/05/2020 21:37:37 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:38 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:38 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:39 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:52 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 512)
02/05/2020 21:37:53 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 128)
02/05/2020 21:37:53 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 256, 256)
02/05/2020 21:37:53 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 256, 256)
02/05/2020 21:37:53 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 256, 256)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 256, 64)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 32)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 64, 3)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 256)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 64)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 32)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 64)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 16)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 32, 32)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 32, 8)
02/05/2020 21:37:54 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 16, 1)
02/05/2020 21:37:58 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 3, 32)
02/05/2020 21:37:58 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 35, 64)
02/05/2020 21:37:58 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 99, 128)
02/05/2020 21:37:58 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 227, 256)
02/05/2020 21:37:58 INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 483, 512)
02/05/2020 21:38:04 INFO     Creating new 'dlight' model in folder: 'D:\Faceswap Images\jilamikariona_dlightmod'
02/05/2020 21:38:07 CRITICAL Error caught! Exiting...
02/05/2020 21:38:07 ERROR    Caught exception in thread: '_training_0'
02/05/2020 21:38:10 ERROR    Got Exception on main handler:
Traceback (most recent call last):
File "C:\Users\Howard\faceswap\lib\cli.py", line 128, in execute_script
process.process()
File "C:\Users\Howard\faceswap\scripts\train.py", line 159, in process
self._end_thread(thread, err)
File "C:\Users\Howard\faceswap\scripts\train.py", line 199, in _end_thread
thread.join()
File "C:\Users\Howard\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\Howard\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Howard\faceswap\scripts\train.py", line 224, in _training
raise err
File "C:\Users\Howard\faceswap\scripts\train.py", line 212, in _training
model = self._load_model()
File "C:\Users\Howard\faceswap\scripts\train.py", line 253, in _load_model
predict=False)
File "C:\Users\Howard\faceswap\plugins\train\model\dlight.py", line 85, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\Howard\faceswap\plugins\train\model\original.py", line 25, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\Howard\faceswap\plugins\train\model\_base.py", line 126, in __init__
self.build()
File "C:\Users\Howard\faceswap\plugins\train\model\dlight.py", line 132, in build
super().build()
File "C:\Users\Howard\faceswap\plugins\train\model\_base.py", line 248, in build
self.build_autoencoders(inputs)
File "C:\Users\Howard\faceswap\plugins\train\model\original.py", line 44, in build_autoencoders
self.add_predictor(side, autoencoder)
File "C:\Users\Howard\faceswap\plugins\train\model\_base.py", line 326, in add_predictor
model = multi_gpu_model(model, self.gpus)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\multi_gpu_utils.py", line 227, in multi_gpu_model
outputs = model(inputs)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 564, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 721, in run_internal_graph
layer.call(computed_tensor, **kwargs))
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 564, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 721, in run_internal_graph
layer.call(computed_tensor, **kwargs))
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\normalization.py", line 185, in call
epsilon=self.epsilon)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 1858, in normalize_batch_in_training
if not _has_nchw_support() and list(reduction_axes) == [0, 2, 3]:
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 291, in _has_nchw_support
explicitly_on_cpu = _is_current_explicit_device('CPU')
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 266, in _is_current_explicit_device
device = _get_current_tf_device()
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 247, in _get_current_tf_device
g._apply_device_functions(op)
File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\ops.py", line 4398, in _apply_device_functions
op._set_device_from_string(device_string)
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
02/05/2020 21:38:10 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\Howard\faceswap\crash_report.2020.02.05.213809959053.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.
Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

Re: problem starting dlight model

Post by Linhaohoward »

torzdf wrote: Tue Feb 04, 2020 11:55 pm

I don't believe that to be the case. As far as I know, there is nothing intrinsic to dlight that stops it behaving like any other model. What is the specific issue you are having?

Hi torzdf, the above is what I'm seeing. Below is the crash log:

Code: Select all

02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ConvolutionAware object at 0x0000024607A0CD08>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 256)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_8_concatenate/concat:0", shape=(?, 24, 24, 512), dtype=float32), filters: 64, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale2x_hyb_13_conv2d'})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024607A0C788>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 512, 64)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale2x_hyb_14
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       upscale                   DEBUG    inp: Tensor("upscale2x_hyb_13_concatenate/concat:0", shape=(?, 48, 48, 128), dtype=float32), filters: 32, kernel_size: 3, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale_48_3
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024606288788>
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_13_concatenate/concat:0", shape=(?, 48, 48, 128), dtype=float32), filters: 128, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_48_3_conv2d', 'kernel_initializer': <lib.model.initializers.ConvolutionAware object at 0x0000024606288788>})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ConvolutionAware object at 0x0000024606288788>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 128)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_13_concatenate/concat:0", shape=(?, 48, 48, 128), dtype=float32), filters: 32, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale2x_hyb_14_conv2d'})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024606282C48>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 128, 32)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale2x_hyb_15
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       upscale                   DEBUG    inp: Tensor("upscale2x_hyb_14_concatenate/concat:0", shape=(?, 96, 96, 64), dtype=float32), filters: 16, kernel_size: 3, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale_96_3
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x00000246062B4D08>
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_14_concatenate/concat:0", shape=(?, 96, 96, 64), dtype=float32), filters: 64, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_96_3_conv2d', 'kernel_initializer': <lib.model.initializers.ConvolutionAware object at 0x00000246062B4D08>})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ConvolutionAware object at 0x00000246062B4D08>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 64)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_14_concatenate/concat:0", shape=(?, 96, 96, 64), dtype=float32), filters: 16, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale2x_hyb_15_conv2d'})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x00000246062B4F08>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 64, 16)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale2x_hyb_16
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       upscale                   DEBUG    inp: Tensor("upscale2x_hyb_15_concatenate/concat:0", shape=(?, 192, 192, 32), dtype=float32), filters: 8, kernel_size: 3, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale_192_3
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x00000246047FCA08>
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_15_concatenate/concat:0", shape=(?, 192, 192, 32), dtype=float32), filters: 32, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_192_3_conv2d', 'kernel_initializer': <lib.model.initializers.ConvolutionAware object at 0x00000246047FCA08>})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ConvolutionAware object at 0x00000246047FCA08>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 32, 32)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_15_concatenate/concat:0", shape=(?, 192, 192, 32), dtype=float32), filters: 8, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale2x_hyb_16_conv2d'})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x00000246047EC588>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (3, 3, 32, 8)
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("upscale2x_hyb_16_concatenate/concat:0", shape=(?, 384, 384, 16), dtype=float32), filters: 1, kernel_size: 5, strides: (1, 1), padding: same, kwargs: {'activation': 'sigmoid', 'name': 'mask_out'})
02/05/2020 21:37:54 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x00000246044E5FC8>
02/05/2020 21:37:54 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 16, 1)
02/05/2020 21:37:54 MainProcess     _training_0     _base           add_network               DEBUG    network_type: 'decoder', side: 'b', network: '<keras.engine.training.Model object at 0x00000246044EE9C8>', is_output: True
02/05/2020 21:37:54 MainProcess     _training_0     _base           name                      DEBUG    model name: 'dlight'
02/05/2020 21:37:54 MainProcess     _training_0     _base           add_network               DEBUG    name: 'decoder_b', filename: 'dlight_decoder_B.h5'
02/05/2020 21:37:54 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing NNMeta: (filename: 'D:\Faceswap Images\jilamikariona_dlightmod\dlight_decoder_B.h5', network_type: 'decoder', side: 'b', network: <keras.engine.training.Model object at 0x00000246044EE9C8>, is_output: True
02/05/2020 21:37:58 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized NNMeta
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv                      DEBUG    inp: Tensor("input_3:0", shape=(?, 128, 128, 3), dtype=float32), filters: 32, kernel_size: 5, strides: 2, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: conv_128_0
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("input_3:0", shape=(?, 128, 128, 3), dtype=float32), filters: 32, kernel_size: 5, strides: 2, padding: same, kwargs: {'name': 'conv_128_0_conv2d'})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024604D10DC8>
02/05/2020 21:37:58 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 3, 32)
02/05/2020 21:37:58 MainProcess     _training_0     module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:3980: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv                      DEBUG    inp: Tensor("concatenate_1/concat:0", shape=(?, 64, 64, 35), dtype=float32), filters: 64, kernel_size: 5, strides: 2, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: conv_64_0
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("concatenate_1/concat:0", shape=(?, 64, 64, 35), dtype=float32), filters: 64, kernel_size: 5, strides: 2, padding: same, kwargs: {'name': 'conv_64_0_conv2d'})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x000002460481A348>
02/05/2020 21:37:58 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 35, 64)
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv                      DEBUG    inp: Tensor("concatenate_2/concat:0", shape=(?, 32, 32, 99), dtype=float32), filters: 128, kernel_size: 5, strides: 2, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: conv_32_0
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("concatenate_2/concat:0", shape=(?, 32, 32, 99), dtype=float32), filters: 128, kernel_size: 5, strides: 2, padding: same, kwargs: {'name': 'conv_32_0_conv2d'})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024607A43C08>
02/05/2020 21:37:58 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 99, 128)
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv                      DEBUG    inp: Tensor("concatenate_3/concat:0", shape=(?, 16, 16, 227), dtype=float32), filters: 256, kernel_size: 5, strides: 2, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: conv_16_0
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("concatenate_3/concat:0", shape=(?, 16, 16, 227), dtype=float32), filters: 256, kernel_size: 5, strides: 2, padding: same, kwargs: {'name': 'conv_16_0_conv2d'})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024607A4B708>
02/05/2020 21:37:58 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 227, 256)
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv                      DEBUG    inp: Tensor("concatenate_4/concat:0", shape=(?, 8, 8, 483), dtype=float32), filters: 512, kernel_size: 5, strides: 2, use_instance_norm: False, kwargs: {})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: conv_8_0
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: Tensor("concatenate_4/concat:0", shape=(?, 8, 8, 483), dtype=float32), filters: 512, kernel_size: 5, strides: 2, padding: same, kwargs: {'name': 'conv_8_0_conv2d'})
02/05/2020 21:37:58 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000024607A69D08>
02/05/2020 21:37:58 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: (5, 5, 483, 512)
02/05/2020 21:38:00 MainProcess     _training_0     module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n
02/05/2020 21:38:00 MainProcess     _training_0     deprecation     new_func                  DEBUG    From C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
02/05/2020 21:38:00 MainProcess     _training_0     _base           add_network               DEBUG    network_type: 'encoder', side: 'None', network: '<keras.engine.training.Model object at 0x0000024829588E88>', is_output: False
02/05/2020 21:38:00 MainProcess     _training_0     _base           name                      DEBUG    model name: 'dlight'
02/05/2020 21:38:00 MainProcess     _training_0     _base           add_network               DEBUG    name: 'encoder', filename: 'dlight_encoder.h5'
02/05/2020 21:38:00 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing NNMeta: (filename: 'D:\Faceswap Images\jilamikariona_dlightmod\dlight_encoder.h5', network_type: 'encoder', side: 'None', network: <keras.engine.training.Model object at 0x0000024829588E88>, is_output: False
02/05/2020 21:38:04 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized NNMeta
02/05/2020 21:38:04 MainProcess     _training_0     dlight          add_networks              DEBUG    Added networks
02/05/2020 21:38:04 MainProcess     _training_0     _base           load_models               DEBUG    Load model: (swapped: False)
02/05/2020 21:38:04 MainProcess     _training_0     _base           models_exist              DEBUG    Pre-existing models exist: False
02/05/2020 21:38:04 MainProcess     _training_0     _base           name                      DEBUG    model name: 'dlight'
02/05/2020 21:38:04 MainProcess     _training_0     _base           load_models               INFO     Creating new 'dlight' model in folder: 'D:\Faceswap Images\jilamikariona_dlightmod'
02/05/2020 21:38:04 MainProcess     _training_0     _base           get_inputs                DEBUG    Getting inputs
02/05/2020 21:38:04 MainProcess     _training_0     _base           get_inputs                DEBUG    Got inputs: [<tf.Tensor 'face_in:0' shape=(?, 128, 128, 3) dtype=float32>, <tf.Tensor 'mask_in:0' shape=(?, 384, 384, 1) dtype=float32>]
02/05/2020 21:38:04 MainProcess     _training_0     original        build_autoencoders        DEBUG    Initializing model
02/05/2020 21:38:04 MainProcess     _training_0     original        build_autoencoders        DEBUG    Adding Autoencoder. Side: a
02/05/2020 21:38:04 MainProcess     _training_0     _base           add_predictor             DEBUG    Adding predictor: (side: 'a', model: <keras.engine.training.Model object at 0x0000024607467888>)
02/05/2020 21:38:04 MainProcess     _training_0     _base           add_predictor             DEBUG    Converting to multi-gpu: side a
02/05/2020 21:38:04 MainProcess     _training_0     _base           store_input_shapes        DEBUG    Adding input shapes to state for model
02/05/2020 21:38:04 MainProcess     _training_0     _base           store_input_shapes        DEBUG    Added input shapes: {'face_in:0': (128, 128, 3), 'mask_in:0': (384, 384, 1)}
02/05/2020 21:38:04 MainProcess     _training_0     original        build_autoencoders        DEBUG    Adding Autoencoder. Side: b
02/05/2020 21:38:04 MainProcess     _training_0     _base           add_predictor             DEBUG    Adding predictor: (side: 'b', model: <keras.engine.training.Model object at 0x00000246073F3E48>)
02/05/2020 21:38:04 MainProcess     _training_0     _base           add_predictor             DEBUG    Converting to multi-gpu: side b
02/05/2020 21:38:07 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
02/05/2020 21:38:07 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
02/05/2020 21:38:07 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
02/05/2020 21:38:07 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
02/05/2020 21:38:07 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
02/05/2020 21:38:07 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
02/05/2020 21:38:07 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
02/05/2020 21:38:07 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\Howard\faceswap\lib\cli.py", line 128, in execute_script
    process.process()
  File "C:\Users\Howard\faceswap\scripts\train.py", line 159, in process
    self._end_thread(thread, err)
  File "C:\Users\Howard\faceswap\scripts\train.py", line 199, in _end_thread
    thread.join()
  File "C:\Users\Howard\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\Howard\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Howard\faceswap\scripts\train.py", line 224, in _training
    raise err
  File "C:\Users\Howard\faceswap\scripts\train.py", line 212, in _training
    model = self._load_model()
  File "C:\Users\Howard\faceswap\scripts\train.py", line 253, in _load_model
    predict=False)
  File "C:\Users\Howard\faceswap\plugins\train\model\dlight.py", line 85, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\Howard\faceswap\plugins\train\model\original.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\Howard\faceswap\plugins\train\model\_base.py", line 126, in __init__
    self.build()
  File "C:\Users\Howard\faceswap\plugins\train\model\dlight.py", line 132, in build
    super().build()
  File "C:\Users\Howard\faceswap\plugins\train\model\_base.py", line 248, in build
    self.build_autoencoders(inputs)
  File "C:\Users\Howard\faceswap\plugins\train\model\original.py", line 44, in build_autoencoders
    self.add_predictor(side, autoencoder)
  File "C:\Users\Howard\faceswap\plugins\train\model\_base.py", line 326, in add_predictor
    model = multi_gpu_model(model, self.gpus)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\multi_gpu_utils.py", line 227, in multi_gpu_model
    outputs = model(inputs)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 564, in call
    output_tensors, _, _ = self.run_internal_graph(inputs, masks)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 721, in run_internal_graph
    layer.call(computed_tensor, **kwargs))
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 564, in call
    output_tensors, _, _ = self.run_internal_graph(inputs, masks)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 721, in run_internal_graph
    layer.call(computed_tensor, **kwargs))
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\normalization.py", line 185, in call
    epsilon=self.epsilon)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 1858, in normalize_batch_in_training
    if not _has_nchw_support() and list(reduction_axes) == [0, 2, 3]:
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 291, in _has_nchw_support
    explicitly_on_cpu = _is_current_explicit_device('CPU')
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 266, in _is_current_explicit_device
    device = _get_current_tf_device()
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 247, in _get_current_tf_device
    g._apply_device_functions(op)
  File "C:\Users\Howard\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\ops.py", line 4398, in _apply_device_functions
    op._set_device_from_string(device_string)
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         76bf610 plugins.extract.mask - Enable allow_growth option for mask tool
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: GeForce RTX 2070 SUPER, GPU_1: GeForce RTX 2070 SUPER
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          442.19
gpu_vram:            GPU_0: 8192MB, GPU_1: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.18362-SP0
os_release:          10
py_command:          C:\Users\Howard\faceswap\faceswap.py train -A D:/Faceswap Images/A_src/A_aligned384x384 -ala D:/Faceswap Images/A_src/alignments.fsa -B D:/Faceswap Images/fan_jilamika_src/combined_faceset -alb D:/Faceswap Images/fan_jilamika_src/alignments_merged_20200204_043810.fsa -m D:/Faceswap Images/jilamikariona_dlightmod -t dlight -bs 4 -it 5000000 -g 2 -o -s 1000 -ss 50000 -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.1
py_implementation:   CPython
py_version:          3.7.6
py_virtual_env:      True
sys_cores:           24
sys_processor:       AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram:             Total: 32697MB, Available: 21252MB, Used: 11445MB, Free: 21252MB

=============== Pip Packages ===============
absl-py==0.8.1
astor==0.8.0
certifi==2019.11.28
cloudpickle==1.2.2
cycler==0.10.0
cytoolz==0.10.1
dask==2.10.0
decorator==4.4.1
fastcluster==1.1.26
ffmpy==0.2.2
gast==0.2.2
google-pasta==0.1.8
grpcio==1.16.1
h5py==2.9.0
imageio==2.6.1
imageio-ffmpeg==0.3.0
joblib==0.14.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==3.1.1
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.17.4
nvidia-ml-py3==7.352.1
olefile==0.46
opencv-python==4.1.2.30
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==6.2.1
protobuf==3.11.2
psutil==5.6.7
pyparsing==2.4.6
pyreadline==2.1
python-dateutil==2.8.1
pytz==2019.3
PyWavelets==1.1.1
pywin32==227
PyYAML==5.2
scikit-image==0.15.0
scikit-learn==0.22.1
scipy==1.3.2
six==1.14.0
tensorboard==2.0.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.3
tqdm==4.42.0
Werkzeug==0.16.0
wincertstore==0.2
wrapt==1.11.2

============== Conda Packages ==============
# packages in environment at C:\Users\Howard\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
_tflow_select             2.1.0                       gpu  
absl-py 0.8.1 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
ca-certificates 2019.11.27 0
certifi 2019.11.28 py37_0
cloudpickle 1.2.2 py_0
cudatoolkit 10.0.130 0
cudnn 7.6.5 cuda10.0_0
cycler 0.10.0 py37_0
cytoolz 0.10.1 py37he774522_0
dask-core 2.10.0 py_0
decorator 4.4.1 py_0
fastcluster 1.1.26 py37he350917_0 conda-forge
ffmpeg 4.2 h6538335_0 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
google-pasta 0.1.8 py_0
grpcio 1.16.1 py37h351948d_1
h5py 2.9.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
imageio 2.6.1 py37_0
imageio-ffmpeg 0.3.0 py_0 conda-forge
intel-openmp 2019.4 245
joblib 0.14.1 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py37ha925a31_0
libmklml 2019.0.5 0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.11.2 h7bd577a_0
libtiff 4.1.0 h56a325e_0
markdown 3.1.1 py37_0
matplotlib 3.1.1 py37hc8f65d3_0
mkl 2019.4 245
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.0.15 py37h14836fe_0
mkl_random 1.1.0 py37h675688f_0
networkx 2.4 py_0
numpy 1.17.4 py37h4320e6b_0
numpy-base 1.17.4 py37hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
olefile 0.46 py37_0
opencv-python 4.1.2.30 pypi_0 pypi
openssl 1.1.1d he774522_3
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py37_1
pillow 6.2.1 py37hdc69c19_0
pip 20.0.2 py37_0
protobuf 3.11.2 py37h33f27b4_0
psutil 5.6.7 py37he774522_0
pyparsing 2.4.6 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
python 3.7.6 h60c2a47_2
python-dateutil 2.8.1 py_0
pytz 2019.3 py_0
pywavelets 1.1.1 py37he774522_0
pywin32 227 py37he774522_1
pyyaml 5.2 py37he774522_0
qt 5.9.7 vc14h73c81de_0
scikit-image 0.15.0 py37ha925a31_0
scikit-learn 0.22.1 py37h6288b17_0
scipy 1.3.2 py37h29ff71c_0
setuptools 45.1.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.14.0 py37_0
sqlite 3.30.1 he774522_0
tensorboard 2.0.0 pyhb38c66f_1
tensorflow 1.15.0 gpu_py37hc3743a6_0
tensorflow-base 1.15.0 gpu_py37h1afeea4_0
tensorflow-estimator 1.15.1 pyh2649769_0
tensorflow-gpu 1.15.0 h0d30ee6_0
termcolor 1.1.0 py37_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.3 py37he774522_0
tqdm 4.42.0 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_1
werkzeug 0.16.0 py_0
wheel 0.33.6 py37_0
wincertstore 0.2 py37_0
wrapt 1.11.2 py37he774522_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0

================= Configs ==================
--------- .faceswap ---------
backend: nvidia

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: gaussian
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: jpg
draw_transparent: False
jpg_quality: 85
png_compress_level: 3

[writer.pillow]
format: jpg
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 95
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: True

[align.fan]
batch-size: 64

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 75
batch-size: 64

[mask.unet_dfl]
batch-size: 64

[mask.vgg_clear]
batch-size: 64

[mask.vgg_obstructed]
batch-size: 64

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
coverage: 72.0
mask_type: vgg-obstructed
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: True
icnr_init: False
conv_aware_init: True
subpixel_upscaling: False
reflect_padding: False
penalized_mask_loss: True
loss_function: mae
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 256
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: True

[model.dlight]
features: best
details: good
output_size: 384

[model.original]
lowmem: False

[model.realface]
input_size: 128
output_size: 256
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

Re: problem starting dlight model

Post by Linhaohoward »

Hi, I can start dlight at 256 output with a single GPU at batch size 16, but I can't get multi-GPU to work. I have 2x RTX 2070 SUPER. Once I change to 2 GPUs, the model won't even start.

I also tried training 384 output on dlight with a single GPU, since 256 on a single GPU worked, but that fails to start as well. My extracted faceset is 384x384, if that is any help.

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: problem starting dlight model

Post by torzdf »

OK, the problem is with Keras and TF 1.15 with multi-GPU.

If you downgrade TensorFlow to 1.13.1, you should get it working fine.
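
For reference, you can confirm which versions the environment is actually using with something like the following. This is just a sanity check (it assumes the Conda environment is named "faceswap", as the installer sets up); your crash report already shows TensorFlow 1.15.0 with Keras 2.2.4, which is the combination hitting this.

Code: Select all

REM assumes the installer's default environment name "faceswap"
conda activate faceswap
python -c "import tensorflow as tf, keras; print(tf.__version__, keras.__version__)"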

My word is final

Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

Re: problem starting dlight model

Post by Linhaohoward »

torzdf wrote: Wed Feb 05, 2020 2:48 pm

OK, the problem is with Keras and TF 1.15 with multi-GPU.

If you downgrade TensorFlow to 1.13.1, you should get it working fine.

Is there a guide somewhere that shows how to downgrade TensorFlow to that version?

Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

Re: problem starting dlight model

Post by Linhaohoward »

Should I also downgrade my CUDA? Since faceswap auto-downloads the needed packages and pip installed 1.15, is it because I'm running CUDA 10.2?

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: problem starting dlight model

Post by torzdf »

Assuming you have used the installer (and it looks like you have), I would just let Conda handle it.

Off the top of my head (so I apologize if I miss a step):

Start > Anaconda Prompt
Inside your anaconda prompt:

Code: Select all

conda activate faceswap
conda remove tensorflow*
conda remove cudatoolkit
conda remove cudnn 

conda install tensorflow-gpu==1.13
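
Once that completes, a quick check along these lines should confirm the downgrade took (just a sanity check, run in the same activated prompt; it should now report 1.13.x rather than 1.15.0):

Code: Select all

REM run inside the activated faceswap environment
conda list tensorflow
python -c "import tensorflow as tf; print(tf.__version__)"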

My word is final

Linhaohoward
Posts: 23
Joined: Sat Dec 21, 2019 1:23 pm
Has thanked: 3 times

Re: problem starting dlight model

Post by Linhaohoward »

torzdf wrote: Wed Feb 05, 2020 3:44 pm

Assuming you have used the installer (and it looks like you have), I would just let Conda handle it.

Off the top of my head (so I apologize if I miss a step):

Start > Anaconda Prompt
Inside your anaconda prompt:

Code: Select all

conda activate faceswap
conda remove tensorflow*
conda remove cudatoolkit
conda remove cudnn 

conda install tensorflow-gpu==1.13

Thanks torzdf. I followed your instructions and installed 1.13:
conda remove tensorflow* ---------- worked
conda remove cudatoolkit and conda remove cudnn didn't work, though.
Will this be a problem?

conda install tensorflow-gpu=1.13 ----------- worked

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: problem starting dlight model

Post by torzdf »

To be honest, Conda should be clever enough to work it out. Try it, and if you still have problems, let us know.
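
If you want to double-check what Conda ended up doing, something like this from the Anaconda Prompt will show what is actually installed in the environment (purely a sanity check; the exact cudatoolkit/cudnn versions will be whatever the solver picked to go with 1.13):

Code: Select all

conda activate faceswap
conda list tensorflow
conda list cudatoolkit
conda list cudnn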

My word is final

Locked