Need Help! Training Error.

If training fails to start and you are not receiving an error message telling you what to do, tell us about it here.


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Jetpackjules
Posts: 4
Joined: Wed Jul 29, 2020 6:45 pm

Need Help! Training Error.

Post by Jetpackjules »

Hello,
I followed the guide to the letter, but when I ran my training it crashed a few seconds in. Here is what the GUI said:

Code:

Loading...
Setting Faceswap backend to AMD
07/29/2020 11:58:23 INFO     Log level set to: INFO
07/29/2020 11:58:23 INFO     Setting up for PlaidML
07/29/2020 11:58:24 INFO     Setting GPU to largest available supported device. If you want to override this selection, run `plaidml-setup` from the command line.
07/29/2020 11:58:24 INFO     Using GPU: ['opencl_amd_ellesmere.0', 'opencl_amd_ellesmere.0']
07/29/2020 11:58:24 INFO     Successfully set up for PlaidML
Using plaidml.keras.backend backend.
07/29/2020 11:58:27 INFO     Model A Directory: C:\Users\Jetpackjules\Documents\Elon_musk_unrefined_output
07/29/2020 11:58:27 INFO     Model B Directory: C:\Users\Jetpackjules\Documents\training
07/29/2020 11:58:27 INFO     Training data directory: C:\Users\Jetpackjules\Documents\model
07/29/2020 11:58:27 INFO     ===================================================
07/29/2020 11:58:27 INFO       Starting
07/29/2020 11:58:27 INFO       Press 'Stop' to save and quit
07/29/2020 11:58:27 INFO     ===================================================
07/29/2020 11:58:28 INFO     Loading data, this may take a while...
07/29/2020 11:58:28 INFO     Loading Model from Original plugin...
07/29/2020 11:58:28 INFO     No existing state file found. Generating.
07/29/2020 11:58:28 INFO     Opening device "opencl_amd_ellesmere.0"
07/29/2020 11:58:30 INFO     Creating new 'original' model in folder: 'C:\Users\Jetpackjules\Documents\model'
07/29/2020 11:58:30 INFO     Loading Trainer from Original plugin...
07/29/2020 11:58:30 INFO     Enabled TensorBoard Logging
07/29/2020 11:58:31 CRITICAL Error caught! Exiting...
07/29/2020 11:58:31 ERROR    Caught exception in thread: '_training_0'
07/29/2020 11:58:34 ERROR    Got Exception on main handler:
Traceback (most recent call last):
File "C:\Users\Jetpackjules\faceswap\lib\cli\launcher.py", line 155, in execute_script
process.process()
File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 161, in process
self._end_thread(thread, err)
File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 201, in _end_thread
thread.join()
File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 226, in _training
raise err
File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 216, in _training
self._run_training_cycle(model, trainer)
File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 305, in _run_training_cycle
trainer.train_one_step(viewer, timelapse)
File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 316, in train_one_step
raise err
File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 283, in train_one_step
loss[side] = batcher.train_one_batch()
File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 424, in train_one_batch
loss = self._model.predictors[self._side].train_on_batch(model_inputs, model_targets)
File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 1211, in train_on_batch
class_weight=class_weight)
File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training_utils.py", line 102, in standardize_input_data
str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.34117648, 0.30588236, 0.2627451 ],
[0.34117648, 0.30588236, 0.2627451 ],
[0.34509805, 0.30980393, 0.27058825],
...,
[0.5137255 , 0.57254905, 0.6627451 ...
07/29/2020 11:58:34 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\Jetpackjules\faceswap\crash_report.2020.07.29.115831695773.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.
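For context on the ValueError above: the model was built with two outputs (face and mask — the log shows `learn_mask: True` in the model config), so Keras expects one target array per output, but the data generator supplied only one. A minimal sketch (hypothetical code, not the actual faceswap or Keras source) of the length check that raises this error:

```python
def standardize_targets(targets, output_names):
    """Mimic the check Keras makes in standardize_input_data:
    train_on_batch must receive one target array per model output."""
    if len(targets) != len(output_names):
        raise ValueError(
            "Error when checking model target: expected to see "
            f"{len(output_names)} array(s), but instead got "
            f"{len(targets)} arrays")
    return targets


# Model built with mask learning enabled -> two outputs expected.
model_outputs = ["face", "mask"]

# Generator yielded only the face target (no mask array).
batch_targets = [[[0.34117648, 0.30588236, 0.2627451]]]

try:
    standardize_targets(batch_targets, model_outputs)
except ValueError as err:
    print(err)  # matches the "Expected to see 2 array(s)" message above
```

This usually points at a mismatch between how the model was configured and what the training feed produces (note the log's `training_opts` has `learn_mask: False` while the model config has `learn_mask: True`), rather than a problem with the input images themselves.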

And here is my error log file:

Code:

07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
07/29/2020 11:03:20 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 2856, side: 'a', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
07/29/2020 11:03:20 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 2856, side: 'a', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'a')
07/29/2020 11:03:20 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: a
07/29/2020 11:03:20 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)]
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3), (64, 64, 1)], training_opts: {'alignments': {'a': 'C:\\Users\\Jetpackjules\\Downloads\\Elon_Musk\\Elon_Musk_Trim_alignments.fsa', 'b': 'C:\\Users\\Jetpackjules\\Documents\\training\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
07/29/2020 11:03:20 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 2856, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
07/29/2020 11:03:20 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 2856, side: 'a', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
07/29/2020 11:03:20 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 2856, side: 'a', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Set preview feed. Batchsize: 14
07/29/2020 11:03:20 MainProcess     _training_0     _base           _use_mask                 DEBUG    False
07/29/2020 11:03:20 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing Batcher: side: 'b', num_images: 3999, use_mask: False, batch_size: 64, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: b
07/29/2020 11:03:20 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)]
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3), (64, 64, 1)], training_opts: {'alignments': {'a': 'C:\\Users\\Jetpackjules\\Downloads\\Elon_Musk\\Elon_Musk_Trim_alignments.fsa', 'b': 'C:\\Users\\Jetpackjules\\Documents\\training\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
07/29/2020 11:03:20 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 3999, batchsize: 64, side: 'b', do_shuffle: True, is_preview, False, is_timelapse: False)
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 64, is_display: False, input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
07/29/2020 11:03:20 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 3999, side: 'b', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
07/29/2020 11:03:20 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 3999, side: 'b', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'b')
07/29/2020 11:03:20 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: b
07/29/2020 11:03:20 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)]
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3), (64, 64, 1)], training_opts: {'alignments': {'a': 'C:\\Users\\Jetpackjules\\Downloads\\Elon_Musk\\Elon_Musk_Trim_alignments.fsa', 'b': 'C:\\Users\\Jetpackjules\\Documents\\training\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
07/29/2020 11:03:20 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 3999, batchsize: 14, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
07/29/2020 11:03:20 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
07/29/2020 11:03:20 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 3999, side: 'b', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
07/29/2020 11:03:20 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 3999, side: 'b', do_shuffle: True)
07/29/2020 11:03:20 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Set preview feed. Batchsize: 14
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Enabling TensorBoard Logging
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Setting up TensorBoard Logging. Side: a
07/29/2020 11:03:20 MainProcess     _training_0     _base           name                      DEBUG    model name: 'original'
07/29/2020 11:03:20 MainProcess     _training_0     _base           _tensorboard_kwargs       DEBUG    Tensorflow version: [1, 15, 0]
07/29/2020 11:03:20 MainProcess     _training_0     _base           _tensorboard_kwargs       DEBUG    {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0}
07/29/2020 11:03:20 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Setting up TensorBoard Logging. Side: b
07/29/2020 11:03:20 MainProcess     _training_0     train           _load_trainer             DEBUG    Loaded Trainer
07/29/2020 11:03:20 MainProcess     _training_0     train           _run_training_cycle       DEBUG    Running Training Cycle
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 
216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 
172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  
[255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 
216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 
172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  
[255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n ...\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n ...\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 
128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n ...\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
07/29/2020 11:03:20 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/29/2020 11:03:20 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: [identical constants dump to the one above; truncated for readability]
07/29/2020 11:03:20 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: [identical constants dump to the one above; truncated for readability]
07/29/2020 11:03:21 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.31764707, 0.23529412, 0.26666668],\n         [0.32156864, 0.23921569, 0.26666668],\n         [0.32941177, 0.2509804 , 0.2901961 ],\n         ...,\n         [0.52156866, 0.5058824 , 0.7137255 ...
07/29/2020 11:03:21 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
07/29/2020 11:03:21 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
07/29/2020 11:03:21 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
07/29/2020 11:03:21 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
07/29/2020 11:03:21 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
07/29/2020 11:03:21 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
07/29/2020 11:03:21 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\Jetpackjules\faceswap\lib\cli\launcher.py", line 155, in execute_script
    process.process()
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 161, in process
    self._end_thread(thread, err)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 201, in _end_thread
    thread.join()
  File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 226, in _training
    raise err
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 216, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 305, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 316, in train_one_step
    raise err
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 283, in train_one_step
    loss[side] = batcher.train_one_batch()
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 424, in train_one_batch
    loss = self._model.predictors[self._side].train_on_batch(model_inputs, model_targets)
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 1211, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training_utils.py", line 102, in standardize_input_data
    str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.31764707, 0.23529412, 0.26666668],
         [0.32156864, 0.23921569, 0.26666668],
         [0.32941177, 0.2509804 , 0.2901961 ],
         ...,
         [0.52156866, 0.5058824 , 0.7137255 ...

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         3fd26b5 Manual Tool (#1038)
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - Ellesmere (experimental), GPU_1: Advanced Micro Devices, Inc. - Ellesmere (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['3110.7', '3110.7']
gpu_vram:            GPU_0: 8192MB, GPU_1: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\Jetpackjules\faceswap\faceswap.py train -A C:/Users/Jetpackjules/Documents/Elon_musk_unrefined_output -ala C:/Users/Jetpackjules/Downloads/Elon_Musk/Elon_Musk_Trim_alignments.fsa -B C:/Users/Jetpackjules/Documents/training -alb C:/Users/Jetpackjules/Documents/training/alignments.fsa -m C:/Users/Jetpackjules/Documents/model -t original -bs 64 -it 1000000 -s 100 -ss 25000 -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.3
py_implementation:   CPython
py_version:          3.7.7
py_virtual_env:      True
sys_cores:           12
sys_processor:       AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram:             Total: 16334MB, Available: 10684MB, Used: 5650MB, Free: 10684MB

=============== Pip Packages ===============
absl-py==0.9.0
astor==0.8.0
blinker==1.4
brotlipy==0.7.0
cachetools==4.1.0
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cryptography==2.9.2
cycler==0.10.0
decorator==4.4.2
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.2.2
google-auth @ file:///tmp/build/80754af9/google-auth_1594357566944/work
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.27.2
h5py==2.10.0
idna @ file:///tmp/build/80754af9/idna_1593446292537/work
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1594236160679/work
Keras==2.2.4
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib==3.3.0
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
networkx==2.4
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.3.0.36
opt-einsum==3.1.0
Pillow @ file:///C:/ci/pillow_1594298234712/work
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.12.3
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
PyJWT==1.7.1
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1594392929924/work
pyparsing==2.4.7
pyreadline==2.1
PySocks @ file:///C:/ci/pysocks_1594394709107/work
python-dateutil==2.8.1
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3.1
requests @ file:///tmp/build/80754af9/requests_1592841827918/work
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.17.2
scikit-learn @ file:///C:/ci/scikit-learn_1592847564598/work
scipy==1.5.2
six==1.15.0
tensorboard==2.2.1
tensorboard-plugin-wit==1.6.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tifffile==2020.7.24
toposort==1.5
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1593446365756/work
urllib3==1.25.9
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1:
#
# Name                    Version                   Build  Channel
_tflow_select             2.2.0                     eigen  
absl-py 0.9.0 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
blinker 1.4 py37_0
brotlipy 0.7.0 py37he774522_1000
ca-certificates 2020.6.24 0
cachetools 4.1.0 py_1
certifi 2020.6.20 py37_0
cffi 1.14.0 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
click 7.1.2 py_0
cryptography 2.9.2 py37h7a1dbc1_0
cycler 0.10.0 pypi_0 pypi
decorator 4.4.2 pypi_0 pypi
enum34 1.1.10 pypi_0 pypi
fastcluster 1.1.26 py37h9b59f54_1 conda-forge
ffmpeg 4.3 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.2 hd328e21_0
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
python 3.7.7 h81c818b_4
python-dateutil 2.8.1 py_0
python_abi 3.7 1_cp37m conda-forge
pywavelets 1.1.1 pypi_0 pypi
pywin32 227 py37he774522_1
pyyaml 5.3.1 py37he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.24.0 py_0
requests-oauthlib 1.3.0 py_0
rsa 4.0 py_0
scikit-image 0.17.2 pypi_0 pypi
scikit-learn 0.23.1 py37h25d0782_0
scipy 1.5.2 pypi_0 pypi
setuptools 49.2.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.15.0 py_0
sqlite 3.32.3 h2a8f88b_0
tensorboard 2.2.1 pyh532a8cf_0
tensorboard-plugin-wit 1.6.0 py_0
tensorflow 1.15.0 eigen_py37h9f89a44_0
tensorflow-base 1.15.0 eigen_py37h07d2309_0
tensorflow-estimator 1.15.1 pyh2649769_0
termcolor 1.1.0 py37_1
threadpoolctl 2.1.0 pyh5ca1d4c_0
tifffile 2020.7.24 pypi_0 pypi
tk 8.6.10 he774522_0
toposort 1.5 py_3 conda-forge
tornado 6.0.4 py37he774522_1
tqdm 4.47.0 py_0
urllib3 1.25.9 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_3
werkzeug 0.16.1 py_0
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.12.1 py37he774522_1
xz 5.2.5 h62dcd97_0
yaml 0.2.5 he774522_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.5 ha9fde0e_0

================= Configs ==================
--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
coverage: 68.75
mask_type: none
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: True
icnr_init: False
conv_aware_init: False
reflect_padding: False
penalized_mask_loss: True
loss_function: mae
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

Any help would be appreciated!
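
For anyone reading along: the ValueError means the model was built with two outputs (a face and a mask) but was handed a target for only one of them. A minimal standalone sketch, which only mimics Keras's target check (this is not faceswap's actual code; the function name and shapes here are illustrative), reproduces the shape of the failure:

```python
import numpy as np

def check_targets(targets, output_shapes):
    """Simplified stand-in for Keras's standardize_input_data:
    a model with N outputs must receive N target arrays."""
    if len(targets) != len(output_shapes):
        raise ValueError(
            "Error when checking model target: expected to see "
            f"{len(output_shapes)} array(s), but got {len(targets)} array(s)")
    return targets

# With mask learning enabled, the 'original' model has two outputs
# per side: the face (64x64x3) and the mask (64x64x1).
output_shapes = [(64, 64, 3), (64, 64, 1)]

batch_size = 64
face_targets = np.zeros((batch_size, 64, 64, 3), dtype="float32")
mask_targets = np.zeros((batch_size, 64, 64, 1), dtype="float32")

check_targets([face_targets, mask_targets], output_shapes)  # passes

try:
    check_targets([face_targets], output_shapes)  # reproduces the crash
except ValueError as err:
    print(err)
```

So the question is why the data feed is producing one target array while the model expects two.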

User avatar
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: Need Help! Training Error.

Post by torzdf »

This is odd. If this were a general issue, we would expect this error to have been reported many times, but it hasn't been.

Can you please try a couple of things and let me know whether either of them fixes your issue?

1) Select a mask type in training settings.

If that doesn't fix the issue:

2) Turn off Penalized Loss in training settings

My word is final

User avatar
Jetpackjules
Posts: 4
Joined: Wed Jul 29, 2020 6:45 pm

Re: Need Help! Training Error.

Post by Jetpackjules »

I selected "original" and still got the same error. I'm not sure how to turn off penalized loss.

User avatar
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: Need Help! Training Error.

Post by torzdf »

Edit > Settings > Training Settings > Global
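
You can also set the same option by editing the training config file directly. On a default install this should be faceswap/config/train.ini (path may vary depending on how you installed):

```ini
[global]
# Equivalent to unticking "Penalized Mask Loss" in the GUI training settings
penalized_mask_loss: False
```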

My word is final

User avatar
Jetpackjules
Posts: 4
Joined: Wed Jul 29, 2020 6:45 pm

Re: Need Help! Training Error.

Post by Jetpackjules »

I disabled penalized loss but I am still getting this error:

Code: Select all

Loading...
Setting Faceswap backend to AMD
07/31/2020 11:17:52 INFO     Log level set to: INFO
07/31/2020 11:17:52 INFO     Setting up for PlaidML
07/31/2020 11:17:52 INFO     Setting GPU to largest available supported device. If you want to override this selection, run `plaidml-setup` from the command line.
07/31/2020 11:17:52 INFO     Using GPU: ['opencl_amd_ellesmere.0', 'opencl_amd_ellesmere.0']
07/31/2020 11:17:52 INFO     Successfully set up for PlaidML
Using plaidml.keras.backend backend.
07/31/2020 11:17:55 INFO     Model A Directory: C:\Users\Jetpackjules\Documents\Elon_musk_unrefined_output
07/31/2020 11:17:55 INFO     Model B Directory: C:\Users\Jetpackjules\Documents\training
07/31/2020 11:17:55 INFO     Training data directory: C:\Users\Jetpackjules\Documents\model
07/31/2020 11:17:55 INFO     ===================================================
07/31/2020 11:17:55 INFO       Starting
07/31/2020 11:17:55 INFO       Press 'Stop' to save and quit
07/31/2020 11:17:55 INFO     ===================================================
07/31/2020 11:17:55 INFO     Exit requested! The trainer will complete its current cycle, save the models and quit (This can take a couple of minutes depending on your training speed).
07/31/2020 11:17:56 INFO     Loading data, this may take a while...
07/31/2020 11:17:56 INFO     Loading Model from Original plugin...
07/31/2020 11:17:56 INFO     No existing state file found. Generating.
07/31/2020 11:17:56 INFO     Opening device "opencl_amd_ellesmere.0"
07/31/2020 11:17:57 INFO     Creating new 'original' model in folder: 'C:\Users\Jetpackjules\Documents\model'
07/31/2020 11:17:58 INFO     Loading Trainer from Original plugin...
07/31/2020 11:17:58 INFO     Enabled TensorBoard Logging
07/31/2020 11:17:58 ERROR    Caught exception in thread: '_training_0'
07/31/2020 11:18:01 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "C:\Users\Jetpackjules\faceswap\lib\cli\launcher.py", line 155, in execute_script
    process.process()
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 161, in process
    self._end_thread(thread, err)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 201, in _end_thread
    thread.join()
  File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 226, in _training
    raise err
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 216, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 305, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 316, in train_one_step
    raise err
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 283, in train_one_step
    loss[side] = batcher.train_one_batch()
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 424, in train_one_batch
    loss = self._model.predictors[self._side].train_on_batch(model_inputs, model_targets)
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 1211, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training_utils.py", line 102, in standardize_input_data
    str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.42352942, 0.42745098, 0.5647059 ],
         [0.43529412, 0.4509804 , 0.6       ],
         [0.44313726, 0.45882353, 0.60784316],
         ...,
         [0.27058825, 0.21960784, 0.73333335...
07/31/2020 11:18:01 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\Jetpackjules\faceswap\crash_report.2020.07.31.111758691313.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.

This is the file:

Code: Select all

07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 64, is_display: False, input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
07/31/2020 11:17:58 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 2856, side: 'a', do_shuffle: True)
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
07/31/2020 11:17:58 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 2856, side: 'a', do_shuffle: True)
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
07/31/2020 11:17:58 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'a')
07/31/2020 11:17:58 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: a
07/31/2020 11:17:58 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)]
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3), (64, 64, 1)], training_opts: {'alignments': {'a': 'C:\\Users\\Jetpackjules\\Downloads\\Elon_Musk\\Elon_Musk_Trim_alignments.fsa', 'b': 'C:\\Users\\Jetpackjules\\Documents\\training\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
07/31/2020 11:17:58 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 2856, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3), (64, 64, 1)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
07/31/2020 11:17:58 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing
07/31/2020 11:17:58 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n ...\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n ...\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 
128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n ...\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
07/31/2020 11:17:58 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/31/2020 11:17:58 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 256
07/31/2020 11:17:58 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n ...\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n ...\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 
128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n ...\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
07/31/2020 11:17:58 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n ...\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n ...\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 
128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n ...\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  ...\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
07/31/2020 11:17:58 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.42352942, 0.42745098, 0.5647059 ],\n         [0.43529412, 0.4509804 , 0.6       ],\n         [0.44313726, 0.45882353, 0.60784316],\n         ...,\n         [0.27058825, 0.21960784, 0.73333335...
07/31/2020 11:17:58 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\Jetpackjules\faceswap\lib\cli\launcher.py", line 155, in execute_script
    process.process()
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 161, in process
    self._end_thread(thread, err)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 201, in _end_thread
    thread.join()
  File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\Jetpackjules\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 226, in _training
    raise err
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 216, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\Jetpackjules\faceswap\scripts\train.py", line 305, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 316, in train_one_step
    raise err
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 283, in train_one_step
    loss[side] = batcher.train_one_batch()
  File "C:\Users\Jetpackjules\faceswap\plugins\train\trainer\_base.py", line 424, in train_one_batch
    loss = self._model.predictors[self._side].train_on_batch(model_inputs, model_targets)
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 1211, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1\lib\site-packages\keras\engine\training_utils.py", line 102, in standardize_input_data
    str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.42352942, 0.42745098, 0.5647059 ],
         [0.43529412, 0.4509804 , 0.6       ],
         [0.44313726, 0.45882353, 0.60784316],
         ...,
         [0.27058825, 0.21960784, 0.73333335...
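The `ValueError` above is Keras complaining that the number of target arrays fed to `train_on_batch` does not match the number of model outputs: the model was built with two outputs (so it expects two target arrays), but the data feed supplied only one. The following minimal sketch reproduces the shape of that check without Keras; `check_targets` and the output names are illustrative stand-ins, not actual Faceswap or Keras code.

```python
import numpy as np

def check_targets(targets, output_names):
    """Illustrative version of the check Keras performs in
    standardize_input_data: one target array per model output."""
    if len(targets) != len(output_names):
        raise ValueError(
            "Error when checking model target: expected "
            f"{len(output_names)} array(s), got {len(targets)}")
    return targets

# With a mask output enabled, the model has two outputs (face + mask)
# and therefore expects two target arrays per batch...
face = np.zeros((14, 64, 64, 3))   # batch of face images
mask = np.zeros((14, 64, 64, 1))   # batch of single-channel masks
check_targets([face, mask], ["face_out", "mask_out"])  # passes

# ...but if the generator supplies only the face array, the same
# check raises the ValueError seen in the traceback above.
try:
    check_targets([face], ["face_out", "mask_out"])
except ValueError as err:
    print(err)
```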

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         3fd26b5 Manual Tool (#1038)
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - Ellesmere (experimental), GPU_1: Advanced Micro Devices, Inc. - Ellesmere (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['3110.7', '3110.7']
gpu_vram:            GPU_0: 8192MB, GPU_1: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\Jetpackjules\faceswap\faceswap.py train -A C:/Users/Jetpackjules/Documents/Elon_musk_unrefined_output -ala C:/Users/Jetpackjules/Downloads/Elon_Musk/Elon_Musk_Trim_alignments.fsa -B C:/Users/Jetpackjules/Documents/training -alb C:/Users/Jetpackjules/Documents/training/alignments.fsa -m C:/Users/Jetpackjules/Documents/model -t original -bs 64 -it 1000000 -s 100 -ss 25000 -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.3
py_implementation:   CPython
py_version:          3.7.7
py_virtual_env:      True
sys_cores:           12
sys_processor:       AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram:             Total: 16334MB, Available: 10283MB, Used: 6051MB, Free: 10283MB

=============== Pip Packages ===============
absl-py==0.9.0
astor==0.8.0
blinker==1.4
brotlipy==0.7.0
cachetools==4.1.0
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cryptography==2.9.2
cycler==0.10.0
decorator==4.4.2
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.2.2
google-auth @ file:///tmp/build/80754af9/google-auth_1594357566944/work
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.27.2
h5py==2.10.0
idna @ file:///tmp/build/80754af9/idna_1593446292537/work
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1594236160679/work
Keras==2.2.4
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib==3.3.0
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
networkx==2.4
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.3.0.36
opt-einsum==3.1.0
Pillow @ file:///C:/ci/pillow_1594298234712/work
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.12.3
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
PyJWT==1.7.1
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1594392929924/work
pyparsing==2.4.7
pyreadline==2.1
PySocks @ file:///C:/ci/pysocks_1594394709107/work
python-dateutil==2.8.1
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3.1
requests @ file:///tmp/build/80754af9/requests_1592841827918/work
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.17.2
scikit-learn @ file:///C:/ci/scikit-learn_1592847564598/work
scipy==1.5.2
six==1.15.0
tensorboard==2.2.1
tensorboard-plugin-wit==1.6.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tifffile==2020.7.24
toposort==1.5
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1593446365756/work
urllib3==1.25.9
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\Jetpackjules\MiniConda3\envs\FaceSwapV1:
#
# Name                    Version                   Build  Channel
_tflow_select             2.2.0                     eigen  
absl-py 0.9.0 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
blinker 1.4 py37_0
brotlipy 0.7.0 py37he774522_1000
ca-certificates 2020.6.24 0
cachetools 4.1.0 py_1
certifi 2020.6.20 py37_0
cffi 1.14.0 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
click 7.1.2 py_0
cryptography 2.9.2 py37h7a1dbc1_0
cycler 0.10.0 pypi_0 pypi
decorator 4.4.2 pypi_0 pypi
enum34 1.1.10 pypi_0 pypi
fastcluster 1.1.26 py37h9b59f54_1 conda-forge
ffmpeg 4.3 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.2 hd328e21_0
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
google-auth 1.17.2 py_0
google-auth-oauthlib 0.4.1 py_2
google-pasta 0.2.0 py_0
grpcio 1.27.2 py37h351948d_0
h5py 2.10.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 py_0
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.2 py_0 conda-forge
intel-openmp 2020.1 216
joblib 0.16.0 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_1
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.2.0 py37h74a9793_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.12.3 h7bd577a_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.2 h62dcd97_1
markdown 3.1.1 py37_0
matplotlib 3.3.0 pypi_0 pypi
matplotlib-base 3.2.2 py37h64f37c6_0
mkl 2020.1 216
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.1.0 py37h45dec08_0
mkl_random 1.1.1 py37h47e9c7a_0
networkx 2.4 pypi_0 pypi
numpy 1.18.5 py37h6530119_0
numpy-base 1.18.5 py37hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py37_0
opencv-python 4.3.0.36 pypi_0 pypi
openssl 1.1.1g he774522_0
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py37_2
pillow 7.2.0 py37hcc1f983_0
pip 20.1.1 py37_1
plaidml 0.6.4 pypi_0 pypi
plaidml-keras 0.6.4 pypi_0 pypi
protobuf 3.12.3 py37h33f27b4_0
psutil 5.7.0 py37he774522_0
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.7 py_0
pycparser 2.20 py_2
pyjwt 1.7.1 py37_0
pyopenssl 19.1.0 py_1
pyparsing 2.4.7 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pysocks 1.7.1 py37_1
python 3.7.7 h81c818b_4
python-dateutil 2.8.1 py_0
python_abi 3.7 1_cp37m conda-forge
pywavelets 1.1.1 pypi_0 pypi
pywin32 227 py37he774522_1
pyyaml 5.3.1 py37he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.24.0 py_0
requests-oauthlib 1.3.0 py_0
rsa 4.0 py_0
scikit-image 0.17.2 pypi_0 pypi
scikit-learn 0.23.1 py37h25d0782_0
scipy 1.5.2 pypi_0 pypi
setuptools 49.2.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.15.0 py_0
sqlite 3.32.3 h2a8f88b_0
tensorboard 2.2.1 pyh532a8cf_0
tensorboard-plugin-wit 1.6.0 py_0
tensorflow 1.15.0 eigen_py37h9f89a44_0
tensorflow-base 1.15.0 eigen_py37h07d2309_0
tensorflow-estimator 1.15.1 pyh2649769_0
termcolor 1.1.0 py37_1
threadpoolctl 2.1.0 pyh5ca1d4c_0
tifffile 2020.7.24 pypi_0 pypi
tk 8.6.10 he774522_0
toposort 1.5 py_3 conda-forge
tornado 6.0.4 py37he774522_1
tqdm 4.47.0 py_0
urllib3 1.25.9 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_3
werkzeug 0.16.1 py_0
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.12.1 py37he774522_1
xz 5.2.5 h62dcd97_0
yaml 0.2.5 he774522_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.5 ha9fde0e_0

================= Configs ==================

--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
coverage: 68.75
mask_type: none
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: True
icnr_init: False
conv_aware_init: False
reflect_padding: False
penalized_mask_loss: False
loss_function: mae
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

Re: Need Help! Training Error.

Post by Jetpackjules »

Wait, I fixed it. I just turned off "learn mask". Thank you for your help!
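That fix is consistent with the config dump in the log above: `train.ini` has `learn_mask: True` while `mask_type: none`. A plausible reading (an assumption, not confirmed by the developers in this thread) is that with `learn_mask` enabled the model builds a second output for the mask and so expects two target arrays per batch, but with no mask type selected the data generator only supplies the face images, triggering the "Expected to see 2 array(s)" error. The relevant fragment of `train.ini` would then look like this, with the mismatched line corrected:

```ini
[global]
mask_type: none
; learn_mask must be off when no mask type is selected,
; otherwise the model expects a mask target it never receives
learn_mask: False
```

Unchecking "learn mask" in the GUI, as done above, writes the same setting.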
