it crashes as soon as i try to train it

Training your model
satoshi tamura
Posts: 4
Joined: Fri May 22, 2020 12:20 pm
Has thanked: 1 time


Post by satoshi tamura » Fri May 22, 2020 12:29 pm

It crashes as soon as I try to train it and produces the log below.
Please help me!

Code:

05/22/2020 21:27:26 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 32, side: 'a', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
05/22/2020 21:27:26 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 32, side: 'a', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
05/22/2020 21:27:26 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'a')
05/22/2020 21:27:26 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: a
05/22/2020 21:27:26 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3)]
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\phant\\Documents\\faceswap\\humanA\\movie_alignments.fsa', 'b': 'C:\\Users\\phant\\Documents\\faceswap\\humanB\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
05/22/2020 21:27:26 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 32, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
05/22/2020 21:27:26 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 32, side: 'a', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
05/22/2020 21:27:26 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 32, side: 'a', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
05/22/2020 21:27:26 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Set preview feed. Batchsize: 14
05/22/2020 21:27:26 MainProcess     _training_0     _base           _use_mask                 DEBUG    False
05/22/2020 21:27:26 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing Batcher: side: 'b', num_images: 45, use_mask: False, batch_size: 8, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: b
05/22/2020 21:27:26 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3)]
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\phant\\Documents\\faceswap\\humanA\\movie_alignments.fsa', 'b': 'C:\\Users\\phant\\Documents\\faceswap\\humanB\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
05/22/2020 21:27:26 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 45, batchsize: 8, side: 'b', do_shuffle: True, is_preview, False, is_timelapse: False)
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 8, is_display: False, input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
05/22/2020 21:27:26 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 45, side: 'b', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
05/22/2020 21:27:26 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 45, side: 'b', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
05/22/2020 21:27:26 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'b')
05/22/2020 21:27:26 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator: b
05/22/2020 21:27:26 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 64, output_shapes: [(64, 64, 3)]
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\phant\\Documents\\faceswap\\humanA\\movie_alignments.fsa', 'b': 'C:\\Users\\phant\\Documents\\faceswap\\humanB\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}, landmarks: {}, masks: {}, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
05/22/2020 21:27:26 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 45, batchsize: 14, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [64]
05/22/2020 21:27:26 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
05/22/2020 21:27:26 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 45, side: 'b', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
05/22/2020 21:27:26 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 45, side: 'b', do_shuffle: True)
05/22/2020 21:27:26 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
05/22/2020 21:27:26 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Set preview feed. Batchsize: 14
05/22/2020 21:27:26 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Enabling TensorBoard Logging
05/22/2020 21:27:26 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Setting up TensorBoard Logging. Side: a
05/22/2020 21:27:26 MainProcess     _training_0     _base           name                      DEBUG    model name: 'lightweight'
05/22/2020 21:27:26 MainProcess     _training_0     _base           _tensorboard_kwargs       DEBUG    Tensorflow version: [1, 15, 0]
05/22/2020 21:27:26 MainProcess     _training_0     _base           _tensorboard_kwargs       DEBUG    {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0}
05/22/2020 21:27:26 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 256
05/22/2020 21:27:26 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 
172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   
2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
05/22/2020 21:27:26 MainProcess     _run_0          multithreading  run                       DEBUG    Error in thread (_run_0): tuple index out of range
05/22/2020 21:27:26 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 256
05/22/2020 21:27:26 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(40, 216, None), 'warp_mapx': '[[[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 
216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]\n\n [[ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]\n  [ 40.  84. 128. 172. 216.]]]', 'warp_mapy': '[[[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 
172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]\n\n [[ 40.  40.  40.  40.  40.]\n  [ 84.  84.  84.  84.  84.]\n  [128. 128. 128. 128. 128.]\n  [172. 172. 172. 172. 172.]\n  [216. 216. 216. 216. 216.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  [255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]\n\n [[  0   0]\n  [  0 255]\n  
[255 255]\n  [255   0]\n  [127   0]\n  [127 255]\n  [255 127]\n  [  0 127]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [253. 253. 253. ... 253. 253. 253.]\n  [254. 254. 254. ... 254. 254. 254.]\n  [255. 255. 255. ... 255. 255. 255.]]\n\n [[  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  ...\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]\n  [  0.   1.   2. ... 253. 254. 255.]]]'}
05/22/2020 21:27:26 MainProcess     _run_1          multithreading  run                       DEBUG    Error in thread (_run_1): tuple index out of range
05/22/2020 21:27:27 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Setting up TensorBoard Logging. Side: b
05/22/2020 21:27:27 MainProcess     _training_0     _base           name                      DEBUG    model name: 'lightweight'
05/22/2020 21:27:27 MainProcess     _training_0     _base           _tensorboard_kwargs       DEBUG    Tensorflow version: [1, 15, 0]
05/22/2020 21:27:27 MainProcess     _training_0     _base           _tensorboard_kwargs       DEBUG    {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0}
05/22/2020 21:27:28 MainProcess     _training_0     _base           _set_tensorboard          INFO     Enabled TensorBoard Logging
05/22/2020 21:27:28 MainProcess     _training_0     _base           _use_mask                 DEBUG    False
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing Samples: model: '<plugins.train.model.lightweight.Model object at 0x0000018D93D2E808>', use_mask: False, coverage_ratio: 0.6875)
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized Samples
05/22/2020 21:27:28 MainProcess     _training_0     _base           _use_mask                 DEBUG    False
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing Timelapse: model: <plugins.train.model.lightweight.Model object at 0x0000018D93D2E808>, use_mask: False, coverage_ratio: 0.6875, image_count: 14, batchers: '{'a': <plugins.train.trainer._base.Batcher object at 0x0000018E781E0508>, 'b': <plugins.train.trainer._base.Batcher object at 0x0000018E781E0348>}')
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing Samples: model: '<plugins.train.model.lightweight.Model object at 0x0000018D93D2E808>', use_mask: False, coverage_ratio: 0.6875)
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized Samples
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized Timelapse
05/22/2020 21:27:28 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized Trainer
05/22/2020 21:27:28 MainProcess     _training_0     train           _load_trainer             DEBUG    Loaded Trainer
05/22/2020 21:27:28 MainProcess     _training_0     train           _run_training_cycle       DEBUG    Running Training Cycle
05/22/2020 21:27:28 MainProcess     _training_0     module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n
05/22/2020 21:27:28 MainProcess     _training_0     module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n
05/22/2020 21:27:35 MainProcess     _training_0     _base           generate_preview          DEBUG    Generating preview
05/22/2020 21:27:35 MainProcess     _training_0     _base           largest_face_index        DEBUG    0
05/22/2020 21:27:35 MainProcess     _training_0     _base           compile_sample            DEBUG    Compiling samples: (side: 'a', samples: 14)
05/22/2020 21:27:35 MainProcess     _training_0     multithreading  check_and_raise_error     DEBUG    Thread error caught: [(<class 'IndexError'>, IndexError('tuple index out of range'), <traceback object at 0x0000018E7A97C108>)]
05/22/2020 21:27:35 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): tuple index out of range
05/22/2020 21:27:35 MainProcess     _run_1          multithreading  run                       DEBUG    Error in thread (_run_1): tuple index out of range
05/22/2020 21:27:36 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
05/22/2020 21:27:36 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
05/22/2020 21:27:36 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
05/22/2020 21:27:36 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
05/22/2020 21:27:36 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
05/22/2020 21:27:36 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
05/22/2020 21:27:36 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\phant\faceswap\lib\cli\launcher.py", line 155, in execute_script
    process.process()
  File "C:\Users\phant\faceswap\scripts\train.py", line 161, in process
    self._end_thread(thread, err)
  File "C:\Users\phant\faceswap\scripts\train.py", line 201, in _end_thread
    thread.join()
  File "C:\Users\phant\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\phant\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\phant\faceswap\scripts\train.py", line 226, in _training
    raise err
  File "C:\Users\phant\faceswap\scripts\train.py", line 216, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\phant\faceswap\scripts\train.py", line 305, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\phant\faceswap\plugins\train\trainer\_base.py", line 316, in train_one_step
    raise err
  File "C:\Users\phant\faceswap\plugins\train\trainer\_base.py", line 283, in train_one_step
    loss[side] = batcher.train_one_batch()
  File "C:\Users\phant\faceswap\plugins\train\trainer\_base.py", line 422, in train_one_batch
    model_inputs, model_targets = self._get_next()
  File "C:\Users\phant\faceswap\plugins\train\trainer\_base.py", line 452, in _get_next
    batch = next(self._feed)
  File "C:\Users\phant\faceswap\lib\multithreading.py", line 156, in iterator
    self.check_and_raise_error()
  File "C:\Users\phant\faceswap\lib\multithreading.py", line 84, in check_and_raise_error
    raise error[1].with_traceback(error[2])
  File "C:\Users\phant\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\phant\faceswap\lib\multithreading.py", line 145, in _run
    for item in self.generator(*self._gen_args, **self._gen_kwargs):
  File "C:\Users\phant\faceswap\lib\training_data.py", line 189, in _minibatch
    yield self._process_batch(img_paths, side)
  File "C:\Users\phant\faceswap\lib\training_data.py", line 203, in _process_batch
    self._processing.initialize(batch.shape[1])
IndexError: tuple index out of range
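For context, this `IndexError` usually means the minibatch failed to stack into a 4-D array: if the images in a batch differ in size (or some fail to load), NumPy falls back to a 1-D object array, so `batch.shape[1]` does not exist. A minimal NumPy demonstration of the failure mode (not Faceswap code):

```python
import numpy as np

# Same-sized images stack into a 4-D batch: (batch, height, width, channels)
good = np.array([np.zeros((256, 256, 3)) for _ in range(4)])
print(good.shape)   # (4, 256, 256, 3) -> good.shape[1] is 256

# Mixed sizes collapse into a 1-D object array, so shape[1] does not exist
bad = np.array([np.zeros((256, 256, 3)), np.zeros((128, 128, 3))],
               dtype=object)
print(bad.shape)    # (2,)
try:
    bad.shape[1]
except IndexError as err:
    print(err)      # tuple index out of range
```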

============ System Information ============
encoding:            cp932
git_branch:          master
git_commits:         ac40b0f Remove subpixel upscaling option (#1024)
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: GeForce GTX 1650
gpu_devices_active:  GPU_0
gpu_driver:          432.00
gpu_vram:            GPU_0: 4096MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.17763-SP0
os_release:          10
py_command:          C:\Users\phant\faceswap\faceswap.py train -A C:/Users/phant/Documents/faceswap/humanA_face -ala C:/Users/phant/Documents/faceswap/humanA/movie_alignments.fsa -B C:/Users/phant/Documents/faceswap/humanB -alb C:/Users/phant/Documents/faceswap/humanB/alignments.fsa -m C:/Users/phant/Documents/faceswap/model -t lightweight -bs 8 -it 10000000 -g 1 -s 100 -ss 25000 -ps 50 -L VERBOSE -gui
py_conda_version:    conda 4.8.3
py_implementation:   CPython
py_version:          3.7.7
py_virtual_env:      True
sys_cores:           8
sys_processor:       Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
sys_ram:             Total: 8114MB, Available: 1444MB, Used: 6670MB, Free: 1444MB

=============== Pip Packages ===============
absl-py==0.9.0
astor==0.8.0
blinker==1.4
cachetools==3.1.1
certifi==2020.4.5.1
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cloudpickle==1.4.1
cryptography==2.9.2
cycler==0.10.0
cytoolz==0.10.1
dask==2.16.0
decorator==4.4.2
fastcluster==1.1.26
ffmpy==0.2.2
gast==0.2.2
google-auth==1.14.1
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.27.2
h5py==2.9.0
idna==2.9
imageio==2.6.1
imageio-ffmpeg==0.4.2
joblib==0.15.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib==3.1.3
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.17.4
nvidia-ml-py3==7.352.1
oauthlib==3.1.0
olefile==0.46
opencv-python==4.1.2.30
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==6.2.1
protobuf==3.11.4
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser==2.20
PyJWT==1.7.1
pyOpenSSL==19.1.0
pyparsing==2.4.7
pyreadline==2.1
PySocks==1.7.1
python-dateutil==2.8.1
pytz==2020.1
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3.1
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn==0.22.1
scipy==1.4.1
six==1.14.0
tensorboard==2.1.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.4
tqdm==4.46.0
urllib3==1.25.8
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\phant\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
_tflow_select             2.1.0                       gpu  
absl-py                   0.9.0                    py37_0  
astor                     0.8.0                    py37_0  
blas                      1.0                         mkl  
blinker                   1.4                      py37_0  
ca-certificates           2020.1.1                      0  
cachetools                3.1.1                      py_0  
certifi                   2020.4.5.1               py37_0  
cffi                      1.14.0           py37h7a1dbc1_0  
chardet                   3.0.4                 py37_1003  
click                     7.1.2                      py_0  
cloudpickle               1.4.1                      py_0  
cryptography              2.9.2            py37h7a1dbc1_0  
cudatoolkit               10.0.130                      0  
cudnn                     7.6.5                cuda10.0_0  
cycler                    0.10.0                   py37_0  
cytoolz                   0.10.1           py37he774522_0  
dask-core                 2.16.0                     py_0  
decorator                 4.4.2                      py_0  
fastcluster               1.1.26           py37he350917_0    conda-forge
ffmpeg                    4.2                  h6538335_0    conda-forge
ffmpy                     0.2.2                    pypi_0    pypi
freetype                  2.9.1                ha9979f8_1  
gast                      0.2.2                    py37_0  
git                       2.23.0               h6bb4b03_0  
google-auth               1.14.1                     py_0  
google-auth-oauthlib      0.4.1                      py_2  
google-pasta              0.2.0                      py_0  
grpcio                    1.27.2           py37h351948d_0  
h5py                      2.9.0            py37h5e291fa_0  
hdf5                      1.10.4               h7ebc959_0  
icc_rt                    2019.0.0             h0cc432a_1  
icu                       58.2                 ha925a31_3  
idna                      2.9                        py_1  
imageio                   2.6.1                    py37_0  
imageio-ffmpeg            0.4.2                      py_0    conda-forge
intel-openmp              2020.1                      216  
joblib                    0.15.1                     py_0  
jpeg                      9b                   hb83a4c4_2  
keras                     2.2.4                         0  
keras-applications        1.0.8                      py_0  
keras-base                2.2.4                    py37_0  
keras-preprocessing       1.1.0                      py_1  
kiwisolver                1.2.0            py37h74a9793_0  
libpng                    1.6.37               h2a8f88b_0  
libprotobuf               3.11.4               h7bd577a_0  
libtiff                   4.1.0                h56a325e_0  
markdown                  3.1.1                    py37_0  
matplotlib                3.1.1            py37hc8f65d3_0  
matplotlib-base           3.1.3            py37h64f37c6_0  
mkl                       2020.1                      216  
mkl-service               2.3.0            py37hb782905_0  
mkl_fft                   1.0.15           py37h14836fe_0  
mkl_random                1.1.0            py37h675688f_0  
networkx                  2.4                        py_0  
numpy                     1.17.4           py37h4320e6b_0  
numpy-base                1.17.4           py37hc3f5095_0  
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.1.0                      py_0  
olefile                   0.46                     py37_0  
opencv-python             4.1.2.30                 pypi_0    pypi
openssl                   1.1.1g               he774522_0  
opt_einsum                3.1.0                      py_0  
pathlib                   1.0.1                    py37_1  
pillow                    6.2.1            py37hdc69c19_0  
pip                       20.0.2                   py37_3  
protobuf                  3.11.4           py37h33f27b4_0  
psutil                    5.7.0            py37he774522_0  
pyasn1                    0.4.8                      py_0  
pyasn1-modules            0.2.7                      py_0  
pycparser                 2.20                       py_0  
pyjwt                     1.7.1                    py37_0  
pyopenssl                 19.1.0                   py37_0  
pyparsing                 2.4.7                      py_0  
pyqt                      5.9.2            py37h6538335_2  
pyreadline                2.1                      py37_1  
pysocks                   1.7.1                    py37_0  
python                    3.7.7                h81c818b_4  
python-dateutil           2.8.1                      py_0  
python_abi                3.7                     1_cp37m    conda-forge
pytz                      2020.1                     py_0  
pywavelets                1.1.1            py37he774522_0  
pywin32                   227              py37he774522_1  
pyyaml                    5.3.1            py37he774522_0  
qt                        5.9.7            vc14h73c81de_0  
requests                  2.23.0                   py37_0  
requests-oauthlib         1.3.0                      py_0  
rsa                       4.0                        py_0  
scikit-image              0.16.2           py37h47e9c7a_0  
scikit-learn              0.22.1           py37h6288b17_0  
scipy                     1.4.1            py37h9439919_0  
setuptools                46.4.0                   py37_0  
sip                       4.19.8           py37h6538335_0  
six                       1.14.0                   py37_0  
sqlite                    3.31.1               h2a8f88b_1  
tensorboard               2.1.0                     py3_0  
tensorflow                1.15.0          gpu_py37hc3743a6_0  
tensorflow-base           1.15.0          gpu_py37h1afeea4_0  
tensorflow-estimator      1.15.1             pyh2649769_0  
tensorflow-gpu            1.15.0               h0d30ee6_0  
termcolor                 1.1.0                    py37_1  
tk                        8.6.8                hfa6e2cd_0  
toolz                     0.10.0                     py_0  
toposort                  1.5                        py_3    conda-forge
tornado                   6.0.4            py37he774522_1  
tqdm                      4.46.0                     py_0  
urllib3                   1.25.8                   py37_0  
vc                        14.1                 h0510ff6_4  
vs2015_runtime            14.16.27012          hf0eaf9b_1  
werkzeug                  0.16.1                     py_0  
wheel                     0.34.2                   py37_0  
win_inet_pton             1.1.0                    py37_0  
wincertstore              0.2                      py37_0  
wrapt                     1.12.1           py37he774522_1  
xz                        5.2.5                h62dcd97_0  
yaml                      0.1.7                hc54c509_2  
zlib                      1.2.11               h62dcd97_4  
zstd                      1.3.7                h508b16e_0  

=============== State File =================
{
  "name": "lightweight",
  "sessions": {
    "1": {
      "timestamp": 1590147873.7793572,
      "no_logs": false,
      "pingpong": true,
      "loss_names": {
        "a": [
          "face_loss"
        ],
        "b": [
          "face_loss"
        ]
      },
      "batchsize": 16,
      "iterations": 1,
      "config": {
        "learning_rate": 5e-05
      }
    }
  },
  "lowest_avg_loss": {
    "a": 0.19267281889915466
  },
  "iterations": 1,
  "inputs": {
    "face_in:0": [
      64,
      64,
      3
    ]
  },
  "training_size": 256,
  "config": {
    "coverage": 68.75,
    "mask_type": null,
    "mask_blur_kernel": 3,
    "mask_threshold": 4,
    "learn_mask": false,
    "icnr_init": false,
    "conv_aware_init": false,
    "reflect_padding": false,
    "penalized_mask_loss": true,
    "loss_function": "mae",
    "learning_rate": 5e-05
  }
}

================= Configs ==================
--------- .faceswap ---------
backend:                  nvidia

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.box_blend]
type:                     gaussian
distance:                 11.0
radius:                   5.0
passes:                   1

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0

[scaling.sharpen]
method:                   unsharp_mask
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7
scalefactor:              0.709
batch-size:               8

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
coverage:                 68.75
mask_type:                none
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False
icnr_init:                False
conv_aware_init:          False
reflect_padding:          False
penalized_mask_loss:      True
loss_function:            mae
learning_rate:            5e-05

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
clipnorm:                 True
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
clipnorm:                 True
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4


Re: it crashes as soon as i try to train it

Post by torzdf » Sat May 23, 2020 11:17 am

Most likely you have differently sized training images in one of your training folders.
My word is final
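If it helps to verify this, the check can be scripted. A minimal sketch (not part of Faceswap; it assumes Pillow, which the environment above already includes, and the folder path in the usage line below is a placeholder):

```python
import os
from PIL import Image

def check_folder(folder):
    """Print unreadable files and return the set of image sizes in a folder."""
    sizes = set()
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        try:
            with Image.open(path) as img:
                sizes.add(img.size)  # (width, height)
        except OSError:
            print(f"Unreadable or non-image file: {path}")
    print(f"{folder}: {len(sizes)} distinct size(s): {sorted(sizes)}")
    return sizes
```

Run it against each of the A and B folders, e.g. `check_folder(r"C:\Users\phant\Documents\faceswap\humanA_face")`; anything other than exactly one size, or any unreadable file, would explain the crash.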


Re: it crashes as soon as i try to train it

Post by satoshi tamura » Sun May 24, 2020 6:52 am

Thank you for your reply.
All of the images are 256 x 256. Or does "size" refer to the file size?


Re: it crashes as soon as i try to train it

Post by satoshi tamura » Sun May 24, 2020 6:56 am

The files used for training are the ones automatically output by extraction, so they are all the same size.


Re: it crashes as soon as i try to train it

Post by torzdf » Sun May 24, 2020 11:11 am

The issue is occurring on both the A and B sides, so I'm not sure what is going wrong here. You will need to run with the TRACE log level and then provide the full faceswap.log from your Faceswap folder.

NB: TRACE is slow and will generate large log files.
My word is final


Re: it crashes as soon as i try to train it

Post by satoshi tamura » Mon May 25, 2020 10:59 am

"Training" was successfully performed. It seems that the specified folder was wrong. Thank you for your support.
