Caught exception in thread: '_training'

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.


Caught exception in thread: '_training'

Post by adam_macchiato »

Hi,

I have already searched the forum and tried rolling back. I deleted Conda and CUDA entirely and reinstalled faceswap, but the problem persists: none of my models can continue training, although converting works fine. What can I do?

Thank you.

Code:

09/15/2022 10:11:47 ERROR    Caught exception in thread: '_training'
09/15/2022 10:11:49 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "C:\Users\adama\faceswap\lib\cli\launcher.py", line 201, in execute_script
    process.process()
  File "C:\Users\adama\faceswap\scripts\train.py", line 217, in process
    self._end_thread(thread, err)
  File "C:\Users\adama\faceswap\scripts\train.py", line 257, in _end_thread
    thread.join()
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\adama\faceswap\scripts\train.py", line 279, in _training
    raise err
  File "C:\Users\adama\faceswap\scripts\train.py", line 269, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\adama\faceswap\scripts\train.py", line 357, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 246, in train_one_step
    self._update_viewers(viewer, timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 354, in _update_viewers
    self._timelapse.output_timelapse(timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 1074, in output_timelapse
    image = self._samples.show_sample()
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 647, in show_sample
    return self._compile_preview(preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 743, in _compile_preview
    display = self._to_full_frame(side, samples, preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 798, in _to_full_frame
    images = self._compile_masked(images, samples[-1])
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in _compile_masked
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'

09/15/2022 10:11:49 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\adama\faceswap\crash_report.2022.09.15.101147088901.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.

Re: Caught exception in thread: '_training'

Post by torzdf »

Code:

Crash report written to 'C:\Users\adama\faceswap\crash_report.2022.09.15.101147088901.log'. You MUST provide this file if seeking assistance

My word is final


Re: Caught exception in thread: '_training'

Post by adam_macchiato »

Hi, here is the report. Thank you for your help.

Code:

09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7570C70>, weight: 3.0, mask_channel: 4)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7570B80>, weight: 2.0, mask_channel: 5)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C75616D0>, weight: 1.0, mask_channel: 3)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C75617F0>, weight: 3.0, mask_channel: 4)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C755C700>, weight: 2.0, mask_channel: 5)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7561EE0>, weight: 1.0, mask_channel: 3)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C755C790>, weight: 3.0, mask_channel: 4)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C74D6640>, weight: 2.0, mask_channel: 5)
09/15/2022 10:09:59 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x0000027F50CB9CD0>, weight: 1.0, mask_channel: 3)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C21CC5E0>, weight: 3.0, mask_channel: 4)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7580FD0>, weight: 2.0, mask_channel: 5)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C1DC5040>, weight: 1.0, mask_channel: 3)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7570C70>, weight: 3.0, mask_channel: 4)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7570B80>, weight: 2.0, mask_channel: 5)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C75616D0>, weight: 1.0, mask_channel: 3)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C75617F0>, weight: 3.0, mask_channel: 4)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C755C700>, weight: 2.0, mask_channel: 5)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C7561EE0>, weight: 1.0, mask_channel: 3)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C755C790>, weight: 3.0, mask_channel: 4)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000284C74D6640>, weight: 2.0, mask_channel: 5)
09/15/2022 10:10:17 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/15/2022 10:11:34 MainProcess     _training                      _base           output_timelapse               DEBUG    Ouputting time-lapse
09/15/2022 10:11:34 MainProcess     _training                      _base           _setup                         DEBUG    Setting up time-lapse
09/15/2022 10:11:34 MainProcess     _training                      _base           _setup                         DEBUG    Time-lapse output set to 'D:\Temp\TimeLine'
09/15/2022 10:11:34 MainProcess     _training                      utils           get_image_paths                DEBUG    Scanned Folder contains 8017 files
09/15/2022 10:11:34 MainProcess     _training                      utils           get_image_paths                DEBUG    Returning 8017 images
09/15/2022 10:11:34 MainProcess     _training                      utils           get_image_paths                DEBUG    Scanned Folder contains 7202 files
09/15/2022 10:11:34 MainProcess     _training                      utils           get_image_paths                DEBUG    Returning 7202 images
09/15/2022 10:11:34 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting time-lapse feed: (input_images: '{'a': ['C:\\Faceswap\\044A\\02485.png','C:\\Faceswap\\044A\\02486.png', 'C:\\Faceswap\\044A\\02487.png', 'C:\\Faceswap\\044A\\02488.png', 'C:\\Faceswap\\044A\\02489.png', 'C:\\Faceswap\\044A\\02490.png', 'C:\\Faceswap\\044A\\02491.png', 'C:\\Faceswap\\044A\\02492.png', 'C:\\Faceswap\\044A\\02493.png', 'C:\\Faceswap\\044A\\02494.png', 'C:\\Faceswap\\044A\\02495.png', 'C:\\Faceswap\\044A\\02496.png', 'C:\\Faceswap\\044A\\02497.png', 'C:\\Faceswap\\044A\\02498.png', 'C:\\Faceswap\\044A\\02499.png', 'C:\\Faceswap\\044A\\02500.png', 'C:\\Faceswap\\044A\\02501.png', 'C:\\Faceswap\\044A\\02502.png', 'C:\\Faceswap\\044A\\02503.png', 'C:\\Faceswap\\044A\\025................

09/15/2022 10:11:34 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting preview feed: (side: 'a', images: 8017)
09/15/2022 10:11:34 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: a, is_display: True,  batch_size: 14
09/15/2022 10:11:34 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: phaze_a, side: a, images: 8017 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -5, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'mae', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'unet-dfl', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
09/15/2022 10:11:34 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: a, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
09/15/2022 10:11:34 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (288, 288, 6), buffer_size: 2, dtype: uint8
09/15/2022 10:11:34 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
09/15/2022 10:11:34 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
09/15/2022 10:11:34 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: False
09/15/2022 10:11:34 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_3', thread_count: 1)
09/15/2022 10:11:34 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_3'
09/15/2022 10:11:34 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_3'
09/15/2022 10:11:34 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_3'
09/15/2022 10:11:34 MainProcess     _run_3                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 8017, do_shuffle: False)
09/15/2022 10:11:34 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_3': 1
09/15/2022 10:11:34 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting preview feed: (side: 'b', images: 7202)
09/15/2022 10:11:34 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: b, is_display: True,  batch_size: 14
09/15/2022 10:11:34 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: phaze_a, side: b, images: 7202 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -5, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'mae', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'unet-dfl', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
09/15/2022 10:11:34 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: b, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
09/15/2022 10:11:34 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (288, 288, 6), buffer_size: 2, dtype: uint8
09/15/2022 10:11:34 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
09/15/2022 10:11:34 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
09/15/2022 10:11:34 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: False
09/15/2022 10:11:34 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_4', thread_count: 1)
09/15/2022 10:11:34 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_4'
09/15/2022 10:11:34 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_4'
09/15/2022 10:11:34 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_4'
09/15/2022 10:11:34 MainProcess     _run_4                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 7202, do_shuffle: False)
09/15/2022 10:11:34 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_4': 1
09/15/2022 10:11:34 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Set time-lapse feed: {'a': <generator object BackgroundGenerator.iterator at 0x00000284FF1A2D60>, 'b': <generator object BackgroundGenerator.iterator at 0x000002851B2CE6D0>}
09/15/2022 10:11:34 MainProcess     _training                      _base           _setup                         DEBUG    Set up time-lapse
09/15/2022 10:11:34 MainProcess     _training                      _base           output_timelapse               DEBUG    Getting time-lapse samples
09/15/2022 10:11:34 MainProcess     _training                      _base           generate_preview               DEBUG    Generating preview (is_timelapse: True)
09/15/2022 10:11:34 MainProcess     _training                      _base           generate_preview               DEBUG    Generated samples: is_timelapse: True, images: {'feed': {'a': (14, 288, 288, 3), 'b': (14, 288, 288, 3)}, 'samples': {'a': (14, 292, 292, 3), 'b': (14, 292, 292, 3)}, 'sides': {'a': (14, 288, 288, 1), 'b': (14, 288, 288, 1)}}
09/15/2022 10:11:34 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'a', samples: 14)
09/15/2022 10:11:34 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'b', samples: 14)
09/15/2022 10:11:34 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiled Samples: {'a': [(14, 288, 288, 3), (14, 292, 292, 3), (14, 288, 288, 1)], 'b': [(14, 288, 288, 3), (14, 292, 292, 3), (14, 288, 288, 1)]}
09/15/2022 10:11:34 MainProcess     _training                      _base           output_timelapse               DEBUG    Got time-lapse samples: {'a': 3, 'b': 3}
09/15/2022 10:11:34 MainProcess     _training                      _base           show_sample                    DEBUG    Showing sample
09/15/2022 10:11:34 MainProcess     _training                      _base           _get_predictions               DEBUG    Getting Predictions
09/15/2022 10:11:46 MainProcess     _training                      _base           _get_predictions               DEBUG    Returning predictions: {'a_a': (14, 256, 256, 3), 'b_b': (14, 256, 256, 3), 'a_b': (14, 256, 256, 3), 'b_a': (14, 256, 256, 3)}
09/15/2022 10:11:46 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 256, 256, 3), (14, 256, 256, 3)])
09/15/2022 10:11:46 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 292, prediction_size: 256, color: (0.0, 0.0, 1.0)
09/15/2022 10:11:46 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 292, 292, 3)
09/15/2022 10:11:46 MainProcess     _training                      multithreading  run                            DEBUG    Error in thread (_training): OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'\n
09/15/2022 10:11:47 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
09/15/2022 10:11:47 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
09/15/2022 10:11:47 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
09/15/2022 10:11:47 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
09/15/2022 10:11:47 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
09/15/2022 10:11:47 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training'
09/15/2022 10:11:47 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training'
Traceback (most recent call last):
  File "C:\Users\adama\faceswap\lib\cli\launcher.py", line 201, in execute_script
    process.process()
  File "C:\Users\adama\faceswap\scripts\train.py", line 217, in process
    self._end_thread(thread, err)
  File "C:\Users\adama\faceswap\scripts\train.py", line 257, in _end_thread
    thread.join()
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\adama\faceswap\scripts\train.py", line 279, in _training
    raise err
  File "C:\Users\adama\faceswap\scripts\train.py", line 269, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\adama\faceswap\scripts\train.py", line 357, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 246, in train_one_step
    self._update_viewers(viewer, timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 354, in _update_viewers
    self._timelapse.output_timelapse(timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 1074, in output_timelapse
    image = self._samples.show_sample()
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 647, in show_sample
    return self._compile_preview(preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 743, in _compile_preview
    display = self._to_full_frame(side, samples, preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 798, in _to_full_frame
    images = self._compile_masked(images, samples[-1])
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in _compile_masked
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'


============ System Information ============
encoding:            cp950
git_branch:          master
git_commits:         952d799 Bugfixes:   - Extract - batch mode. Exclude folders with no images   - Train. Trigger the correct preview/mask update from gui trigger
gpu_cuda:            11.7
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce RTX 3090
gpu_devices_active:  GPU_0
gpu_driver:          516.94
gpu_vram:            GPU_0: 24576MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19044-SP0
os_release:          10
py_command:          C:\Users\adama\faceswap\faceswap.py train -A C:/Faceswap/044A -B C:/Faceswap/Face Li Finall -2 -m D:/Temp/Model/Ayu + Li -t phaze-a -bs 8 -it 2000000 -D default -s 250 -ss 25000 -tia C:/Faceswap/044A -tib C:/Faceswap/Face Li Finall -2 -to D:/Temp/TimeLine -L INFO -gui
py_conda_version:    conda 4.14.0
py_implementation:   CPython
py_version:          3.9.13
py_virtual_env:      True
sys_cores:           32
sys_processor:       AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD
sys_ram:             Total: 65460MB, Available: 50257MB, Used: 15202MB, Free: 50257MB

=============== Pip Packages ===============
absl-py==1.2.0
astunparse==1.6.3
cachetools==5.2.0
certifi==2022.6.15.2
charset-normalizer==2.1.1
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
colorama @ file:///C:/Windows/TEMP/abs_9439aeb1-0254-449a-96f7-33ab5eb17fc8apleb4yn/croots/recipe/colorama_1657009099097/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
dm-tree==0.1.5
fastcluster @ file:///D:/bld/fastcluster_1649783471014/work
ffmpy==0.3.0
flatbuffers==1.12
fonttools==4.25.0
gast==0.4.0
google-auth==2.11.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.48.1
h5py==3.7.0
idna==3.4
imageio @ file:///C:/Windows/TEMP/abs_24c1b783-7540-4ca9-a1b1-0e8aa8e6ae64hb79ssux/croots/recipe/imageio_1658785038775/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==4.12.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1633637554808/work
keras==2.9.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1653292407425/work
libclang==14.0.6
Markdown==3.4.1
MarkupSafe==2.1.1
matplotlib @ file:///C:/ci/matplotlib-suite_1660169687702/work
mkl-fft==1.3.1
mkl-random @ file:///C:/ci/mkl_random_1626186184308/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/Windows/Temp/abs_e2036a32-9fe9-47f3-a04c-dbb1c232ba4b334exiur/croots/recipe/numexpr_1656940304835/work
numpy @ file:///C:/Windows/Temp/abs_2a1e1vbeag/croots/recipe/numpy_and_numpy_base_1659432712056/work
nvidia-ml-py==11.515.75
oauthlib==3.2.1
opencv-python==4.6.0.66
opt-einsum==3.3.0
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==9.2.0
ply==3.11
protobuf==3.19.5
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==302
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp39-none-win_amd64.whl
requests==2.28.1
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///D:/bld/scikit-learn_1659726281030/work
scipy @ file:///C:/bld/scipy_1658811088396/work
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.9.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.9.0
tensorflow-gpu==2.9.2
tensorflow-io-gcs-filesystem==0.27.0
tensorflow-probability @ file:///tmp/build/80754af9/tensorflow-probability_1633017132682/work
termcolor==2.0.1
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/ci/tornado_1662458743919/work
tqdm @ file:///C:/ci/tqdm_1650636210717/work
typing_extensions @ file:///C:/Windows/TEMP/abs_dd2d0moa85/croots/recipe/typing_extensions_1659638831135/work
urllib3==1.26.12
Werkzeug==2.2.2
wincertstore==0.2
wrapt==1.14.1
zipp==3.8.1

============== Conda Packages ==============
# packages in environment at C:\Users\adama\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   1.2.0                    pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
brotli                    1.0.9                h2bbff1b_7  
brotli-bin                1.0.9                h2bbff1b_7  
ca-certificates           2022.6.15.2          h5b45459_0    conda-forge
cachetools                5.2.0                    pypi_0    pypi
certifi                   2022.6.15.2        pyhd8ed1ab_0    conda-forge
charset-normalizer        2.1.1                    pypi_0    pypi
cloudpickle               2.0.0              pyhd3eb1b0_0  
colorama                  0.4.5            py39haa95532_0  
cudatoolkit               11.2.2              h933977f_10    conda-forge
cudnn                     8.1.0.77             h3e0f4f4_0    conda-forge
cycler                    0.11.0             pyhd3eb1b0_0  
decorator                 5.1.1              pyhd3eb1b0_0  
dm-tree                   0.1.5            py39hf11a4ad_0  
fastcluster               1.2.6            py39h2e25243_1    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.3.0                    pypi_0    pypi
flatbuffers               1.12                     pypi_0    pypi
fonttools                 4.25.0             pyhd3eb1b0_0  
freetype                  2.10.4               hd328e21_0  
gast                      0.4.0                    pypi_0    pypi
git                       2.34.1               haa95532_0  
glib                      2.69.1               h5dc1a3c_1  
google-auth               2.11.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.48.1                   pypi_0    pypi
gst-plugins-base          1.18.5               h9e645db_0  
gstreamer                 1.18.5               hd78058f_0  
h5py                      3.7.0                    pypi_0    pypi
icu                       58.2                 ha925a31_3  
idna                      3.4                      pypi_0    pypi
imageio                   2.19.3           py39haa95532_0  
imageio-ffmpeg            0.4.7              pyhd8ed1ab_0    conda-forge
importlib-metadata        4.12.0                   pypi_0    pypi
intel-openmp              2021.4.0          haa95532_3556  
joblib                    1.1.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h2bbff1b_0  
keras                     2.9.0                    pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.2            py39hd77b12b_0  
lerc                      3.0                  hd77b12b_0  
libblas                   3.9.0           1_h8933c1f_netlib    conda-forge
libbrotlicommon           1.0.9                h2bbff1b_7  
libbrotlidec              1.0.9                h2bbff1b_7  
libbrotlienc              1.0.9                h2bbff1b_7  
libcblas                  3.9.0           5_hd5c7e75_netlib    conda-forge
libclang                  14.0.6                   pypi_0    pypi
libdeflate                1.8                  h2bbff1b_5  
libffi                    3.4.2                hd77b12b_4  
libiconv                  1.16                 h2bbff1b_2  
liblapack                 3.9.0           5_hd5c7e75_netlib    conda-forge
libogg                    1.3.5                h2bbff1b_1  
libpng                    1.6.37               h2a8f88b_0  
libtiff                   4.4.0                h8a3f274_0  
libvorbis                 1.3.7                he774522_0  
libwebp                   1.2.2                h2bbff1b_0  
libxml2                   2.9.14               h0ad7f3c_0  
libxslt                   1.1.35               h2bbff1b_0  
lz4-c                     1.9.3                h2bbff1b_1  
m2w64-gcc-libgfortran     5.3.0                         6    conda-forge
m2w64-gcc-libs            5.3.0                         7    conda-forge
m2w64-gcc-libs-core       5.3.0                         7    conda-forge
m2w64-gmp                 6.1.0                         2    conda-forge
m2w64-libwinpthread-git   5.0.0.4634.697f757               2    conda-forge
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.1                    pypi_0    pypi
matplotlib                3.5.2            py39haa95532_0  
matplotlib-base           3.5.2            py39hd77b12b_0  
mkl                       2021.4.0           haa95532_640  
mkl-service               2.4.0            py39h2bbff1b_0  
mkl_fft                   1.3.1            py39h277e83a_0  
mkl_random                1.2.2            py39hf11a4ad_0  
msys2-conda-epoch         20160418                      1    conda-forge
munkres                   1.1.4                      py_0  
numexpr                   2.8.3            py39hb80d3ca_0  
numpy                     1.23.1           py39h7a0a035_0  
numpy-base                1.23.1           py39hca35cd5_0  
nvidia-ml-py              11.515.75                pypi_0    pypi
oauthlib                  3.2.1                    pypi_0    pypi
opencv-python             4.6.0.66                 pypi_0    pypi
openssl                   1.1.1q               h8ffe710_0    conda-forge
opt-einsum                3.3.0                    pypi_0    pypi
packaging                 21.3               pyhd3eb1b0_0  
pcre                      8.45                 hd77b12b_0  
pillow                    9.2.0            py39hdc2b20a_1  
pip                       22.1.2           py39haa95532_0  
ply                       3.11             py39haa95532_0  
protobuf                  3.19.5                   pypi_0    pypi
psutil                    5.9.0            py39h2bbff1b_0  
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pyparsing                 3.0.9            py39haa95532_0  
pyqt                      5.15.7           py39hd77b12b_0  
pyqt5-sip                 12.11.0          py39hd77b12b_0  
python                    3.9.13               h6244533_1  
python-dateutil           2.8.2              pyhd3eb1b0_0  
python_abi                3.9                      2_cp39    conda-forge
pywin32                   302              py39h2bbff1b_2  
pywinpty                  2.0.2            py39h5da7b33_0  
qt-main                   5.15.2               he8e5bd7_7  
qt-webengine              5.15.9               hb9a9bb5_4  
qtwebkit                  5.212                h3ad3cdb_4  
requests                  2.28.1                   pypi_0    pypi
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-learn              1.1.2            py39hfd4428b_0    conda-forge
scipy                     1.8.1            py39h5567194_2    conda-forge
setuptools                63.4.1           py39haa95532_0  
sip                       6.6.2            py39hd77b12b_0  
six                       1.16.0             pyhd3eb1b0_1  
sqlite                    3.39.2               h2bbff1b_0  
tensorboard               2.9.1                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow-estimator      2.9.0                    pypi_0    pypi
tensorflow-gpu            2.9.2                    pypi_0    pypi
tensorflow-io-gcs-filesystem 0.27.0                   pypi_0    pypi
tensorflow-probability    0.14.0             pyhd3eb1b0_0  
termcolor                 2.0.1                    pypi_0    pypi
threadpoolctl             3.1.0              pyh8a188c0_0    conda-forge
tk                        8.6.12               h2bbff1b_0  
toml                      0.10.2             pyhd3eb1b0_0  
tornado                   6.2              py39h2bbff1b_0  
tqdm                      4.64.0           py39haa95532_0  
typing-extensions         4.3.0            py39haa95532_0  
typing_extensions         4.3.0            py39haa95532_0  
tzdata                    2022c                h04d1e81_0  
urllib3                   1.26.12                  pypi_0    pypi
vc                        14.2                 h21ff451_1  
vs2015_runtime            14.27.29016          h5e58377_2  
werkzeug                  2.2.2                    pypi_0    pypi
wheel                     0.37.1             pyhd3eb1b0_0  
wincertstore              0.2              py39haa95532_2  
winpty                    0.4.3                         4  
wrapt                     1.14.1                   pypi_0    pypi
xz                        5.2.5                h8cc25b3_1  
zipp                      3.8.1                    pypi_0    pypi
zlib                      1.2.12               h8cc25b3_3  
zstd                      1.5.2                h19a0ad4_0  

================= Configs ==================
--------- .faceswap ---------
backend:                  nvidia

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0
erosion_top:              0.0
erosion_bottom:           0.0
erosion_left:             0.0
erosion_right:            0.0

[scaling.sharpen]
method:                   none
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto
skip_mux:                 False

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
separate_mask:            False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
separate_mask:            False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False

[align.fan]
batch-size:               64

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
scalefactor:              0.709
batch-size:               8
cpu:                      True
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7

[detect.s3fd]
confidence:               70
batch-size:               12

[mask.bisenet_fp]
batch-size:               8
cpu:                      False
weights:                  faceswap
include_ears:             False
include_hair:             False
include_glasses:          True

[mask.custom]
batch-size:               8
centering:                face
fill:                     False

[mask.unet_dfl]
batch-size:               64

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
centering:                face
coverage:                 87.5
icnr_init:                False
conv_aware_init:          False
optimizer:                adam
learning_rate:            5e-05
epsilon_exponent:         -5
autoclip:                 False
reflect_padding:          False
allow_growth:             False
mixed_precision:          False
nan_protection:           True
convert_batchsize:        16

[global.loss]
loss_function:            mae
loss_function_2:          mse
loss_weight_2:            100
loss_function_3:          None
loss_weight_3:            0
loss_function_4:          None
loss_weight_4:            0
mask_loss_function:       mse
eye_multiplier:           3
mouth_multiplier:         2
penalized_mask_loss:      True
mask_type:                unet-dfl
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False

[model.dfaker]
output_size:              128

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.phaze_a]
output_size:              256
shared_fc:                none
enable_gblock:            True
split_fc:                 True
split_gblock:             False
split_decoders:           False
enc_architecture:         efficientnet_v2_l
enc_scaling:              100
enc_load_weights:         True
bottleneck_type:          dense
bottleneck_norm:          none
bottleneck_size:          512
bottleneck_in_encoder:    True
fc_depth:                 1
fc_min_filters:           1280
fc_max_filters:           1280
fc_dimensions:            8
fc_filter_slope:          -0.5
fc_dropout:               0.0
fc_upsampler:             upsample2d
fc_upsamples:             1
fc_upsample_filters:      1280
fc_gblock_depth:          3
fc_gblock_min_nodes:      512
fc_gblock_max_nodes:      512
fc_gblock_filter_slope:   -0.5
fc_gblock_dropout:        0.0
dec_upscale_method:       resize_images
dec_upscales_in_fc:       0
dec_norm:                 none
dec_min_filters:          160
dec_max_filters:          640
dec_slope_mode:           full
dec_filter_slope:         -0.33
dec_res_blocks:           1
dec_output_kernel:        3
dec_gaussian:             True
dec_skip_last_residual:   False
freeze_layers:            keras_encoder
load_layers:              encoder
fs_original_depth:        4
fs_original_min_filters:  128
fs_original_max_filters:  1024
fs_original_use_alt:      False
mobilenet_width:          1.0
mobilenet_depth:          1
mobilenet_dropout:        0.001
mobilenet_minimalistic:   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4
User avatar
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 623 times

Re: Caught exception in thread: '_training'

Post by torzdf »

Ok, it's a long shot, but in the first instance, could you please uninstall the globally installed "Cuda 11.7" and see if that makes a difference.
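[Editorial note] To illustrate the suggestion above: a system-wide CUDA install (11.7 in this case) that appears on PATH ahead of the conda environment's cudatoolkit 11.2 (listed in the package dump above) can shadow the libraries that TensorFlow 2.9 expects to load. A minimal sketch of how one might spot such entries, assuming Windows-style semicolon-separated PATH strings; the helper name is hypothetical and not part of faceswap:

```python
# Editorial sketch (not faceswap code): find CUDA toolkit entries on a
# Windows-style PATH string. Entries earlier in the list win, so a global
# CUDA 11.7 listed before the conda env can shadow cudatoolkit 11.2.
def cuda_dirs_on_path(path_string, sep=";"):
    """Return PATH entries that look like CUDA toolkit directories."""
    return [entry for entry in path_string.split(sep)
            if "cuda" in entry.lower()]

example_path = (r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin;"
                r"C:\Windows\system32")
# Prints the single v11.7 bin entry, flagging the global install
print(cuda_dirs_on_path(example_path))
```

If this reports a global toolkit directory, removing that install (or its PATH entry) lets the environment fall back to the conda-provided CUDA libraries.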

My word is final

User avatar
adam_macchiato
Posts: 16
Joined: Tue Jul 26, 2022 5:26 am
Has thanked: 4 times

Re: Caught exception in thread: '_training'

Post by adam_macchiato »

I tried uninstalling CUDA and faceswap, then reinstalled faceswap only, but it still doesn't work. :cry:

Code: Select all

09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6E4CC70>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD190>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD5B0>, weight: 1.0, mask_channel: 3)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD940>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDDE20>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD9D0>, weight: 1.0, mask_channel: 3)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DF25E0>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DF2AC0>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:08:18 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6E34BE0>, weight: 1.0, mask_channel: 3)
09/18/2022 20:08:18 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:18 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6E4C040>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:18 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:18 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6E4C4F0>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:18 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AA6272AF40>, weight: 1.0, mask_channel: 3)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6E4CC70>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD190>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD5B0>, weight: 1.0, mask_channel: 3)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD940>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDDE20>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DDD9D0>, weight: 1.0, mask_channel: 3)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DF25E0>, weight: 3.0, mask_channel: 4)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001AFC6DF2AC0>, weight: 2.0, mask_channel: 5)
09/18/2022 20:08:19 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/18/2022 20:09:38 MainProcess     _training                      _base           output_timelapse               DEBUG    Ouputting time-lapse
09/18/2022 20:09:38 MainProcess     _training                      _base           _setup                         DEBUG    Setting up time-lapse
09/18/2022 20:09:38 MainProcess     _training                      _base           _setup                         DEBUG    Time-lapse output set to 'D:\Temp\TimeLine'
09/18/2022 20:09:38 MainProcess     _training                      utils           get_image_paths                DEBUG    Scanned Folder contains 8017 files
09/18/2022 20:09:38 MainProcess     _training                      utils           get_image_paths                DEBUG    Returning 8017 images
09/18/2022 20:09:38 MainProcess     _training                      utils           get_image_paths                DEBUG    Scanned Folder contains 7202 files
09/18/2022 20:09:38 MainProcess     _training                      utils           get_image_paths                DEBUG    Returning 7202 images
09/18/2022 20:09:38 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting time-lapse feed: (input_images: '{'a': ['C:\\Faceswap\\044A\\02485.png', 'C:\\Faceswap\\044A\\02486.png', 'C:\\Faceswap\\044A\\02487.png', 'C:\\Faceswap\\044A\\02488.png', 'C:\\Faceswap\\044A\\02489.png', 'C:\\Faceswap\\044A\\02490.png', 'C:\\Faceswap\\044A\\02491.png', 'C:\\Faceswap\\044A\\02492.png', 'C:\\Faceswap\\044A\\02493.png', 'C:\\Faceswap\\044A\\02494.png', 'C:\\Faceswap\\044A\\02495.png', 'C:\\Faceswap\\044A\\02496.png', 'C:\\Faceswap\\044A\\02497.png', 'C:\\Faceswap\\044A\\02498.png', 'C:\\Faceswap\\044A\\02499.png', 'C:\\Faceswap\\044A\\02500.png', 'C:\\Faceswap\\044A\\02501.png', 'C:\\Faceswap\\044A\\02502.png', 'C:\\Faceswap\\044A\\02503.png', 'C:\\Faceswap\\044A\\02504.png', 'C:\\Faceswap\\044A\\02505.png', 'C:\\Faceswap\\044A\\02506.png', 'C:\\Faceswap\\044A\\02507.png', 'C:\\Faceswap\\044A\\02508.png', 'C:\\Faceswap\\044A\\02509.png', 'C:\\Faceswap\\044A\\02510.png', 'C:\\Faceswap\\044A\\02511.png', 'C:\\Faceswap\\044A\\02512.png',...............................

09/18/2022 20:09:38 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting preview feed: (side: 'a', images: 8017)
09/18/2022 20:09:38 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: a, is_display: True,  batch_size: 14
09/18/2022 20:09:38 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: phaze_a, side: a, images: 8017 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -5, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'mae', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'unet-dfl', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
09/18/2022 20:09:38 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: a, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
09/18/2022 20:09:38 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (288, 288, 6), buffer_size: 2, dtype: uint8
09/18/2022 20:09:38 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
09/18/2022 20:09:38 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
09/18/2022 20:09:38 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: False
09/18/2022 20:09:38 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_3', thread_count: 1)
09/18/2022 20:09:38 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_3'
09/18/2022 20:09:38 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_3'
09/18/2022 20:09:38 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_3'
09/18/2022 20:09:38 MainProcess     _run_3                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 8017, do_shuffle: False)
09/18/2022 20:09:38 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_3': 1
09/18/2022 20:09:38 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting preview feed: (side: 'b', images: 7202)
09/18/2022 20:09:38 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: b, is_display: True,  batch_size: 14
09/18/2022 20:09:38 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: phaze_a, side: b, images: 7202 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -5, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'mae', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'unet-dfl', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
09/18/2022 20:09:38 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: b, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
09/18/2022 20:09:38 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (288, 288, 6), buffer_size: 2, dtype: uint8
09/18/2022 20:09:38 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
09/18/2022 20:09:38 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
09/18/2022 20:09:38 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: False
09/18/2022 20:09:38 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_4', thread_count: 1)
09/18/2022 20:09:38 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_4'
09/18/2022 20:09:38 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_4'
09/18/2022 20:09:38 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_4'
09/18/2022 20:09:38 MainProcess     _run_4                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 7202, do_shuffle: False)
09/18/2022 20:09:38 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_4': 1
09/18/2022 20:09:38 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Set time-lapse feed: {'a': <generator object BackgroundGenerator.iterator at 0x000001AA5C112510>, 'b': <generator object BackgroundGenerator.iterator at 0x000001AFCD119A50>}
09/18/2022 20:09:38 MainProcess     _training                      _base           _setup                         DEBUG    Set up time-lapse
09/18/2022 20:09:38 MainProcess     _training                      _base           output_timelapse               DEBUG    Getting time-lapse samples
09/18/2022 20:09:38 MainProcess     _training                      _base           generate_preview               DEBUG    Generating preview (is_timelapse: True)
09/18/2022 20:09:38 MainProcess     _training                      _base           generate_preview               DEBUG    Generated samples: is_timelapse: True, images: {'feed': {'a': (14, 288, 288, 3), 'b': (14, 288, 288, 3)}, 'samples': {'a': (14, 292, 292, 3), 'b': (14, 292, 292, 3)}, 'sides': {'a': (14, 288, 288, 1), 'b': (14, 288, 288, 1)}}
09/18/2022 20:09:38 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'a', samples: 14)
09/18/2022 20:09:38 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'b', samples: 14)
09/18/2022 20:09:38 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiled Samples: {'a': [(14, 288, 288, 3), (14, 292, 292, 3), (14, 288, 288, 1)], 'b': [(14, 288, 288, 3), (14, 292, 292, 3), (14, 288, 288, 1)]}
09/18/2022 20:09:38 MainProcess     _training                      _base           output_timelapse               DEBUG    Got time-lapse samples: {'a': 3, 'b': 3}
09/18/2022 20:09:38 MainProcess     _training                      _base           show_sample                    DEBUG    Showing sample
09/18/2022 20:09:38 MainProcess     _training                      _base           _get_predictions               DEBUG    Getting Predictions
09/18/2022 20:09:48 MainProcess     _training                      _base           _get_predictions               DEBUG    Returning predictions: {'a_a': (14, 256, 256, 3), 'b_b': (14, 256, 256, 3), 'a_b': (14, 256, 256, 3), 'b_a': (14, 256, 256, 3)}
09/18/2022 20:09:48 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 256, 256, 3), (14, 256, 256, 3)])
09/18/2022 20:09:48 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 292, prediction_size: 256, color: (0.0, 0.0, 1.0)
09/18/2022 20:09:48 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 292, 292, 3)
09/18/2022 20:09:48 MainProcess     _training                      multithreading  run                            DEBUG    Error in thread (_training): OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'\n
09/18/2022 20:09:48 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
09/18/2022 20:09:48 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
09/18/2022 20:09:48 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
09/18/2022 20:09:48 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
09/18/2022 20:09:48 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
09/18/2022 20:09:48 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training'
09/18/2022 20:09:48 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training'
Traceback (most recent call last):
  File "C:\Users\adama\faceswap\lib\cli\launcher.py", line 201, in execute_script
    process.process()
  File "C:\Users\adama\faceswap\scripts\train.py", line 217, in process
    self._end_thread(thread, err)
  File "C:\Users\adama\faceswap\scripts\train.py", line 257, in _end_thread
    thread.join()
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\adama\faceswap\scripts\train.py", line 279, in _training
    raise err
  File "C:\Users\adama\faceswap\scripts\train.py", line 269, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\adama\faceswap\scripts\train.py", line 357, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 246, in train_one_step
    self._update_viewers(viewer, timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 354, in _update_viewers
    self._timelapse.output_timelapse(timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 1074, in output_timelapse
    image = self._samples.show_sample()
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 647, in show_sample
    return self._compile_preview(preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 743, in _compile_preview
    display = self._to_full_frame(side, samples, preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 798, in _to_full_frame
    images = self._compile_masked(images, samples[-1])
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in _compile_masked
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'
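For context, the exception above comes from `cv2.addWeighted`, which requires both input arrays to have exactly the same size and channel count. A minimal numpy sketch of that precondition, using illustrative shapes loosely based on the debug log above (292px overlaid preview samples vs. 288px mask arrays; the helper name `can_add_weighted` is hypothetical, not a faceswap function):

```python
import numpy as np

def can_add_weighted(img: np.ndarray, mask: np.ndarray) -> bool:
    """cv2.addWeighted raises (-209) unless shapes match exactly."""
    return img.shape == mask.shape

sample = np.zeros((292, 292, 3), dtype=np.float32)  # overlaid preview frame
mask = np.zeros((288, 288, 3), dtype=np.float32)    # mask at a different size

print(can_add_weighted(sample, mask))           # False -> cv2 would raise
print(can_add_weighted(sample, sample.copy()))  # True  -> cv2 would succeed
```

If the two shapes printed by a check like this ever differ, `cv2.addWeighted` will fail with the same "Sizes of input arguments do not match" error shown in the traceback.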


============ System Information ============
encoding:            cp950
git_branch:          master
git_commits:         2d312a9 Minor updates and fixups   - Mask Tool - Typing + BiSeNet mask update fix   - Alignments Tool - Auto search for alignments file
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce RTX 3090
gpu_devices_active:  GPU_0
gpu_driver:          516.94
gpu_vram:            GPU_0: 24576MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19044-SP0
os_release:          10
py_command:          C:\Users\adama\faceswap\faceswap.py train -A C:/Faceswap/044A -B C:/Faceswap/Face Li Finall -2 -m D:/Temp/Model/Ayu + Li -t phaze-a -bs 8 -it 2000000 -D default -s 250 -ss 25000 -tia C:/Faceswap/044A -tib C:/Faceswap/Face Li Finall -2 -to D:/Temp/TimeLine -L INFO -gui
py_conda_version:    conda 4.14.0
py_implementation:   CPython
py_version:          3.9.13
py_virtual_env:      True
sys_cores:           32
sys_processor:       AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD
sys_ram:             Total: 65460MB, Available: 48054MB, Used: 17405MB, Free: 48054MB

=============== Pip Packages ===============
absl-py==1.2.0
astunparse==1.6.3
cachetools==5.2.0
certifi==2022.9.14
charset-normalizer==2.1.1
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
colorama @ file:///C:/Windows/TEMP/abs_9439aeb1-0254-449a-96f7-33ab5eb17fc8apleb4yn/croots/recipe/colorama_1657009099097/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
dm-tree==0.1.5
fastcluster @ file:///D:/bld/fastcluster_1649783471014/work
ffmpy==0.3.0
flatbuffers==1.12
fonttools==4.25.0
gast==0.4.0
google-auth==2.11.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.49.0
h5py==3.7.0
idna==3.4
imageio @ file:///C:/Windows/TEMP/abs_24c1b783-7540-4ca9-a1b1-0e8aa8e6ae64hb79ssux/croots/recipe/imageio_1658785038775/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==4.12.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1663332044897/work
keras==2.9.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1653292407425/work
libclang==14.0.6
Markdown==3.4.1
MarkupSafe==2.1.1
matplotlib @ file:///C:/ci/matplotlib-suite_1660169687702/work
mkl-fft==1.3.1
mkl-random @ file:///C:/ci/mkl_random_1626186184308/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/Windows/Temp/abs_e2036a32-9fe9-47f3-a04c-dbb1c232ba4b334exiur/croots/recipe/numexpr_1656940304835/work
numpy @ file:///C:/Windows/Temp/abs_2a1e1vbeag/croots/recipe/numpy_and_numpy_base_1659432712056/work
nvidia-ml-py==11.515.75
oauthlib==3.2.1
opencv-python==4.6.0.66
opt-einsum==3.3.0
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==9.2.0
ply==3.11
protobuf==3.19.5
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==302
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp39-none-win_amd64.whl
requests==2.28.1
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///D:/bld/scikit-learn_1659726281030/work
scipy @ file:///C:/bld/scipy_1658811088396/work
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.9.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.9.0
tensorflow-gpu==2.9.2
tensorflow-io-gcs-filesystem==0.27.0
tensorflow-probability @ file:///tmp/build/80754af9/tensorflow-probability_1633017132682/work
termcolor==2.0.1
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/ci/tornado_1662458743919/work
tqdm @ file:///C:/ci/tqdm_1650636210717/work
typing_extensions @ file:///C:/Windows/TEMP/abs_dd2d0moa85/croots/recipe/typing_extensions_1659638831135/work
urllib3==1.26.12
Werkzeug==2.2.2
wincertstore==0.2
wrapt==1.14.1
zipp==3.8.1

============== Conda Packages ==============
# packages in environment at C:\Users\adama\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   1.2.0                    pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
brotli                    1.0.9                h2bbff1b_7  
brotli-bin                1.0.9                h2bbff1b_7  
ca-certificates           2022.9.14            h5b45459_0    conda-forge
cachetools                5.2.0                    pypi_0    pypi
certifi                   2022.9.14          pyhd8ed1ab_0    conda-forge
charset-normalizer        2.1.1                    pypi_0    pypi
cloudpickle               2.0.0              pyhd3eb1b0_0  
colorama                  0.4.5            py39haa95532_0  
cudatoolkit               11.2.2              h933977f_10    conda-forge
cudnn                     8.1.0.77             h3e0f4f4_0    conda-forge
cycler                    0.11.0             pyhd3eb1b0_0  
decorator                 5.1.1              pyhd3eb1b0_0  
dm-tree                   0.1.5            py39hf11a4ad_0  
fastcluster               1.2.6            py39h2e25243_1    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.3.0                    pypi_0    pypi
flatbuffers               1.12                     pypi_0    pypi
fonttools                 4.25.0             pyhd3eb1b0_0  
freetype                  2.10.4               hd328e21_0  
gast                      0.4.0                    pypi_0    pypi
git                       2.34.1               haa95532_0  
glib                      2.69.1               h5dc1a3c_1  
google-auth               2.11.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.49.0                   pypi_0    pypi
gst-plugins-base          1.18.5               h9e645db_0  
gstreamer                 1.18.5               hd78058f_0  
h5py                      3.7.0                    pypi_0    pypi
icu                       58.2                 ha925a31_3  
idna                      3.4                      pypi_0    pypi
imageio                   2.19.3           py39haa95532_0  
imageio-ffmpeg            0.4.7              pyhd8ed1ab_0    conda-forge
importlib-metadata        4.12.0                   pypi_0    pypi
intel-openmp              2021.4.0          haa95532_3556  
joblib                    1.2.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h2bbff1b_0  
keras                     2.9.0                    pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.2            py39hd77b12b_0  
lerc                      3.0                  hd77b12b_0  
libblas                   3.9.0           1_h8933c1f_netlib    conda-forge
libbrotlicommon           1.0.9                h2bbff1b_7  
libbrotlidec              1.0.9                h2bbff1b_7  
libbrotlienc              1.0.9                h2bbff1b_7  
libcblas                  3.9.0           5_hd5c7e75_netlib    conda-forge
libclang                  14.0.6                   pypi_0    pypi
libdeflate                1.8                  h2bbff1b_5  
libffi                    3.4.2                hd77b12b_4  
libiconv                  1.16                 h2bbff1b_2  
liblapack                 3.9.0           5_hd5c7e75_netlib    conda-forge
libogg                    1.3.5                h2bbff1b_1  
libpng                    1.6.37               h2a8f88b_0  
libtiff                   4.4.0                h8a3f274_0  
libvorbis                 1.3.7                he774522_0  
libwebp                   1.2.2                h2bbff1b_0  
libxml2                   2.9.14               h0ad7f3c_0  
libxslt                   1.1.35               h2bbff1b_0  
lz4-c                     1.9.3                h2bbff1b_1  
m2w64-gcc-libgfortran     5.3.0                         6    conda-forge
m2w64-gcc-libs            5.3.0                         7    conda-forge
m2w64-gcc-libs-core       5.3.0                         7    conda-forge
m2w64-gmp                 6.1.0                         2    conda-forge
m2w64-libwinpthread-git   5.0.0.4634.697f757               2    conda-forge
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.1                    pypi_0    pypi
matplotlib                3.5.2            py39haa95532_0  
matplotlib-base           3.5.2            py39hd77b12b_0  
mkl                       2021.4.0           haa95532_640  
mkl-service               2.4.0            py39h2bbff1b_0  
mkl_fft                   1.3.1            py39h277e83a_0  
mkl_random                1.2.2            py39hf11a4ad_0  
msys2-conda-epoch         20160418                      1    conda-forge
munkres                   1.1.4                      py_0  
numexpr                   2.8.3            py39hb80d3ca_0  
numpy                     1.23.1           py39h7a0a035_0  
numpy-base                1.23.1           py39hca35cd5_0  
nvidia-ml-py              11.515.75                pypi_0    pypi
oauthlib                  3.2.1                    pypi_0    pypi
opencv-python             4.6.0.66                 pypi_0    pypi
openssl                   1.1.1q               h8ffe710_0    conda-forge
opt-einsum                3.3.0                    pypi_0    pypi
packaging                 21.3               pyhd3eb1b0_0  
pcre                      8.45                 hd77b12b_0  
pillow                    9.2.0            py39hdc2b20a_1  
pip                       22.1.2           py39haa95532_0  
ply                       3.11             py39haa95532_0  
protobuf                  3.19.5                   pypi_0    pypi
psutil                    5.9.0            py39h2bbff1b_0  
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pyparsing                 3.0.9            py39haa95532_0  
pyqt                      5.15.7           py39hd77b12b_0  
pyqt5-sip                 12.11.0          py39hd77b12b_0  
python                    3.9.13               h6244533_1  
python-dateutil           2.8.2              pyhd3eb1b0_0  
python_abi                3.9                      2_cp39    conda-forge
pywin32                   302              py39h2bbff1b_2  
pywinpty                  2.0.2            py39h5da7b33_0  
qt-main                   5.15.2               he8e5bd7_7  
qt-webengine              5.15.9               hb9a9bb5_4  
qtwebkit                  5.212                h3ad3cdb_4  
requests                  2.28.1                   pypi_0    pypi
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-learn              1.1.2            py39hfd4428b_0    conda-forge
scipy                     1.8.1            py39h5567194_2    conda-forge
setuptools                63.4.1           py39haa95532_0  
sip                       6.6.2            py39hd77b12b_0  
six                       1.16.0             pyhd3eb1b0_1  
sqlite                    3.39.2               h2bbff1b_0  
tensorboard               2.9.1                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow-estimator      2.9.0                    pypi_0    pypi
tensorflow-gpu            2.9.2                    pypi_0    pypi
tensorflow-io-gcs-filesystem 0.27.0                   pypi_0    pypi
tensorflow-probability    0.14.0             pyhd3eb1b0_0  
termcolor                 2.0.1                    pypi_0    pypi
threadpoolctl             3.1.0              pyh8a188c0_0    conda-forge
tk                        8.6.12               h2bbff1b_0  
toml                      0.10.2             pyhd3eb1b0_0  
tornado                   6.2              py39h2bbff1b_0  
tqdm                      4.64.0           py39haa95532_0  
typing-extensions         4.3.0            py39haa95532_0  
typing_extensions         4.3.0            py39haa95532_0  
tzdata                    2022c                h04d1e81_0  
urllib3                   1.26.12                  pypi_0    pypi
vc                        14.2                 h21ff451_1  
vs2015_runtime            14.27.29016          h5e58377_2  
werkzeug                  2.2.2                    pypi_0    pypi
wheel                     0.37.1             pyhd3eb1b0_0  
wincertstore              0.2              py39haa95532_2  
winpty                    0.4.3                         4  
wrapt                     1.14.1                   pypi_0    pypi
xz                        5.2.5                h8cc25b3_1  
zipp                      3.8.1                    pypi_0    pypi
zlib                      1.2.12               h8cc25b3_3  
zstd                      1.5.2                h19a0ad4_0  

================= Configs ==================
--------- .faceswap ---------
backend:                  nvidia

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0
erosion_top:              0.0
erosion_bottom:           0.0
erosion_left:             0.0
erosion_right:            0.0

[scaling.sharpen]
method:                   none
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto
skip_mux:                 False

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
separate_mask:            False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
separate_mask:            False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
scalefactor:              0.709
batch-size:               8
cpu:                      True
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.bisenet_fp]
batch-size:               8
cpu:                      False
weights:                  faceswap
include_ears:             False
include_hair:             False
include_glasses:          True

[mask.custom]
batch-size:               8
centering:                face
fill:                     False

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
centering:                face
coverage:                 87.5
icnr_init:                False
conv_aware_init:          False
optimizer:                adam
learning_rate:            5e-05
epsilon_exponent:         -5
autoclip:                 False
reflect_padding:          False
allow_growth:             False
mixed_precision:          True
nan_protection:           True
convert_batchsize:        16

[global.loss]
loss_function:            mae
loss_function_2:          mse
loss_weight_2:            100
loss_function_3:          None
loss_weight_3:            0
loss_function_4:          None
loss_weight_4:            0
mask_loss_function:       mse
eye_multiplier:           3
mouth_multiplier:         2
penalized_mask_loss:      True
mask_type:                unet-dfl
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False

[model.dfaker]
output_size:              128

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.phaze_a]
output_size:              256
shared_fc:                none
enable_gblock:            True
split_fc:                 True
split_gblock:             False
split_decoders:           False
enc_architecture:         efficientnet_v2_l
enc_scaling:              60
enc_load_weights:         True
bottleneck_type:          dense
bottleneck_norm:          none
bottleneck_size:          512
bottleneck_in_encoder:    True
fc_depth:                 1
fc_min_filters:           1280
fc_max_filters:           1280
fc_dimensions:            8
fc_filter_slope:          -0.5
fc_dropout:               0.0
fc_upsampler:             upsample2d
fc_upsamples:             1
fc_upsample_filters:      1280
fc_gblock_depth:          3
fc_gblock_min_nodes:      512
fc_gblock_max_nodes:      512
fc_gblock_filter_slope:   -0.5
fc_gblock_dropout:        0.0
dec_upscale_method:       resize_images
dec_upscales_in_fc:       0
dec_norm:                 none
dec_min_filters:          192
dec_max_filters:          960
dec_slope_mode:           full
dec_filter_slope:         -0.33
dec_res_blocks:           1
dec_output_kernel:        3
dec_gaussian:             True
dec_skip_last_residual:   False
freeze_layers:            keras_encoder
load_layers:              encoder
fs_original_depth:        4
fs_original_min_filters:  128
fs_original_max_filters:  1024
fs_original_use_alt:      False
mobilenet_width:          1.0
mobilenet_depth:          1
mobilenet_dropout:        0.001
mobilenet_minimalistic:   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4

Re: Caught exception in thread: '_training'

Post by torzdf »

Ok, I figured that was unlikely.

The issue appears to be the application of masks in the loss function. As this is not a widely reported issue, it is either specific to what you are doing, or it is due to the combination of some niche settings.

Can you please try the following:

  • Attempt training an original model, with learn_mask/penalized loss both set to False and eye/mouth multipliers set to 0
  • If this trains, start turning the mask-related options you had before back on until it fails.
  • If it doesn't fail at any stage, then try the same with your current model settings until you hit a failure.

We should be able to then go from there and evaluate if this is an issue in code or in your dataset.

My word is final


Re: Caught exception in thread: '_training'

Post by adam_macchiato »

Got it, I tried the situations below:

1) Use Original mode with eyes/mouth set to 0, start a new model - It works
2) Use Original mode with eyes/mouth reset to default, start a new model - It works
3) Use Phaze-A mode with eyes/mouth at default, V2_l with Stojo preset, start a new model - It works
4) Use Phaze-A mode and continue my existing model - It fails, with this error:
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'

So the problem is a Phaze-A setting? Thank you so much :idea:

Code: Select all

09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E30A60>, weight: 3.0, mask_channel: 4)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E30F40>, weight: 2.0, mask_channel: 5)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E273A0>, weight: 1.0, mask_channel: 3)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E27730>, weight: 3.0, mask_channel: 4)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E27C10>, weight: 2.0, mask_channel: 5)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E277C0>, weight: 1.0, mask_channel: 3)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E033D0>, weight: 3.0, mask_channel: 4)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E038B0>, weight: 2.0, mask_channel: 5)
09/19/2022 21:27:52 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001DC75FAC790>, weight: 1.0, mask_channel: 3)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001DC7DF2AF10>, weight: 3.0, mask_channel: 4)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E302E0>, weight: 2.0, mask_channel: 5)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001DC7DF2AEB0>, weight: 1.0, mask_channel: 3)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E30A60>, weight: 3.0, mask_channel: 4)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E30F40>, weight: 2.0, mask_channel: 5)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E273A0>, weight: 1.0, mask_channel: 3)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E27730>, weight: 3.0, mask_channel: 4)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E27C10>, weight: 2.0, mask_channel: 5)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E277C0>, weight: 1.0, mask_channel: 3)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E033D0>, weight: 3.0, mask_channel: 4)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001E1C6E038B0>, weight: 2.0, mask_channel: 5)
09/19/2022 21:28:08 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
09/19/2022 21:29:26 MainProcess     _training                      _base           output_timelapse               DEBUG    Ouputting time-lapse
09/19/2022 21:29:26 MainProcess     _training                      _base           _setup                         DEBUG    Setting up time-lapse
09/19/2022 21:29:26 MainProcess     _training                      _base           _setup                         DEBUG    Time-lapse output set to 'D:\Temp\TimeLine'
09/19/2022 21:29:26 MainProcess     _training                      utils           get_image_paths                DEBUG    Scanned Folder contains 8017 files
09/19/2022 21:29:26 MainProcess     _training                      utils           get_image_paths                DEBUG    Returning 8017 images
09/19/2022 21:29:26 MainProcess     _training                      utils           get_image_paths                DEBUG    Scanned Folder contains 7202 files
09/19/2022 21:29:26 MainProcess     _training                      utils           get_image_paths                DEBUG    Returning 7202 images
09/19/2022 21:29:26 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting time-lapse feed: (input_images: '{'a': ['C:\\Faceswap\\044A\\02485.png', ...................

09/19/2022 21:29:26 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting preview feed: (side: 'a', images: 8017)
09/19/2022 21:29:26 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: a, is_display: True,  batch_size: 14
09/19/2022 21:29:26 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: phaze_a, side: a, images: 8017 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -5, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'mae', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'unet-dfl', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
09/19/2022 21:29:26 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: a, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
09/19/2022 21:29:26 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (288, 288, 6), buffer_size: 2, dtype: uint8
09/19/2022 21:29:26 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
09/19/2022 21:29:26 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
09/19/2022 21:29:26 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: False
09/19/2022 21:29:26 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_3', thread_count: 1)
09/19/2022 21:29:26 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_3'
09/19/2022 21:29:26 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_3'
09/19/2022 21:29:26 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_3'
09/19/2022 21:29:26 MainProcess     _run_3                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 8017, do_shuffle: False)
09/19/2022 21:29:26 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_3': 1
09/19/2022 21:29:26 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Setting preview feed: (side: 'b', images: 7202)
09/19/2022 21:29:26 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: b, is_display: True,  batch_size: 14
09/19/2022 21:29:26 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: phaze_a, side: b, images: 7202 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -5, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'mae', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'unet-dfl', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
09/19/2022 21:29:26 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: b, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
09/19/2022 21:29:26 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (288, 288, 6), buffer_size: 2, dtype: uint8
09/19/2022 21:29:26 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
09/19/2022 21:29:26 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
09/19/2022 21:29:26 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: False
09/19/2022 21:29:26 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_4', thread_count: 1)
09/19/2022 21:29:26 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_4'
09/19/2022 21:29:26 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_4'
09/19/2022 21:29:26 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_4'
09/19/2022 21:29:26 MainProcess     _run_4                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 7202, do_shuffle: False)
09/19/2022 21:29:26 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_4': 1
09/19/2022 21:29:26 MainProcess     _training                      _base           set_timelapse_feed             DEBUG    Set time-lapse feed: {'a': <generator object BackgroundGenerator.iterator at 0x000001E1CE44AF90>, 'b': <generator object BackgroundGenerator.iterator at 0x000001E1CC29E740>}
09/19/2022 21:29:26 MainProcess     _training                      _base           _setup                         DEBUG    Set up time-lapse
09/19/2022 21:29:26 MainProcess     _training                      _base           output_timelapse               DEBUG    Getting time-lapse samples
09/19/2022 21:29:26 MainProcess     _training                      _base           generate_preview               DEBUG    Generating preview (is_timelapse: True)
09/19/2022 21:29:27 MainProcess     _training                      _base           generate_preview               DEBUG    Generated samples: is_timelapse: True, images: {'feed': {'a': (14, 288, 288, 3), 'b': (14, 288, 288, 3)}, 'samples': {'a': (14, 292, 292, 3), 'b': (14, 292, 292, 3)}, 'sides': {'a': (14, 288, 288, 1), 'b': (14, 288, 288, 1)}}
09/19/2022 21:29:27 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'a', samples: 14)
09/19/2022 21:29:27 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'b', samples: 14)
09/19/2022 21:29:27 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiled Samples: {'a': [(14, 288, 288, 3), (14, 292, 292, 3), (14, 288, 288, 1)], 'b': [(14, 288, 288, 3), (14, 292, 292, 3), (14, 288, 288, 1)]}
09/19/2022 21:29:27 MainProcess     _training                      _base           output_timelapse               DEBUG    Got time-lapse samples: {'a': 3, 'b': 3}
09/19/2022 21:29:27 MainProcess     _training                      _base           show_sample                    DEBUG    Showing sample
09/19/2022 21:29:27 MainProcess     _training                      _base           _get_predictions               DEBUG    Getting Predictions
09/19/2022 21:29:36 MainProcess     _training                      _base           _get_predictions               DEBUG    Returning predictions: {'a_a': (14, 256, 256, 3), 'b_b': (14, 256, 256, 3), 'a_b': (14, 256, 256, 3), 'b_a': (14, 256, 256, 3)}
09/19/2022 21:29:36 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 256, 256, 3), (14, 256, 256, 3)])
09/19/2022 21:29:36 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 292, prediction_size: 256, color: (0.0, 0.0, 1.0)
09/19/2022 21:29:36 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 292, 292, 3)
09/19/2022 21:29:36 MainProcess     _training                      multithreading  run                            DEBUG    Error in thread (_training): OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'\n
09/19/2022 21:29:37 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
09/19/2022 21:29:37 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
09/19/2022 21:29:37 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
09/19/2022 21:29:37 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
09/19/2022 21:29:37 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
09/19/2022 21:29:37 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training'
09/19/2022 21:29:37 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training'
Traceback (most recent call last):
  File "C:\Users\adama\faceswap\lib\cli\launcher.py", line 201, in execute_script
    process.process()
  File "C:\Users\adama\faceswap\scripts\train.py", line 217, in process
    self._end_thread(thread, err)
  File "C:\Users\adama\faceswap\scripts\train.py", line 257, in _end_thread
    thread.join()
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\adama\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\adama\faceswap\scripts\train.py", line 279, in _training
    raise err
  File "C:\Users\adama\faceswap\scripts\train.py", line 269, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\adama\faceswap\scripts\train.py", line 357, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 246, in train_one_step
    self._update_viewers(viewer, timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 354, in _update_viewers
    self._timelapse.output_timelapse(timelapse_kwargs)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 1074, in output_timelapse
    image = self._samples.show_sample()
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 647, in show_sample
    return self._compile_preview(preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 743, in _compile_preview
    display = self._to_full_frame(side, samples, preds)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 798, in _to_full_frame
    images = self._compile_masked(images, samples[-1])
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in _compile_masked
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\adama\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'

Exception occured trying to retrieve sysinfo: 'C'
User avatar
torzdf

Re: Caught exception in thread: '_training'

Post by torzdf »

I haven't forgotten about this, just need to find some time to troubleshoot


User avatar
adam_macchiato

Re: Caught exception in thread: '_training'

Post by adam_macchiato »

So, are you ready to support the RTX 4090? I can't wait to buy it :D :D :D

User avatar
MaxHunter
Posts: 193
Joined: Thu May 26, 2022 6:02 am
Has thanked: 177 times
Been thanked: 13 times

Re: Caught exception in thread: '_training'

Post by MaxHunter »

I was just about to post the same thing. I just started a new model and keep getting this error. I'm using the Stojo model edited with Icarus' settings. Loss settings: 1) MS-SSIM; 2) MAE 50%; 3) LPIPS Alex 5%; 4) FFL 100%; Loss function: MAE; Eye: 3; Mouth: 2; Penalized Mask Loss: on.

Code: Select all

09/25/2022 13:18:48 ERROR    Caught exception in thread: '_training'
09/25/2022 13:18:52 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "C:\Users\e4978\faceswap\lib\cli\launcher.py", line 217, in execute_script
    process.process()
  File "C:\Users\e4978\faceswap\scripts\train.py", line 217, in process
    self._end_thread(thread, err)
  File "C:\Users\e4978\faceswap\scripts\train.py", line 257, in _end_thread
    thread.join()
  File "C:\Users\e4978\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\e4978\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\e4978\faceswap\scripts\train.py", line 279, in _training
    raise err
  File "C:\Users\e4978\faceswap\scripts\train.py", line 269, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\e4978\faceswap\scripts\train.py", line 357, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 246, in train_one_step
    self._update_viewers(viewer, timelapse_kwargs)
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 347, in _update_viewers
    samples = self._samples.show_sample()
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 647, in show_sample
    return self._compile_preview(preds)
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 743, in _compile_preview
    display = self._to_full_frame(side, samples, preds)
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 798, in _to_full_frame
    images = self._compile_masked(images, samples[-1])
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 884, in _compile_masked
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
  File "C:\Users\e4978\faceswap\plugins\train\trainer\_base.py", line 884, in <listcomp>
    retval = [np.array([cv2.addWeighted(img, 1.0, mask, 0.3, 0)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'

09/25/2022 13:18:52 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\e4978\faceswap\crash_report.2022.09.25.131848534549.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
User avatar
torzdf

Re: Caught exception in thread: '_training'

Post by torzdf »

adam_macchiato wrote: Mon Sep 19, 2022 1:29 pm

Got it, I tried the situations below:

1) Use Original mode with eyes/mouth set to 0, start a new model - It works
2) Use Original mode with eyes/mouth reset to default, start a new model - It works
3) Use Phaze-A mode with eyes/mouth at default, V2_l with Stojo preset, start a new model - It works
4) Use Phaze-A mode and continue my existing model - It fails, with this error:
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'

I cannot recreate this issue. I have tried your settings (swapping out EffNetV2-L for V2-S and enabling mixed precision, so I can fit this model into 11GB VRAM), and the model works both when starting new and when resuming.

Can you provide a zip of the model folder? It may not help me, as most likely I won't be able to load it, but I need something to recreate the issue.

Thanks.


User avatar
adam_macchiato

Re: Caught exception in thread: '_training'

Post by adam_macchiato »

I don't know why, but if I enable "learn mask" it can train normally. If I don't enable "learn mask", it always shows the same error message as before.

Attachments
螢幕擷取畫面 2022-10-17 172124.jpg
螢幕擷取畫面 2022-10-17 172124.jpg (178.6 KiB) Viewed 2511 times
Locked