Please help me. Shortly after I started training the model, my faceswap program reported an error.

MRchoucai
Posts: 1
Joined: Tue Aug 30, 2022 3:47 am

Please help me. Shortly after I started training the model, my faceswap program reported an error.

Post by MRchoucai »

I'm using an NVIDIA graphics card with the latest Windows 11 and the latest faceswap version.
Here is the crash report.

Code: Select all

03/19/2023 12:39:37 CRITICAL An unexpected crash has occurred. Crash report written to 'D:\software\faceswap\crash_report.2023.03.19.123932350873.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting


03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_0': 1
03/19/2023 12:38:44 MainProcess     _training                      _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'a')
03/19/2023 12:38:44 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: a, is_display: True,  batch_size: 14
03/19/2023 12:38:44 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: original, side: a, images: 1451 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'mask_opacity': 30, 'mask_color': '#ff0000', 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/19/2023 12:38:44 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: a, model output shapes: [(None, 64, 64, 3), (None, 64, 64, 3)], output sizes: [64]
03/19/2023 12:38:44 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (64, 64, 6), buffer_size: 2, dtype: uint8
03/19/2023 12:38:44 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
03/19/2023 12:38:44 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
03/19/2023 12:38:44 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: True
03/19/2023 12:38:44 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_1', thread_count: 1)
03/19/2023 12:38:44 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_1'
03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_1'
03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_1'
03/19/2023 12:38:44 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 1451, do_shuffle: True)
03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_1': 1
03/19/2023 12:38:44 MainProcess     _training                      _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'b')
03/19/2023 12:38:44 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: b, is_display: True,  batch_size: 14
03/19/2023 12:38:44 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: original, side: b, images: 1422 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'mask_opacity': 30, 'mask_color': '#ff0000', 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/19/2023 12:38:44 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: b, model output shapes: [(None, 64, 64, 3), (None, 64, 64, 3)], output sizes: [64]
03/19/2023 12:38:44 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (64, 64, 6), buffer_size: 2, dtype: uint8
03/19/2023 12:38:44 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
03/19/2023 12:38:44 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
03/19/2023 12:38:44 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: True
03/19/2023 12:38:44 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_2', thread_count: 1)
03/19/2023 12:38:44 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_2'
03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_2'
03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_2'
03/19/2023 12:38:44 MainProcess     _run_2                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 1422, do_shuffle: True)
03/19/2023 12:38:44 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_2': 1
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Feeder:
03/19/2023 12:38:44 MainProcess     _training                      _base           _set_tensorboard               DEBUG    Enabling TensorBoard Logging
03/19/2023 12:38:44 MainProcess     _training                      _base           _set_tensorboard               DEBUG    Setting up TensorBoard Logging
03/19/2023 12:38:44 MainProcess     _training                      _base           _set_tensorboard               VERBOSE  Enabled TensorBoard Logging
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.original.Model object at 0x00000280CF4E7FD0>', coverage_ratio: 0.875, mask_opacity: 30, mask_color: #ff0000)
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Samples
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initializing _Timelapse: model: <plugins.train.model.original.Model object at 0x00000280CF4E7FD0>, coverage_ratio: 0.875, image_count: 14, mask_opacity: 30, mask_color: #ff0000, feeder: <plugins.train.trainer._base._Feeder object at 0x00000280CF5F9280>, image_paths: 2)
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.original.Model object at 0x00000280CF4E7FD0>', coverage_ratio: 0.875, mask_opacity: 30, mask_color: #ff0000)
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Samples
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Timelapse
03/19/2023 12:38:44 MainProcess     _training                      _base           __init__                       DEBUG    Initialized Trainer
03/19/2023 12:38:44 MainProcess     _training                      train           _load_trainer                  DEBUG    Loaded Trainer
03/19/2023 12:38:44 MainProcess     _training                      train           _run_training_cycle            DEBUG    Running Training Cycle
03/19/2023 12:38:44 MainProcess     _run                           cache           _validate_version              DEBUG    Setting initial extract version: 2.3
03/19/2023 12:38:44 MainProcess     _run_0                         cache           _validate_version              DEBUG    Setting initial extract version: 2.3
03/19/2023 12:38:45 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E20460>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:46 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:46 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D699B4C0>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E36C40>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D699B820>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E362B0>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E36970>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6F40>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6520>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6B20>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6F70>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6C52F40>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6C52070>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:47 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E20460>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D699B4C0>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E36C40>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D699B820>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E362B0>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6E36970>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6F40>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6520>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6B20>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6CB6F70>, weight: 1.0, mask_channel: 3)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6C52F40>, weight: 3.0, mask_channel: 4)
03/19/2023 12:38:48 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
03/19/2023 12:38:49 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x00000280D6C52070>, weight: 2.0, mask_channel: 5)
03/19/2023 12:38:49 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
03/19/2023 12:39:32 MainProcess     _training                      multithreading  run                            DEBUG    Error in thread (_training): Graph execution error:\n\nDetected at node 'original/encoder/upscale_512_1_conv2d_conv2d/Conv2D' defined at (most recent call last):\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\threading.py", line 937, in _bootstrap\n      self._bootstrap_inner()\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\threading.py", line 980, in _bootstrap_inner\n      self.run()\n    File "D:\software\faceswap\lib\multithreading.py", line 96, in run\n      self._target(*self._args, **self._kwargs)\n    File "D:\software\faceswap\scripts\train.py", line 265, in _training\n      self._run_training_cycle(model, trainer)\n    File "D:\software\faceswap\scripts\train.py", line 353, in _run_training_cycle\n      trainer.train_one_step(viewer, timelapse)\n    File "D:\software\faceswap\plugins\train\trainer\_base.py", line 223, in train_one_step\n      loss: List[float] = self._model.model.train_on_batch(model_inputs, y=model_targets)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2381, in train_on_batch\n      logs = self.train_function(iterator)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1160, in train_function\n      return step_function(self, iterator)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1146, in step_function\n      outputs = model.distribute_strategy.run(run_step, args=(data,))\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1135, in run_step\n      outputs = model.train_step(data)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 993, in train_step\n      y_pred = self(x, training=True)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 557, in __call__\n      return super().__call__(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1097, in __call__\n      outputs = call_fn(inputs, *args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 96, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 510, in call\n      return self._run_internal_graph(inputs, training=training, mask=mask)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 667, in _run_internal_graph\n      outputs = node.layer(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 557, in __call__\n      return super().__call__(*args, **kwargs)\n    File 
"C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1097, in __call__\n      outputs = call_fn(inputs, *args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 96, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 510, in call\n      return self._run_internal_graph(inputs, training=training, mask=mask)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 667, in _run_internal_graph\n      outputs = node.layer(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1097, in __call__\n      outputs = call_fn(inputs, *args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 96, in error_handler\n      return fn(*args, **kwargs)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional\base_conv.py", line 283, in call\n      outputs = self.convolution_op(inputs, self.kernel)\n    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional\base_conv.py", line 255, in convolution_op\n      return tf.nn.convolution(\nNode: 'original/encoder/upscale_512_1_conv2d_conv2d/Conv2D'\nNo algorithm worked!  Error messages:\n  Profiling failure on CUDNN engine 1#TC: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 94248976 bytes.\n  Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.\n  Profiling failure on CUDNN engine 0#TC: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.\n  Profiling failure on CUDNN engine 0: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.\n	 [[{{node original/encoder/upscale_512_1_conv2d_conv2d/Conv2D}}]] [Op:__inference_train_function_7778]
03/19/2023 12:39:32 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
03/19/2023 12:39:32 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
03/19/2023 12:39:32 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
03/19/2023 12:39:32 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
03/19/2023 12:39:32 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
03/19/2023 12:39:32 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training'
03/19/2023 12:39:32 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training'
Traceback (most recent call last):
  File "D:\software\faceswap\lib\cli\launcher.py", line 230, in execute_script
    process.process()
  File "D:\software\faceswap\scripts\train.py", line 213, in process
    self._end_thread(thread, err)
  File "D:\software\faceswap\scripts\train.py", line 253, in _end_thread
    thread.join()
  File "D:\software\faceswap\lib\multithreading.py", line 220, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "D:\software\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "D:\software\faceswap\scripts\train.py", line 275, in _training
    raise err
  File "D:\software\faceswap\scripts\train.py", line 265, in _training
    self._run_training_cycle(model, trainer)
  File "D:\software\faceswap\scripts\train.py", line 353, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "D:\software\faceswap\plugins\train\trainer\_base.py", line 223, in train_one_step
    loss: List[float] = self._model.model.train_on_batch(model_inputs, y=model_targets)
  File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2381, in train_on_batch
    logs = self.train_function(iterator)
  File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\eager\execute.py", line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

Detected at node 'original/encoder/upscale_512_1_conv2d_conv2d/Conv2D' defined at (most recent call last):
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\threading.py", line 937, in _bootstrap
      self._bootstrap_inner()
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\threading.py", line 980, in _bootstrap_inner
      self.run()
    File "D:\software\faceswap\lib\multithreading.py", line 96, in run
      self._target(*self._args, **self._kwargs)
    File "D:\software\faceswap\scripts\train.py", line 265, in _training
      self._run_training_cycle(model, trainer)
    File "D:\software\faceswap\scripts\train.py", line 353, in _run_training_cycle
      trainer.train_one_step(viewer, timelapse)
    File "D:\software\faceswap\plugins\train\trainer\_base.py", line 223, in train_one_step
      loss: List[float] = self._model.model.train_on_batch(model_inputs, y=model_targets)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2381, in train_on_batch
      logs = self.train_function(iterator)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1160, in train_function
      return step_function(self, iterator)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1146, in step_function
      outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1135, in run_step
      outputs = model.train_step(data)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 993, in train_step
      y_pred = self(x, training=True)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 557, in __call__
      return super().__call__(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1097, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 96, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 510, in call
      return self._run_internal_graph(inputs, training=training, mask=mask)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 667, in _run_internal_graph
      outputs = node.layer(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 557, in __call__
      return super().__call__(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1097, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 96, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 510, in call
      return self._run_internal_graph(inputs, training=training, mask=mask)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 667, in _run_internal_graph
      outputs = node.layer(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1097, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 96, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional\base_conv.py", line 283, in call
      outputs = self.convolution_op(inputs, self.kernel)
    File "C:\Users\RJcxy\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional\base_conv.py", line 255, in convolution_op
      return tf.nn.convolution(
Node: 'original/encoder/upscale_512_1_conv2d_conv2d/Conv2D'
No algorithm worked!  Error messages:
  Profiling failure on CUDNN engine 1#TC: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 94248976 bytes.
  Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.
  Profiling failure on CUDNN engine 0#TC: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.
  Profiling failure on CUDNN engine 0: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.
	 [[{{node original/encoder/upscale_512_1_conv2d_conv2d/Conv2D}}]] [Op:__inference_train_function_7778]

============ System Information ============
backend:             nvidia
encoding:            cp936
git_branch:          master
git_commits:         216ef38 alignments tool - batch jobs to run in process
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce RTX 3050 Ti Laptop GPU
gpu_devices_active:  GPU_0
gpu_driver:          527.37
gpu_vram:            GPU_0: 4096MB (218MB free)
os_machine:          AMD64
os_platform:         Windows-10-10.0.22621-SP0
os_release:          10
py_command:          D:\software\faceswap\faceswap.py train -A C:/Users/RJcxy/Desktop/A -B C:/Users/RJcxy/Desktop/B -m C:/Users/RJcxy/Desktop/C -t original -bs 10 -it 1000000 -D default -s 250 -ss 25000 -tia C:/Users/RJcxy/Desktop/A -tib C:/Users/RJcxy/Desktop/B -to C:/Users/RJcxy/Desktop/D -L INFO -gui
py_conda_version:    conda 23.1.0
py_implementation:   CPython
py_version:          3.9.16
py_virtual_env:      True
sys_cores:           12
sys_processor:       AMD64 Family 25 Model 80 Stepping 0, AuthenticAMD
sys_ram:             Total: 16236MB, Available: 2869MB, Used: 13366MB, Free: 2869MB

=============== Pip Packages ===============
absl-py @ file:///C:/b/abs_5babsu7y5x/croot/absl-py_1666362945682/work
astunparse==1.6.3
cachetools==5.3.0
certifi==2022.12.7
charset-normalizer==3.1.0
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
dm-tree @ file:///C:/b/abs_10z0iy5knj/croot/dm-tree_1671027465819/work
fastcluster @ file:///D:/bld/fastcluster_1649783471014/work
ffmpy==0.3.0
flatbuffers==23.3.3
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
fonttools==4.25.0
gast==0.4.0
google-auth==2.16.2
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.51.3
h5py==3.8.0
idna==3.4
imageio @ file:///C:/b/abs_27kq2gy1us/croot/imageio_1677879918708/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1673483481485/work
importlib-metadata==6.1.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1663332044897/work
keras==2.10.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/b/abs_88mdhvtahm/croot/kiwisolver_1672387921783/work
libclang==15.0.6.1
Markdown==3.4.1
MarkupSafe==2.1.2
matplotlib @ file:///C:/b/abs_ae02atcfur/croot/matplotlib-suite_1667356722968/work
mkl-fft==1.3.1
mkl-random @ file:///C:/ci/mkl_random_1626186184308/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/b/abs_a7kbak88hk/croot/numexpr_1668713882979/work
numpy @ file:///C:/b/abs_datssh7cer/croot/numpy_and_numpy_base_1672336199388/work
nvidia-ml-py==11.525.84
oauthlib==3.2.2
opencv-python==4.7.0.72
opt-einsum==3.3.0
packaging @ file:///C:/b/abs_ed_kb9w6g4/croot/packaging_1678965418855/work
Pillow==9.4.0
ply==3.11
protobuf==3.19.6
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==305.1
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp39-none-win_amd64.whl
requests==2.28.2
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///C:/b/abs_7ck_bnw91r/croot/scikit-learn_1676911676133/work
scipy==1.9.3
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.10.0
tensorflow-gpu==2.10.1
tensorflow-io-gcs-filesystem==0.31.0
tensorflow-probability @ file:///tmp/build/80754af9/tensorflow-probability_1633017132682/work
termcolor==2.2.0
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/ci/tornado_1662458743919/work
tqdm @ file:///C:/b/abs_0axbz66qik/croots/recipe/tqdm_1664392691071/work
typing_extensions @ file:///C:/b/abs_89eui86zuq/croot/typing_extensions_1669923792806/work
urllib3==1.26.15
Werkzeug==2.2.3
wincertstore==0.2
wrapt==1.15.0
zipp==3.15.0

============== Conda Packages ==============
# packages in environment at C:\Users\RJcxy\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   1.3.0            py39haa95532_0  
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
brotli                    1.0.9                h2bbff1b_7  
brotli-bin                1.0.9                h2bbff1b_7  
ca-certificates           2022.12.7            h5b45459_0    conda-forge
cachetools                5.3.0                    pypi_0    pypi
certifi                   2022.12.7          pyhd8ed1ab_0    conda-forge
charset-normalizer        3.1.0                    pypi_0    pypi
cloudpickle               2.0.0              pyhd3eb1b0_0  
colorama                  0.4.6            py39haa95532_0  
cudatoolkit               11.2.2              h933977f_10    conda-forge
cudnn                     8.1.0.77             h3e0f4f4_0    conda-forge
cycler                    0.11.0             pyhd3eb1b0_0  
decorator                 5.1.1              pyhd3eb1b0_0  
dm-tree                   0.1.7            py39hd77b12b_1  
fastcluster               1.2.6            py39h2e25243_1    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.3.0                    pypi_0    pypi
fftw                      3.3.10          nompi_h52fa85e_103    conda-forge
flatbuffers               23.3.3                   pypi_0    pypi
flit-core                 3.6.0              pyhd3eb1b0_0  
fonttools                 4.25.0             pyhd3eb1b0_0  
freetype                  2.12.1               ha860e81_0  
gast                      0.4.0                    pypi_0    pypi
giflib                    5.2.1                h8cc25b3_3  
git                       2.34.1               haa95532_0  
glib                      2.69.1               h5dc1a3c_2  
google-auth               2.16.2                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.51.3                   pypi_0    pypi
gst-plugins-base          1.18.5               h9e645db_0  
gstreamer                 1.18.5               hd78058f_0  
h5py                      3.8.0                    pypi_0    pypi
icc_rt                    2022.1.0             h6049295_2  
icu                       58.2                 ha925a31_3  
idna                      3.4                      pypi_0    pypi
imageio                   2.26.0           py39haa95532_0  
imageio-ffmpeg            0.4.8              pyhd8ed1ab_0    conda-forge
importlib-metadata        6.1.0                    pypi_0    pypi
intel-openmp              2021.4.0          haa95532_3556  
joblib                    1.2.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h2bbff1b_1  
keras                     2.10.0                   pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.4            py39hd77b12b_0  
lerc                      3.0                  hd77b12b_0  
libbrotlicommon           1.0.9                h2bbff1b_7  
libbrotlidec              1.0.9                h2bbff1b_7  
libbrotlienc              1.0.9                h2bbff1b_7  
libclang                  15.0.6.1                 pypi_0    pypi
libdeflate                1.17                 h2bbff1b_0  
libffi                    3.4.2                hd77b12b_6  
libiconv                  1.16                 h2bbff1b_2  
libogg                    1.3.5                h2bbff1b_1  
libpng                    1.6.39               h8cc25b3_0  
libtiff                   4.5.0                h6c2663c_2  
libvorbis                 1.3.7                he774522_0  
libwebp                   1.2.4                hbc33d0d_1  
libwebp-base              1.2.4                h2bbff1b_1  
libxml2                   2.9.14               h0ad7f3c_0  
libxslt                   1.1.35               h2bbff1b_0  
lz4-c                     1.9.4                h2bbff1b_0  
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.2                    pypi_0    pypi
matplotlib                3.5.3            py39haa95532_0  
matplotlib-base           3.5.3            py39hd77b12b_0  
mkl                       2021.4.0           haa95532_640  
mkl-service               2.4.0            py39h2bbff1b_0  
mkl_fft                   1.3.1            py39h277e83a_0  
mkl_random                1.2.2            py39hf11a4ad_0  
munkres                   1.1.4                      py_0  
numexpr                   2.8.4            py39h5b0cc5e_0  
numpy                     1.23.5           py39h3b20f71_0  
numpy-base                1.23.5           py39h4da318b_0  
nvidia-ml-py              11.525.84                pypi_0    pypi
oauthlib                  3.2.2                    pypi_0    pypi
opencv-python             4.7.0.72                 pypi_0    pypi
openssl                   1.1.1t               h2bbff1b_0  
opt-einsum                3.3.0                    pypi_0    pypi
packaging                 23.0             py39haa95532_0  
pcre                      8.45                 hd77b12b_0  
pillow                    9.4.0            py39hd77b12b_0  
pip                       23.0.1           py39haa95532_0  
ply                       3.11             py39haa95532_0  
protobuf                  3.19.6                   pypi_0    pypi
psutil                    5.9.0            py39h2bbff1b_0  
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pyparsing                 3.0.9            py39haa95532_0  
pyqt                      5.15.7           py39hd77b12b_0  
pyqt5-sip                 12.11.0          py39hd77b12b_0  
python                    3.9.16               h6244533_2  
python-dateutil           2.8.2              pyhd3eb1b0_0  
python_abi                3.9                      2_cp39    conda-forge
pywin32                   305              py39h2bbff1b_0  
pywinpty                  2.0.2            py39h5da7b33_0  
qt-main                   5.15.2               he8e5bd7_7  
qt-webengine              5.15.9               hb9a9bb5_5  
qtwebkit                  5.212                h3ad3cdb_4  
requests                  2.28.2                   pypi_0    pypi
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-learn              1.2.1            py39hd77b12b_0  
scipy                     1.9.3            py39he11b74f_0  
setuptools                65.6.3           py39haa95532_0  
sip                       6.6.2            py39hd77b12b_0  
six                       1.16.0             pyhd3eb1b0_1  
sqlite                    3.41.1               h2bbff1b_0  
tensorboard               2.10.1                   pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow-estimator      2.10.0                   pypi_0    pypi
tensorflow-gpu            2.10.1                   pypi_0    pypi
tensorflow-io-gcs-filesystem 0.31.0                   pypi_0    pypi
tensorflow-probability    0.14.0             pyhd3eb1b0_0  
termcolor                 2.2.0                    pypi_0    pypi
threadpoolctl             3.1.0              pyh8a188c0_0    conda-forge
tk                        8.6.12               h2bbff1b_0  
toml                      0.10.2             pyhd3eb1b0_0  
tornado                   6.2              py39h2bbff1b_0  
tqdm                      4.64.1           py39haa95532_0  
typing-extensions         4.4.0            py39haa95532_0  
typing_extensions         4.4.0            py39haa95532_0  
tzdata                    2022g                h04d1e81_0  
urllib3                   1.26.15                  pypi_0    pypi
vc                        14.2                 h21ff451_1  
vs2015_runtime            14.27.29016          h5e58377_2  
werkzeug                  2.2.3                    pypi_0    pypi
wheel                     0.38.4           py39haa95532_0  
wincertstore              0.2              py39haa95532_2  
winpty                    0.4.3                         4  
wrapt                     1.15.0                   pypi_0    pypi
xz                        5.2.10               h8cc25b3_1  
zipp                      3.15.0                   pypi_0    pypi
zlib                      1.2.13               h8cc25b3_0  
zstd                      1.5.2                h19a0ad4_0  

================= Configs ==================
--------- .faceswap ---------
backend:                  nvidia

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0
erosion_top:              0.0
erosion_bottom:           0.0
erosion_left:             0.0
erosion_right:            0.0

[scaling.sharpen]
method:                   none
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto
skip_mux:                 False

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
separate_mask:            False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
separate_mask:            False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False
aligner_min_scale:        0.07
aligner_max_scale:        2.0
aligner_distance:         22.5
aligner_roll:             45.0
aligner_features:         True
filter_refeed:            True
save_filtered:            False
realign_refeeds:          True
filter_realign:           True

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
scalefactor:              0.709
batch-size:               8
cpu:                      True
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.bisenet_fp]
batch-size:               8
cpu:                      False
weights:                  faceswap
include_ears:             False
include_hair:             False
include_glasses:          True

[mask.custom]
batch-size:               8
centering:                face
fill:                     False

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

[recognition.vgg_face2]
batch-size:               16
cpu:                      False

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
centering:                face
coverage:                 87.5
icnr_init:                False
conv_aware_init:          False
optimizer:                adam
learning_rate:            5e-05
epsilon_exponent:         -7
autoclip:                 False
reflect_padding:          False
allow_growth:             False
mixed_precision:          False
nan_protection:           True
convert_batchsize:        16

[global.loss]
loss_function:            ssim
loss_function_2:          mse
loss_weight_2:            100
loss_function_3:          none
loss_weight_3:            0
loss_function_4:          none
loss_weight_4:            0
mask_loss_function:       mse
eye_multiplier:           3
mouth_multiplier:         2
penalized_mask_loss:      True
mask_type:                extended
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False

[model.dfaker]
output_size:              128

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.phaze_a]
output_size:              128
shared_fc:                none
enable_gblock:            True
split_fc:                 True
split_gblock:             False
split_decoders:           False
enc_architecture:         fs_original
enc_scaling:              7
enc_load_weights:         True
bottleneck_type:          dense
bottleneck_norm:          none
bottleneck_size:          1024
bottleneck_in_encoder:    True
fc_depth:                 1
fc_min_filters:           1024
fc_max_filters:           1024
fc_dimensions:            4
fc_filter_slope:          -0.5
fc_dropout:               0.0
fc_upsampler:             upsample2d
fc_upsamples:             1
fc_upsample_filters:      512
fc_gblock_depth:          3
fc_gblock_min_nodes:      512
fc_gblock_max_nodes:      512
fc_gblock_filter_slope:   -0.5
fc_gblock_dropout:        0.0
dec_upscale_method:       subpixel
dec_upscales_in_fc:       0
dec_norm:                 none
dec_min_filters:          64
dec_max_filters:          512
dec_slope_mode:           full
dec_filter_slope:         -0.45
dec_res_blocks:           1
dec_output_kernel:        5
dec_gaussian:             True
dec_skip_last_residual:   True
freeze_layers:            keras_encoder
load_layers:              encoder
fs_original_depth:        4
fs_original_min_filters:  128
fs_original_max_filters:  1024
fs_original_use_alt:      False
mobilenet_width:          1.0
mobilenet_depth:          1
mobilenet_dropout:        0.001
mobilenet_minimalistic:   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
mask_opacity:             30
mask_color:               #ff0000
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4
torzdf
Posts: 2636
Joined: Fri Jul 12, 2019 12:53 am
Answers: 155
Has thanked: 128 times
Been thanked: 614 times

Re: Please help me. Shortly after I started training the model, my faceswap program reported an error.

Post by torzdf »

This is an OOM (Out of Memory) error. See here: app.php/faqpage#f3r9
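For reference, your crash report shows GPU_0 with 4096MB of VRAM and only 218MB free when training started, which is why the Conv2D allocation fails with RESOURCE_EXHAUSTED. As a rough illustration only (this is not part of faceswap), the short snippet below uses the nvidia-ml-py (pynvml) package already installed in your environment to print how much VRAM is actually free before you launch training. The usual first fix is to close anything else using the GPU and lower the batch size (the -bs value in your train command).

Code: Select all

# Illustrative check only (not part of faceswap): report free VRAM on GPU_0
# using the nvidia-ml-py (pynvml) package already installed in the faceswap env.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # GPU_0
info = pynvml.nvmlDeviceGetMemoryInfo(handle)   # byte counts: total / used / free
mib = 1024 * 1024
print(f"Total VRAM: {info.total // mib} MiB")
print(f"Used VRAM:  {info.used // mib} MiB")
print(f"Free VRAM:  {info.free // mib} MiB")
pynvml.nvmlShutdown()

If the free figure is only a few hundred MB, free up VRAM or reduce the batch size before retrying.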

My word is final
