
Training error: Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED

Posted: Sat Jul 29, 2023 10:11 am
by linshao520

Code:

07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_0': 1
07/29/2023 17:54:59 MainProcess     _training                      _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'a')
07/29/2023 17:54:59 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: a, is_display: True,  batch_size: 14
07/29/2023 17:54:59 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: original, side: a, images: 2936 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'save_optimizer': 'exit', 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'mask_opacity': 30, 'mask_color': '#ff0000', 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2023 17:54:59 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: a, model output shapes: [(None, 64, 64, 3), (None, 64, 64, 3)], output sizes: [64]
07/29/2023 17:54:59 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (64, 64, 6), buffer_size: 2, dtype: uint8
07/29/2023 17:54:59 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
07/29/2023 17:54:59 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
07/29/2023 17:54:59 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: True
07/29/2023 17:54:59 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_1', thread_count: 1)
07/29/2023 17:54:59 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_1'
07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_1'
07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_1'
07/29/2023 17:54:59 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 2936, do_shuffle: True)
07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_1': 1
07/29/2023 17:54:59 MainProcess     _training                      _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'b')
07/29/2023 17:54:59 MainProcess     _training                      _base           _load_generator                DEBUG    Loading generator, side: b, is_display: True,  batch_size: 14
07/29/2023 17:54:59 MainProcess     _training                      generator       __init__                       DEBUG    Initializing PreviewDataGenerator: (model: original, side: b, images: 400 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'save_optimizer': 'exit', 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'mask_opacity': 30, 'mask_color': '#ff0000', 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
07/29/2023 17:54:59 MainProcess     _training                      generator       _get_output_sizes              DEBUG    side: b, model output shapes: [(None, 64, 64, 3), (None, 64, 64, 3)], output sizes: [64]
07/29/2023 17:54:59 MainProcess     _training                      cache           __init__                       DEBUG    Initializing: RingBuffer (batch_size: 14, image_shape: (64, 64, 6), buffer_size: 2, dtype: uint8
07/29/2023 17:54:59 MainProcess     _training                      cache           __init__                       DEBUG    Initialized: RingBuffer
07/29/2023 17:54:59 MainProcess     _training                      generator       __init__                       DEBUG    Initialized PreviewDataGenerator
07/29/2023 17:54:59 MainProcess     _training                      generator       minibatch_ab                   DEBUG    do_shuffle: True
07/29/2023 17:54:59 MainProcess     _training                      multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run_2', thread_count: 1)
07/29/2023 17:54:59 MainProcess     _training                      multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run_2'
07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread(s): '_run_2'
07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Starting thread 1 of 1: '_run_2'
07/29/2023 17:54:59 MainProcess     _run_2                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 400, do_shuffle: True)
07/29/2023 17:54:59 MainProcess     _training                      multithreading  start                          DEBUG    Started all threads '_run_2': 1
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Feeder:
07/29/2023 17:54:59 MainProcess     _training                      _base           _set_tensorboard               DEBUG    Enabling TensorBoard Logging
07/29/2023 17:54:59 MainProcess     _training                      _base           _set_tensorboard               DEBUG    Setting up TensorBoard Logging
07/29/2023 17:54:59 MainProcess     _training                      _base           _set_tensorboard               VERBOSE  Enabled TensorBoard Logging
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.original.Model object at 0x000001BA24870190>', coverage_ratio: 0.875, mask_opacity: 30, mask_color: #ff0000)
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Samples
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initializing _Timelapse: model: <plugins.train.model.original.Model object at 0x000001BA24870190>, coverage_ratio: 0.875, image_count: 14, mask_opacity: 30, mask_color: #ff0000, feeder: <plugins.train.trainer._base._Feeder object at 0x000001BA248726B0>, image_paths: 2)
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.original.Model object at 0x000001BA24870190>', coverage_ratio: 0.875, mask_opacity: 30, mask_color: #ff0000)
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Samples
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initialized _Timelapse
07/29/2023 17:54:59 MainProcess     _training                      _base           __init__                       DEBUG    Initialized Trainer
07/29/2023 17:54:59 MainProcess     _training                      train           _load_trainer                  DEBUG    Loaded Trainer
07/29/2023 17:54:59 MainProcess     _training                      train           _run_training_cycle            DEBUG    Running Training Cycle
07/29/2023 17:54:59 MainProcess     _run                           cache           _validate_version              DEBUG    Setting initial extract version: 2.3
07/29/2023 17:54:59 MainProcess     _run_0                         cache           _validate_version              DEBUG    Setting initial extract version: 2.3
07/29/2023 17:55:00 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA215F5A50>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD7A830>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79840>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79600>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD7A2C0>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79810>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79D50>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD78EE0>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79360>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD791B0>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD78370>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD8FD30>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:01 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA215F5A50>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD7A830>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79840>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79600>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD7A2C0>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79810>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79D50>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD78EE0>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD79360>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD791B0>, weight: 1.0, mask_channel: 3)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 3
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD78370>, weight: 3.0, mask_channel: 4)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 4
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x000001BA3DD8FD30>, weight: 2.0, mask_channel: 5)
07/29/2023 17:55:02 MainProcess     _training                      api             converted_call                 DEBUG    Applying mask from channel 5
07/29/2023 17:55:05 MainProcess     _training                      multithreading  run                            DEBUG    Error in thread (_training): Graph execution error:\n\nDetected at node 'gradient_tape/original/encoder/upscale_512_0_conv2d_conv2d/Conv2D_1/Conv2DBackpropInput' defined at (most recent call last):\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\threading.py", line 973, in _bootstrap\n      self._bootstrap_inner()\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\threading.py", line 1016, in _bootstrap_inner\n      self.run()\n    File "D:\faceswap\lib\multithreading.py", line 100, in run\n      self._target(*self._args, **self._kwargs)\n    File "D:\faceswap\scripts\train.py", line 261, in _training\n      self._run_training_cycle(model, trainer)\n    File "D:\faceswap\scripts\train.py", line 349, in _run_training_cycle\n      trainer.train_one_step(viewer, timelapse)\n    File "D:\faceswap\plugins\train\trainer\_base.py", line 214, in train_one_step\n      loss: list[float] = self._model.model.train_on_batch(model_inputs, y=model_targets)\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2381, in train_on_batch\n      logs = self.train_function(iterator)\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1160, in train_function\n      return step_function(self, iterator)\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1146, in step_function\n      outputs = model.distribute_strategy.run(run_step, args=(data,))\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1135, in run_step\n      outputs = model.train_step(data)\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 997, in train_step\n      self.optimizer.minimize(loss, self.trainable_variables, tape=tape)\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 576, in minimize\n      grads_and_vars = self._compute_gradients(\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 634, in _compute_gradients\n      grads_and_vars = self._get_gradients(\n    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 510, in _get_gradients\n      grads = tape.gradient(loss, var_list, grad_loss)\nNode: 'gradient_tape/original/encoder/upscale_512_0_conv2d_conv2d/Conv2D_1/Conv2DBackpropInput'\nNo algorithm worked!  Error messages:\n  Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 17041664 bytes.\n  Profiling failure on CUDNN engine 4: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 226496512 bytes.\n  Profiling failure on CUDNN engine 5: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 325844992 bytes.\n  Profiling failure on CUDNN engine 0: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.\n	 [[{{node gradient_tape/original/encoder/upscale_512_0_conv2d_conv2d/Conv2D_1/Conv2DBackpropInput}}]] [Op:__inference_train_function_7782]
07/29/2023 17:55:05 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
07/29/2023 17:55:05 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
07/29/2023 17:55:05 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
07/29/2023 17:55:05 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
07/29/2023 17:55:05 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
07/29/2023 17:55:05 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training'
07/29/2023 17:55:05 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training'
Traceback (most recent call last):
  File "D:\faceswap\lib\cli\launcher.py", line 225, in execute_script
    process.process()
  File "D:\faceswap\scripts\train.py", line 209, in process
    self._end_thread(thread, err)
  File "D:\faceswap\scripts\train.py", line 249, in _end_thread
    thread.join()
  File "D:\faceswap\lib\multithreading.py", line 224, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "D:\faceswap\lib\multithreading.py", line 100, in run
    self._target(*self._args, **self._kwargs)
  File "D:\faceswap\scripts\train.py", line 271, in _training
    raise err
  File "D:\faceswap\scripts\train.py", line 261, in _training
    self._run_training_cycle(model, trainer)
  File "D:\faceswap\scripts\train.py", line 349, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "D:\faceswap\plugins\train\trainer\_base.py", line 214, in train_one_step
    loss: list[float] = self._model.model.train_on_batch(model_inputs, y=model_targets)
  File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2381, in train_on_batch
    logs = self.train_function(iterator)
  File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\eager\execute.py", line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

Detected at node 'gradient_tape/original/encoder/upscale_512_0_conv2d_conv2d/Conv2D_1/Conv2DBackpropInput' defined at (most recent call last):
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\threading.py", line 973, in _bootstrap
      self._bootstrap_inner()
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\threading.py", line 1016, in _bootstrap_inner
      self.run()
    File "D:\faceswap\lib\multithreading.py", line 100, in run
      self._target(*self._args, **self._kwargs)
    File "D:\faceswap\scripts\train.py", line 261, in _training
      self._run_training_cycle(model, trainer)
    File "D:\faceswap\scripts\train.py", line 349, in _run_training_cycle
      trainer.train_one_step(viewer, timelapse)
    File "D:\faceswap\plugins\train\trainer\_base.py", line 214, in train_one_step
      loss: list[float] = self._model.model.train_on_batch(model_inputs, y=model_targets)
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2381, in train_on_batch
      logs = self.train_function(iterator)
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1160, in train_function
      return step_function(self, iterator)
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1146, in step_function
      outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1135, in run_step
      outputs = model.train_step(data)
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 997, in train_step
      self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 576, in minimize
      grads_and_vars = self._compute_gradients(
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 634, in _compute_gradients
      grads_and_vars = self._get_gradients(
    File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers\optimizer_v2\optimizer_v2.py", line 510, in _get_gradients
      grads = tape.gradient(loss, var_list, grad_loss)
Node: 'gradient_tape/original/encoder/upscale_512_0_conv2d_conv2d/Conv2D_1/Conv2DBackpropInput'
No algorithm worked!  Error messages:
  Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 17041664 bytes.
  Profiling failure on CUDNN engine 4: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 226496512 bytes.
  Profiling failure on CUDNN engine 5: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 325844992 bytes.
  Profiling failure on CUDNN engine 0: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.
	 [[{{node gradient_tape/original/encoder/upscale_512_0_conv2d_conv2d/Conv2D_1/Conv2DBackpropInput}}]] [Op:__inference_train_function_7782]

============ System Information ============
backend:             nvidia
encoding:            cp936
git_branch:          master
git_commits:         81e3bf5 Merge pull request #1331 from torzdf/macos
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce GTX 960
gpu_devices_active:  GPU_0
gpu_driver:          528.02
gpu_vram:            GPU_0: 2048MB (57MB free)
os_machine:          AMD64
os_platform:         Windows-10-10.0.19044-SP0
os_release:          10
py_command:          D:\faceswap\faceswap.py train -A C:/Users/Administrator/Desktop/a -B C:/Users/Administrator/Desktop/b -m C:/Users/Administrator/Desktop/C -t original -bs 4 -it 1000000 -D default -s 250 -ss 25000 -tia C:/Users/Administrator/Desktop/a -tib C:/Users/Administrator/Desktop/b -to C:/Users/Administrator/Desktop/D -L INFO -gui
py_conda_version:    conda 23.7.2
py_implementation:   CPython
py_version:          3.10.12
py_virtual_env:      True
sys_cores:           16
sys_processor:       Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
sys_ram:             Total: 32609MB, Available: 23124MB, Used: 9484MB, Free: 23124MB

=============== Pip Packages ===============
absl-py==1.4.0
appdirs==1.4.4
astunparse==1.6.3
brotlipy==0.7.0
cachetools==5.3.1
certifi==2023.7.22
cffi @ file:///C:/b/abs_49n3v2hyhr/croot/cffi_1670423218144/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work
contourpy @ file:///C:/b/abs_d5rpy288vc/croots/recipe/contourpy_1663827418189/work
cryptography @ file:///C:/b/abs_13590mi9q9/croot/cryptography_1689373706078/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
fastcluster @ file:///D:/bld/fastcluster_1667859055985/work
ffmpy @ file:///home/conda/feedstock_root/build_artifacts/ffmpy_1659474992694/work
flatbuffers==23.5.26
fonttools==4.25.0
gast==0.4.0
google-auth==2.22.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.56.2
h5py==3.9.0
idna @ file:///C:/b/abs_bdhbebrioa/croot/idna_1666125572046/work
imageio @ file:///C:/b/abs_27kq2gy1us/croot/imageio_1677879918708/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1673483481485/work
joblib @ file:///C:/b/abs_1anqjntpan/croot/joblib_1685113317150/work
keras==2.10.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/b/abs_88mdhvtahm/croot/kiwisolver_1672387921783/work
libclang==16.0.6
Markdown==3.4.4
MarkupSafe==2.1.3
matplotlib @ file:///C:/b/abs_49b2acwxd4/croot/matplotlib-suite_1679593486357/work
mkl-fft==1.3.6
mkl-random @ file:///C:/Users/dev-admin/mkl/mkl_random_1682977971003/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/b/abs_afm0oewmmt/croot/numexpr_1683221839116/work
numpy @ file:///C:/b/abs_5akk51tu0f/croot/numpy_and_numpy_base_1687466253743/work
nvidia-ml-py @ file:///home/conda/feedstock_root/build_artifacts/nvidia-ml-py_1688681764027/work
oauthlib==3.2.2
opencv-python==4.8.0.74
opt-einsum==3.3.0
packaging @ file:///C:/b/abs_ed_kb9w6g4/croot/packaging_1678965418855/work
Pillow==9.4.0
ply==3.11
pooch @ file:///tmp/build/80754af9/pooch_1623324770023/work
protobuf==3.19.6
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyOpenSSL @ file:///C:/b/abs_08f38zyck4/croot/pyopenssl_1690225407403/work
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
PySocks @ file:///C:/ci_310/pysocks_1642089375450/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==305.1
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp310-none-win_amd64.whl
requests @ file:///C:/b/abs_316c2inijk/croot/requests_1690400295842/work
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///C:/b/abs_38k7ridbgr/croot/scikit-learn_1684954723009/work
scipy==1.10.1
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.10.1
tensorflow-estimator==2.10.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.3.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/ci/tornado_1662476985533/work
tqdm @ file:///C:/b/abs_f76j9hg7pv/croot/tqdm_1679561871187/work
typing_extensions==4.7.1
urllib3 @ file:///C:/b/abs_889_loyqv4/croot/urllib3_1686163174463/work
Werkzeug==2.3.6
win-inet-pton @ file:///C:/ci_310/win_inet_pton_1642658466512/work
wrapt==1.15.0

============== Conda Packages ==============
# packages in environment at C:\Users\Administrator\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   1.4.0                    pypi_0    pypi
appdirs                   1.4.4              pyhd3eb1b0_0  
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
brotli                    1.0.9                h2bbff1b_7  
brotli-bin                1.0.9                h2bbff1b_7  
brotlipy                  0.7.0           py310h2bbff1b_1002  
bzip2                     1.0.8                he774522_0  
ca-certificates           2023.7.22            h56e8100_0    conda-forge
cachetools                5.3.1                    pypi_0    pypi
certifi                   2023.7.22          pyhd8ed1ab_0    conda-forge
cffi                      1.15.1          py310h2bbff1b_3  
charset-normalizer        2.0.4              pyhd3eb1b0_0  
colorama                  0.4.6           py310haa95532_0  
contourpy                 1.0.5           py310h59b6b97_0  
cryptography              41.0.2          py310h31511bf_0  
cudatoolkit               11.8.0               hd77b12b_0  
cudnn                     8.9.2.26               cuda11_0  
cycler                    0.11.0             pyhd3eb1b0_0  
fastcluster               1.2.6           py310h1c4a608_2    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.3.0              pyhb6f538c_0    conda-forge
flatbuffers               23.5.26                  pypi_0    pypi
fonttools                 4.25.0             pyhd3eb1b0_0  
freetype                  2.12.1               ha860e81_0  
gast                      0.4.0                    pypi_0    pypi
giflib                    5.2.1                h8cc25b3_3  
git                       2.40.1               haa95532_1  
glib                      2.69.1               h5dc1a3c_2  
google-auth               2.22.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.56.2                   pypi_0    pypi
gst-plugins-base          1.18.5               h9e645db_0  
gstreamer                 1.18.5               hd78058f_0  
h5py                      3.9.0                    pypi_0    pypi
icc_rt                    2022.1.0             h6049295_2  
icu                       58.2                 ha925a31_3  
idna                      3.4             py310haa95532_0  
imageio                   2.26.0          py310haa95532_0  
imageio-ffmpeg            0.4.8              pyhd8ed1ab_0    conda-forge
intel-openmp              2023.1.0         h59b6b97_46319  
joblib                    1.2.0           py310haa95532_0  
jpeg                      9e                   h2bbff1b_1  
keras                     2.10.0                   pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.4           py310hd77b12b_0  
krb5                      1.19.4               h5b6d351_0  
lerc                      3.0                  hd77b12b_0  
libbrotlicommon           1.0.9                h2bbff1b_7  
libbrotlidec              1.0.9                h2bbff1b_7  
libbrotlienc              1.0.9                h2bbff1b_7  
libclang                  16.0.6                   pypi_0    pypi
libclang13                14.0.6          default_h8e68704_1  
libdeflate                1.17                 h2bbff1b_0  
libffi                    3.4.4                hd77b12b_0  
libiconv                  1.16                 h2bbff1b_2  
libogg                    1.3.5                h2bbff1b_1  
libpng                    1.6.39               h8cc25b3_0  
libtiff                   4.5.0                h6c2663c_2  
libvorbis                 1.3.7                he774522_0  
libwebp                   1.2.4                hbc33d0d_1  
libwebp-base              1.2.4                h2bbff1b_1  
libxml2                   2.10.3               h0ad7f3c_0  
libxslt                   1.1.37               h2bbff1b_0  
libzlib                   1.2.13               hcfcfb64_5    conda-forge
libzlib-wapi              1.2.13               hcfcfb64_5    conda-forge
lz4-c                     1.9.4                h2bbff1b_0  
markdown                  3.4.4                    pypi_0    pypi
markupsafe                2.1.3                    pypi_0    pypi
matplotlib                3.7.1           py310haa95532_1  
matplotlib-base           3.7.1           py310h4ed8f06_1  
mkl                       2023.1.0         h8bd8f75_46356  
mkl-service               2.4.0           py310h2bbff1b_1  
mkl_fft                   1.3.6           py310h4ed8f06_1  
mkl_random                1.2.2           py310h4ed8f06_1  
munkres                   1.1.4                      py_0  
numexpr                   2.8.4           py310h2cd9be0_1  
numpy                     1.25.0          py310h055cbcc_0  
numpy-base                1.25.0          py310h65a83cf_0  
nvidia-ml-py              12.535.77          pyhd8ed1ab_0    conda-forge
oauthlib                  3.2.2                    pypi_0    pypi
opencv-python             4.8.0.74                 pypi_0    pypi
openssl                   1.1.1u               hcfcfb64_0    conda-forge
opt-einsum                3.3.0                    pypi_0    pypi
packaging                 23.0            py310haa95532_0  
pcre                      8.45                 hd77b12b_0  
pillow                    9.4.0           py310hd77b12b_0  
pip                       23.2.1          py310haa95532_0  
ply                       3.11            py310haa95532_0  
pooch                     1.4.0              pyhd3eb1b0_0  
protobuf                  3.19.6                   pypi_0    pypi
psutil                    5.9.0           py310h2bbff1b_0  
pyasn1                    0.5.0                    pypi_0    pypi
pyasn1-modules            0.3.0                    pypi_0    pypi
pycparser                 2.21               pyhd3eb1b0_0  
pyopenssl                 23.2.0          py310haa95532_0  
pyparsing                 3.0.9           py310haa95532_0  
pyqt                      5.15.7          py310hd77b12b_0  
pyqt5-sip                 12.11.0         py310hd77b12b_0  
pysocks                   1.7.1           py310haa95532_0  
python                    3.10.12              h966fe2a_0  
python-dateutil           2.8.2              pyhd3eb1b0_0  
python_abi                3.10                    2_cp310    conda-forge
pywin32                   305             py310h2bbff1b_0  
pywinpty                  2.0.2           py310h5da7b33_0  
qt-main                   5.15.2               he8e5bd7_8  
qt-webengine              5.15.9               hb9a9bb5_5  
qtwebkit                  5.212                h2bbfb41_5  
requests                  2.31.0          py310haa95532_0  
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-learn              1.2.2           py310hd77b12b_1  
scipy                     1.10.1          py310h309d312_1  
setuptools                68.0.0          py310haa95532_0  
sip                       6.6.2           py310hd77b12b_0  
six                       1.16.0             pyhd3eb1b0_1  
sqlite                    3.41.2               h2bbff1b_0  
tbb                       2021.8.0             h59b6b97_0  
tensorboard               2.10.1                   pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow                2.10.1                   pypi_0    pypi
tensorflow-estimator      2.10.0                   pypi_0    pypi
tensorflow-io-gcs-filesystem 0.31.0                   pypi_0    pypi
termcolor                 2.3.0                    pypi_0    pypi
threadpoolctl             2.2.0              pyh0d69192_0  
tk                        8.6.12               h2bbff1b_0  
toml                      0.10.2             pyhd3eb1b0_0  
tornado                   6.2             py310h2bbff1b_0  
tqdm                      4.65.0          py310h9909e9c_0  
typing-extensions         4.7.1                    pypi_0    pypi
tzdata                    2023c                h04d1e81_0  
ucrt                      10.0.22621.0         h57928b3_0    conda-forge
urllib3                   1.26.16         py310haa95532_0  
vc                        14.2                 h21ff451_1  
vc14_runtime              14.36.32532         hfdfe4a8_17    conda-forge
vs2015_runtime            14.36.32532         h05e6639_17    conda-forge
werkzeug                  2.3.6                    pypi_0    pypi
wheel                     0.38.4          py310haa95532_0  
win_inet_pton             1.1.0           py310haa95532_0  
winpty                    0.4.3                         4  
wrapt                     1.15.0                   pypi_0    pypi
xz                        5.4.2                h8cc25b3_0  
zlib                      1.2.13               hcfcfb64_5    conda-forge
zlib-wapi                 1.2.13               hcfcfb64_5    conda-forge
zstd                      1.5.5                hd43e919_0  

================= Configs ==================
--------- .faceswap ---------
backend:                  nvidia

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0
erosion_top:              0.0
erosion_bottom:           0.0
erosion_left:             0.0
erosion_right:            0.0

[scaling.sharpen]
method:                   none
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto
skip_mux:                 False

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
separate_mask:            False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
separate_mask:            False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False
aligner_min_scale:        0.07
aligner_max_scale:        2.0
aligner_distance:         22.5
aligner_roll:             45.0
aligner_features:         True
filter_refeed:            True
save_filtered:            False
realign_refeeds:          True
filter_realign:           True

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
scalefactor:              0.709
batch-size:               8
cpu:                      True
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.bisenet_fp]
batch-size:               8
cpu:                      False
weights:                  faceswap
include_ears:             False
include_hair:             False
include_glasses:          True

[mask.custom]
batch-size:               8
centering:                face
fill:                     False

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

[recognition.vgg_face2]
batch-size:               16
cpu:                      False

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
centering:                face
coverage:                 87.5
icnr_init:                False
conv_aware_init:          False
optimizer:                adam
learning_rate:            5e-05
epsilon_exponent:         -7
save_optimizer:           exit
autoclip:                 False
reflect_padding:          False
allow_growth:             False
mixed_precision:          False
nan_protection:           True
convert_batchsize:        16

[global.loss]
loss_function:            ssim
loss_function_2:          mse
loss_weight_2:            100
loss_function_3:          none
loss_weight_3:            0
loss_function_4:          none
loss_weight_4:            0
mask_loss_function:       mse
eye_multiplier:           3
mouth_multiplier:         2
penalized_mask_loss:      True
mask_type:                extended
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False

[model.dfaker]
output_size:              128

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.phaze_a]
output_size:              128
shared_fc:                none
enable_gblock:            True
split_fc:                 True
split_gblock:             False
split_decoders:           False
enc_architecture:         fs_original
enc_scaling:              7
enc_load_weights:         True
bottleneck_type:          dense
bottleneck_norm:          none
bottleneck_size:          1024
bottleneck_in_encoder:    True
fc_depth:                 1
fc_min_filters:           1024
fc_max_filters:           1024
fc_dimensions:            4
fc_filter_slope:          -0.5
fc_dropout:               0.0
fc_upsampler:             upsample2d
fc_upsamples:             1
fc_upsample_filters:      512
fc_gblock_depth:          3
fc_gblock_min_nodes:      512
fc_gblock_max_nodes:      512
fc_gblock_filter_slope:   -0.5
fc_gblock_dropout:        0.0
dec_upscale_method:       subpixel
dec_upscales_in_fc:       0
dec_norm:                 none
dec_min_filters:          64
dec_max_filters:          512
dec_slope_mode:           full
dec_filter_slope:         -0.45
dec_res_blocks:           1
dec_output_kernel:        5
dec_gaussian:             True
dec_skip_last_residual:   True
freeze_layers:            keras_encoder
load_layers:              encoder
fs_original_depth:        4
fs_original_min_filters:  128
fs_original_max_filters:  1024
fs_original_use_alt:      False
mobilenet_width:          1.0
mobilenet_depth:          1
mobilenet_dropout:        0.001
mobilenet_minimalistic:   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
mask_opacity:             30
mask_color:               #ff0000
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4

Re: Training error: Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED

Posted: Sat Jul 29, 2023 10:22 am
by torzdf

This is an OOM (out of memory) error.

Your GPU has only 2GB of VRAM. If you are lucky, you may be able to train the lightweight model at a low batch size, but 4GB is the recommended minimum.
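As an illustrative sketch only (not a command from this reply): taking the py_command shown in the system information above and swapping the trainer to lightweight with a smaller batch size would look something like the line below. The new model folder name is an assumption; the existing folder already contains an original model, so a fresh, empty folder would be needed.

Code:

rem sketch: lightweight trainer, batch size 2, new empty model folder (folder name is an example)
python faceswap.py train -A C:/Users/Administrator/Desktop/a -B C:/Users/Administrator/Desktop/b -m C:/Users/Administrator/Desktop/C_lightweight -t lightweight -bs 2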

Also see here:
app.php/faqpage#f3r9