Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by alexbloch8 »

After the update I had to uninstall Python and CUDA and reinstall faceswap in order for it to open. Now it crashes on training - log attached.

crash_report.2022.05.08.092926095626.log

Post by torzdf »

I have just tested training on latest faceswap with your exact setup (command line options + Model config) and I cannot recreate this bug. Unfortunately I cannot solve what I cannot recreate, which tends to suggest that the issue is with your setup somewhere. Not hugely helpful, I know.

I know you indicated that you've done this already, but it can't hurt to do again:
app.php/faqpage#f1r1

Also, I would suggest using DDU to remove your Nvidia drivers and then re-installing them.

FWIW, this is the configuration I tested with:

Code: Select all

command:

python faceswap.py train -A C:/Users/Matt/fstest/data/cage_head -B C:/Users/Matt/fstest/data/trump_head -m C:/Users/Matt/fstest/train/delme -t dfl-sae -bs 3 -it 1000000 -s 250 -ss 25000 -ps 100 -wl -nf -L INFO

config:
[global]
centering = face
coverage = 87.5
icnr_init = False
conv_aware_init = True
optimizer = adam
learning_rate = 5e-05
epsilon_exponent = -7
reflect_padding = False
allow_growth = False
mixed_precision = False
nan_protection = True
convert_batchsize = 2

[global.loss]
loss_function = ssim
mask_loss_function = mse
l2_reg_term = 100
eye_multiplier = 3
mouth_multiplier = 2
penalized_mask_loss = True
mask_type = vgg-clear
mask_blur_kernel = 5
mask_threshold = 8
learn_mask = False

[model.dfl_sae]
input_size = 128
clipnorm = True
architecture = df
autoencoder_dims = 0
encoder_dims = 42
decoder_dims = 21
multiscale_decoder = False

system:
============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         8ab085f bugfix: gui - settings popup. Always reload config
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce GTX 1080
gpu_devices_active:  GPU_0
gpu_driver:          497.09
gpu_vram:            GPU_0: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19044-SP0
os_release:          10
py_command:          faceswap.py gui -d
py_conda_version:    conda 4.12.0
py_implementation:   CPython
py_version:          3.8.13
py_virtual_env:      True
sys_cores:           20
sys_processor:       Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
sys_ram:             Total: 32555MB, Available: 19236MB, Used: 13319MB, Free: 19236MB


Post by alexbloch8 »

Thanks for the reply!
I'll try reinstalling from scratch again - perhaps something in the process messed it up.

Post by alexbloch8 »

Well, unfortunately that didn't work :(
However, I thought perhaps a different trainer would work, and indeed I started fresh with original, iae and dfl-h128.
All worked except dfl-sae (which is the one I usually use).

Code: Select all

05/09/2022 15:15:37 ERROR    Caught exception in thread: '_training_0'
05/09/2022 15:15:41 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "C:\Users\PC\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\PC\faceswap\scripts\train.py", line 190, in process
    self._end_thread(thread, err)
  File "C:\Users\PC\faceswap\scripts\train.py", line 230, in _end_thread
    thread.join()
  File "C:\Users\PC\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\PC\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\PC\faceswap\scripts\train.py", line 252, in _training
    raise err
  File "C:\Users\PC\faceswap\scripts\train.py", line 242, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\PC\faceswap\scripts\train.py", line 327, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 233, in train_one_step
    samples = self._samples.show_sample()
  File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 656, in show_sample
    preds = self._get_predictions(feeds["a"], feeds["b"])
  File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 730, in _get_predictions
    standard = self._model.model.predict([feed_a, feed_b])
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\eager\execute.py", line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

Detected at node 'dfl_sae_df/decoder_a/upscale_126_0_conv2d_conv2d/Conv2D' defined at (most recent call last):
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\threading.py", line 890, in _bootstrap
      self._bootstrap_inner()
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\threading.py", line 932, in _bootstrap_inner
      self.run()
    File "C:\Users\PC\faceswap\lib\multithreading.py", line 37, in run
      self._target(*self._args, **self._kwargs)
    File "C:\Users\PC\faceswap\scripts\train.py", line 242, in _training
      self._run_training_cycle(model, trainer)
    File "C:\Users\PC\faceswap\scripts\train.py", line 327, in _run_training_cycle
      trainer.train_one_step(viewer, timelapse)
    File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 233, in train_one_step
      samples = self._samples.show_sample()
    File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 656, in show_sample
      preds = self._get_predictions(feeds["a"], feeds["b"])
    File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 730, in _get_predictions
      standard = self._model.model.predict([feed_a, feed_b])
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 64, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1982, in predict
      tmp_batch_outputs = self.predict_function(iterator)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1801, in predict_function
      return step_function(self, iterator)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1790, in step_function
      outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1783, in run_step
      outputs = model.predict_step(data)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1751, in predict_step
      return self(x, training=False)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 64, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1096, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 92, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 451, in call
      return self._run_internal_graph(
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 589, in _run_internal_graph
      outputs = node.layer(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 64, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1096, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 92, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 451, in call
      return self._run_internal_graph(
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\functional.py", line 589, in _run_internal_graph
      outputs = node.layer(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 64, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 1096, in __call__
      outputs = call_fn(inputs, *args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 92, in error_handler
      return fn(*args, **kwargs)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional.py", line 248, in call
      outputs = self.convolution_op(inputs, self.kernel)
    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional.py", line 233, in convolution_op
      return tf.nn.convolution(
Node: 'dfl_sae_df/decoder_a/upscale_126_0_conv2d_conv2d/Conv2D'
No algorithm worked!  Error messages:
  Profiling failure on CUDNN engine 1: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 21376256 bytes.
  Profiling failure on CUDNN engine 0: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16777216 bytes.
  Profiling failure on CUDNN engine 2: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 537001984 bytes.
  Profiling failure on CUDNN engine 6: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 29742176 bytes.
  Profiling failure on CUDNN engine 5: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 661639168 bytes.
  Profiling failure on CUDNN engine 7: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 480973312 bytes.
	 [[{{node dfl_sae_df/decoder_a/upscale_126_0_conv2d_conv2d/Conv2D}}]] [Op:__inference_predict_function_16263]

Post by torzdf »

Ok, well that's a different error, which leads me to believe that the initial error was a false positive. That error means you have run out of GPU memory. Try enabling "Mixed Precision" and/or lowering your batch size.
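
For reference, both of those can be changed without the GUI; a minimal sketch, assuming the default config location (adjust the path if your install differs):

Code: Select all

# in faceswap/config/train.ini, [global] section - enable mixed precision:
mixed_precision = True

# and/or pass a smaller batch size on the command line, e.g.:
python faceswap.py train ... -bs 2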

Post by alexbloch8 »

Thanks for all the help, but that didn't do it either (I lowered the batch size to 2).
Still getting errors from tf.keras.losses.Reduction:

Code: Select all

05/09/2022 21:01:00 INFO     Error reported to Coordinator: in user code:\n\n    File "C:\Users\PC\faceswap\lib\model\losses_tf.py", line 531, in call  *\n        loss += (func(n_true, n_pred) * weight)\n    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **\n        losses, sample_weight, reduction=self._get_reduction())\n    File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction\n        raise ValueError(\n\n    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:\n    ```\n    with strategy.scope():\n        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)\n    ....\n        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)\n    ```\n    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.\n
Traceback (most recent call last):
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\training\coordinator.py", line 293, in stop_on_exception
    yield
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\distribute\mirrored_run.py", line 342, in run
    self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 692, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

File "C:\Users\PC\faceswap\lib\model\losses_tf.py", line 531, in call  *
    loss += (func(n_true, n_pred) * weight)
File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **
    losses, sample_weight, reduction=self._get_reduction())
File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction
    raise ValueError(

ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.

05/09/2022 21:01:00 CRITICAL Error caught! Exiting...
05/09/2022 21:01:00 ERROR    Caught exception in thread: '_training_0'
05/09/2022 21:01:04 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "C:\Users\PC\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\PC\faceswap\scripts\train.py", line 190, in process
    self._end_thread(thread, err)
  File "C:\Users\PC\faceswap\scripts\train.py", line 230, in _end_thread
    thread.join()
  File "C:\Users\PC\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\PC\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\PC\faceswap\scripts\train.py", line 252, in _training
    raise err
  File "C:\Users\PC\faceswap\scripts\train.py", line 242, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\PC\faceswap\scripts\train.py", line 327, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\PC\faceswap\plugins\train\trainer\_base.py", line 194, in train_one_step
    loss = self._model.model.train_on_batch(model_inputs, y=model_targets)
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2093, in train_on_batch
    logs = self.train_function(iterator)
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1147, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1021, in train_function  *
    return step_function(self, iterator)
File "C:\Users\PC\faceswap\lib\model\losses_tf.py", line 531, in call  *
    loss += (func(n_true, n_pred) * weight)
File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **
    losses, sample_weight, reduction=self._get_reduction())
File "C:\Users\PC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction
    raise ValueError(

ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.

Post by torzdf »

I honestly don't know then. There is nothing implicitly different about that model compared to any of the others which would cause this issue. The loss is calculated the same regardless of which model you use. The only other suggestions I have are to: 1) make sure that you have no overclocks (even stock overclocks) enabled on your GPU, or 2) try using the dfl-sae preset within the Phaze-A model (it is the same model).

Post by alexbloch8 »

I hope my GPU will handle it :D
Thanks, I'll start checking the Phaze-A docs to see what's what.
I truly appreciate all your help! Many thanks!

Post by alexbloch8 »

Welp - Phaze-A returned the same exception as dfl-sae (I ran it with the default setup just to see if it starts).
Really depressing, as I truly love faceswap but now I can't use it :(
Perhaps my GPU is not strong enough after the last update (which, as I understood, automatically updates Tensorflow etc.). I even tried to downgrade Python to 3.9 and Tensorflow to 2.7, but nothing worked, and I couldn't find a way to downgrade faceswap on Windows. Since I'm the only one experiencing this after the update from 2 days ago, any hope for a "fix" is gone :D

Post by torzdf »

Yeah, this is a weird one, and it has not been reported elsewhere.

You can install an earlier version of Tensorflow (the last version we used was 2.6) as Faceswap supports Tensorflow 2.2 - 2.8.

The easiest way to do it (without messing up your conda environment) would be to do the following (not tested, but should work)...

In a text editor, open up the following file:
faceswap/setup.py

and edit line 23 to:

Code: Select all

                           ">=2.5.0,<2.7.0": ["11.2", "8.1"]}

(i.e. change 2.9.0 to 2.7.0)

Next, open up an Anaconda prompt:
Start > Anaconda Prompt

delete the faceswap environment, create a new one and activate it:

Code: Select all

conda env remove -n faceswap
conda create -n faceswap python=3.8
conda activate faceswap

Within the same prompt, navigate to your faceswap folder, then run the automated version of setup.py with the following command:

Code: Select all

python setup.py --installer --nvidia

This should recreate the environment with Tensorflow 2.6, and your desktop shortcut should still work.
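
To double-check which Tensorflow version actually ended up in the environment, you can query it from the activated conda prompt:

Code: Select all

conda activate faceswap
python -c "import tensorflow as tf; print(tf.__version__)"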

Post by alexbloch8 »

WORKED!!! (sort of :D )
I followed your steps and it still didn't work, but I understood the general line of thought.
So I tried starting from scratch again, but this time I cloned the git repo and installed it manually (i.e. downloaded Anaconda, created a 3.8 env and ran setup.py with the 2.7 change) - I also saved the current faceswap folder as a backup.
After everything was finished, I was able to run dfl-sae!
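
For anyone else who hits this, the manual route boils down to something like the following (a sketch, assuming the standard deepfakes/faceswap repo):

Code: Select all

git clone https://github.com/deepfakes/faceswap.git
cd faceswap
# edit line 23 of setup.py as described above (2.9.0 -> 2.7.0)
conda create -n faceswap python=3.8
conda activate faceswap
python setup.py --installer --nvidia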

I decided to try running faceswap.py from the backup folder using the current setup, but I got the same error when I tried to run it, so perhaps something in the exe setup affected my system.

Anyway @torzdf, as I've said before and will say again - many MANY thanks for all the help and the time you took to help me out!

Caught exception in thread: '_training_0'

Post by coosy77 »

So... I had a power blackout that corrupted my model anyway, so I thought I might as well reinstall Faceswap, since there were some error messages earlier.
Now, when starting training with an old model I have been working on, I get this: Caught exception in thread: '_training_0'

The error log is attached. Would LOVE some help. :D
Besides this: is there any way to recover an old model that the recover function has apparently overwritten?

Code: Select all

05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
05/18/2022 19:16:45 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 266, side: 'b', do_shuffle: True)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
05/18/2022 19:16:45 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 266, side: 'b', do_shuffle: True)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'a')
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 64, output_shapes: [(64, 64, 3)]
05/18/2022 19:16:45 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], coverage_ratio: 0.875, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: True, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/18/2022 19:16:45 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
05/18/2022 19:16:45 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 1047, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
05/18/2022 19:16:45 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.875, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/18/2022 19:16:45 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [64]
05/18/2022 19:16:45 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
05/18/2022 19:16:45 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 1047, side: 'a', do_shuffle: True)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
05/18/2022 19:16:45 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 1047, side: 'a', do_shuffle: True)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'b')
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 64, output_shapes: [(64, 64, 3)]
05/18/2022 19:16:45 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], coverage_ratio: 0.875, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: True, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/18/2022 19:16:45 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
05/18/2022 19:16:45 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 266, batchsize: 14, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
05/18/2022 19:16:45 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.875, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/18/2022 19:16:45 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [64]
05/18/2022 19:16:45 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
05/18/2022 19:16:45 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 266, side: 'b', do_shuffle: True)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
05/18/2022 19:16:45 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 266, side: 'b', do_shuffle: True)
05/18/2022 19:16:45 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Set preview feed. Batchsize: 14
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Feeder:
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _set_tensorboard               DEBUG    Enabling TensorBoard Logging
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _set_tensorboard               DEBUG    Setting up TensorBoard Logging
05/18/2022 19:16:45 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
05/18/2022 19:16:45 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(24, 360, None), 'warp_mapx': '[[[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]]', 'warp_mapy': '[[[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 
276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 
381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
05/18/2022 19:16:45 MainProcess     _training_0                    _base           _set_tensorboard               VERBOSE  Enabled TensorBoard Logging
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.original.Model object at 0x00000124031D7DC0>', coverage_ratio: 0.875)
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Samples
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Timelapse: model: <plugins.train.model.original.Model object at 0x00000124031D7DC0>, coverage_ratio: 0.875, image_count: 14, feeder: '<plugins.train.trainer._base._Feeder object at 0x0000012403309B80>', image_paths: 2)
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.original.Model object at 0x00000124031D7DC0>', coverage_ratio: 0.875)
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Samples
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Timelapse
05/18/2022 19:16:45 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized Trainer
05/18/2022 19:16:45 MainProcess     _training_0                    train           _load_trainer                  DEBUG    Loaded Trainer
05/18/2022 19:16:45 MainProcess     _training_0                    train           _run_training_cycle            DEBUG    Running Training Cycle
05/18/2022 19:16:45 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
05/18/2022 19:16:45 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(24, 360, None), 'warp_mapx': '[[[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]]', 'warp_mapy': '[[[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 
276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 
381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
05/18/2022 19:16:46 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
05/18/2022 19:16:46 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(24, 360, None), 'warp_mapx': '[[[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]]', 'warp_mapy': '[[[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 
276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
05/18/2022 19:16:46 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
05/18/2022 19:16:46 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(24, 360, None), 'warp_mapx': '[[[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]\n\n [[ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]\n  [ 24. 108. 192. 276. 360.]]]', 'warp_mapy': '[[[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 
276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]\n\n [[ 24.  24.  24.  24.  24.]\n  [108. 108. 108. 108. 108.]\n  [192. 192. 192. 192. 192.]\n  [276. 276. 276. 276. 276.]\n  [360. 360. 360. 360. 360.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
05/18/2022 19:16:47 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['XXNEW_vlcsnap-2022-04-21-18h36m26s909_0.png', 'XXNEW_vlcsnap-2022-04-21-18h45m58s966_0.png', 'XXNEW_vlcsnap-2022-04-21-18h52m31s072_0.png', 'zzzzzzz (65)_0.png', 'XXNEW_vlcsnap-2022-04-21-19h12m16s474_0.png', 'XXNEW_vlcsnap-2022-04-21-19h13m09s965_0.png', 'new (1)_1.png', 'lia (69)_0.png', 'XXNEW_vlcsnap-2022-04-21-18h53m12s987_0.png', 'XXNEW_vlcsnap-2022-04-21-18h44m41s581_0.png', 'XXNEW_vlcsnap-2022-04-21-19h05m21s525_0.png', 'zzzzzzz (115)_0.png', 'XXNEW_vlcsnap-2022-04-21-18h51m20s652_0.png', 'vlcsnap-2022-02-27-13h33m10s645_0.png', 'vlcsnap-2022-03-07-19h29m35s313_0.png', 'vlcsnap-2022-02-27-13h38m35s330_0.png']
05/18/2022 19:16:47 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['20180609_174813_9.png', 'vlcsnap-2022-02-27-13h00m32s348_0.png', '20180609_194616_11.png', '20190427_223157_2.png', 'Screenshot_20200322-152935_Houseparty_0.png', '20180609_174813_14.png', '45800847_10155941697533634_2689583944675885056_n_0.png', 'IMG-20190308-WA0040_2.png', '20190309_191410_2.png', '20180623_190830_0.png', '20180609_174813_19.png', '20200307_001104_1.png', 'vlcsnap-2022-02-27-13h01m26s284_0.png', '20190927_224445_2.png', 'IMG-20180430-WA0007_0.png', '12768175_10207406761620449_8001461663820957124_o_0.png']
05/18/2022 19:16:48 MainProcess     Thread-6                       api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x0000012403309F70>, weight: 1.0, mask_channel: 3)
05/18/2022 19:16:49 MainProcess     Thread-6                       api             converted_call                 DEBUG    Applying mask from channel 3
05/18/2022 19:16:49 MainProcess     Thread-6                       coordinator     request_stop                   INFO     Error reported to Coordinator: in user code:\n\n    File "C:\Users\AGC\faceswap\lib\model\losses_tf.py", line 531, in call  *\n        loss += (func(n_true, n_pred) * weight)\n    File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **\n        losses, sample_weight, reduction=self._get_reduction())\n    File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction\n        raise ValueError(\n\n    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:\n    ```\n    with strategy.scope():\n        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)\n    ....\n        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)\n    ```\n    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.\n
Traceback (most recent call last):
  File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\training\coordinator.py", line 293, in stop_on_exception
    yield
  File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\distribute\mirrored_run.py", line 342, in run
    self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
  File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 692, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

File "C:\Users\AGC\faceswap\lib\model\losses_tf.py", line 531, in call  *
    loss += (func(n_true, n_pred) * weight)
File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **
    losses, sample_weight, reduction=self._get_reduction())
File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction
    raise ValueError(

ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.
05/18/2022 19:16:49 MainProcess     _training_0                    multithreading  run                            DEBUG    Error in thread (_training_0): in user code:\n\n    File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1021, in train_function  *\n        return step_function(self, iterator)\n    File "C:\Users\AGC\faceswap\lib\model\losses_tf.py", line 531, in call  *\n        loss += (func(n_true, n_pred) * weight)\n    File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **\n        losses, sample_weight, reduction=self._get_reduction())\n    File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction\n        raise ValueError(\n\n    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:\n    ```\n    with strategy.scope():\n        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)\n    ....\n        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)\n    ```\n    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.\n
05/18/2022 19:16:50 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
05/18/2022 19:16:50 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
05/18/2022 19:16:50 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
05/18/2022 19:16:50 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
05/18/2022 19:16:50 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
05/18/2022 19:16:50 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training_0'
05/18/2022 19:16:50 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\AGC\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\AGC\faceswap\scripts\train.py", line 190, in process
    self._end_thread(thread, err)
  File "C:\Users\AGC\faceswap\scripts\train.py", line 230, in _end_thread
    thread.join()
  File "C:\Users\AGC\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\AGC\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\AGC\faceswap\scripts\train.py", line 252, in _training
    raise err
  File "C:\Users\AGC\faceswap\scripts\train.py", line 242, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\AGC\faceswap\scripts\train.py", line 327, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\AGC\faceswap\plugins\train\trainer\_base.py", line 194, in train_one_step
    loss = self._model.model.train_on_batch(model_inputs, y=model_targets)
  File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 2093, in train_on_batch
    logs = self.train_function(iterator)
  File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1147, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1021, in train_function  *
    return step_function(self, iterator)
File "C:\Users\AGC\faceswap\lib\model\losses_tf.py", line 531, in call  *
    loss += (func(n_true, n_pred) * weight)
File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 143, in __call__  **
    losses, sample_weight, reduction=self._get_reduction())
File "C:\Users\AGC\MiniConda3\envs\faceswap\lib\site-packages\keras\losses.py", line 183, in _get_reduction
    raise ValueError(

ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.
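
For reference, the workaround that TensorFlow's message describes looks like this when written out as runnable code. This is only a minimal sketch with a placeholder loss (MeanSquaredError standing in for faceswap's own loss functions) and a placeholder batch size; in the traceback above the failing call is faceswap's own wrapper in lib/model/losses_tf.py, so the snippet just illustrates what the error is asking for, not where to patch:

Code: Select all

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 8  # placeholder: the batch size summed across all replicas

with strategy.scope():
    # Reduction.NONE returns one loss value per sample instead of averaging,
    # which is what tf.distribute requires outside the built-in training loops.
    loss_obj = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
    per_sample_loss = loss_obj(labels, predictions)
    # Reduce manually, dividing by the GLOBAL batch size so that summing
    # gradients across replicas reproduces SUM_OVER_BATCH_SIZE behaviour.
    return tf.reduce_sum(per_sample_loss) * (1.0 / GLOBAL_BATCH_SIZE)
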
User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by torzdf »

This is the same issue as reported above, where I tested the exact settings used and could not replicate it. I know of many who have updated and not hit this issue, which leads me to believe it is a false positive and, potentially, a system/VRAM issue.

Unfortunately, your output has provided me with neither the command run nor your system information, both of which I would need to look into this further.

However, I suggest reading through this thread and following any instructions there before reporting further.

My word is final

User avatar
EvilSupahFly
Posts: 2
Joined: Sat May 07, 2022 5:58 am
Been thanked: 1 time

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by EvilSupahFly »

Interestingly, I'm having the same issue on Linux.

To save VRAM, I boot into runlevel 3 - this loads all drivers, services, etc., but doesn't start the X server, so it's just the command line.
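
To confirm the X server's VRAM is actually released, a quick check with nvidia-ml-py works (the package is already in the faceswap environment, per the pip list below). This is just a sketch; device index 0 assumes a single-GPU machine:

Code: Select all

import pynvml  # installed as nvidia-ml-py in the faceswap env

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumption: single GPU at index 0
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)   # .total/.free/.used are in bytes
print(f"VRAM used {mem.used / 1024**2:.0f} MiB of {mem.total / 1024**2:.0f} MiB "
      f"({mem.free / 1024**2:.0f} MiB free)")
pynvml.nvmlShutdown()
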

To train my most recent model, I used the following command:

Code: Select all

/home/evilsupahfly/miniconda3/envs/faceswap/bin/python /home/evilsupahfly/faceswap/faceswap.py train -A /home/evilsupahfly/Deep_Fakes/In/Trudeau/Rick_v2 -B /home/evilsupahfly/Deep_Fakes/In/Trudeau/Trudeau_v2 -m /home/evilsupahfly/Deep_Fakes/Models.Dlight -t dlight -bs 4 -it 10000 -d -s 50 -ss 100 -ps 100 -w -wl -L DEBUG -LF /home/evilsupahfly/Deep_Fakes/In/Trudeau/training.log

I get the same tf.keras error when using dlight, villain, and Phaze-A - the three I've tried so far.

It starts up just fine and caches the images, but eventually fails with the same tf.keras error the OP listed initially.

Full system specs are on TermBin for those who might be helped by such details, and the crash log is as follows:

Code: Select all

05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
05/20/2022 15:32:55 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 67888, side: 'b', do_shuffle: True)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
05/20/2022 15:32:55 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 67888, side: 'b', do_shuffle: True)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
05/20/2022 15:32:55 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'a')
05/20/2022 15:32:55 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
05/20/2022 15:32:55 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 128, output_shapes: [(128, 128, 3), (128, 128, 1)]
05/20/2022 15:32:55 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3), (128, 128, 1)], coverage_ratio: 1.0, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: True, config: {'centering': 'face', 'coverage': 100.0, 'icnr_init': True, 'conv_aware_init': True, 'optimizer': 'adabelief', 'learning_rate': 5e-05, 'epsilon_exponent': -16, 'reflect_padding': True, 'allow_growth': True, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 2, 'loss_function': 'pixel_gradient_diff', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 3, 'penalized_mask_loss': True, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'preview_images': 4, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/20/2022 15:32:55 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
05/20/2022 15:32:55 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 3942, batchsize: 4, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
05/20/2022 15:32:55 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 4, is_display: True, input_size: 128, output_shapes: [(128, 128, 3), (128, 128, 1)], coverage_ratio: 1.0, config: {'centering': 'face', 'coverage': 100.0, 'icnr_init': True, 'conv_aware_init': True, 'optimizer': 'adabelief', 'learning_rate': 5e-05, 'epsilon_exponent': -16, 'reflect_padding': True, 'allow_growth': True, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 2, 'loss_function': 'pixel_gradient_diff', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 3, 'penalized_mask_loss': True, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'preview_images': 4, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/20/2022 15:32:55 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [128]
05/20/2022 15:32:55 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
05/20/2022 15:32:55 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 3942, side: 'a', do_shuffle: True)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
05/20/2022 15:32:55 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 3942, side: 'a', do_shuffle: True)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
05/20/2022 15:32:55 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'b')
05/20/2022 15:32:55 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
05/20/2022 15:32:55 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 128, output_shapes: [(128, 128, 3), (128, 128, 1)]
05/20/2022 15:32:55 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3), (128, 128, 1)], coverage_ratio: 1.0, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: True, config: {'centering': 'face', 'coverage': 100.0, 'icnr_init': True, 'conv_aware_init': True, 'optimizer': 'adabelief', 'learning_rate': 5e-05, 'epsilon_exponent': -16, 'reflect_padding': True, 'allow_growth': True, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 2, 'loss_function': 'pixel_gradient_diff', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 3, 'penalized_mask_loss': True, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'preview_images': 4, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/20/2022 15:32:55 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
05/20/2022 15:32:55 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 67888, batchsize: 4, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
05/20/2022 15:32:55 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 4, is_display: True, input_size: 128, output_shapes: [(128, 128, 3), (128, 128, 1)], coverage_ratio: 1.0, config: {'centering': 'face', 'coverage': 100.0, 'icnr_init': True, 'conv_aware_init': True, 'optimizer': 'adabelief', 'learning_rate': 5e-05, 'epsilon_exponent': -16, 'reflect_padding': True, 'allow_growth': True, 'mixed_precision': True, 'nan_protection': True, 'convert_batchsize': 2, 'loss_function': 'pixel_gradient_diff', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 3, 'penalized_mask_loss': True, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': True, 'preview_images': 4, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
05/20/2022 15:32:55 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [128]
05/20/2022 15:32:55 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
05/20/2022 15:32:55 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 67888, side: 'b', do_shuffle: True)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
05/20/2022 15:32:55 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 67888, side: 'b', do_shuffle: True)
05/20/2022 15:32:55 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
05/20/2022 15:32:56 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Set preview feed. Batchsize: 4
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Feeder:
05/20/2022 15:32:56 MainProcess     _training_0                    _base           _set_tensorboard               DEBUG    Enabling TensorBoard Logging
05/20/2022 15:32:56 MainProcess     _training_0                    _base           _set_tensorboard               DEBUG    Setting up TensorBoard Logging
05/20/2022 15:32:56 MainProcess     _training_0                    _base           _set_tensorboard               VERBOSE  Enabled TensorBoard Logging
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.dlight.Model object at 0x7fe931324d30>', coverage_ratio: 1.0)
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Samples
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Timelapse: model: <plugins.train.model.dlight.Model object at 0x7fe931324d30>, coverage_ratio: 1.0, image_count: 4, feeder: '<plugins.train.trainer._base._Feeder object at 0x7fe930f34e80>', image_paths: 2)
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.dlight.Model object at 0x7fe931324d30>', coverage_ratio: 1.0)
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Samples
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Timelapse
05/20/2022 15:32:56 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized Trainer
05/20/2022 15:32:56 MainProcess     _training_0                    train           _load_trainer                  DEBUG    Loaded Trainer
05/20/2022 15:32:56 MainProcess     _training_0                    train           _run_training_cycle            DEBUG    Running Training Cycle
05/20/2022 15:32:56 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 192
05/20/2022 15:32:56 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 1, 'tgt_slices': slice(0, 192, None), 'warp_mapx': '[[[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]]', 'warp_mapy': '[[[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]]', 'warp_pad': 160, 'warp_slices': slice(16, -16, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [189. 189. 189. ... 189. 189. 189.]\n  [190. 190. 190. ... 190. 190. 190.]\n  [191. 191. 191. ... 191. 191. 191.]]\n\n [[  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  ...\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]]]'}
05/20/2022 15:41:43 MainProcess     _run_1                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 192
05/20/2022 15:41:43 MainProcess     _run_1                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 1, 'tgt_slices': slice(0, 192, None), 'warp_mapx': '[[[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]]', 'warp_mapy': '[[[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]]', 'warp_pad': 160, 'warp_slices': slice(16, -16, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [189. 189. 189. ... 189. 189. 189.]\n  [190. 190. 190. ... 190. 190. 190.]\n  [191. 191. 191. ... 191. 191. 191.]]\n\n [[  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  ...\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]]]'}
05/20/2022 15:41:44 MainProcess     _run_1                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 192
05/20/2022 15:41:44 MainProcess     _run_1                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 1, 'tgt_slices': slice(0, 192, None), 'warp_mapx': '[[[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]\n\n [[  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]\n  [  0.  48.  96. 144. 192.]]]', 'warp_mapy': '[[[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]\n\n [[  0.   0.   0.   0.   0.]\n  [ 48.  48.  48.  48.  48.]\n  [ 96.  96.  96.  96.  96.]\n  [144. 144. 144. 144. 144.]\n  [192. 192. 192. 192. 192.]]]', 'warp_pad': 160, 'warp_slices': slice(16, -16, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]\n\n [[  0   0]\n  [  0 191]\n  [191 191]\n  [191   0]\n  [ 95   0]\n  [ 95 191]\n  [191  95]\n  [  0  95]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [189. 189. 189. ... 189. 189. 189.]\n  [190. 190. 190. ... 190. 190. 190.]\n  [191. 191. 191. ... 191. 191. 191.]]\n\n [[  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  ...\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]\n  [  0.   1.   2. ... 189. 190. 191.]]]'}
05/20/2022 15:42:16 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['Astley_002174_0.png', 'Astley_003891_0.png', 'Astley_006310_0.png', 'Astley_000025_0.png']
05/20/2022 15:42:16 MainProcess     _run_0                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['Astley_002174_0.png', 'Astley_003891_0.png', 'Astley_006310_0.png', 'Astley_000025_0.png']
05/20/2022 15:42:16 MainProcess     _run_0                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['JT01_010813_0.png', 'JT02_006094_2.png', 'JT02_003977_3.png', 'JT01_007802_0.png']
05/20/2022 15:42:16 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['JT01_010813_0.png', 'JT02_006094_2.png', 'JT02_003977_3.png', 'JT01_007802_0.png']
05/20/2022 15:42:19 MainProcess     Thread-6                       api             converted_call                 DEBUG    Processing loss function: (func: <tensorflow.python.keras.engine.compile_utils.LossesContainer object at 0x7fe92c055280>, weight: 1.0, mask_channel: 3)
05/20/2022 15:42:20 MainProcess     Thread-6                       api             converted_call                 DEBUG    Applying mask from channel 3
05/20/2022 15:42:20 MainProcess     Thread-6                       coordinator     request_stop                   INFO     Error reported to Coordinator: in user code:\n\n    /home/evilsupahfly/faceswap/lib/model/losses_tf.py:560 call  *\n        loss += (func(n_true, n_pred) * weight)\n    /home/evilsupahfly/faceswap/lib/model/losses_tf.py:309 call  *\n        loss += tv_weight * (self.generalized_loss(self._diff_x(y_true), self._diff_x(y_pred)) +\n    /home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:157 __call__  **\n        losses, sample_weight, reduction=self._get_reduction())\n    /home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:197 _get_reduction\n        raise ValueError(\n\n    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:\n    ```\n    with strategy.scope():\n        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)\n    ....\n        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)\n    ```\n    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.\n
Traceback (most recent call last):
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/training/coordinator.py", line 297, in stop_on_exception
    yield
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/distribute/mirrored_run.py", line 346, in run
    self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

/home/evilsupahfly/faceswap/lib/model/losses_tf.py:560 call  *
    loss += (func(n_true, n_pred) * weight)
/home/evilsupahfly/faceswap/lib/model/losses_tf.py:309 call  *
    loss += tv_weight * (self.generalized_loss(self._diff_x(y_true), self._diff_x(y_pred)) +
/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:157 __call__  **
    losses, sample_weight, reduction=self._get_reduction())
/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:197 _get_reduction
    raise ValueError(

ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.
05/20/2022 15:42:20 MainProcess     _training_0                    multithreading  run                            DEBUG    Error in thread (_training_0): in user code:\n\n    /home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:862 train_function  *\n        return step_function(self, iterator)\n    /home/evilsupahfly/faceswap/lib/model/losses_tf.py:560 call  *\n        loss += (func(n_true, n_pred) * weight)\n    /home/evilsupahfly/faceswap/lib/model/losses_tf.py:309 call  *\n        loss += tv_weight * (self.generalized_loss(self._diff_x(y_true), self._diff_x(y_pred)) +\n    /home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:157 __call__  **\n        losses, sample_weight, reduction=self._get_reduction())\n    /home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:197 _get_reduction\n        raise ValueError(\n\n    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:\n    ```\n    with strategy.scope():\n        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)\n    ....\n        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)\n    ```\n    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.\n
05/20/2022 15:42:21 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
05/20/2022 15:42:21 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
05/20/2022 15:42:21 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
05/20/2022 15:42:21 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
05/20/2022 15:42:21 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
05/20/2022 15:42:21 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training_0'
05/20/2022 15:42:21 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "/home/evilsupahfly/faceswap/lib/cli/launcher.py", line 182, in execute_script
    process.process()
  File "/home/evilsupahfly/faceswap/scripts/train.py", line 190, in process
    self._end_thread(thread, err)
  File "/home/evilsupahfly/faceswap/scripts/train.py", line 230, in _end_thread
    thread.join()
  File "/home/evilsupahfly/faceswap/lib/multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "/home/evilsupahfly/faceswap/lib/multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "/home/evilsupahfly/faceswap/scripts/train.py", line 252, in _training
    raise err
  File "/home/evilsupahfly/faceswap/scripts/train.py", line 242, in _training
    self._run_training_cycle(model, trainer)
  File "/home/evilsupahfly/faceswap/scripts/train.py", line 327, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "/home/evilsupahfly/faceswap/plugins/train/trainer/_base.py", line 193, in train_one_step
    loss = self._model.model.train_on_batch(model_inputs, y=model_targets)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1854, in train_on_batch
    logs = self.train_function(iterator)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 885, in __call__
    result = self._call(*args, **kwds)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 933, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 759, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3066, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3463, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3298, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 668, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 994, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:862 train_function  *
    return step_function(self, iterator)
/home/evilsupahfly/faceswap/lib/model/losses_tf.py:560 call  *
    loss += (func(n_true, n_pred) * weight)
/home/evilsupahfly/faceswap/lib/model/losses_tf.py:309 call  *
    loss += tv_weight * (self.generalized_loss(self._diff_x(y_true), self._diff_x(y_pred)) +
/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:157 __call__  **
    losses, sample_weight, reduction=self._get_reduction())
/home/evilsupahfly/miniconda3/envs/faceswap/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:197 _get_reduction
    raise ValueError(

ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
```
with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
....
    loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
```
Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.


============ System Information ============
encoding:            UTF-8
git_branch:          master
git_commits:         cda49b3 Bugfix - Fix graphing not always showing loss for both sides
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce GTX 1660
gpu_devices_active:  GPU_0
gpu_driver:          510.73.05
gpu_vram:            GPU_0: 6144MB
os_machine:          x86_64
os_platform:         Linux-5.13.0-41-generic-x86_64-with-glibc2.17
os_release:          5.13.0-41-generic
py_command:          /home/evilsupahfly/faceswap/faceswap.py train -A /home/evilsupahfly/Deep_Fakes/In/Trudeau/Rick_v2 -B /home/evilsupahfly/Deep_Fakes/In/Trudeau/Trudeau_v2 -m /home/evilsupahfly/Deep_Fakes/Models.Dlight -t dlight -bs 4 -it 10000 -d -s 50 -ss 100 -ps 100 -w -wl -L TRACE -LF /home/evilsupahfly/Deep_Fakes/In/Trudeau/training.log
py_conda_version:    conda 4.12.0
py_implementation:   CPython
py_version:          3.8.13
py_virtual_env:      True
sys_cores:           8
sys_processor:       x86_64
sys_ram:             Total: 32048MB, Available: 26762MB, Used: 4790MB, Free: 745MB

=============== Pip Packages ===============
absl-py==0.15.0
astunparse==1.6.3
cachetools==4.2.4
certifi==2021.10.8
charset-normalizer==2.0.12
clang==5.0
colorama @ file:///tmp/build/80754af9/colorama_1607707115595/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
fastcluster==1.1.26
ffmpy==0.2.3
flatbuffers==1.12
gast==0.4.0
google-auth==1.35.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.44.0
h5py==3.1.0
idna==3.3
imageio @ file:///tmp/build/80754af9/imageio_1617700267927/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==4.11.3
joblib @ file:///tmp/build/80754af9/joblib_1635411271373/work
keras==2.6.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///opt/conda/conda-bld/kiwisolver_1638569886207/work
Markdown==3.3.6
matplotlib @ file:///tmp/build/80754af9/matplotlib-base_1592846008246/work
mkl-fft==1.3.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy @ file:///tmp/build/80754af9/numpy_and_numpy_base_1603570489231/work
nvidia-ml-py==11.495.46
oauthlib==3.2.0
opencv-python==4.5.5.64
opt-einsum==3.3.0
Pillow==9.0.1
protobuf==3.20.0
psutil @ file:///tmp/build/80754af9/psutil_1612298023621/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
requests==2.27.1
requests-oauthlib==1.3.1
rsa==4.8
scikit-learn @ file:///tmp/build/80754af9/scikit-learn_1642617107864/work
scipy @ file:///tmp/build/80754af9/scipy_1616703172749/work
sip==4.19.13
six==1.15.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.6.0
tensorflow-gpu==2.6.3
termcolor==1.1.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
tornado @ file:///tmp/build/80754af9/tornado_1606942300299/work
tqdm @ file:///opt/conda/conda-bld/tqdm_1647339053476/work
typing-extensions==3.10.0.2
urllib3==1.26.9
Werkzeug==2.1.1
wrapt==1.12.1
zipp==3.8.0

============== Conda Packages ==============
# packages in environment at /home/evilsupahfly/miniconda3:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             4.5                        1_gnu
brotlipy                  0.7.0                      py39h27cfd23_1003
ca-certificates           2022.3.29                  h06a4308_0
certifi                   2021.10.8                  py39h06a4308_2
cffi                      1.15.0                     py39hd667e15_1
charset-normalizer        2.0.4                      pyhd3eb1b0_0
colorama                  0.4.4                      pyhd3eb1b0_0
conda                     4.12.0                     py39h06a4308_0
conda-package-handling    1.8.1                      py39h7f8727e_0
cryptography              36.0.0                     py39h9ce1e76_0
idna                      3.3                        pyhd3eb1b0_0
ld_impl_linux-64          2.35.1                     h7274673_9
libffi                    3.3                        he6710b0_2
libgcc-ng                 9.3.0                      h5101ec6_17
libgomp                   9.3.0                      h5101ec6_17
libstdcxx-ng              9.3.0                      hd4cf53a_17
ncurses                   6.3                        h7f8727e_2
openssl                   1.1.1n                     h7f8727e_0
pip                       21.2.4                     py39h06a4308_0
pycosat                   0.6.3                      py39h27cfd23_0
pycparser                 2.21                       pyhd3eb1b0_0
pyopenssl                 22.0.0                     pyhd3eb1b0_0
pysocks                   1.7.1                      py39h06a4308_0
python                    3.9.7                      h12debd9_1
readline                  8.1.2                      h7f8727e_1
requests                  2.27.1                     pyhd3eb1b0_0
ruamel_yaml               0.15.100                   py39h27cfd23_0
setuptools                61.2.0                     py39h06a4308_0
sqlite                    3.38.2                     hc218d9a_0
tk                        8.6.11                     h1ccaba5_0
tqdm                      4.63.0                     pyhd3eb1b0_0
tzdata                    2022a                      hda174b7_0
urllib3                   1.26.8                     pyhd3eb1b0_0
wheel                     0.37.1                     pyhd3eb1b0_0
xz                        5.2.5                      h7b6447c_0
yaml                      0.2.5                      h7b6447c_0
zlib                      1.2.11                     h7f8727e_4

================= Configs ==================
--------- .faceswap ---------
backend: nvidia

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 40
icon_size: 14
font: newspaper
font_size: 10
autosave_last_session: always
timeout: 120
auto_load_model_stats: True

--------- convert.ini ---------

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 0
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 95
png_compress_level: 0

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 0
tif_compression: tiff_deflate

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[color.color_transfer]
clip: False
preserve_paper: False

[mask.mask_blend]
type: normalized
kernel_size: 5
passes: 4
threshold: 4
erosion: 0.0

[mask.box_blend]
type: normalized
distance: 5.0
radius: 5.0
passes: 3

[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0

--------- extract.ini ---------

[global]
allow_growth: True

[mask.vgg_obstructed]
batch-size: 1

[mask.unet_dfl]
batch-size: 1

[mask.bisenet_fp]
batch-size: 1
weights: faceswap
include_ears: False
include_hair: False
include_glasses: False

[mask.vgg_clear]
batch-size: 1

[align.fan]
batch-size: 2

[detect.cv2_dnn]
confidence: 75

[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 4
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7

[detect.s3fd]
confidence: 90
batch-size: 1

--------- train.ini ---------

[global]
centering: face
coverage: 100.0
icnr_init: True
conv_aware_init: True
optimizer: adabelief
learning_rate: 5e-05
epsilon_exponent: -16
reflect_padding: True
allow_growth: True
mixed_precision: True
nan_protection: True
convert_batchsize: 2

[global.loss]
loss_function: pixel_gradient_diff
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 3
penalized_mask_loss: True
mask_type: vgg-obstructed
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: True

[trainer.original]
preview_images: 4
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.dfl_h128]
lowmem: False

[model.villain]
lowmem: False

[model.original]
lowmem: False

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.dfaker]
output_size: 128

[model.phaze_a]
output_size: 128
shared_fc: full
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: xception
enc_scaling: 20
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: layer
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: resize_images
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: resize_images
dec_norm: group
dec_min_filters: 64
dec_max_filters: 512
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001

[model.dfl_sae]
input_size: 256
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: True

[model.dlight]
features: best
details: good
output_size: 128

Addendum: I used "Output System Information" from the Faceswap GUI, and this is what I got - which I suppose is already mostly present in the crash log:

Code: Select all

============ System Information ============
encoding:            UTF-8
git_branch:          master
git_commits:         c2595c4 bugfix - add missing mask key to alignments on legacy update. dbcd507 pin nvidia-ml-py for breaking change. a5a5985 Manual tool - More robust handling of videos with duped frames. 0d23714 bugfix: extract - stop progress bar from going over max value. ea3dd93 windows installer: Remove stale conda environment files
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce GTX 1660
gpu_devices_active:  GPU_0
gpu_driver:          510.73.05
gpu_vram:            GPU_0: 6144MB
os_machine:          x86_64
os_platform:         Linux-5.13.0-41-generic-x86_64-with-glibc2.17
os_release:          5.13.0-41-generic
py_command:          /home/evilsupahfly/faceswap/faceswap.py gui
py_conda_version:    conda 4.12.0
py_implementation:   CPython
py_version:          3.8.13
py_virtual_env:      True
sys_cores:           8
sys_processor:       x86_64
sys_ram:             Total: 32048MB, Available: 30065MB, Used: 1458MB, Free: 17036MB

=============== Pip Packages ===============
absl-py==0.15.0
astunparse==1.6.3
cachetools==4.2.4
certifi==2021.10.8
charset-normalizer==2.0.12
clang==5.0
colorama @ file:///tmp/build/80754af9/colorama_1607707115595/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
fastcluster==1.1.26
ffmpy==0.2.3
flatbuffers==1.12
gast==0.4.0
google-auth==1.35.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.44.0
h5py==3.1.0
idna==3.3
imageio @ file:///tmp/build/80754af9/imageio_1617700267927/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==4.11.3
joblib @ file:///tmp/build/80754af9/joblib_1635411271373/work
keras==2.6.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///opt/conda/conda-bld/kiwisolver_1638569886207/work
Markdown==3.3.6
matplotlib @ file:///tmp/build/80754af9/matplotlib-base_1592846008246/work
mkl-fft==1.3.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy @ file:///tmp/build/80754af9/numpy_and_numpy_base_1603570489231/work
nvidia-ml-py==11.495.46
oauthlib==3.2.0
opencv-python==4.5.5.64
opt-einsum==3.3.0
Pillow==9.0.1
protobuf==3.20.0
psutil @ file:///tmp/build/80754af9/psutil_1612298023621/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///tmp/build/80754af9/pyparsing_1635766073266/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
requests==2.27.1
requests-oauthlib==1.3.1
rsa==4.8
scikit-learn @ file:///tmp/build/80754af9/scikit-learn_1642617107864/work
scipy @ file:///tmp/build/80754af9/scipy_1616703172749/work
sip==4.19.13
six==1.15.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.6.0
tensorflow-gpu==2.6.3
termcolor==1.1.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
tornado @ file:///tmp/build/80754af9/tornado_1606942300299/work
tqdm @ file:///opt/conda/conda-bld/tqdm_1647339053476/work
typing-extensions==3.10.0.2
urllib3==1.26.9
Werkzeug==2.1.1
wrapt==1.12.1
zipp==3.8.0

============== Conda Packages ==============
# packages in environment at /home/evilsupahfly/miniconda3/envs/faceswap:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex 4.5 1_gnu
absl-py 0.15.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
c-ares 1.18.1 h7f8727e_0
ca-certificates 2021.10.8 ha878542_0 conda-forge
cachetools 4.2.4 pypi_0 pypi
certifi 2021.10.8 py38h578d9bd_2 conda-forge
charset-normalizer 2.0.12 pypi_0 pypi
clang 5.0 pypi_0 pypi
colorama 0.4.4 pyhd3eb1b0_0
cudatoolkit 11.2.2 he111cf0_8 conda-forge
cudnn 8.1.0.77 h90431f1_0 conda-forge
curl 7.82.0 h7f8727e_0
cycler 0.11.0 pyhd3eb1b0_0
dbus 1.13.18 hb2f20db_0
expat 2.4.4 h295c915_0
fastcluster 1.1.26 py38hc5bc63f_2 conda-forge
ffmpeg 4.3.2 hca11adc_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
flatbuffers 1.12 pypi_0 pypi
fontconfig 2.13.1 h6c09931_0
freetype 2.11.0 h70c0345_0
gast 0.4.0 pypi_0 pypi
gettext 0.21.0 hf68c758_0
giflib 5.2.1 h7b6447c_0
git 2.34.1 pl5262hc120c5b_0
glib 2.69.1 h4ff587b_1
gmp 6.2.1 h58526e2_0 conda-forge
gnutls 3.6.13 h85f3911_1 conda-forge
google-auth 1.35.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.44.0 pypi_0 pypi
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
h5py 3.1.0 pypi_0 pypi
icu 58.2 he6710b0_3
idna 3.3 pypi_0 pypi
imageio 2.9.0 pyhd3eb1b0_0
imageio-ffmpeg 0.4.7 pyhd8ed1ab_0 conda-forge
importlib-metadata 4.11.3 pypi_0 pypi
intel-openmp 2022.0.1 h06a4308_3633
joblib 1.1.0 pyhd3eb1b0_0
jpeg 9d h7f8727e_0
keras 2.6.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.3.2 py38h295c915_0
krb5 1.19.2 hac12032_0
lame 3.100 h7f98852_1001 conda-forge
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 h7274673_9
libcurl 7.82.0 h0b77cf5_0
libedit 3.1.20210910 h7f8727e_0
libev 4.33 h7f8727e_1
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libgomp 9.3.0 h5101ec6_17
libnghttp2 1.46.0 hce63b2e_0
libpng 1.6.37 hbc83047_0
libssh2 1.9.0 h1ba5d50_1
libstdcxx-ng 9.3.0 hd4cf53a_17
libtiff 4.2.0 h85742a9_0
libuuid 1.0.3 h7f8727e_2
libwebp 1.2.2 h55f646e_0
libwebp-base 1.2.2 h7f8727e_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.12 h03d6c58_0
lz4-c 1.9.3 h295c915_1
markdown 3.3.6 pypi_0 pypi
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py38hef1b27d_0
mkl 2020.2 256
mkl-service 2.3.0 py38he904b0f_0
mkl_fft 1.3.0 py38h54f3939_0
mkl_random 1.1.1 py38h0573a6f_0
ncurses 6.3 h7f8727e_2
nettle 3.6 he412f7d_0 conda-forge
numpy 1.19.2 py38h54aff64_0
numpy-base 1.19.2 py38hfa32c7d_0
nvidia-ml-py 11.495.46 pypi_0 pypi
oauthlib 3.2.0 pypi_0 pypi
opencv-python 4.5.5.64 pypi_0 pypi
openh264 2.1.1 h780b84a_0 conda-forge
openssl 1.1.1n h7f8727e_0
opt-einsum 3.3.0 pypi_0 pypi
pcre 8.45 h295c915_0
pcre2 10.37 he7ceb23_1
perl 5.26.2 h14c3975_0
pillow 9.0.1 py38h22f2fdc_0
pip 21.2.4 py38h06a4308_0
protobuf 3.20.0 pypi_0 pypi psutil 5.8.0 py38h27cfd23_1
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 3.0.4 pyhd3eb1b0_0
pyqt 5.9.2 py38h05f1152_4
python 3.8.13 h12debd9_0
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.8 2_cp38 conda-forge
qt 5.9.7 h5867ecd_1
readline 8.1.2 h7f8727e_1
requests 2.27.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.8 pypi_0 pypi
scikit-learn 1.0.2 py38h51133e4_1
scipy 1.6.2 py38h91f5cce_0
setuptools 61.2.0 py38h06a4308_0
sip 4.19.13 py38h295c915_0
six 1.15.0 pypi_0 pypi
sqlite 3.38.2 hc218d9a_0
tensorboard 2.6.0 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow-estimator 2.6.0 pypi_0 pypi
tensorflow-gpu 2.6.3 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.11 h1ccaba5_0
tornado 6.1 py38h27cfd23_0
tqdm 4.63.0 pyhd3eb1b0_0
typing-extensions 3.10.0.2 pypi_0 pypi
urllib3 1.26.9 pypi_0 pypi
werkzeug 2.1.1 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
wrapt 1.12.1 pypi_0 pypi
x264 1!161.3030 h7f98852_1 conda-forge
xz 5.2.5 h7b6447c_0
zipp 3.8.0 pypi_0 pypi
zlib 1.2.11 h7f8727e_4
zstd 1.4.9 haebb681_0

================= Configs ==================

--------- .faceswap ---------
backend: nvidia

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 40
icon_size: 14
font: newspaper
font_size: 10
autosave_last_session: always
timeout: 120
auto_load_model_stats: True

--------- convert.ini ---------

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 0
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 95
png_compress_level: 0

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 0
tif_compression: tiff_deflate

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[color.color_transfer]
clip: False
preserve_paper: False

[mask.mask_blend]
type: normalized
kernel_size: 5
passes: 4
threshold: 4
erosion: 0.0

[mask.box_blend]
type: normalized
distance: 5.0
radius: 5.0
passes: 3

[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0

--------- extract.ini ---------

[global]
allow_growth: True

[mask.vgg_obstructed]
batch-size: 1

[mask.unet_dfl]
batch-size: 1

[mask.bisenet_fp]
batch-size: 1
weights: faceswap
include_ears: False
include_hair: False
include_glasses: False

[mask.vgg_clear]
batch-size: 1

[align.fan]
batch-size: 2

[detect.cv2_dnn]
confidence: 75

[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 4
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7

[detect.s3fd]
confidence: 90
batch-size: 1

--------- train.ini ---------

[global]
centering: face
coverage: 100.0
icnr_init: True
conv_aware_init: True
optimizer: adabelief
learning_rate: 5e-05
epsilon_exponent: -16
reflect_padding: True
allow_growth: True
mixed_precision: True
nan_protection: True
convert_batchsize: 2

[global.loss]
loss_function: pixel_gradient_diff
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 3
penalized_mask_loss: True
mask_type: vgg-obstructed
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: True

[trainer.original]
preview_images: 4
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.dfl_h128]
lowmem: False

[model.villain]
lowmem: False

[model.original]
lowmem: False

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.dfaker]
output_size: 128

[model.phaze_a]
output_size: 128
shared_fc: full
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: xception
enc_scaling: 20
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: layer
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: resize_images
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: resize_images
dec_norm: group
dec_min_filters: 64
dec_max_filters: 512
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers:
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001

[model.dfl_sae]
input_size: 256
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: True

[model.dlight]
features: best
details: good
output_size: 128
Last edited by EvilSupahFly on Sat May 21, 2022 8:05 am, edited 1 time in total.
User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by torzdf »

Ok, I see a common factor here. The -d switch is enabled. This is for distributed training over multiple GPUs; Distributed should be unchecked.

Try with Distributed unchecked, let me know if it works, and I will investigate further.

Ultimately, this error should not be occurring, but it gives me something to look into.
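
For the curious, here is a minimal sketch (plain TensorFlow, not faceswap code) of the rule that produces this exact ValueError. Under a tf.distribute strategy, a Keras loss left on its default reduction cannot be called from a custom training step; TF insists you switch to Reduction.NONE (or SUM) and scale against the global batch size yourself:

Code: Select all

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Default reduction (SUM_OVER_BATCH_SIZE): calling this loss inside a
    # custom training step under the strategy raises the ValueError above,
    # because averaging over the per-replica batch size would mis-scale
    # the gradients across replicas.
    bad_loss = tf.keras.losses.MeanSquaredError()

    # What the error message asks for: per-sample losses, reduced manually
    # against the *global* batch size.
    good_loss = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(y_true, y_pred, global_batch_size):
    per_sample = good_loss(y_true, y_pred)
    return tf.nn.compute_average_loss(per_sample,
                                      global_batch_size=global_batch_size)

Presumably the -d code path is wrapping training in a strategy while a loss somewhere still carries the default reduction, which is why the error only appears with Distributed checked.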

My word is final

User avatar
EvilSupahFly
Posts: 2
Joined: Sat May 07, 2022 5:58 am
Been thanked: 1 time

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by EvilSupahFly »

I tried running Phaze-A, DFL-SAE, and DLight with "Distributed" unchecked, and I no longer get this error. However, when I load a project after closing the GUI, "Distributed" is checked again and I have to uncheck it manually, despite having saved the project with it unchecked. Not sure why that is, but it's only a minor thing. With Distributed off, the issue goes away - at least in my case.

User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by torzdf »

Ok, thanks for the feedback... There seem to be two issues here: 1) Distributed should not be checked by default, and 2) Distributed should not cause a failure. I will look into both when I can.

My word is final

User avatar
coosy77
Posts: 4
Joined: Wed May 18, 2022 5:18 pm
Has thanked: 3 times

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by coosy77 »

Thanks, guys. It was the distributed setting. :)

User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by torzdf »

Ok, the bug that causes this error when distributed is selected has been fixed.

I cannot recreate that option getting auto-enabled in the GUI :/

My word is final

User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: Bug: ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE`

Post by torzdf »

Removing the bug tag from this, as -d, --distributed is deprecated in favour of -D, --distribution-strategy.
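
For reference, usage on a current build should look something like the following. The strategy name shown is illustrative only; check the help output for the exact choices available on your version:

Code: Select all

# List the available options, including the distribution strategies:
python faceswap.py train -h

# Example only (placeholder paths): explicitly selecting the default,
# non-distributed strategy via the replacement switch
python faceswap.py train -A /path/to/a_faces -B /path/to/b_faces -m /path/to/model -t dfl-sae -D default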

My word is final

Locked