[Resource] Google Colab Notebook

Want to use Faceswap in the cloud? This is not directly supported by the devs, but you may find community support here.


Forum rules

Read the FAQs and search the forum before posting a new topic.

NB: The Devs do not directly support using Cloud based services, but you can find community support here.

Please mark any answers that fixed your problems so others can find the solutions.

User avatar
Andentze
Posts: 9
Joined: Mon Dec 06, 2021 6:34 pm
Has thanked: 1 time
Been thanked: 15 times

[Resource] Google Colab Notebook

Post by Andentze »

[mod=torzdf]
This is the latest working Colab Notebook provided by the Faceswap community.

The old topic can be found here for reference: viewtopic.php?f=23&t=744
[/mod]


Hello! I have been working on a notebook of mine to provide the full functionality of Faceswap in Google Colab, simply because I feel bad for people who can't wait for extract or convert to finish but can otherwise run the GUI perfectly fine (or not, if your CPU doesn't support the AVX instructions, in which case I really do feel bad for you).

So, I'll stop that useless rambling and give away my Colab notebook, which I have been working on for a while. It will be updated in case of compatibility issues, or if Faceswap brings out something cool that the current notebook can't run. For now, the main functionality (and a couple of the tools) is present. Any feedback would be appreciated :P

Here's the notebook itself. I sure hope you can do something with it, low-end PC/laptop users :D
https://github.com/andentze/FaceColab_Unofficial


User avatar
torzdf
Posts: 1844
Joined: Fri Jul 12, 2019 12:53 am
Answers: 136
Has thanked: 79 times
Been thanked: 371 times

Re: Google Colab Notebook

Post by torzdf »

I don't use Colab, but if people test this notebook and feedback that it all works well, I will pin it as I have a feeling that the currently pinned notebook is now a little out of date.

My word is final


User avatar
Robb96
Posts: 1
Joined: Wed Feb 02, 2022 6:41 pm
Has thanked: 2 times
Been thanked: 3 times

Re: Google Colab Notebook

Post by Robb96 »

Andentze wrote: Tue Jan 18, 2022 3:18 pm

Hello! I have been working on a notebook of mine to provide the full functionality of Faceswap in Google Colab, simply because I feel bad for people who can't wait for extract or convert to finish but can otherwise run the GUI perfectly fine (or not, if your CPU doesn't support the AVX instructions, in which case I really do feel bad for you).

So, I'll stop that useless rambling and give away my Colab notebook, which I have been working on for a while. It will be updated in case of compatibility issues, or if Faceswap brings out something cool that the current notebook can't run. For now, the main functionality (and a couple of the tools) is present. Any feedback would be appreciated :P

Here's the notebook itself. I sure hope you can do something with it, low-end PC/laptop users :D
https://github.com/andentze/FaceColab_Unofficial

Made an account to thank you for the working notebook. I haven't been able to find one without errors in a very long time.

Last edited by Robb96 on Thu Feb 03, 2022 7:43 pm, edited 1 time in total.

User avatar
pilipinoguy
Posts: 9
Joined: Tue Mar 08, 2022 5:14 am
Has thanked: 8 times

Re: [Resource] Google Colab Notebook

Post by pilipinoguy »

Hi,

Started training my model using this free Google Colab. My question is: how do you change the other parameters of the trainer (like the loss function, or the choice of optimizer such as AdaBelief), since you can only change the main ones like iterations, batch size, save every, etc.?

BTW, it's much faster than my video card, lol.

Thanks


User avatar
Andentze
Posts: 9
Joined: Mon Dec 06, 2021 6:34 pm
Has thanked: 1 time
Been thanked: 15 times

Re: [Resource] Google Colab Notebook

Post by Andentze »

pilipinoguy wrote: Sun Mar 20, 2022 3:26 pm

Hi,

Started training my model using this free Google Colab. My question is: how do you change the other parameters of the trainer (like the loss function, or the choice of optimizer such as AdaBelief), since you can only change the main ones like iterations, batch size, save every, etc.?

BTW, it's much faster than my video card, lol.

Thanks

Hello! Unfortunately, I haven't been able to implement the configuration setup yet. I am planning to do so, so please stay patient for now.
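In the meantime, one possible workaround is to edit Faceswap's training config file directly from a Colab cell before launching training. This is a sketch, not part of the notebook: it assumes the repo is cloned to `./faceswap`, that training settings live in an INI file such as `config/train.ini`, and uses setting names (`optimizer`, `loss_function`) that appear in the training logs. Verify the section and key names against your own config file first.

```python
# Hedged sketch: tweak a Faceswap-style .ini training config in place.
# The path, section name, and keys below are assumptions -- check them
# against the actual config file in your clone before relying on this.
import configparser


def set_train_options(ini_path, options, section="global"):
    """Update key/value pairs in an .ini file, creating the section if needed."""
    config = configparser.ConfigParser()
    config.read(ini_path)
    if not config.has_section(section):
        config.add_section(section)
    for key, value in options.items():
        config.set(section, key, str(value))
    with open(ini_path, "w") as handle:
        config.write(handle)


# Example (key names are illustrative):
# set_train_options("faceswap/config/train.ini",
#                   {"optimizer": "adabelief", "loss_function": "ssim"})
```

Running this before the training cell means Faceswap picks the values up when it reads its config, the same way it would if you had changed them through the GUI's settings dialog.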


User avatar
pilipinoguy
Posts: 9
Joined: Tue Mar 08, 2022 5:14 am
Has thanked: 8 times

Re: [Resource] Google Colab Notebook

Post by pilipinoguy »

Thanks for that, you are simply awesome.

Also, why not add Phaze-A as well, since you will be implementing the configuration setup? I'm planning to upgrade my free Colab to Colab Pro for a faster GPU. I'm kind of a laptop guy, so your Colab is a great help. Just want my mug to look good in my favourite music videos, lol. Or swap my mug into a whole movie :-)


User avatar
Andentze
Posts: 9
Joined: Mon Dec 06, 2021 6:34 pm
Has thanked: 1 time
Been thanked: 15 times

Re: [Resource] Google Colab Notebook

Post by Andentze »

pilipinoguy wrote: Wed Mar 23, 2022 6:36 am

Thanks for that, you are simply awesome.

Also, why not add Phaze-A as well, since you will be implementing the configuration setup? I'm planning to upgrade my free Colab to Colab Pro for a faster GPU. I'm kind of a laptop guy, so your Colab is a great help. Just want my mug to look good in my favourite music videos, lol. Or swap my mug into a whole movie :-)

I didn't realize I had forgotten to put that there. Thanks for the notice. I'll add Phaze-A as one of the model choices when I implement the config setup.


User avatar
Andentze
Posts: 9
Joined: Mon Dec 06, 2021 6:34 pm
Has thanked: 1 time
Been thanked: 15 times

Notebook Update

Post by Andentze »

I probably shouldn't leave my messages like this, but eh, it doesn't matter. I've just pushed a really nice update that completely remakes the way Faceswap is installed on Colab, and it works flawlessly. It now uses Conda for package management, so TensorFlow 2.5+ is supported. I've spent the last few weeks figuring this one thing out, and I finally did.

Enjoy, I guess?
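For anyone curious why Conda matters here: it can install the matching CUDA libraries alongside TensorFlow, which a plain pip install in Colab can't. An environment spec along these lines illustrates the idea (a hypothetical sketch: the package names and version pins are assumptions, not copied from the notebook).

```yaml
# Hypothetical Conda environment for a TF 2.5+ Faceswap setup.
# TF 2.5 was built against CUDA 11.2 / cuDNN 8.1; the exact pins here
# are illustrative only -- the notebook's own install cells may differ.
name: faceswap
channels:
  - conda-forge
dependencies:
  - python=3.8
  - cudatoolkit=11.2
  - cudnn=8.1
  - pip
  - pip:
      - tensorflow-gpu>=2.5
```

In a regular shell this would be created with `conda env create -f environment.yml`; inside Colab the notebook's install cells take care of the equivalent steps for you.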


User avatar
aolvera27
Posts: 26
Joined: Thu May 27, 2021 3:53 am
Answers: 1
Has thanked: 4 times
Been thanked: 4 times

Re: [Resource] Google Colab Notebook

Post by aolvera27 »

Thanks a lot!

It actually worked flawlessly, as you said. Only today did I start getting this message when trying to run training:

Code: Select all

Setting Faceswap backend to NVIDIA
Traceback (most recent call last):
  File "faceswap/faceswap.py", line 6, in <module>
    from lib.cli import args as cli_args
  File "/content/faceswap/lib/cli/args.py", line 13, in <module>
    from lib.gpu_stats import GPUStats
  File "/content/faceswap/lib/gpu_stats/__init__.py", line 9, in <module>
    from ._base import set_exclude_devices  # noqa
  File "/content/faceswap/lib/gpu_stats/_base.py", line 8, in <module>
    from typing import List, Optional, TypedDict
ImportError: cannot import name 'TypedDict' from 'typing' (/usr/lib/python3.7/typing.py)

User avatar
torzdf
Posts: 1844
Joined: Fri Jul 12, 2019 12:53 am
Answers: 136
Has thanked: 79 times
Been thanked: 371 times

Re: [Resource] Google Colab Notebook

Post by torzdf »

This was down to an update I pushed today that didn't support Python 3.7 (the Colab default). I have now fixed this, so hopefully the notebook is working again.
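For reference, the usual fix for this class of `ImportError` is a version-guarded import: `typing.TypedDict` only exists in the standard library from Python 3.8 onwards, so older interpreters fall back to the `typing_extensions` backport. This is a generic sketch of the pattern, not necessarily the exact change made in the Faceswap source, and the `GPUInfo` structure is purely illustrative.

```python
# Generic compatibility shim for typing.TypedDict, which was added to
# the standard library in Python 3.8 (Colab was still on 3.7 here).
import sys

if sys.version_info >= (3, 8):
    from typing import TypedDict
else:  # Python 3.7: use the typing_extensions backport instead
    from typing_extensions import TypedDict


class GPUInfo(TypedDict):
    # Hypothetical example structure; field names are illustrative only
    vram: int
    driver: str


info: GPUInfo = {"vram": 16160, "driver": "460.32.03"}
```

At runtime a `TypedDict` behaves like a plain `dict`; the class only adds type information for static checkers, which is why the fallback import is safe on either interpreter.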

My word is final


User avatar
aolvera27
Posts: 26
Joined: Thu May 27, 2021 3:53 am
Answers: 1
Has thanked: 4 times
Been thanked: 4 times

Re: [Resource] Google Colab Notebook

Post by aolvera27 »

I was excited because that's exactly what I thought: there must have been an update that affected Python. However, I'm getting an almost identical error now:

Code: Select all

Setting Faceswap backend to NVIDIA
Traceback (most recent call last):
  File "faceswap/faceswap.py", line 6, in <module>
    from lib.cli import args as cli_args
  File "/content/faceswap/lib/cli/args.py", line 13, in <module>
    from lib.gpu_stats import GPUStats
  File "/content/faceswap/lib/gpu_stats/__init__.py", line 9, in <module>
    from ._base import set_exclude_devices  # noqa
  File "/content/faceswap/lib/gpu_stats/_base.py", line 14, in <module>
    from typing import TypedDict
ImportError: cannot import name 'TypedDict' from 'typing' (/usr/lib/python3.7/typing.py)

User avatar
torzdf
Posts: 1844
Joined: Fri Jul 12, 2019 12:53 am
Answers: 136
Has thanked: 79 times
Been thanked: 371 times

Re: [Resource] Google Colab Notebook

Post by torzdf »

:oops: My mistake, I got the imports the wrong way around. It should work now.

My word is final


User avatar
aolvera27
Posts: 26
Joined: Thu May 27, 2021 3:53 am
Answers: 1
Has thanked: 4 times
Been thanked: 4 times

Re: [Resource] Google Colab Notebook

Post by aolvera27 »

I'm sorry, I feel like it's always me having these issues. I haven't changed my workflow, but I'm now getting this problem when starting a training session: Caught exception in thread: '_training_0'
AttributeError: 'LossWrapper' object has no attribute 'name'

Code: Select all

06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
06/08/2022 11:00:57 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 6595, batchsize: 12, side: 'a', do_shuffle: True, is_preview, False, is_timelapse: False)
06/08/2022 11:00:57 MainProcess     _training_0                    generator       _get_cache                     DEBUG    Creating cache. Side: a
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 12, is_display: False, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [128]
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
06/08/2022 11:00:57 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 6595, side: 'a', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
06/08/2022 11:00:57 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 6595, side: 'a', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 128, output_shapes: [(128, 128, 3)]
06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: False, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
06/08/2022 11:00:57 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 5640, batchsize: 12, side: 'b', do_shuffle: True, is_preview, False, is_timelapse: False)
06/08/2022 11:00:57 MainProcess     _training_0                    generator       _get_cache                     DEBUG    Creating cache. Side: b
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 12, is_display: False, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [128]
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
06/08/2022 11:00:57 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 5640, side: 'b', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
06/08/2022 11:00:57 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 5640, side: 'b', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'a')
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 128, output_shapes: [(128, 128, 3)]
06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: False, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
06/08/2022 11:00:57 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 6595, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [128]
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
06/08/2022 11:00:57 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 6595, side: 'a', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
06/08/2022 11:00:57 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 6595, side: 'a', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Setting preview feed: (side: 'b')
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 128, output_shapes: [(128, 128, 3)]
06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: False, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
06/08/2022 11:00:57 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 5640, batchsize: 14, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.85, config: {'centering': 'face', 'coverage': 85.0, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [128]
06/08/2022 11:00:57 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
06/08/2022 11:00:57 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 5640, side: 'b', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
06/08/2022 11:00:57 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 5640, side: 'b', do_shuffle: True)
06/08/2022 11:00:57 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _set_preview_feed              DEBUG    Set preview feed. Batchsize: 14
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Feeder:
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _set_tensorboard               DEBUG    Enabling TensorBoard Logging
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _set_tensorboard               DEBUG    Setting up TensorBoard Logging
06/08/2022 11:00:57 MainProcess     _run_0                         generator       _validate_version              DEBUG    Setting initial extract version: 2.2
06/08/2022 11:00:57 MainProcess     _run_0                         generator       _validate_version              DEBUG    Setting initial extract version: 2.2
06/08/2022 11:00:57 MainProcess     _training_0                    _base           _set_tensorboard               VERBOSE  Enabled TensorBoard Logging
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.villain.Model object at 0x7f784af6ce10>', coverage_ratio: 0.85)
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Samples
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Timelapse: model: <plugins.train.model.villain.Model object at 0x7f784af6ce10>, coverage_ratio: 0.85, image_count: 14, feeder: '<plugins.train.trainer._base._Feeder object at 0x7f78a6b9ae10>', image_paths: 2)
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Samples: model: '<plugins.train.model.villain.Model object at 0x7f784af6ce10>', coverage_ratio: 0.85)
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Samples
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Timelapse
06/08/2022 11:00:57 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized Trainer
06/08/2022 11:00:57 MainProcess     _training_0                    train           _load_trainer                  DEBUG    Loaded Trainer
06/08/2022 11:00:57 MainProcess     _training_0                    train           _run_training_cycle            DEBUG    Running Training Cycle
06/08/2022 11:00:57 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
06/08/2022 11:00:57 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(29, 355, None), 'warp_mapx': '[[[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. 
]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]]', 'warp_mapy': '[[[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   
29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]]', 'warp_pad': 160, 'warp_slices': slice(16, -16, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   
2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
06/08/2022 11:00:57 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
06/08/2022 11:00:58 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
06/08/2022 11:00:58 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(29, 355, None), 'warp_mapx': '[[[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. 
]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]]', 'warp_mapy': '[[[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   
29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. 
]]]', 'warp_pad': 160, 'warp_slices': slice(16, -16, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 
382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
06/08/2022 11:00:59 MainProcess     _run_1                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
06/08/2022 11:00:59 MainProcess     _run_1                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(29, 355, None), 'warp_mapx': '[[[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. 
]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]\n\n [[ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]\n  [ 29.  110.5 192.  273.5 355. ]]]', 'warp_mapy': '[[[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   
29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. ]]\n\n [[ 29.   29.   29.   29.   29. ]\n  [110.5 110.5 110.5 110.5 110.5]\n  [192.  192.  192.  192.  192. ]\n  [273.5 273.5 273.5 273.5 273.5]\n  [355.  355.  355.  355.  355. 
]]]', 'warp_pad': 160, 'warp_slices': slice(16, -16, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 
382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
06/08/2022 11:01:00 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['C(514).png', 'Live06(2232).png', 'Live06(195).png', 'F(611).png', 'A23(5).png', 'F(27).png', 'C(631).png', 'C(351).png', 'F(345).png', 'C(201).png', 'L05(491).png', 'F(505).png']
06/08/2022 11:01:01 MainProcess     _run_0                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['video_004567_0.png', 'video_004573_0.png', 'video_000129_0.png', 'video_005461_0.png', 'video_000334_0.png', 'video_002604_0.png', 'video_006039_0.png', 'video_002397_0.png', 'video_004437_0.png', 'video_004608_0.png', 'video_002632_0.png', 'video_002276_0.png']
06/08/2022 11:01:01 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['video_004567_0.png', 'video_004573_0.png', 'video_000129_0.png', 'video_005461_0.png', 'video_000334_0.png', 'video_002604_0.png', 'video_006039_0.png', 'video_002397_0.png', 'video_004437_0.png', 'video_004608_0.png', 'video_002632_0.png', 'video_002276_0.png']
06/08/2022 11:01:01 MainProcess     _training_0                    multithreading  run                            DEBUG    Error in thread (_training_0): in user code:\n\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:800 train_function  *\n        return step_function(self, iterator)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:790 step_function  **\n        outputs = model.distribute_strategy.run(run_step, args=(data,))\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run\n        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica\n        return self._call_for_each_replica(fn, args, kwargs)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica\n        return fn(*args, **kwargs)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:783 run_step  **\n        outputs = model.train_step(data)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:751 train_step\n        y, y_pred, sample_weight, regularization_losses=self.losses)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:186 __call__\n        self.build(y_pred)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:139 build\n        self._losses = nest.map_structure(self._get_loss_object, self._losses)\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:659 map_structure\n        structure[0], [func(*x) for x in entries],\n    /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:659 <listcomp>\n        structure[0], [func(*x) for x in entries],\n    
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:264 _get_loss_object\n        loss_name = loss.__name__\n\n    AttributeError: 'LossWrapper' object has no attribute '__name__'\n
06/08/2022 11:01:02 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
06/08/2022 11:01:02 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
06/08/2022 11:01:02 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
06/08/2022 11:01:02 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
06/08/2022 11:01:02 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
06/08/2022 11:01:02 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training_0'
06/08/2022 11:01:02 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "/content/faceswap/lib/cli/launcher.py", line 188, in execute_script
    process.process()
  File "/content/faceswap/scripts/train.py", line 190, in process
    self._end_thread(thread, err)
  File "/content/faceswap/scripts/train.py", line 230, in _end_thread
    thread.join()
  File "/content/faceswap/lib/multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "/content/faceswap/lib/multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "/content/faceswap/scripts/train.py", line 252, in _training
    raise err
  File "/content/faceswap/scripts/train.py", line 242, in _training
    self._run_training_cycle(model, trainer)
  File "/content/faceswap/scripts/train.py", line 327, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "/content/faceswap/plugins/train/trainer/_base.py", line 194, in train_one_step
    loss = self._model.model.train_on_batch(model_inputs, y=model_targets)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 1722, in train_on_batch
    logs = self.train_function(iterator)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 871, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
    *args, **kwds))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
    raise e.ag_error_metadata.to_exception(e)
AttributeError: in user code:

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:800 train_function  *
    return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:790 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:783 run_step  **
    outputs = model.train_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:751 train_step
    y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:186 __call__
    self.build(y_pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:139 build
    self._losses = nest.map_structure(self._get_loss_object, self._losses)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:659 map_structure
    structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:659 <listcomp>
    structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:264 _get_loss_object
    loss_name = loss.__name__

AttributeError: 'LossWrapper' object has no attribute '__name__'


============ System Information ============
encoding:            UTF-8
git_branch:          Not Found
git_commits:         Not Found
gpu_cuda:            11.1
gpu_cudnn:           No global version found
gpu_devices:         GPU_0: Tesla T4
gpu_devices_active:  GPU_0
gpu_driver:          460.32.03
gpu_vram:            GPU_0: 15109MB
os_machine:          x86_64
os_platform:         Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
os_release:          5.4.188+
py_command:          faceswap/faceswap.py train -A face_a -B face_b -m /content/drive/My Drive/colab_files/faceswap/models/IkmQry -t villain -bs 12 -it 630000 -s 500 -ss 50000 -tia face_a -tib face_b -to /content/drive/My Drive/colab_files/faceswap/output/timelapse
py_conda_version:    N/A
py_implementation:   CPython
py_version:          3.7.13
py_virtual_env:      False
sys_cores:           2
sys_processor:       x86_64
sys_ram:             Total: 12986MB, Available: 10696MB, Used: 2743MB, Free: 175MB

=============== Pip Packages ===============
absl-py==0.15.0
alabaster==0.7.12
albumentations==0.1.12
altair==4.2.0
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arviz==0.12.1
astor==0.8.1
astropy==4.3.1
astunparse==1.6.3
atari-py==0.2.9
atomicwrites==1.4.0
attrs==21.4.0
audioread==2.1.9
autograd==1.4
Babel==2.10.1
backcall==0.2.0
beautifulsoup4==4.6.3
bleach==5.0.0
blis==0.4.1
bokeh==2.3.3
Bottleneck==1.3.4
branca==0.5.0
bs4==0.0.1
CacheControl==0.12.11
cached-property==1.5.2
cachetools==4.2.4
catalogue==1.0.0
certifi==2022.5.18.1
cffi==1.15.0
cftime==1.6.0
chardet==3.0.4
charset-normalizer==2.0.12
click==7.1.2
cloudpickle==1.3.0
cmake==3.22.4
cmdstanpy==0.9.5
colorcet==3.0.0
colorlover==0.3.0
community==1.0.0b1
contextlib2==0.5.5
convertdate==2.4.0
coverage==3.7.1
coveralls==0.5
crcmod==1.7
cufflinks==0.17.3
cupy-cuda111==9.4.0
cvxopt==1.2.7
cvxpy==1.0.31
cycler==0.11.0
cymem==2.0.6
Cython==0.29.30
daft==0.0.4
dask==2.12.0
datascience==0.10.6
debugpy==1.0.0
decorator==4.4.2
defusedxml==0.7.1
descartes==1.1.0
dill==0.3.5.1
distributed==1.25.3
dlib==19.18.0+zzzcolab20220513001918
dm-tree==0.1.7
docopt==0.6.2
docutils==0.17.1
dopamine-rl==1.0.5
earthengine-api==0.1.311
easydict==1.9
ecos==2.0.10
editdistance==0.5.3
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
entrypoints==0.4
ephem==4.1.3
et-xmlfile==1.1.0
fa2==0.3.5
fastai==1.0.61
fastcluster==1.1.26
fastdtw==0.3.4
fastjsonschema==2.15.3
fastprogress==1.0.2
fastrlock==0.8
fbprophet==0.7.1
feather-format==0.4.1
ffmpy==0.2.3
filelock==3.7.0
firebase-admin==4.4.0
fix-yahoo-finance==0.0.22
Flask==1.1.4
flatbuffers==1.12
folium==0.8.3
future==0.16.0
gast==0.3.3
GDAL==2.2.2
gdown==4.4.0
gensim==3.6.0
geographiclib==1.52
geopy==1.17.0
gin-config==0.5.0
glob2==0.7
google==2.0.3
google-api-core==1.31.6
google-api-python-client==1.12.11
google-auth==1.35.0
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.6
google-cloud-bigquery==1.21.0
google-cloud-bigquery-storage==1.1.1
google-cloud-core==1.0.3
google-cloud-datastore==1.8.0
google-cloud-firestore==1.7.0
google-cloud-language==1.2.0
google-cloud-storage==1.18.1
google-cloud-translate==1.5.0
google-colab @ file:///colabtools/dist/google-colab-1.0.0.tar.gz
google-pasta==0.2.0
google-resumable-media==0.4.1
googleapis-common-protos==1.56.2
googledrivedownloader==0.4
GPUtil==1.4.0
graphviz==0.10.1
greenlet==1.1.2
grpcio==1.32.0
gspread==3.4.2
gspread-dataframe==3.0.8
gym==0.17.3
h5py==2.10.0
HeapDict==1.0.1
hijri-converter==2.2.4
holidays==0.10.5.2
holoviews==1.14.9
html5lib==1.0.1
httpimport==0.5.18
httplib2==0.17.4
httplib2shim==0.0.3
humanize==0.5.1
hyperopt==0.1.2
ideep4py==2.0.0.post3
idna==2.10
imageio==2.19.3
imageio-ffmpeg==0.4.7
imagesize==1.3.0
imbalanced-learn==0.8.1
imblearn==0.0
imgaug==0.2.9
importlib-metadata==4.11.4
importlib-resources==5.7.1
imutils==0.5.4
inflect==2.1.0
iniconfig==1.1.1
intel-openmp==2022.1.0
intervaltree==2.1.0
ipykernel==4.10.1
ipython==5.5.0
ipython-genutils==0.2.0
ipython-sql==0.3.9
ipywidgets==7.7.0
itsdangerous==1.1.0
jax==0.3.8
jaxlib @ https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.3.7+cuda11.cudnn805-cp37-none-manylinux2014_x86_64.whl
jedi==0.18.1
jieba==0.42.1
Jinja2==2.11.3
joblib==1.1.0
jpeg4py==0.1.4
jsonschema==4.3.3
jupyter==1.0.0
jupyter-client==5.3.5
jupyter-console==5.2.0
jupyter-core==4.10.0
jupyterlab-pygments==0.2.2
jupyterlab-widgets==1.1.0
kaggle==1.5.12
kapre==0.3.7
keras==2.8.0
Keras-Preprocessing==1.1.2
keras-vis==0.4.1
kiwisolver==1.4.2
korean-lunar-calendar==0.2.1
libclang==14.0.1
librosa==0.8.1
lightgbm==2.2.3
llvmlite==0.34.0
lmdb==0.99
LunarCalendar==0.0.9
lxml==4.2.6
Markdown==3.3.7
MarkupSafe==2.0.1
matplotlib==3.2.2
matplotlib-inline==0.1.3
matplotlib-venn==0.11.7
missingno==0.5.1
mistune==0.8.4
mizani==0.6.0
mkl==2019.0
mlxtend==0.14.0
more-itertools==8.13.0
moviepy==0.2.3.5
mpmath==1.2.1
msgpack==1.0.3
multiprocess==0.70.13
multitasking==0.0.10
murmurhash==1.0.7
music21==5.5.0
natsort==5.5.0
nbclient==0.6.4
nbconvert==5.6.1
nbformat==5.4.0
nest-asyncio==1.5.5
netCDF4==1.5.8
networkx==2.6.3
nibabel==3.0.2
nltk==3.2.5
notebook==5.3.1
numba==0.51.2
numexpr==2.8.1
numpy==1.19.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauth2client==4.1.3
oauthlib==3.2.0
okgrade==0.4.3
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
openpyxl==3.0.10
opt-einsum==3.3.0
osqp==0.6.2.post0
packaging==21.3
palettable==3.3.0
pandas==1.3.5
pandas-datareader==0.9.0
pandas-gbq==0.13.3
pandas-profiling==1.4.1
pandocfilters==1.5.0
panel==0.12.1
param==1.12.1
parso==0.8.3
pathlib==1.0.1
patsy==0.5.2
pep517==0.12.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.1.1
pip-tools==6.2.0
plac==1.1.3
plotly==5.5.0
plotnine==0.6.0
pluggy==0.7.1
pooch==1.6.0
portpicker==1.3.9
prefetch-generator==1.0.1
preshed==3.0.6
prettytable==3.3.0
progressbar2==3.38.0
prometheus-client==0.14.1
promise==2.3
prompt-toolkit==1.0.18
protobuf==3.17.3
psutil==5.9.1
psycopg2==2.7.6.1
ptyprocess==0.7.0
py==1.11.0
pyarrow==6.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycocotools==2.0.4
pycparser==2.21
pyct==0.4.8
pydata-google-auth==1.4.0
pydot==1.3.0
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
pyemd==0.5.1
pyerfa==2.0.0.1
pyglet==1.5.0
Pygments==2.6.1
pygobject==3.26.1
pymc3==3.11.4
PyMeeus==0.5.11
pymongo==4.1.1
pymystem3==0.2.0
PyOpenGL==3.1.6
pyparsing==3.0.9
pyrsistent==0.18.1
pysndfile==1.3.8
PySocks==1.7.1
pystan==2.19.1.1
pytest==3.6.4
python-apt==0.0.0
python-chess==0.23.11
python-dateutil==2.8.2
python-louvain==0.16
python-slugify==6.1.2
python-utils==3.2.3
pytz==2022.1
pyviz-comms==2.2.0
PyWavelets==1.3.0
PyYAML==3.13
pyzmq==23.0.0
qdldl==0.1.5.post2
qtconsole==5.3.0
QtPy==2.1.0
regex==2019.12.20
requests==2.23.0
requests-oauthlib==1.3.1
resampy==0.2.2
rpy2==3.4.5
rsa==4.8
scikit-image==0.18.3
scikit-learn==1.0.2
scipy==1.4.1
screen-resolution-extra==0.0.0
scs==3.2.0
seaborn==0.11.2
semver==2.13.0
Send2Trash==1.8.0
setuptools-git==1.2
Shapely==1.8.2
simplegeneric==0.8.1
six==1.15.0
sklearn==0.0
sklearn-pandas==1.8.0
smart-open==6.0.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
SoundFile==0.10.3.post1
soupsieve==2.3.2.post1
spacy==2.2.4
Sphinx==1.8.6
sphinxcontrib-serializinghtml==1.1.5
sphinxcontrib-websupport==1.2.4
SQLAlchemy==1.4.36
sqlparse==0.4.2
srsly==1.0.5
statsmodels==0.10.2
sympy==1.7.1
tables==3.7.0
tabulate==0.8.9
tblib==1.7.0
tenacity==8.0.1
tensorboard==2.8.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-datasets==4.0.1
tensorflow-estimator==2.4.0
tensorflow-gcs-config==2.8.0
tensorflow-gpu==2.4.4
tensorflow-hub==0.12.0
tensorflow-io-gcs-filesystem==0.26.0
tensorflow-metadata==1.8.0
tensorflow-probability==0.16.0
termcolor==1.1.0
terminado==0.13.3
testpath==0.6.0
text-unidecode==1.3
textblob==0.15.3
Theano-PyMC==1.1.2
thinc==7.4.0
threadpoolctl==3.1.0
tifffile==2021.11.2
tinycss2==1.1.1
tomli==2.0.1
toolz==0.11.2
torch @ https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/cu113/torchaudio-0.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
torchsummary==1.5.1
torchtext==0.12.0
torchvision @ https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp37-cp37m-linux_x86_64.whl
tornado==5.1.1
tqdm==4.64.0
traitlets==5.1.1
tweepy==3.10.0
typeguard==2.7.1
typing-extensions==3.7.4.3
tzlocal==1.5.1
uritemplate==3.0.1
urllib3==1.24.3
vega-datasets==0.9.0
wasabi==0.9.1
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.6.0
wordcloud==1.5.0
wrapt==1.12.1
xarray==0.20.2
xarray-einstats==0.2.2
xgboost==0.90
xkit==0.0.0
xlrd==1.1.0
xlwt==1.3.0
yellowbrick==1.4
zict==2.2.0
zipp==3.8.0
[/code]
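For context on the AttributeError in the traceback above: during `model.compile`, tf.keras resolves each loss by reading `loss.__name__`, and a plain callable class instance (like Faceswap's `LossWrapper` here) exposes `__name__` only on the class object, not on the instance. A minimal pure-Python sketch of that attribute-lookup failure (an illustration of the Python mechanics only, not Faceswap's actual code or fix):

```python
# A callable class standing in for a custom loss wrapper.
class LossWrapper:
    def __call__(self, y_true, y_pred):
        return abs(y_true - y_pred)

loss = LossWrapper()

# __name__ lives on the class object (via the metaclass), so instance
# lookup fails -- this is what Keras trips over in _get_loss_object:
assert LossWrapper.__name__ == "LossWrapper"
assert not hasattr(loss, "__name__")

# One workaround is to give the instance a __name__ attribute;
# subclassing tf.keras.losses.Loss also avoids the lookup entirely,
# since Keras special-cases Loss instances.
loss.__name__ = "loss_wrapper"
assert hasattr(loss, "__name__")
```

In practice this kind of error usually means the installed Faceswap code and the TensorFlow version in the Colab runtime are out of step, so updating the notebook's pinned TensorFlow (or pulling the latest Faceswap) is the first thing to try.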

User avatar
aolvera27
Posts: 26
Joined: Thu May 27, 2021 3:53 am
Answers: 1
Has thanked: 4 times
Been thanked: 4 times

Re: [Resource] Google Colab Notebook

Post by aolvera27 »

Code:

=============== State File =================
{
  "name": "villain",
  "sessions": {
    "1": {
      "timestamp": 1651689293.0022888,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "2": {
      "timestamp": 1651716732.3646872,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "3": {
      "timestamp": 1651735350.2566671,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "4": {
      "timestamp": 1651776155.0898001,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "5": {
      "timestamp": 1651800023.6516602,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1532,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "6": {
      "timestamp": 1651805529.5389862,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1968,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "7": {
      "timestamp": 1651812445.5683017,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "8": {
      "timestamp": 1651838959.2282043,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 6500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "9": {
      "timestamp": 1651879197.192251,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "10": {
      "timestamp": 1651893229.0111692,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2529,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "11": {
      "timestamp": 1651958353.3528628,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2771,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "12": {
      "timestamp": 1651970509.759633,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "13": {
      "timestamp": 1652042714.8004472,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 11500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "14": {
      "timestamp": 1652062041.6493256,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "15": {
      "timestamp": 1652098425.758038,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "16": {
      "timestamp": 1652112777.493132,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "17": {
      "timestamp": 1652137890.5809624,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 8000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "18": {
      "timestamp": 1652152713.3583393,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "19": {
      "timestamp": 1652184457.8716378,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 8500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "20": {
      "timestamp": 1652198499.9201207,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3200,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "21": {
      "timestamp": 1652274318.561867,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "22": {
      "timestamp": 1652283391.5006235,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "23": {
      "timestamp": 1652374097.0041842,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "24": {
      "timestamp": 1652383009.5159283,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 700,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "25": {
      "timestamp": 1652400797.6049325,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 7000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "26": {
      "timestamp": 1652411774.8229468,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "27": {
      "timestamp": 1652412806.131401,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 7000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "28": {
      "timestamp": 1652444029.1448076,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "29": {
      "timestamp": 1652466677.0501719,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1600,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "30": {
      "timestamp": 1652484177.3774493,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 8000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "31": {
      "timestamp": 1652499739.8278904,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "32": {
      "timestamp": 1652676777.4008546,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2200,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "33": {
      "timestamp": 1652680539.4426527,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1150,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "34": {
      "timestamp": 1652710993.0247912,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "35": {
      "timestamp": 1652807627.1182382,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2859,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "36": {
      "timestamp": 1652817401.9484005,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 59,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "37": {
      "timestamp": 1652822140.931361,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "38": {
      "timestamp": 1652829521.2359154,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 281,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "39": {
      "timestamp": 1652830585.3633387,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2535,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "40": {
      "timestamp": 1652845042.9317653,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "41": {
      "timestamp": 1652875894.1165903,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 6000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "42": {
      "timestamp": 1652901668.2758937,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1065,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "43": {
      "timestamp": 1652936812.3559206,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "44": {
      "timestamp": 1652962432.2124135,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "45": {
      "timestamp": 1652979618.634039,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1050,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "46": {
      "timestamp": 1652983182.6720092,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2507,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "47": {
      "timestamp": 1652994792.9223847,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "48": {
      "timestamp": 1653005443.662191,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2294,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "49": {
      "timestamp": 1653009228.2475197,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 6000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "50": {
      "timestamp": 1653019964.0197885,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5499,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "51": {
      "timestamp": 1653100638.3457983,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1510,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "52": {
      "timestamp": 1653103198.0830362,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5180,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "53": {
      "timestamp": 1653111757.2778986,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "54": {
      "timestamp": 1653136606.4207375,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2509,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "55": {
      "timestamp": 1653166158.6181242,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2201,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "56": {
      "timestamp": 1653169586.4802525,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2300,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "57": {
      "timestamp": 1653173190.0276456,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2110,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "58": {
      "timestamp": 1653185306.383126,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5890,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "59": {
      "timestamp": 1653194659.7918572,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "60": {
      "timestamp": 1653198839.720594,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3900,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "61": {
      "timestamp": 1653205856.0273051,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1701,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "62": {
      "timestamp": 1653210395.408393,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "63": {
      "timestamp": 1653364654.2457354,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1369,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "64": {
      "timestamp": 1653397515.679175,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 347,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "65": {
      "timestamp": 1653398175.1410089,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 364,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "66": {
      "timestamp": 1653398844.6377409,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3819,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "67": {
      "timestamp": 1653404836.1051157,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3025,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "68": {
      "timestamp": 1653409522.1831665,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 125,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "69": {
      "timestamp": 1653409792.610065,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4050,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "70": {
      "timestamp": 1653415990.5462806,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "71": {
      "timestamp": 1653425359.1031485,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "72": {
      "timestamp": 1653449329.1563957,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "73": {
      "timestamp": 1653454370.6266906,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 200,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "74": {
      "timestamp": 1653455221.5014799,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "75": {
      "timestamp": 1653480529.1800327,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5415,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "76": {
      "timestamp": 1653489427.256157,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 885,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "77": {
      "timestamp": 1653491204.5337708,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "78": {
      "timestamp": 1653495327.1150038,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2820,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "79": {
      "timestamp": 1653500218.716553,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 4032,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "80": {
      "timestamp": 1653536278.0548172,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "81": {
      "timestamp": 1653536680.2179592,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 7500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "82": {
      "timestamp": 1653567296.8441625,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 5447,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "83": {
      "timestamp": 1653576535.1040874,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "84": {
      "timestamp": 1653578770.7583907,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 9000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "85": {
      "timestamp": 1653590612.4893246,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "86": {
      "timestamp": 1653598115.0985003,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 5600,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "87": {
      "timestamp": 1653605627.6177325,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 2100,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "88": {
      "timestamp": 1653623505.6879919,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 1306,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "89": {
      "timestamp": 1653625360.8121805,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 4000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "90": {
      "timestamp": 1653653789.1465983,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 494,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "91": {
      "timestamp": 1653654617.9396813,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 7500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "92": {
      "timestamp": 1653665275.9185812,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 4000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "93": {
      "timestamp": 1653673426.5718946,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 5000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "94": {
      "timestamp": 1653686493.2910888,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "95": {
      "timestamp": 1653687235.8874726,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 5000,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "96": {
      "timestamp": 1653693915.005373,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 1472,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "97": {
      "timestamp": 1653695953.2463233,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 12,
      "iterations": 2500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    }
  },
  "lowest_avg_loss": {
    "a": 0.007339549023119441,
    "b": 0.01085482928340873
  },
  "iterations": 316472,
  "config": {
    "centering": "face",
    "coverage": 85.0,
    "optimizer": "adam",
    "learning_rate": 5e-05,
    "epsilon_exponent": -7,
    "allow_growth": false,
    "mixed_precision": false,
    "nan_protection": true,
    "convert_batchsize": 16,
    "loss_function": "ssim",
    "mask_loss_function": "mse",
    "l2_reg_term": 100,
    "eye_multiplier": 3,
    "mouth_multiplier": 2,
    "penalized_mask_loss": true,
    "mask_type": "extended",
    "mask_blur_kernel": 3,
    "mask_threshold": 4,
    "learn_mask": false,
    "lowmem": false
  }
}

================= Configs ==================
--------- extract.ini ---------

[global]
allow_growth:             False

[align.fan]
batch-size:               12

[detect.mtcnn]
minsize:                  20
scalefactor:              0.709
batch-size:               8
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7

[detect.s3fd]
confidence:               70
batch-size:               4

[detect.cv2_dnn]
confidence:               50

[mask.bisenet_fp]
batch-size:               8
weights:                  faceswap
include_ears:             False
include_hair:             False
include_glasses:          True

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- convert.ini ---------

[scaling.sharpen]
method:                   none
amount:                   150
radius:                   0.3
threshold:                5.0

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0
erosion_top:              0.0
erosion_bottom:           0.0
erosion_left:             0.0
erosion_right:            0.0

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.match_hist]
threshold:                99.0

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto
skip_mux:                 False

[writer.pillow]
format:                   png
draw_transparent:         False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

[writer.opencv]
format:                   png
draw_transparent:         False
jpg_quality:              75
png_compress_level:       3

--------- .faceswap ---------
backend:                  nvidia

--------- train.ini ---------

[global]
centering:                face
coverage:                 85.0
icnr_init:                False
conv_aware_init:          False
optimizer:                adam
learning_rate:            5e-05
epsilon_exponent:         -7
reflect_padding:          False
allow_growth:             False
mixed_precision:          False
nan_protection:           True
convert_batchsize:        16

[global.loss]
loss_function:            ssim
mask_loss_function:       mse
l2_reg_term:              100
eye_multiplier:           3
mouth_multiplier:         2
penalized_mask_loss:      True
mask_type:                extended
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False

[trainer.original]
preview_images:           14
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4

[model.original]
lowmem:                   False

[model.dfaker]
output_size:              128

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.villain]
lowmem:                   False

[model.unbalanced]
input_size:               128
lowmem:                   False
clipnorm:                 True
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.dfl_h128]
lowmem:                   False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.phaze_a]
output_size:              128
shared_fc:                none
enable_gblock:            True
split_fc:                 True
split_gblock:             False
split_decoders:           False
enc_architecture:         fs_original
enc_scaling:              7
enc_load_weights:         True
bottleneck_type:          dense
bottleneck_norm:          none
bottleneck_size:          1024
bottleneck_in_encoder:    True
fc_depth:                 1
fc_min_filters:           1024
fc_max_filters:           1024
fc_dimensions:            4
fc_filter_slope:          -0.5
fc_dropout:               0.0
fc_upsampler:             upsample2d
fc_upsamples:             1
fc_upsample_filters:      512
fc_gblock_depth:          3
fc_gblock_min_nodes:      512
fc_gblock_max_nodes:      512
fc_gblock_filter_slope:   -0.5
fc_gblock_dropout:        0.0
dec_upscale_method:       subpixel
dec_upscales_in_fc:       0
dec_norm:                 none
dec_min_filters:          64
dec_max_filters:          512
dec_slope_mode:           full
dec_filter_slope:         -0.45
dec_res_blocks:           1
dec_output_kernel:        5
dec_gaussian:             True
dec_skip_last_residual:   True
freeze_layers:            keras_encoder
load_layers:              encoder
fs_original_depth:        4
fs_original_min_filters:  128
fs_original_max_filters:  1024
fs_original_use_alt:      False
mobilenet_width:          1.0
mobilenet_depth:          1
mobilenet_dropout:        0.001
mobilenet_minimalistic:   False

[model.dfl_sae]
input_size:               128
clipnorm:                 True
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

User avatar
torzdf
Posts: 1844
Joined: Fri Jul 12, 2019 12:53 am
Answers: 136
Has thanked: 79 times
Been thanked: 371 times

Re: [Resource] Google Colab Notebook

Post by torzdf »

aolvera27 wrote: Wed Jun 08, 2022 4:23 pm

I'm sorry. I feel like it's always me having these issues. I haven't changed my workflow, but I'm now getting this problem when starting a training session: Caught exception in thread: '_training_0'
AttributeError: 'LossWrapper' object has no attribute 'name'

Try with a more recent version of Tensorflow.
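As a quick sanity check before reinstalling anything, something like this (a minimal sketch using only the standard library, assuming Python 3.8+) reports which TensorFlow version the runtime currently has, without importing TensorFlow itself:

```python
# Minimal sketch: report the TensorFlow version installed in the runtime,
# using only the standard library (importlib.metadata needs Python 3.8+).
from importlib import metadata

def tf_version():
    """Return the installed TensorFlow version string, or None if absent."""
    try:
        return metadata.version("tensorflow")
    except metadata.PackageNotFoundError:
        return None

print(tf_version())
```

If this prints an older version than the notebook's install cell was supposed to provide, the runtime probably needs to be restarted after the upgrade.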

My word is final


User avatar
XxArTiuSxX
Posts: 8
Joined: Tue May 31, 2022 3:20 am
Has thanked: 18 times

Re: [Resource] Google Colab Notebook

Post by XxArTiuSxX »

Thanks to this wonderful community and phenomenal people!

From the notebook all runs fine:

  1. Mount my drive -- cleared
  2. Check GPU -- cleared
  3. Keep alive script - done
  4. Install faceswap (used Variant 1 and in another instance variant 2) -- both instances worked with the "good to go" message
  5. Execute training -- both data sets are loaded then I get this error:

python3: can't open file '{path}/faceswap.py': [Errno 2] No such file or directory

Any help is much appreciated, thank you.


User avatar
Andentze
Posts: 9
Joined: Mon Dec 06, 2021 6:34 pm
Has thanked: 1 time
Been thanked: 15 times

Re: [Resource] Google Colab Notebook

Post by Andentze »

XxArTiuSxX wrote: Tue Jun 14, 2022 9:28 pm

Thanks to this wonderful community and phenomenal people!

From the notebook all runs fine:

  1. Mount my drive -- cleared
  2. Check GPU -- cleared
  3. Keep alive script - done
  4. Install faceswap (used Variant 1 and in another instance variant 2) -- both instances worked with the "good to go" message
  5. Execute training -- both data sets are loaded then I get this error:

python3: can't open file '{path}/faceswap.py': [Errno 2] No such file or directory

Any help is much appreciated, thank you.

Oh whoops. I forgot to remove that part when removing the "drive_install" thing. Ok, this time, it should be fixed.
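For anyone hitting the same `No such file or directory` error, a quick way to confirm where the checkout actually landed before launching training (a minimal sketch; `/content/faceswap` is just the usual Colab clone location and may differ in your notebook):

```python
# Minimal sketch: verify faceswap.py exists before launching training.
# "/content/faceswap" is an assumed default clone path, not guaranteed.
import os

def find_faceswap(repo="/content/faceswap"):
    """Return the full path to faceswap.py inside `repo`, or None if missing."""
    script = os.path.join(repo, "faceswap.py")
    return script if os.path.isfile(script) else None

# If this prints None, re-run the install cell (or fix the path) first.
print(find_faceswap())
```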


User avatar
aolvera27
Posts: 26
Joined: Thu May 27, 2021 3:53 am
Answers: 1
Has thanked: 4 times
Been thanked: 4 times

Re: [Resource] Google Colab Notebook

Post by aolvera27 »

Tried several things (again) but to no avail (once again). I'm getting this error when trying to convert.

Code: Select all

ImportError: cannot import name 'Literal' from 'typing' (/usr/local/lib/python3.7/typing.py)

I've tried downgrading and upgrading Tensorflow, and even Python, but that doesn't fix the issue and only produces even weirder error messages.
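`typing.Literal` only exists from Python 3.8 onward, so on a 3.7 runtime the usual workaround is the `typing_extensions` backport. A hedged sketch of the import pattern (whether faceswap itself can be patched this way is another question; upgrading the runtime's Python may still be the real fix):

```python
# On Python 3.7, typing.Literal does not exist; the typing_extensions
# package provides a backport with the same behaviour.
try:
    from typing import Literal  # Python 3.8+
except ImportError:
    from typing_extensions import Literal  # pip install typing_extensions

# Example use: constrain a value to a fixed set of strings.
Mode = Literal["extract", "train", "convert"]
```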

This is what I get with the standard installation:

Code: Select all

06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   upscale_256_1_pixelshuffler (Pi  (None, 64, 64, 256)  0           ['upscale_256_1_conv2d_conv2d[0][
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   xelShuffler)                                                      0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   leaky_re_lu_6 (LeakyReLU)       (None, 64, 64, 256)   0           ['upscale_256_1_pixelshuffler[0][
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE                                                                     0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_256_1_conv2d_0 (Conv2D  (None, 64, 64, 256)  590080      ['leaky_re_lu_6[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   )
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_256_1_leakyrelu_1 (Lea  (None, 64, 64, 256)  0           ['residual_256_1_conv2d_0[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   kyReLU)
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_256_1_conv2d_1 (Conv2D  (None, 64, 64, 256)  590080      ['residual_256_1_leakyrelu_1[0][0
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   )                                                                 ]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   add_21 (Add)                    (None, 64, 64, 256)   0           ['residual_256_1_conv2d_1[0][0]',
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE                                                                      'leaky_re_lu_6[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_256_1_leakyrelu_3 (Lea  (None, 64, 64, 256)  0           ['add_21[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   kyReLU)
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   upscale_128_1_conv2d_conv2d (Co  (None, 64, 64, 512)  1180160     ['residual_256_1_leakyrelu_3[0][0
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   nv2D)                                                             ]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   upscale_128_1_pixelshuffler (Pi  (None, 128, 128, 128  0          ['upscale_128_1_conv2d_conv2d[0][
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   xelShuffler)                    )                                 0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   leaky_re_lu_7 (LeakyReLU)       (None, 128, 128, 128  0           ['upscale_128_1_pixelshuffler[0][
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE                                   )                                 0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_128_17_conv2d_0 (Conv2  (None, 128, 128, 128  147584     ['leaky_re_lu_7[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   D)                              )
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_128_17_leakyrelu_1 (Le  (None, 128, 128, 128  0          ['residual_128_17_conv2d_0[0][0]'
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   akyReLU)                        )                                 ]
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_128_17_conv2d_1 (Conv2  (None, 128, 128, 128  147584     ['residual_128_17_leakyrelu_1[0][
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   D)                              )                                 0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   add_22 (Add)                    (None, 128, 128, 128  0           ['residual_128_17_conv2d_1[0][0]'
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE                                   )                                 , 'leaky_re_lu_7[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   residual_128_17_leakyrelu_3 (Le  (None, 128, 128, 128  0          ['add_22[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   akyReLU)                        )
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   face_out_b_conv2d (Conv2D)      (None, 128, 128, 3)   9603        ['residual_128_17_leakyrelu_3[0][
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE                                                                     0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   face_out_b (Activation)         (None, 128, 128, 3)   0           ['face_out_b_conv2d[0][0]']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  ====================================================================================================
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Total params: 21,543,555
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Trainable params: 21,543,555
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Non-trainable params: 0
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  ____________________________________________________________________________________________________
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Model: "villain_inference"
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  ____________________________________________________________________________________________________
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   Layer (type)                                Output Shape                            Param #
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  ====================================================================================================
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   face_in_a (InputLayer)                      [(None, 128, 128, 3)]                   0
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   encoder (Functional)                        (None, 16, 16, 512)                     112027904
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE   decoder_b (Functional)                      (None, 128, 128, 3)                     21543555
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  ====================================================================================================
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Total params: 133,571,459
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Trainable params: 133,571,459
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  Non-trainable params: 0
06/15/2022 20:35:22 MainProcess     MainThread                     _base           <lambda>                       VERBOSE  ____________________________________________________________________________________________________
06/15/2022 20:35:22 MainProcess     MainThread                     convert         _load_model                    DEBUG    Loaded Model
06/15/2022 20:35:22 MainProcess     MainThread                     convert         _get_batchsize                 DEBUG    Getting batchsize
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    Initializing NvidiaStats
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    Initializing PyNVML for Nvidia GPU.
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    GPU Device count: 1
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    Active GPU Devices: [0]
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    GPU Handles found: 1
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    GPU Driver: 460.32.03
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    GPU Devices: ['Tesla T4']
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    GPU VRAM: [15109.75]
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    GPU VRAM free: [12827.75]
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    Shutting down NVML
06/15/2022 20:35:22 MainProcess     MainThread                     _base           _log                           DEBUG    Initialized NvidiaStats
06/15/2022 20:35:22 MainProcess     MainThread                     convert         _get_batchsize                 DEBUG    Batchsize: 16
06/15/2022 20:35:22 MainProcess     MainThread                     convert         _get_batchsize                 DEBUG    Got batchsize: 16
06/15/2022 20:35:22 MainProcess     MainThread                     convert         _get_io_sizes                  DEBUG    {'input': 128, 'output': 128}
06/15/2022 20:35:22 MainProcess     MainThread                     multithreading  __init__                       DEBUG    Initializing MultiThread: (target: '_predict_faces', thread_count: 1)
06/15/2022 20:35:22 MainProcess     MainThread                     multithreading  __init__                       DEBUG    Initialized MultiThread: '_predict_faces'
06/15/2022 20:35:22 MainProcess     MainThread                     multithreading  start                          DEBUG    Starting thread(s): '_predict_faces'
06/15/2022 20:35:22 MainProcess     MainThread                     multithreading  start                          DEBUG    Starting thread 1 of 1: '_predict_faces_0'
06/15/2022 20:35:22 MainProcess     MainThread                     multithreading  start                          DEBUG    Started all threads '_predict_faces': 1
06/15/2022 20:35:22 MainProcess     MainThread                     convert         __init__                       DEBUG    Initialized Predict: (out_queue: <queue.Queue object at 0x7ff91b8f85d0>)
06/15/2022 20:35:22 MainProcess     MainThread                     alignments      mask_is_valid                  DEBUG    True
06/15/2022 20:35:22 MainProcess     MainThread                     utils           get_folder                     DEBUG    Requested path: '/content/drive/MyDrive/colab_files/faceswap/convert'
06/15/2022 20:35:22 MainProcess     MainThread                     utils           get_folder                     DEBUG    Returning: '/content/drive/MyDrive/colab_files/faceswap/convert'
06/15/2022 20:35:22 MainProcess     MainThread                     convert         pre_encode                     DEBUG    Writer pre_encode function: None
06/15/2022 20:35:22 MainProcess     MainThread                     convert         __init__                       DEBUG    Initializing Converter: (output_size: 128,  coverage_ratio: 0.85, centering: face, draw_transparent: False, pre_encode: None, arguments: Namespace(alignments_path='/content/drive/MyDrive/colab_files/faceswap/faces/ikmx2_01_alignments.fsa', colab=False, color_adjustment='avg-color', configfile=None, exclude_gpus=None, filter=None, frame_ranges=None, func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x7ff99204ab50>>, input_aligned_dir=None, input_dir='/content/drive/MyDrive/colab_files/faceswap/faces/ikmx2_01.mp4', jobs=0, keep_unchanged=False, logfile=None, loglevel='INFO', mask_type='bisenet-fp_face', model_dir='/content/drive/MyDrive/colab_files/faceswap/models/IkmQry', nfilter=None, on_the_fly=False, output_dir='/content/drive/MyDrive/colab_files/faceswap/convert', output_scale=100, redirect_gui=False, ref_threshold=0.4, reference_video=None, singleprocess=False, swap_model=False, trainer=None, writer='ffmpeg'), configfile: None)
06/15/2022 20:35:22 MainProcess     MainThread                     convert         _load_plugins                  DEBUG    Loading plugins. config: None
06/15/2022 20:35:22 MainProcess     MainThread                     plugin_loader   _import                        INFO     Loading Mask from Mask_Blend plugin...
Traceback (most recent call last):
  File "/content/faceswap/lib/cli/launcher.py", line 187, in execute_script
    process = script(arguments)
  File "/content/faceswap/scripts/convert.py", line 76, in __init__
    configfile=configfile)
  File "/content/faceswap/lib/convert.py", line 57, in __init__
    self._load_plugins()
  File "/content/faceswap/lib/convert.py", line 101, in _load_plugins
    disable_logging=disable_logging)(self._args.mask_type,
  File "/content/faceswap/plugins/plugin_loader.py", line 139, in get_converter
    return PluginLoader._import("convert.{}".format(category), name, disable_logging)
  File "/content/faceswap/plugins/plugin_loader.py", line 163, in _import
    module = import_module(mod)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/content/faceswap/plugins/convert/mask/mask_blend.py", line 4, in <module>
    from typing import List, Literal, Optional, Tuple
ImportError: cannot import name 'Literal' from 'typing' (/usr/local/lib/python3.7/typing.py)

============ System Information ============
encoding:            UTF-8
git_branch:          Not Found
git_commits:         Not Found
gpu_cuda:            11.1
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Tesla T4
gpu_devices_active:  GPU_0
gpu_driver:          460.32.03
gpu_vram:            GPU_0: 15109MB
os_machine:          x86_64
os_platform:         Linux-5.4.188+-x86_64-with-debian-buster-sid
os_release:          5.4.188+
py_command:          faceswap/faceswap.py convert -i /content/drive/MyDrive/colab_files/faceswap/faces/ikmx2_01.mp4 -o /content/drive/MyDrive/colab_files/faceswap/convert/ -al /content/drive/MyDrive/colab_files/faceswap/faces/ikmx2_01_alignments.fsa -m /content/drive/MyDrive/colab_files/faceswap/models/IkmQry -c avg-color -M bisenet-fp_face -w ffmpeg -osc 100 -l 0.4 -j 0 -L INFO
py_conda_version:    conda 4.12.0
py_implementation:   CPython
py_version:          3.7.10
py_virtual_env:      False
sys_cores:           2
sys_processor:       x86_64
sys_ram:             Total: 12986MB, Available: 11388MB, Used: 2354MB, Free: 800MB

=============== Pip Packages ===============
absl-py==0.15.0
astunparse==1.6.3
brotlipy==0.7.0
cachetools==5.2.0
certifi==2022.6.15
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1613413867554/work
chardet @ file:///home/conda/feedstock_root/build_artifacts/chardet_1610093487176/work
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1602866480661/work
conda==4.12.0
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1602876795648/work
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1615330556836/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
fastcluster==1.1.26
ffmpy==0.2.3
flatbuffers==1.12
gast==0.3.3
google-auth==2.8.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.32.0
h5py==2.10.0
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1593328102638/work
imageio @ file:///home/conda/feedstock_root/build_artifacts/imageio_1594044661732/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==4.11.4
install==1.3.5
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1633637554808/work
keras==2.7.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1655141583606/work
libclang==14.0.1
mamba @ file:///home/conda/feedstock_root/build_artifacts/mamba_1615043240478/work
Markdown==3.3.7
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-base_1594091695977/work
numpy==1.19.5
nvidia-ml-py==11.510.69
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.2.0
opencv-python==4.6.0.66
opt-einsum==3.3.0
pathlib==1.0.1
Pillow==9.0.1
protobuf==3.19.4
psutil==5.9.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycosat @ file:///home/conda/feedstock_root/build_artifacts/pycosat_1610094791171/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1608055815057/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1652235407899/work
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1610291444829/work
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1608156231189/work
requests-oauthlib==1.3.1
rsa==4.8
ruamel-yaml-conda @ file:///home/conda/feedstock_root/build_artifacts/ruamel_yaml_1611943443937/work
scikit-learn @ file:///home/conda/feedstock_root/build_artifacts/scikit-learn_1640464152916/work
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1637806658031/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1590081179328/work
tensorboard==2.8.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.7.0+zzzcolab20220506150900
tensorflow-estimator==2.7.0
tensorflow-gpu==2.4.4
tensorflow-io-gcs-filesystem==0.26.0
termcolor==1.1.0
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1648827244717/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1649051611147/work
typing-extensions==3.7.4.3
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1611695416663/work
Werkzeug==2.1.2
wrapt==1.12.1
zipp==3.8.0

============== Conda Packages ==============
# packages in environment at /usr/local:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                       1_gnu    conda-forge
absl-py                   0.15.0                   pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
brotlipy                  0.7.0           py37h5e8e339_1001    conda-forge
bzip2                     1.0.8                h7f98852_4    conda-forge
c-ares                    1.17.1               h7f98852_1    conda-forge
ca-certificates           2022.5.18.1          ha878542_0    conda-forge
cachetools                5.2.0                    pypi_0    pypi
certifi                   2022.6.15        py37h89c1867_0    conda-forge
cffi                      1.14.5           py37hc58025e_0    conda-forge
chardet                   4.0.0            py37h89c1867_1    conda-forge
colorama                  0.4.4              pyh9f0ad1d_0    conda-forge
conda                     4.12.0           py37h89c1867_0    conda-forge
conda-package-handling    1.7.2            py37hb5d75c8_0    conda-forge
cryptography              3.4.5            py37h5d9358c_1    conda-forge
cudatoolkit               11.2.2              hbe64b41_10    conda-forge
cudnn                     8.1.0.77             h90431f1_0    conda-forge
cycler                    0.11.0             pyhd8ed1ab_0    conda-forge
fastcluster               1.1.26                   pypi_0    pypi
ffmpeg                    4.3.2                hca11adc_0    conda-forge
ffmpy                     0.2.3                    pypi_0    pypi
flatbuffers               1.12                     pypi_0    pypi
freetype                  2.10.4               h0708190_1    conda-forge
gast                      0.3.3                    pypi_0    pypi
giflib                    5.2.1                h36c2ea0_2    conda-forge
gmp                       6.2.1                h58526e2_0    conda-forge
gnutls                    3.6.13               h85f3911_1    conda-forge
google-auth               2.8.0                    pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.32.0                   pypi_0    pypi
h5py                      2.10.0                   pypi_0    pypi
icu                       67.1                 he1b5a44_0    conda-forge
idna                      2.10               pyh9f0ad1d_0    conda-forge
imageio                   2.9.0                      py_0    conda-forge
imageio-ffmpeg            0.4.7              pyhd8ed1ab_0    conda-forge
importlib-metadata        4.11.4                   pypi_0    pypi
install                   1.3.5                    pypi_0    pypi
joblib                    1.1.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h166bdaf_1    conda-forge
keras                     2.7.0                    pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.3            py37h7cecad7_0    conda-forge
krb5                      1.17.2               h926e7f8_0    conda-forge
lame                      3.100             h7f98852_1001    conda-forge
lcms2                     2.12                 hddcbb42_0    conda-forge
ld_impl_linux-64          2.35.1               hea4e1c9_2    conda-forge
libarchive                3.5.1                h3f442fb_1    conda-forge
libblas                   3.9.0           15_linux64_openblas    conda-forge
libcblas                  3.9.0           15_linux64_openblas    conda-forge
libclang                  14.0.1                   pypi_0    pypi
libcurl                   7.75.0               hc4aaa36_0    conda-forge
libedit                   3.1.20191231         he28a2e2_2    conda-forge
libev                     4.33                 h516909a_1    conda-forge
libffi                    3.3                  h58526e2_2    conda-forge
libgcc-ng                 12.1.0              h8d9b700_16    conda-forge
libgfortran-ng            12.1.0              h69a702a_16    conda-forge
libgfortran5              12.1.0              hdcd56e2_16    conda-forge
libgomp                   12.1.0              h8d9b700_16    conda-forge
libiconv                  1.16                 h516909a_0    conda-forge
liblapack                 3.9.0           15_linux64_openblas    conda-forge
libnghttp2                1.43.0               h812cca2_0    conda-forge
libopenblas               0.3.20          pthreads_h78a6416_0    conda-forge
libpng                    1.6.37               h21135ba_2    conda-forge
libsolv                   0.7.17               h780b84a_0    conda-forge
libssh2                   1.9.0                ha56f1ee_6    conda-forge
libstdcxx-ng              12.1.0              ha89aaad_16    conda-forge
libtiff                   4.2.0                hbd63e13_2    conda-forge
libwebp                   1.2.0                h3452ae3_0    conda-forge
libwebp-base              1.2.0                h7f98852_2    conda-forge
libxml2                   2.9.10               h68273f3_2    conda-forge
lz4-c                     1.9.3                h9c3ff4c_0    conda-forge
lzo                       2.10              h516909a_1000    conda-forge
mamba                     0.8.0            py37h7f483ca_0    conda-forge
markdown                  3.3.7                    pypi_0    pypi
matplotlib                3.2.2                         1    conda-forge
matplotlib-base           3.2.2            py37h1d35a4c_1    conda-forge
ncurses                   6.2                  h58526e2_4    conda-forge
nettle                    3.6                  he412f7d_0    conda-forge
numpy                     1.19.5                   pypi_0    pypi
nvidia-ml-py              11.510.69                pypi_0    pypi
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.2.0                    pypi_0    pypi
opencv-python             4.6.0.66                 pypi_0    pypi
openh264                  2.1.1                h780b84a_0    conda-forge
openssl                   1.1.1o               h166bdaf_0    conda-forge
opt-einsum                3.3.0                    pypi_0    pypi
pathlib                   1.0.1                    pypi_0    pypi
pillow                    9.0.1            py37h22f2fdc_0  
pip                          21.0.1                        pyhd8ed1ab_0     conda-forge
protobuf                     3.19.4                        pypi_0           pypi
psutil                       5.9.1                         pypi_0           pypi
pyasn1                       0.4.8                         pypi_0           pypi
pyasn1-modules               0.2.8                         pypi_0           pypi
pycosat                      0.6.3                         py37h5e8e339_1006 conda-forge
pycparser                    2.20                          pyh9f0ad1d_2     conda-forge
pyopenssl                    20.0.1                        pyhd8ed1ab_0     conda-forge
pyparsing                    3.0.9                         pyhd8ed1ab_0     conda-forge
pysocks                      1.7.1                         py37h89c1867_3   conda-forge
python                       3.7.10                        hffdb5ce_100_cpython conda-forge
python-dateutil              2.8.2                         pyhd8ed1ab_0     conda-forge
python_abi                   3.7                           2_cp37m          conda-forge
readline                     8.0                           he28a2e2_2       conda-forge
reproc                       14.2.1                        h36c2ea0_0       conda-forge
reproc-cpp                   14.2.1                        h58526e2_0       conda-forge
requests                     2.25.1                        pyhd3deb0d_0     conda-forge
requests-oauthlib            1.3.1                         pypi_0           pypi
rsa                          4.8                           pypi_0           pypi
ruamel_yaml                  0.15.80                       py37h5e8e339_1004 conda-forge
scikit-learn                 1.0.2                         py37hf9e9bfc_0   conda-forge
scipy                        1.7.3                         py37hf2a6cf1_0   conda-forge
setuptools                   49.6.0                        py37h89c1867_3   conda-forge
six                          1.15.0                        pyh9f0ad1d_0     conda-forge
sqlite                       3.34.0                        h74cdb3f_0       conda-forge
tensorboard                  2.8.0                         pypi_0           pypi
tensorboard-data-server      0.6.1                         pypi_0           pypi
tensorboard-plugin-wit       1.8.1                         pypi_0           pypi
tensorflow                   2.7.0+zzzcolab20220506150900  pypi_0           pypi
tensorflow-estimator         2.7.0                         pypi_0           pypi
tensorflow-gpu               2.4.4                         pypi_0           pypi
tensorflow-io-gcs-filesystem 0.26.0                        pypi_0           pypi
termcolor                    1.1.0                         pypi_0           pypi
threadpoolctl                3.1.0                         pyh8a188c0_0     conda-forge
tk                           8.6.10                        h21135ba_1       conda-forge
tornado                      6.1                           py37h540881e_3   conda-forge
tqdm                         4.64.0                        pyhd8ed1ab_0     conda-forge
typing-extensions            3.7.4.3                       pypi_0           pypi
urllib3                      1.26.3                        pyhd8ed1ab_0     conda-forge
werkzeug                     2.1.2                         pypi_0           pypi
wheel                        0.36.2                        pyhd3deb0d_0     conda-forge
wrapt                        1.12.1                        pypi_0           pypi
x264                         1!161.3030                    h7f98852_1       conda-forge
xz                           5.2.5                         h516909a_1       conda-forge
yaml                         0.2.5                         h516909a_0       conda-forge
zipp                         3.8.0                         pypi_0           pypi
zlib                         1.2.11                        h516909a_1010    conda-forge
zstd                         1.4.9                         ha95c52a_0       conda-forge

================= Configs ==================

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[mask.vgg_obstructed]
batch-size: 2

[mask.unet_dfl]
batch-size: 8

[mask.bisenet_fp]
batch-size: 8
weights: faceswap
include_ears: False
include_hair: False
include_glasses: True

[mask.vgg_clear]
batch-size: 6

[detect.cv2_dnn]
confidence: 50

[detect.s3fd]
confidence: 70
batch-size: 4

[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7

--------- convert.ini ---------

[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
erosion_top: 0.0
erosion_bottom: 0.0
erosion_left: 0.0
erosion_right: 0.0

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- .faceswap ---------

backend: nvidia

--------- train.ini ---------

[global]
centering: face
coverage: 87.5
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
reflect_padding: False
allow_growth: False
mixed_precision: False
nan_protection: True
convert_batchsize: 16

[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.original]
lowmem: False

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.phaze_a]
output_size: 128
shared_fc: none
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: fs_original
enc_scaling: 7
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: none
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: subpixel
dec_upscales_in_fc: 0
dec_norm: none
dec_min_filters: 64
dec_max_filters: 512
dec_slope_mode: full
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
fs_original_use_alt: False
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001
mobilenet_minimalistic: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.dfl_h128]
lowmem: False

[model.dlight]
features: best
details: good
output_size: 256

[model.villain]
lowmem: False

[model.dfaker]
output_size: 128

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
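As an aside for anyone inspecting a dump like the one above: Faceswap's `.ini` config files use the `key: value` style, which Python's standard `configparser` reads out of the box (its default delimiters are `=` and `:`). This is just an illustrative sketch, not part of the notebook; the sample section contents are copied from the dump above and the casts shown are assumptions about how you might want to consume the values.

```python
# Sketch: parsing a Faceswap-style .ini fragment with the stdlib configparser.
# The sample text mirrors a slice of the train.ini dump above.
from configparser import ConfigParser

sample = """
[global]
centering: face
coverage: 87.5
mixed_precision: False

[model.dfaker]
output_size: 128
"""

parser = ConfigParser()
parser.read_string(sample)

# Values come back as strings; use the typed getters to cast.
coverage = parser.getfloat("global", "coverage")
mixed = parser.getboolean("global", "mixed_precision")
output_size = parser.getint("model.dfaker", "output_size")
print(coverage, mixed, output_size)  # 87.5 False 128
```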

User avatar
torzdf
Posts: 1844
Joined: Fri Jul 12, 2019 12:53 am
Answers: 136
Has thanked: 79 times
Been thanked: 371 times

Re: [Resource] Google Colab Notebook

Post by torzdf »

This was a python version bug. Should be fixed now. Try again.
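For anyone maintaining their own copy of the notebook, mismatches like this can be caught up front with a version guard in the first cell. A minimal sketch, assuming a hypothetical (3, 7) floor rather than the notebook's actual requirement:

```python
# Sketch: fail fast if the Colab runtime's Python doesn't meet the
# version the notebook's dependencies were built against.
import sys

def version_ok(current, minimum):
    """Return True if `current` (major, minor, ...) meets the `minimum` floor."""
    return tuple(current[:2]) >= tuple(minimum)

# The (3, 7) floor here is an assumption for illustration only.
if not version_ok(sys.version_info, (3, 7)):
    raise RuntimeError(
        f"Python {sys.version_info.major}.{sys.version_info.minor} detected; "
        "this notebook expects a newer runtime."
    )
```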

My word is final


User avatar
y2k_netizen
Posts: 4
Joined: Wed Nov 10, 2021 3:51 am
Has thanked: 6 times

Re: [Resource] Google Colab Notebook

Post by y2k_netizen »

This morning I started getting this message while using Faceswap in Colab:

Warning
You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.

Looks like the end of an era for deepfakes on Google Colab :cry: :cry: :cry:

