When using the StoJo preset file in Phaze-A, training fails with: TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
When training with another preset (dny512), everything works fine. Please help me.
- bryanlyon (Site Admin)
Re: When using the preset stojo file in Phaze-A, the error is reported
This is a known bug in the current version(s) of Keras. See https://github.com/keras-team/keras/issues/17199
This unfortunately means it's out of our hands to fix. Your best bet, if you want to stick with EfficientNet, is to install an older version of TensorFlow that doesn't have the bug.
We are currently aware of a potential workaround and may be implementing it in the near future.
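For illustration, here is a minimal sketch of the failure mode (not Faceswap's code). EfficientNet's preprocessing layer keeps its normalization constants as EagerTensors in the layer config, and Python's json encoder cannot handle them; Keras's own encoder reports the slightly different "Unable to serialize ... Unrecognized type" wording seen in the crash log.
Code:
import json
import tensorflow as tf

# An EagerTensor like the normalization constants in the crash log above.
mean = tf.constant([2.0896919, 2.1128857, 2.1081853])

try:
    json.dumps({"mean": mean})  # what the JSON encoder chokes on when saving the model config
except TypeError as err:
    print(err)  # "Object of type EagerTensor is not JSON serializable"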
Re: When using the preset stojo file in Phaze-A, the error is reported
The alternative is to switch to a different encoder. The EfficientNetV2 encoders work, but you may need to play around with encoder scaling to get the correct input size.
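If you do experiment with encoder scaling, the rough idea is that the percentage interpolates between the smallest and largest input sizes the chosen encoder supports, snapped to a clean multiple. The sketch below is an assumption about that mapping, not Faceswap's actual code, and the min/max sizes are placeholders:
Code:
# Hypothetical illustration of a percentage-based encoder scaling option.
def encoder_input_size(scaling_pct: int, min_size: int = 32, max_size: int = 384) -> int:
    size = min_size + (max_size - min_size) * scaling_pct / 100
    return int(round(size / 16)) * 16  # snap to a multiple of 16

print(encoder_input_size(60))  # -> 240 with the assumed 32..384 range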
My word is final
TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
bryanlyon wrote (Tue May 09, 2023 5:33 pm):
This is a known bug in the current version(s) of Keras. See https://github.com/keras-team/keras/issues/17199
This unfortunately means it's out of our hands to fix. Your best bet, if you want to stick with EfficientNet, is to install an older version of TensorFlow that doesn't have the bug.
We are currently aware of a potential workaround and may be implementing it in the near future.
Same issue. I have tried using an older version of TensorFlow (2.9.1) based on this suggestion: https://discuss.tensorflow.org/t/using- ... nsor/12518
But then the GUI will not launch, because Faceswap requires TensorFlow 2.10 or above.
I have also tried changing the encoder to efficientnet_v2_m, but I run into the error "ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x0000013483CD3340>".
Any suggestions on how to get Phaze-A + the StoJo preset working would be appreciated, thanks!
Example using StoJo presets:
GUI Output:
Code:
11/02/2023 23:32:19 INFO ===================================================
11/02/2023 23:32:19 INFO Starting
11/02/2023 23:32:19 INFO ===================================================
11/02/2023 23:32:19 INFO Loading data, this may take a while...
11/02/2023 23:32:19 INFO Loading Model from Phaze_A plugin...
11/02/2023 23:32:19 INFO No existing state file found. Generating.
11/02/2023 23:32:19 INFO Storing Mixed Precision compatible layers. Please ignore any following warnings about using mixed precision.
11/02/2023 23:32:20 INFO Mixed precision compatibility check (mixed_float16): OK
Your GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: NVIDIA GeForce RTX 4090, compute capability 8.9
11/02/2023 23:32:30 INFO Loading Trainer from Original plugin...
11/02/2023 23:32:30 WARNING Model failed to serialize as JSON. Ignoring... Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
11/02/2023 23:33:02 CRITICAL Error caught! Exiting...
11/02/2023 23:33:02 ERROR Caught exception in thread: '_training'
11/02/2023 23:33:05 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "C:\Users\foo\ai\faceswap\lib\cli\launcher.py", line 225, in execute_script
process.process()
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 209, in process
self._end_thread(thread, err)
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 249, in _end_thread
thread.join()
File "C:\Users\foo\ai\faceswap\lib\multithreading.py", line 224, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\foo\ai\faceswap\lib\multithreading.py", line 100, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 274, in _training
raise err
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 264, in _training
self._run_training_cycle(model, trainer)
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 366, in _run_training_cycle
model.io.save(is_exit=False)
File "C:\Users\foo\ai\faceswap\plugins\train\model\_base\io.py", line 203, in save
self._plugin.model.save(self.filename, include_optimizer=include_optimizer)
File "C:\Users\foo\AppData\Roaming\Python\Python310\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\ProgramData\anaconda3\envs\faceswapgui\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\ProgramData\anaconda3\envs\faceswapgui\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\ProgramData\anaconda3\envs\faceswapgui\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
11/02/2023 23:33:05 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\foo\ai\faceswap\crash_report.2023.11.02.233302670795.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.
Crash log:
Code:
11/02/2023 23:32:57 MainProcess _training generator set_timelapse_feed DEBUG Setting preview feed: (side: 'a', images: 1952)
11/02/2023 23:32:57 MainProcess _training generator _load_generator DEBUG Loading generator, side: a, is_display: True, batch_size: 14
11/02/2023 23:32:57 MainProcess _training generator __init__ DEBUG Initializing PreviewDataGenerator: (model: phaze_a, side: a, images: 1952 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'save_optimizer': 'exit', 'lr_finder_iterations': 1000, 'lr_finder_mode': 'set', 'lr_finder_strength': 'default', 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'bisenet-fp_face', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'mask_opacity': 30, 'mask_color': '#ff0000', 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
11/02/2023 23:32:57 MainProcess _training generator _get_output_sizes DEBUG side: a, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
11/02/2023 23:32:57 MainProcess _training cache __init__ DEBUG Initializing: RingBuffer (batch_size: 14, image_shape: (256, 256, 6), buffer_size: 2, dtype: uint8
11/02/2023 23:32:57 MainProcess _training cache __init__ DEBUG Initialized: RingBuffer
11/02/2023 23:32:57 MainProcess _training generator __init__ DEBUG Initialized PreviewDataGenerator
11/02/2023 23:32:57 MainProcess _training generator minibatch_ab DEBUG do_shuffle: False
11/02/2023 23:32:57 MainProcess _training multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run_3', thread_count: 1)
11/02/2023 23:32:57 MainProcess _training multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run_3'
11/02/2023 23:32:57 MainProcess _training multithreading start DEBUG Starting thread(s): '_run_3'
11/02/2023 23:32:57 MainProcess _training multithreading start DEBUG Starting thread 1 of 1: '_run_3'
11/02/2023 23:32:57 MainProcess _run_3 generator _minibatch DEBUG Loading minibatch generator: (image_count: 1952, do_shuffle: False)
11/02/2023 23:32:57 MainProcess _training multithreading start DEBUG Started all threads '_run_3': 1
11/02/2023 23:32:57 MainProcess _training generator set_timelapse_feed DEBUG Setting preview feed: (side: 'b', images: 1958)
11/02/2023 23:32:57 MainProcess _training generator _load_generator DEBUG Loading generator, side: b, is_display: True, batch_size: 14
11/02/2023 23:32:57 MainProcess _training generator __init__ DEBUG Initializing PreviewDataGenerator: (model: phaze_a, side: b, images: 1958 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'save_optimizer': 'exit', 'lr_finder_iterations': 1000, 'lr_finder_mode': 'set', 'lr_finder_strength': 'default', 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'bisenet-fp_face', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'mask_opacity': 30, 'mask_color': '#ff0000', 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
11/02/2023 23:32:57 MainProcess _training generator _get_output_sizes DEBUG side: b, model output shapes: [(None, 256, 256, 3), (None, 256, 256, 3)], output sizes: [256]
11/02/2023 23:32:57 MainProcess _training cache __init__ DEBUG Initializing: RingBuffer (batch_size: 14, image_shape: (256, 256, 6), buffer_size: 2, dtype: uint8
11/02/2023 23:32:57 MainProcess _training cache __init__ DEBUG Initialized: RingBuffer
11/02/2023 23:32:57 MainProcess _training generator __init__ DEBUG Initialized PreviewDataGenerator
11/02/2023 23:32:57 MainProcess _training generator minibatch_ab DEBUG do_shuffle: False
11/02/2023 23:32:57 MainProcess _training multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run_4', thread_count: 1)
11/02/2023 23:32:57 MainProcess _training multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run_4'
11/02/2023 23:32:57 MainProcess _training multithreading start DEBUG Starting thread(s): '_run_4'
11/02/2023 23:32:57 MainProcess _training multithreading start DEBUG Starting thread 1 of 1: '_run_4'
11/02/2023 23:32:57 MainProcess _run_4 generator _minibatch DEBUG Loading minibatch generator: (image_count: 1958, do_shuffle: False)
11/02/2023 23:32:57 MainProcess _training multithreading start DEBUG Started all threads '_run_4': 1
11/02/2023 23:32:57 MainProcess _training generator set_timelapse_feed DEBUG Set time-lapse feed: {'a': <generator object BackgroundGenerator.iterator at 0x0000022863C04C80>, 'b': <generator object BackgroundGenerator.iterator at 0x0000022863C05BD0>}
11/02/2023 23:32:57 MainProcess _training _base _setup DEBUG Set up time-lapse
11/02/2023 23:32:57 MainProcess _training _base output_timelapse DEBUG Getting time-lapse samples
11/02/2023 23:32:57 MainProcess _training generator generate_preview DEBUG Generating preview (is_timelapse: True)
11/02/2023 23:32:57 MainProcess _training generator generate_preview DEBUG Generated samples: is_timelapse: True, images: {'feed': {'a': (14, 256, 256, 3), 'b': (14, 256, 256, 3)}, 'samples': {'a': (14, 292, 292, 3), 'b': (14, 292, 292, 3)}, 'sides': {'a': (14, 256, 256, 1), 'b': (14, 256, 256, 1)}}
11/02/2023 23:32:57 MainProcess _training generator compile_sample DEBUG Compiling samples: (side: 'a', samples: 14)
11/02/2023 23:32:57 MainProcess _training generator compile_sample DEBUG Compiling samples: (side: 'b', samples: 14)
11/02/2023 23:32:57 MainProcess _training generator compile_sample DEBUG Compiled Samples: {'a': [(14, 256, 256, 3), (14, 292, 292, 3), (14, 256, 256, 1)], 'b': [(14, 256, 256, 3), (14, 292, 292, 3), (14, 256, 256, 1)]}
11/02/2023 23:32:57 MainProcess _training _base output_timelapse DEBUG Got time-lapse samples: {'a': 3, 'b': 3}
11/02/2023 23:32:57 MainProcess _training _base show_sample DEBUG Showing sample
11/02/2023 23:32:57 MainProcess _training _base _resize_sample DEBUG Resizing sample: (side: 'a', sample.shape: (14, 256, 256, 3), target_size: 224, scale: 0.875)
11/02/2023 23:32:57 MainProcess _training _base _resize_sample DEBUG Resized sample: (side: 'a' shape: (14, 224, 224, 3))
11/02/2023 23:32:57 MainProcess _training _base _resize_sample DEBUG Resizing sample: (side: 'b', sample.shape: (14, 256, 256, 3), target_size: 224, scale: 0.875)
11/02/2023 23:32:57 MainProcess _training _base _resize_sample DEBUG Resized sample: (side: 'b' shape: (14, 224, 224, 3))
11/02/2023 23:32:57 MainProcess _training _base _get_predictions DEBUG Getting Predictions
11/02/2023 23:33:01 MainProcess _training _base _get_predictions DEBUG Returning predictions: {'a_a': (14, 256, 256, 3), 'b_b': (14, 256, 256, 3), 'a_b': (14, 256, 256, 3), 'b_a': (14, 256, 256, 3)}
11/02/2023 23:33:01 MainProcess _training _base _to_full_frame DEBUG side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 256, 256, 3), (14, 256, 256, 3)])
11/02/2023 23:33:01 MainProcess _training _base _process_full DEBUG full_size: 292, prediction_size: 256, color: (0.0, 0.0, 1.0)
11/02/2023 23:33:01 MainProcess _training _base _process_full DEBUG Overlayed background. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _compile_masked DEBUG masked shapes: [(14, 256, 256, 3), (14, 256, 256, 3), (14, 256, 256, 3)]
11/02/2023 23:33:01 MainProcess _training _base _overlay_foreground DEBUG Overlayed foreground. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _overlay_foreground DEBUG Overlayed foreground. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _overlay_foreground DEBUG Overlayed foreground. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG side: 'a', width: 292
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG height: 64, total_width: 876
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG texts: ['Original (A)', 'Original > Original', 'Original > Swap'], text_sizes: [(163, 20), (264, 20), (231, 20)], text_x: [64, 306, 614], text_y: 42
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG header_box.shape: (64, 876, 3)
11/02/2023 23:33:01 MainProcess _training _base _to_full_frame DEBUG side: 'b', number of sample arrays: 3, prediction.shapes: [(14, 256, 256, 3), (14, 256, 256, 3)])
11/02/2023 23:33:01 MainProcess _training _base _process_full DEBUG full_size: 292, prediction_size: 256, color: (0.0, 0.0, 1.0)
11/02/2023 23:33:01 MainProcess _training _base _process_full DEBUG Overlayed background. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _compile_masked DEBUG masked shapes: [(14, 256, 256, 3), (14, 256, 256, 3), (14, 256, 256, 3)]
11/02/2023 23:33:01 MainProcess _training _base _overlay_foreground DEBUG Overlayed foreground. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _overlay_foreground DEBUG Overlayed foreground. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _overlay_foreground DEBUG Overlayed foreground. Shape: (14, 292, 292, 3)
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG side: 'b', width: 292
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG height: 64, total_width: 876
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG texts: ['Swap (B)', 'Swap > Swap', 'Swap > Original'], text_sizes: [(133, 20), (198, 20), (231, 20)], text_x: [79, 339, 614], text_y: 42
11/02/2023 23:33:01 MainProcess _training _base _get_headers DEBUG header_box.shape: (64, 876, 3)
11/02/2023 23:33:01 MainProcess _training _base _duplicate_headers DEBUG side: a header.shape: (64, 876, 3)
11/02/2023 23:33:01 MainProcess _training _base _duplicate_headers DEBUG side: b header.shape: (64, 876, 3)
11/02/2023 23:33:01 MainProcess _training _base _stack_images DEBUG Stack images
11/02/2023 23:33:01 MainProcess _training _base get_transpose_axes DEBUG Even number of images to stack
11/02/2023 23:33:01 MainProcess _training _base _stack_images DEBUG Stacked images
11/02/2023 23:33:01 MainProcess _training _base _compile_preview DEBUG Compiled sample
11/02/2023 23:33:01 MainProcess _training _base output_timelapse DEBUG Created time-lapse: 'W:\model\timelapse\1698985981.jpg'
11/02/2023 23:33:01 MainProcess _training train _run_training_cycle DEBUG Saving (save_iterations: True, save_now: False) Iteration: (iteration: 1)
11/02/2023 23:33:01 MainProcess _training io save DEBUG Backing up and saving models
11/02/2023 23:33:01 MainProcess _training io _get_save_averages DEBUG Getting save averages
11/02/2023 23:33:01 MainProcess _training io _get_save_averages DEBUG Average losses since last save: [0.43416038155555725, 0.5303509831428528]
11/02/2023 23:33:01 MainProcess _training io _should_backup DEBUG Set initial save iteration loss average for 'a': 0.43416038155555725
11/02/2023 23:33:01 MainProcess _training io _should_backup DEBUG Set initial save iteration loss average for 'b': 0.5303509831428528
11/02/2023 23:33:01 MainProcess _training io _should_backup DEBUG Updated lowest historical save iteration averages from: {'a': 0.43416038155555725, 'b': 0.5303509831428528} to: {'a': 0.43416038155555725, 'b': 0.5303509831428528}
11/02/2023 23:33:01 MainProcess _training io _should_backup DEBUG Should backup: True
11/02/2023 23:33:02 MainProcess _training attrs create DEBUG Creating converter from 5 to 3
11/02/2023 23:33:02 MainProcess _training multithreading run DEBUG Error in thread (_training): Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
11/02/2023 23:33:02 MainProcess MainThread train _monitor DEBUG Thread error detected
11/02/2023 23:33:02 MainProcess MainThread train _monitor DEBUG Closed Monitor
11/02/2023 23:33:02 MainProcess MainThread train _end_thread DEBUG Ending Training thread
11/02/2023 23:33:02 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
11/02/2023 23:33:02 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
11/02/2023 23:33:02 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training'
11/02/2023 23:33:02 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training'
Traceback (most recent call last):
File "C:\Users\foo\ai\faceswap\lib\cli\launcher.py", line 225, in execute_script
process.process()
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 209, in process
self._end_thread(thread, err)
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 249, in _end_thread
thread.join()
File "C:\Users\foo\ai\faceswap\lib\multithreading.py", line 224, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\foo\ai\faceswap\lib\multithreading.py", line 100, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 274, in _training
raise err
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 264, in _training
self._run_training_cycle(model, trainer)
File "C:\Users\foo\ai\faceswap\scripts\train.py", line 366, in _run_training_cycle
model.io.save(is_exit=False)
File "C:\Users\foo\ai\faceswap\plugins\train\model\_base\io.py", line 203, in save
self._plugin.model.save(self.filename, include_optimizer=include_optimizer)
File "C:\Users\foo\AppData\Roaming\Python\Python310\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\ProgramData\anaconda3\envs\faceswapgui\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\ProgramData\anaconda3\envs\faceswapgui\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\ProgramData\anaconda3\envs\faceswapgui\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
============ System Information ============
backend: nvidia
encoding: cp1252
git_branch: master
git_commits: 8e6c6c3 patch writer: Sort the json file by key
gpu_cuda: 11.8
gpu_cudnn: 8.9.5
gpu_devices: GPU_0: NVIDIA GeForce RTX 4090
gpu_devices_active: GPU_0
gpu_driver: 545.84
gpu_vram: GPU_0: 24564MB (678MB free)
os_machine: AMD64
os_platform: Windows-10-10.0.22621-SP0
os_release: 10
py_command: C:\Users\foo\ai\faceswap\faceswap.py train -A W:/fa -B W:/fb -m W:/model -t phaze-a -bs 8 -it 1000000 -D default -s 250 -ss 25000 -tia W:/fa -tib W:/fb -to W:/model/timelapse -L INFO -gui
py_conda_version: conda 23.9.0
py_implementation: CPython
py_version: 3.10.13
py_virtual_env: True
sys_cores: 32
sys_processor: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
sys_ram: Total: 130776MB, Available: 113851MB, Used: 16924MB, Free: 113851MB
=============== Pip Packages ===============
absl-py==2.0.0
astunparse==1.6.3
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.3.0
colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work
contourpy @ file:///C:/b/abs_d5rpy288vc/croots/recipe/contourpy_1663827418189/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
fastcluster @ file:///D:/bld/fastcluster_1695650232190/work
ffmpy @ file:///home/conda/feedstock_root/build_artifacts/ffmpy_1659474992694/work
flatbuffers==23.5.26
fonttools==4.25.0
gast==0.4.0
google-auth==2.23.3
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.59.0
h5py==3.10.0
idna==3.4
imageio @ file:///C:/b/abs_3eijmwdodc/croot/imageio_1695996500830/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1694632425602/work
joblib @ file:///C:/b/abs_1anqjntpan/croot/joblib_1685113317150/work
keras==2.10.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/b/abs_88mdhvtahm/croot/kiwisolver_1672387921783/work
libclang==16.0.6
Markdown==3.5
MarkupSafe==2.1.3
matplotlib @ file:///C:/b/abs_085jhivdha/croot/matplotlib-suite_1693812524572/work
mkl-fft @ file:///C:/b/abs_19i1y8ykas/croot/mkl_fft_1695058226480/work
mkl-random @ file:///C:/b/abs_edwkj1_o69/croot/mkl_random_1695059866750/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/b/abs_5fucrty5dc/croot/numexpr_1696515448831/work
numpy @ file:///C:/b/abs_9fu2cs2527/croot/numpy_and_numpy_base_1695830496596/work/dist/numpy-1.26.0-cp310-cp310-win_amd64.whl#sha256=11367989d61b64039738e0c68c95c6b797a41c4c75ec2147c0541b21163786eb
nvidia-ml-py @ file:///home/conda/feedstock_root/build_artifacts/nvidia-ml-py_1693425331741/work
oauthlib==3.2.2
opencv-python==4.8.1.78
opt-einsum==3.3.0
packaging @ file:///C:/b/abs_28t5mcoltc/croot/packaging_1693575224052/work
Pillow @ file:///C:/b/abs_153xikw91n/croot/pillow_1695134603563/work
ply==3.11
protobuf==3.19.6
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.5.0
pyasn1-modules==0.3.0
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==305.1
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp310-none-win_amd64.whl
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///C:/b/abs_55olq_4gzc/croot/scikit-learn_1690978955123/work
scipy==1.11.3
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.10.0
tensorflow-estimator==2.10.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.3.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/b/abs_0cbrstidzg/croot/tornado_1696937003724/work
tqdm @ file:///C:/b/abs_f76j9hg7pv/croot/tqdm_1679561871187/work
typing_extensions==4.8.0
urllib3==2.0.6
Werkzeug==3.0.0
wrapt==1.15.0
============== Conda Packages ==============
# packages in environment at C:\ProgramData\anaconda3\envs\faceswapgui:
#
# Name Version Build Channel
absl-py 2.0.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
blas 1.0 mkl
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
bzip2 1.0.8 he774522_0
ca-certificates 2023.7.22 h56e8100_0 conda-forge
cachetools 5.3.1 pypi_0 pypi
certifi 2023.7.22 pypi_0 pypi
charset-normalizer 3.3.0 pypi_0 pypi
colorama 0.4.6 py310haa95532_0
contourpy 1.0.5 py310h59b6b97_0
cudatoolkit 11.8.0 hd77b12b_0
cudnn 8.9.2.26 cuda11_0
cycler 0.11.0 pyhd3eb1b0_0
fastcluster 1.2.6 py310hecd3228_3 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.3.0 pyhb6f538c_0 conda-forge
flatbuffers 23.5.26 pypi_0 pypi
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 ha860e81_0
gast 0.4.0 pypi_0 pypi
giflib 5.2.1 h8cc25b3_3
git 2.40.1 haa95532_1
glib 2.69.1 h5dc1a3c_2
google-auth 2.23.3 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.59.0 pypi_0 pypi
h5py 3.10.0 pypi_0 pypi
icc_rt 2022.1.0 h6049295_2
icu 58.2 ha925a31_3
idna 3.4 pypi_0 pypi
imageio 2.31.4 py310haa95532_0
imageio-ffmpeg 0.4.9 pyhd8ed1ab_0 conda-forge
intel-openmp 2023.1.0 h59b6b97_46319
joblib 1.2.0 py310haa95532_0
jpeg 9e h2bbff1b_1
keras 2.10.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.4.4 py310hd77b12b_0
krb5 1.20.1 h5b6d351_0
lerc 3.0 hd77b12b_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libclang 16.0.6 pypi_0 pypi
libclang13 14.0.6 default_h8e68704_1
libdeflate 1.17 h2bbff1b_1
libffi 3.4.4 hd77b12b_0
libiconv 1.16 h2bbff1b_2
libpng 1.6.39 h8cc25b3_0
libpq 12.15 h906ac69_1
libtiff 4.5.1 hd77b12b_0
libwebp 1.3.2 hbc33d0d_0
libwebp-base 1.3.2 h2bbff1b_0
libxml2 2.10.4 h0ad7f3c_1
libxslt 1.1.37 h2bbff1b_1
libzlib 1.2.13 hcfcfb64_5 conda-forge
libzlib-wapi 1.2.13 hcfcfb64_5 conda-forge
lz4-c 1.9.4 h2bbff1b_0
markdown 3.5 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 py310haa95532_0
matplotlib-base 3.7.2 py310h4ed8f06_0
mkl 2023.1.0 h6b88ed4_46357
mkl-service 2.4.0 py310h2bbff1b_1
mkl_fft 1.3.8 py310h2bbff1b_0
mkl_random 1.2.4 py310h59b6b97_0
munkres 1.1.4 py_0
numexpr 2.8.7 py310h2cd9be0_0
numpy 1.26.0 py310h055cbcc_0
numpy-base 1.26.0 py310h65a83cf_0
nvidia-ml-py 12.535.108 pyhd8ed1ab_0 conda-forge
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.8.1.78 pypi_0 pypi
openssl 3.1.3 hcfcfb64_0 conda-forge
opt-einsum 3.3.0 pypi_0 pypi
packaging 23.1 py310haa95532_0
pcre 8.45 hd77b12b_0
pillow 9.4.0 py310hd77b12b_1
pip 23.2.1 py310haa95532_0
ply 3.11 py310haa95532_0
protobuf 3.19.6 pypi_0 pypi
psutil 5.9.0 py310h2bbff1b_0
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pyparsing 3.0.9 py310haa95532_0
pyqt 5.15.7 py310hd77b12b_0
pyqt5-sip 12.11.0 py310hd77b12b_0
python 3.10.13 he1021f5_0
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi 3.10 2_cp310 conda-forge
pywin32 305 py310h2bbff1b_0
pywinpty 2.0.2 py310h5da7b33_0
qt-main 5.15.2 h879a1e9_9
qt-webengine 5.15.9 h5bd16bc_7
qtwebkit 5.212 h2bbfb41_5
requests 2.31.0 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-learn 1.3.0 py310h4ed8f06_0
scipy 1.11.3 py310h309d312_0
setuptools 68.0.0 py310haa95532_0
sip 6.6.2 py310hd77b12b_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.2 h2bbff1b_0
tbb 2021.8.0 h59b6b97_0
tensorboard 2.10.1 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow 2.10.1 pypi_0 pypi
tensorflow-estimator 2.10.0 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.31.0 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0
tk 8.6.12 h2bbff1b_0
toml 0.10.2 pyhd3eb1b0_0
tornado 6.3.3 py310h2bbff1b_0
tqdm 4.65.0 py310h9909e9c_0
typing-extensions 4.8.0 pypi_0 pypi
tzdata 2023c h04d1e81_0
ucrt 10.0.22621.0 h57928b3_0 conda-forge
urllib3 2.0.6 pypi_0 pypi
vc 14.2 h21ff451_1
vc14_runtime 14.36.32532 hdcecf7f_17 conda-forge
vs2015_runtime 14.36.32532 h05e6639_17 conda-forge
werkzeug 3.0.0 pypi_0 pypi
wheel 0.41.2 py310haa95532_0
winpty 0.4.3 4
wrapt 1.15.0 pypi_0 pypi
xz 5.4.2 h8cc25b3_0
zlib 1.2.13 hcfcfb64_5 conda-forge
zlib-wapi 1.2.13 hcfcfb64_5 conda-forge
zstd 1.5.5 hd43e919_0
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
erosion_top: 0.0
erosion_bottom: 0.0
erosion_left: 0.0
erosion_right: 0.0
[scaling.sharpen]
method: None
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: None
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: jpg
draw_transparent: False
separate_mask: False
jpg_quality: 90
png_compress_level: 3
[writer.patch]
start_index: 0
index_offset: 0
number_padding: 6
include_filename: True
face_index_location: before
origin: bottom-left
empty_frames: blank
json_output: False
separate_mask: False
bit_depth: 16
format: png
png_compress_level: 3
tiff_compression_method: lzw
[writer.pillow]
format: png
draw_transparent: False
separate_mask: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
aligner_min_scale: 0.07
aligner_max_scale: 2.0
aligner_distance: 22.5
aligner_roll: 45.0
aligner_features: True
filter_refeed: True
save_filtered: False
realign_refeeds: True
filter_realign: True
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
cpu: True
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.bisenet_fp]
batch-size: 8
cpu: False
weights: faceswap
include_ears: False
include_hair: False
include_glasses: True
[mask.custom]
batch-size: 8
centering: face
fill: False
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
[recognition.vgg_face2]
batch-size: 16
cpu: False
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
centering: face
coverage: 87.5
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
save_optimizer: exit
lr_finder_iterations: 1000
lr_finder_mode: set
lr_finder_strength: default
autoclip: False
reflect_padding: False
allow_growth: False
mixed_precision: False
nan_protection: True
convert_batchsize: 16
[global.loss]
loss_function: ssim
loss_function_2: mse
loss_weight_2: 100
loss_function_3: None
loss_weight_3: 0
loss_function_4: None
loss_weight_4: 0
mask_loss_function: mse
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: bisenet-fp_face
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfaker]
output_size: 128
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.phaze_a]
output_size: 256
shared_fc: none
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: efficientnet_b4
enc_scaling: 60
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: none
bottleneck_size: 512
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1280
fc_max_filters: 1280
fc_dimensions: 8
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 1280
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: resize_images
dec_upscales_in_fc: 0
dec_norm: none
dec_min_filters: 160
dec_max_filters: 640
dec_slope_mode: full
dec_filter_slope: -0.33
dec_res_blocks: 1
dec_output_kernel: 3
dec_gaussian: True
dec_skip_last_residual: False
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
fs_original_use_alt: False
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001
mobilenet_minimalistic: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
mask_opacity: 30
mask_color: #ff0000
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
Re: When using the preset stojo file in Phaze-A, the error is reported
Yes, unfortunately that advice about earlier versions of TF is now outdated.
What I advise is:
- load the StoJo preset
- switch the Encoder to EfficientNetV2-S
This should work fine without needing to adjust encoder scaling (make sure you are creating a new model and not resuming an existing one); the equivalent train.ini change is sketched below.
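If you prefer editing the config directly, the change amounts to something like the following in the [model.phaze_a] section of train.ini (the exact option string for the V2-S encoder is an assumption here; selecting it in the GUI after loading the preset is the safer route):
Code:
[model.phaze_a]
enc_architecture: efficientnet_v2_s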
My word is final
Re: When using the preset stojo file in Phaze-A, the error is reported
Thank you, EfficientNetV2-S does "work" without a stack trace.
I'm still curious how folks are making StoJo work with default settings. Do you have any suggestions?
Re: When using the preset stojo file in Phaze-A, the error is reported
I believe it works if you use Mixed Precision. The issue comes when switching from mixed precision to full precision.
When you start a new model at full precision, Mixed Precision is briefly activated (we need to store which layers are compatible with Mixed Precision, and this is the only way to do it). When the model switches back to full precision, the bug is hit.
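As an assumed sketch of that flow (Faceswap's actual probe-and-save logic lives in its model IO code and is not shown here), the global Keras precision policy is toggled roughly like this:
Code:
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")  # probe which layers are compatible
# ... build the model, record the mixed-precision-compatible layers ...
mixed_precision.set_global_policy("float32")  # revert to full precision
# saving the model after this switch is where the EagerTensor bug is hit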
My word is final