Hello all,
Having trained with Dlight, which yielded satisfactory results, I began training with Realface. However, I get a crash when I start it a second time. The initial start-up with a clean slate works perfectly fine, which seems very odd to me.
Code:
03/27/2020 15:06:32 MainProcess _training_0 _base _set_preview_feed DEBUG Setting preview feed: (side: 'a')
03/27/2020 15:06:32 MainProcess _training_0 _base _load_generator DEBUG Loading generator: a
03/27/2020 15:06:32 MainProcess _training_0 _base _load_generator DEBUG input_size: 128, output_shapes: [(128, 128, 3)]
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\Jim\\Documents\\DF\\A\\A Faceset\\Alignments.fsa', 'b': 'C:\\Users\\Jim\\Documents\\DF\\B\\B Faceset\\B_Alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.75, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': True}, landmarks: {}, masks: {'a': 3951, 'b': 7507}, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initialized TrainingDataGenerator
03/27/2020 15:06:32 MainProcess _training_0 training_data minibatch_ab DEBUG Queue batches: (image_count: 3951, batchsize: 14, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.75, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Output sizes: [128]
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initialized ImageAugmentation
03/27/2020 15:06:32 MainProcess _training_0 multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
03/27/2020 15:06:32 MainProcess _training_0 multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run'
03/27/2020 15:06:32 MainProcess _training_0 multithreading start DEBUG Starting thread(s): '_run'
03/27/2020 15:06:32 MainProcess _training_0 multithreading start DEBUG Starting thread 1 of 2: '_run_0'
03/27/2020 15:06:32 MainProcess _run_0 training_data _minibatch DEBUG Loading minibatch generator: (image_count: 3951, side: 'a', do_shuffle: True)
03/27/2020 15:06:32 MainProcess _training_0 multithreading start DEBUG Starting thread 2 of 2: '_run_1'
03/27/2020 15:06:32 MainProcess _run_1 training_data _minibatch DEBUG Loading minibatch generator: (image_count: 3951, side: 'a', do_shuffle: True)
03/27/2020 15:06:32 MainProcess _training_0 multithreading start DEBUG Started all threads '_run': 2
03/27/2020 15:06:32 MainProcess _training_0 _base _set_preview_feed DEBUG Set preview feed. Batchsize: 14
03/27/2020 15:06:32 MainProcess _training_0 _base _use_mask DEBUG True
03/27/2020 15:06:32 MainProcess _training_0 _base __init__ DEBUG Initializing Batcher: side: 'b', num_images: 7507, use_mask: True, batch_size: 6, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:32 MainProcess _training_0 _base _load_generator DEBUG Loading generator: b
03/27/2020 15:06:32 MainProcess _training_0 _base _load_generator DEBUG input_size: 128, output_shapes: [(128, 128, 3)]
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\Jim\\Documents\\DF\\A\\A Faceset\\Alignments.fsa', 'b': 'C:\\Users\\Jim\\Documents\\DF\\B\\B Faceset\\B_Alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.75, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': True}, landmarks: {}, masks: {'a': 3951, 'b': 7507}, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initialized TrainingDataGenerator
03/27/2020 15:06:32 MainProcess _training_0 training_data minibatch_ab DEBUG Queue batches: (image_count: 7507, batchsize: 6, side: 'b', do_shuffle: True, is_preview, False, is_timelapse: False)
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initializing ImageAugmentation: (batchsize: 6, is_display: False, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.75, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Output sizes: [128]
03/27/2020 15:06:32 MainProcess _training_0 training_data __init__ DEBUG Initialized ImageAugmentation
03/27/2020 15:06:32 MainProcess _training_0 multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
03/27/2020 15:06:32 MainProcess _training_0 multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run'
03/27/2020 15:06:32 MainProcess _training_0 multithreading start DEBUG Starting thread(s): '_run'
03/27/2020 15:06:32 MainProcess _training_0 multithreading start DEBUG Starting thread 1 of 2: '_run_0'
03/27/2020 15:06:32 MainProcess _run_0 training_data _minibatch DEBUG Loading minibatch generator: (image_count: 7507, side: 'b', do_shuffle: True)
03/27/2020 15:06:33 MainProcess _training_0 multithreading start DEBUG Starting thread 2 of 2: '_run_1'
03/27/2020 15:06:33 MainProcess _run_1 training_data _minibatch DEBUG Loading minibatch generator: (image_count: 7507, side: 'b', do_shuffle: True)
03/27/2020 15:06:33 MainProcess _training_0 multithreading start DEBUG Started all threads '_run': 2
03/27/2020 15:06:33 MainProcess _training_0 _base _set_preview_feed DEBUG Setting preview feed: (side: 'b')
03/27/2020 15:06:33 MainProcess _training_0 _base _load_generator DEBUG Loading generator: b
03/27/2020 15:06:33 MainProcess _training_0 _base _load_generator DEBUG input_size: 128, output_shapes: [(128, 128, 3)]
03/27/2020 15:06:33 MainProcess _training_0 training_data __init__ DEBUG Initializing TrainingDataGenerator: (model_input_size: 128, model_output_shapes: [(128, 128, 3)], training_opts: {'alignments': {'a': 'C:\\Users\\Jim\\Documents\\DF\\A\\A Faceset\\Alignments.fsa', 'b': 'C:\\Users\\Jim\\Documents\\DF\\B\\B Faceset\\B_Alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.75, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': True}, landmarks: {}, masks: {'a': 3951, 'b': 7507}, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:33 MainProcess _training_0 training_data __init__ DEBUG Initialized TrainingDataGenerator
03/27/2020 15:06:33 MainProcess _training_0 training_data minibatch_ab DEBUG Queue batches: (image_count: 7507, batchsize: 14, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
03/27/2020 15:06:33 MainProcess _training_0 training_data __init__ DEBUG Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 128, output_shapes: [(128, 128, 3)], coverage_ratio: 0.75, config: {'coverage': 75.0, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': False, 'conv_aware_init': True, 'subpixel_upscaling': False, 'reflect_padding': False, 'penalized_mask_loss': True, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
03/27/2020 15:06:33 MainProcess _training_0 training_data __init__ DEBUG Output sizes: [128]
03/27/2020 15:06:33 MainProcess _training_0 training_data __init__ DEBUG Initialized ImageAugmentation
03/27/2020 15:06:33 MainProcess _training_0 multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
03/27/2020 15:06:33 MainProcess _training_0 multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run'
03/27/2020 15:06:33 MainProcess _training_0 multithreading start DEBUG Starting thread(s): '_run'
03/27/2020 15:06:33 MainProcess _training_0 multithreading start DEBUG Starting thread 1 of 2: '_run_0'
03/27/2020 15:06:33 MainProcess _run_0 training_data _minibatch DEBUG Loading minibatch generator: (image_count: 7507, side: 'b', do_shuffle: True)
03/27/2020 15:06:33 MainProcess _training_0 multithreading start DEBUG Starting thread 2 of 2: '_run_1'
03/27/2020 15:06:33 MainProcess _run_1 training_data _minibatch DEBUG Loading minibatch generator: (image_count: 7507, side: 'b', do_shuffle: True)
03/27/2020 15:06:33 MainProcess _training_0 multithreading start DEBUG Started all threads '_run': 2
03/27/2020 15:06:33 MainProcess _training_0 _base _set_preview_feed DEBUG Set preview feed. Batchsize: 14
03/27/2020 15:06:33 MainProcess _training_0 _base _set_tensorboard DEBUG Enabling TensorBoard Logging
03/27/2020 15:06:33 MainProcess _training_0 _base _set_tensorboard DEBUG Setting up TensorBoard Logging. Side: a
03/27/2020 15:06:33 MainProcess _training_0 _base name DEBUG model name: 'realface'
03/27/2020 15:06:33 MainProcess _training_0 _base _tensorboard_kwargs DEBUG Tensorflow version: [1, 15, 0]
03/27/2020 15:06:33 MainProcess _training_0 _base _tensorboard_kwargs DEBUG {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0}
03/27/2020 15:06:33 MainProcess _training_0 _base _set_tensorboard DEBUG Setting up TensorBoard Logging. Side: b
03/27/2020 15:06:33 MainProcess _training_0 _base name DEBUG model name: 'realface'
03/27/2020 15:06:33 MainProcess _training_0 _base _tensorboard_kwargs DEBUG Tensorflow version: [1, 15, 0]
03/27/2020 15:06:33 MainProcess _training_0 _base _tensorboard_kwargs DEBUG {'histogram_freq': 0, 'batch_size': 64, 'write_graph': True, 'write_grads': True, 'update_freq': 'batch', 'profile_batch': 0}
03/27/2020 15:06:33 MainProcess _training_0 _base _set_tensorboard INFO Enabled TensorBoard Logging
03/27/2020 15:06:33 MainProcess _training_0 _base _use_mask DEBUG True
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initializing Samples: model: '<plugins.train.model.realface.Model object at 0x000002A270AAB548>', use_mask: True, coverage_ratio: 0.75)
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initialized Samples
03/27/2020 15:06:33 MainProcess _training_0 _base _use_mask DEBUG True
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initializing Timelapse: model: <plugins.train.model.realface.Model object at 0x000002A270AAB548>, use_mask: True, coverage_ratio: 0.75, image_count: 14, batchers: '{'a': <plugins.train.trainer._base.Batcher object at 0x000002A23B5E08C8>, 'b': <plugins.train.trainer._base.Batcher object at 0x000002A23B5E0388>}')
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initializing Samples: model: '<plugins.train.model.realface.Model object at 0x000002A270AAB548>', use_mask: True, coverage_ratio: 0.75)
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initialized Samples
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initialized Timelapse
03/27/2020 15:06:33 MainProcess _training_0 _base __init__ DEBUG Initialized Trainer
03/27/2020 15:06:33 MainProcess _training_0 train _load_trainer DEBUG Loaded Trainer
03/27/2020 15:06:33 MainProcess _training_0 train _run_training_cycle DEBUG Running Training Cycle
03/27/2020 15:06:33 MainProcess _run_1 training_data initialize DEBUG Initializing constants. training_size: 256
03/27/2020 15:06:33 MainProcess _run_1 training_data initialize DEBUG Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(32, 224, None), 'warp_mapx': '[[[ .]]]'}
03/27/2020 15:06:33 MainProcess _run_0 training_data initialize DEBUG Initializing constants. training_size: 256
03/27/2020 15:06:33 MainProcess _run_0 training_data initialize DEBUG Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(32, 224, None), 'warp_mapx': '[[[ .]]]'}
03/27/2020 15:06:33 MainProcess _run_0 training_data initialize DEBUG Initializing constants. training_size: 256
03/27/2020 15:06:33 MainProcess _run_1 training_data initialize DEBUG Initializing constants. training_size: 256
03/27/2020 15:06:33 MainProcess _run_1 training_data initialize DEBUG Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(32, 224, None), 'warp_mapx': '[[[ .]]]'}
03/27/2020 15:06:33 MainProcess _run_0 training_data initialize DEBUG Initialized constants: {'clahe_base_contrast': 2, 'tgt_slices': slice(32, 224, None), 'warp_mapx': '[[[ .]]]'}
03/27/2020 15:06:35 MainProcess _training_0 multithreading run DEBUG Error in thread (_training_0): Cross device functions not supported
03/27/2020 15:06:35 MainProcess MainThread train _monitor DEBUG Thread error detected
03/27/2020 15:06:35 MainProcess MainThread train _monitor DEBUG Closed Monitor
03/27/2020 15:06:35 MainProcess MainThread train _end_thread DEBUG Ending Training thread
03/27/2020 15:06:35 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
03/27/2020 15:06:35 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
03/27/2020 15:06:35 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training_0'
03/27/2020 15:06:35 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training_0'
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools initialize DEBUG PlaidML already initialized
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools get_supported_devices DEBUG [<plaidml._DeviceConfig object at 0x000002A3A71A1D88>]
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools get_all_devices DEBUG Experimental Devices: [<plaidml._DeviceConfig object at 0x000002A3A7009FC8>]
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools get_all_devices DEBUG [<plaidml._DeviceConfig object at 0x000002A3A7009FC8>, <plaidml._DeviceConfig object at 0x000002A3A71A1D88>]
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools __init__ DEBUG Initialized: PlaidMLStats
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools supported_indices DEBUG [1]
03/27/2020 15:06:36 MainProcess MainThread plaidml_tools supported_indices DEBUG [1]
Traceback (most recent call last):
File "C:\Users\Jim\faceswap\lib\cli.py", line 128, in execute_script
process.process()
File "C:\Users\Jim\faceswap\scripts\train.py", line 159, in process
self._end_thread(thread, err)
File "C:\Users\Jim\faceswap\scripts\train.py", line 199, in _end_thread
thread.join()
File "C:\Users\Jim\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\Jim\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Jim\faceswap\scripts\train.py", line 224, in _training
raise err
File "C:\Users\Jim\faceswap\scripts\train.py", line 214, in _training
self._run_training_cycle(model, trainer)
File "C:\Users\Jim\faceswap\scripts\train.py", line 303, in _run_training_cycle
trainer.train_one_step(viewer, timelapse)
File "C:\Users\Jim\faceswap\plugins\train\trainer\_base.py", line 316, in train_one_step
raise err
File "C:\Users\Jim\faceswap\plugins\train\trainer\_base.py", line 283, in train_one_step
loss[side] = batcher.train_one_batch()
File "C:\Users\Jim\faceswap\plugins\train\trainer\_base.py", line 424, in train_one_batch
loss = self._model.predictors[self._side].train_on_batch(model_inputs, model_targets)
File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\plaidml\keras\backend.py", line 175, in __call__
self._invoker.invoke()
File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\plaidml\__init__.py", line 1455, in invoke
return Invocation(self._ctx, self)
File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\plaidml\__init__.py", line 1464, in __init__
self._as_parameter_ = _lib().plaidml_schedule_invocation(ctx, invoker)
File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\plaidml\__init__.py", line 777, in _check_err
self.raise_last_status()
File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\plaidml\library.py", line 131, in raise_last_status
raise self.last_status()
plaidml.exceptions.Unknown: Cross device functions not supported
============ System Information ============
encoding: cp1252
git_branch: staging
git_commits: 9c2414a lib.alignments - Update video meta data to handle training alignments files/frame ranges. 08be32c GUI Sliders - Validate text box entry. 86a5921 Bugfix - Alignments Tool, DFL Conversion. 9fe4a65 Bugfix: Mask Tool. 3d88630 Core Update (#995)
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: Advanced Micro Devices, Inc. - gfx900 (experimental), GPU_1: Advanced Micro Devices, Inc. - gfx900 (supported)
gpu_devices_active: GPU_0, GPU_1
gpu_driver: ['3004.8 (PAL,HSAIL)', '3004.8 (PAL,HSAIL)']
gpu_vram: GPU_0: 8176MB, GPU_1: 8176MB
os_machine: AMD64
os_platform: Windows-10-10.0.18362-SP0
os_release: 10
py_command: C:\Users\Jim\faceswap\faceswap.py train -A C:/Users/Jim/Documents/DF/A/A Faceset -ala C:/Users/Jim/Documents/DF/A/A Faceset/Alignments.fsa -B C:/Users/Jim/Documents/DF/B/B Faceset -alb C:/Users/Jim/Documents/DF/B/B Faceset/B_Alignments.fsa -m C:/Users/Jim/Documents/DF/Models/A X B/Realface/A X B (Realface) -t realface -bs 6 -it 1000000 -s 10 -ss 25000 -ps 50 -L INFO -gui
py_conda_version: conda 4.8.3
py_implementation: CPython
py_version: 3.7.6
py_virtual_env: True
sys_cores: 8
sys_processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
sys_ram: Total: 16344MB, Available: 9588MB, Used: 6756MB, Free: 9588MB
=============== Pip Packages ===============
absl-py==0.9.0
asn1crypto==1.3.0
astor==0.8.0
blinker==1.4
cachetools==3.1.1
certifi==2019.11.28
cffi==1.14.0
chardet==3.0.4
click==7.1.1
cloudpickle==1.3.0
cryptography==2.8
cycler==0.10.0
cytoolz==0.10.1
dask==2.12.0
decorator==4.4.2
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.2
gast==0.2.2
google-auth==1.11.2
google-auth-oauthlib==0.4.1
google-pasta==0.1.8
grpcio==1.27.2
h5py==2.9.0
idna==2.9
imageio==2.6.1
imageio-ffmpeg==0.4.1
joblib==0.14.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==3.1.3
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.17.4
nvidia-ml-py3==7.352.1
oauthlib==3.1.0
olefile==0.46
opencv-python==4.1.2.30
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==6.2.1
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.11.4
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser==2.20
PyJWT==1.7.1
pyOpenSSL==19.1.0
pyparsing==2.4.6
pyreadline==2.1
PySocks==1.7.1
python-dateutil==2.8.1
pytz==2019.3
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn==0.22.1
scipy==1.4.1
six==1.14.0
tensorboard==2.1.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.4
tqdm==4.43.0
urllib3==1.25.8
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1
============== Conda Packages ==============
# packages in environment at C:\Users\Jim\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.2.0 eigen
absl-py 0.9.0 py37_0
asn1crypto 1.3.0 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
blinker 1.4 py37_0
ca-certificates 2020.1.1 0
cachetools 3.1.1 py_0
certifi 2019.11.28 py37_0
cffi 1.14.0 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
click 7.1.1 py_0
cloudpickle 1.3.0 py_0
cryptography 2.8 py37h7a1dbc1_0
cycler 0.10.0 py37_0
cytoolz 0.10.1 py37he774522_0
dask-core 2.12.0 py_0
decorator 4.4.2 py_0
enum34 1.1.10 pypi_0 pypi
fastcluster 1.1.26 py37he350917_0 conda-forge
ffmpeg 4.2 h6538335_0 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
google-auth 1.11.2 py_0
google-auth-oauthlib 0.4.1 py_2
google-pasta 0.1.8 py_0
grpcio 1.27.2 py37h351948d_0
h5py 2.9.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
idna 2.9 py_1
imageio 2.6.1 py37_0
imageio-ffmpeg 0.4.1 py_0 conda-forge
intel-openmp 2020.0 166
joblib 0.14.1 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py37ha925a31_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.11.4 h7bd577a_0
libtiff 4.1.0 h56a325e_0
markdown 3.1.1 py37_0
matplotlib 3.1.1 py37hc8f65d3_0
matplotlib-base 3.1.3 py37h64f37c6_0
mkl 2020.0 166
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.0.15 py37h14836fe_0
mkl_random 1.1.0 py37h675688f_0
networkx 2.4 py_0
numpy 1.17.4 py37h4320e6b_0
numpy-base 1.17.4 py37hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py37_0
opencv-python 4.1.2.30 pypi_0 pypi
openssl 1.1.1e he774522_0
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py37_1
pillow 6.2.1 py37hdc69c19_0
pip 20.0.2 py37_1
plaidml 0.6.4 pypi_0 pypi
plaidml-keras 0.6.4 pypi_0 pypi
protobuf 3.11.4 py37h33f27b4_0
psutil 5.7.0 py37he774522_0
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.7 py_0
pycparser 2.20 py_0
pyjwt 1.7.1 py37_0
pyopenssl 19.1.0 py37_0
pyparsing 2.4.6 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pysocks 1.7.1 py37_0
python 3.7.6 h60c2a47_2
python-dateutil 2.8.1 py_0
python_abi 3.7 1_cp37m conda-forge
pytz 2019.3 py_0
pywavelets 1.1.1 py37he774522_0
pywin32 227 py37he774522_1
pyyaml 5.3 py37he774522_0
qt 5.9.7 vc14h73c81de_0
requests 2.23.0 py37_0
requests-oauthlib 1.3.0 py_0
rsa 4.0 py_0
scikit-image 0.16.2 py37h47e9c7a_0
scikit-learn 0.22.1 py37h6288b17_0
scipy 1.4.1 py37h9439919_0
setuptools 46.0.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.14.0 py37_0
sqlite 3.31.1 he774522_0
tensorboard 2.1.0 py3_0
tensorflow 1.15.0 eigen_py37h9f89a44_0
tensorflow-base 1.15.0 eigen_py37h07d2309_0
tensorflow-estimator 1.15.1 pyh2649769_0
termcolor 1.1.0 py37_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.4 py37he774522_1
tqdm 4.43.0 py_0
urllib3 1.25.8 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_1
werkzeug 0.16.1 py_0
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.12.1 py37he774522_1
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0
=============== State File =================
{
"name": "realface",
"sessions": {
"1": {
"timestamp": 1585334144.814402,
"no_logs": false,
"pingpong": false,
"loss_names": {
"a": [
"face_loss"
],
"b": [
"face_loss"
]
},
"batchsize": 2,
"iterations": 403,
"config": {
"learning_rate": 5e-05
}
},
"2": {
"timestamp": 1585335588.195469,
"no_logs": false,
"pingpong": false,
"loss_names": {
"a": [
"face_loss"
],
"b": [
"face_loss"
]
},
"batchsize": 8,
"iterations": 1,
"config": {
"learning_rate": 5e-05
}
}
},
"lowest_avg_loss": {
"a": 0.06745605170726776,
"b": 0.08948398381471634
},
"iterations": 404,
"inputs": {
"face_in": [
128,
128,
3
],
"mask_in": [
128,
128,
1
]
},
"training_size": 256,
"config": {
"coverage": 75.0,
"mask_type": "vgg-obstructed",
"mask_blur_kernel": 3,
"mask_threshold": 4,
"learn_mask": false,
"icnr_init": false,
"conv_aware_init": true,
"subpixel_upscaling": false,
"reflect_padding": false,
"penalized_mask_loss": true,
"loss_function": "mae",
"learning_rate": 5e-05,
"input_size": 128,
"output_size": 128,
"dense_nodes": 1536,
"complexity_encoder": 128,
"complexity_decoder": 512
}
}
================= Configs ==================
--------- .faceswap ---------
backend: amd
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[scaling.sharpen]
method: gaussian
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
coverage: 75.0
mask_type: vgg-obstructed
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
icnr_init: False
conv_aware_init: True
subpixel_upscaling: False
reflect_padding: False
penalized_mask_loss: True
loss_function: mae
learning_rate: 5e-05
[model.realface]
input_size: 128
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
NOTE: I have removed the lines in the crash report related to "Initializing constants", as the post exceeded the 60,000 character limit.
In the output window I also get a line saying something like "failure to load model", almost as if it doesn't know where to find the required files, which is unusual since I didn't change anything.
Thanks for your help in advance.
Regards,
Jimmy