Everything seems to work fine until I hit the Train button, then ...
Code:
Loading...
Setting Faceswap backend to NVIDIA
01/31/2021 21:46:06 INFO Log level set to: INFO
01/31/2021 21:46:08 INFO Model A Directory: D:\Video Projects\DazOut
01/31/2021 21:46:08 INFO Model B Directory: D:\Video Projects\MOutD
01/31/2021 21:46:08 INFO Training data directory: D:\Video Projects\MDazMod
01/31/2021 21:46:08 INFO ===================================================
01/31/2021 21:46:08 INFO Starting
01/31/2021 21:46:08 INFO Press 'Stop' to save and quit
01/31/2021 21:46:08 INFO ===================================================
01/31/2021 21:46:09 INFO Loading data, this may take a while...
01/31/2021 21:46:09 INFO Loading Model from Realface plugin...
01/31/2021 21:46:09 INFO No existing state file found. Generating.
01/31/2021 21:46:09 INFO Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
01/31/2021 21:46:09 INFO Enabling Mixed Precision Training.
01/31/2021 21:46:09 INFO Mixed precision compatibility check (mixed_float16): OK\nYour GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: GeForce RTX 2080 Ti, compute capability 7.5
01/31/2021 21:46:10 CRITICAL Error caught! Exiting...
01/31/2021 21:46:10 ERROR Caught exception in thread: '_training_0'
01/31/2021 21:46:11 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "D:\faceswap\lib\cli\launcher.py", line 182, in execute_script
process.process()
File "D:\faceswap\scripts\train.py", line 170, in process
self._end_thread(thread, err)
File "D:\faceswap\scripts\train.py", line 210, in _end_thread
thread.join()
File "D:\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "D:\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "D:\faceswap\scripts\train.py", line 232, in _training
raise err
File "D:\faceswap\scripts\train.py", line 220, in _training
model = self._load_model()
File "D:\faceswap\scripts\train.py", line 248, in _load_model
model.build()
File "D:\faceswap\plugins\train\model\_base.py", line 267, in build
self._model = self.build_model(inputs)
File "D:\faceswap\plugins\train\model\realface.py", line 69, in build_model
encoder = self.encoder()
File "D:\faceswap\plugins\train\model\realface.py", line 87, in encoder
var_x = ResidualBlock(encoder_complexity * 2**idx,
File "D:\faceswap\lib\model\nn_blocks.py", line 604, in __call__
var_x = Conv2D(self._filters,
File "D:\faceswap\lib\model\nn_blocks.py", line 116, in __init__
super().__init__(*args, padding=padding, kernel_initializer=initializer, **kwargs)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 655, in __init__
activation=activations.get(activation),
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\activations.py", line 529, in get
return deserialize(identifier)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\activations.py", line 488, in deserialize
return deserialize_keras_object(
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 377, in deserialize_keras_object
raise ValueError(
ValueError: Unknown activation function: leakyrelu
01/31/2021 21:46:11 CRITICAL An unexpected crash has occurred. Crash report written to 'D:\faceswap\crash_report.2021.01.31.214610300676.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.
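If I read the traceback right, the crash happens while Keras tries to turn the string 'leakyrelu' into an activation function: the installed TensorFlow (2.3.0, per the pip list below) apparently has no activation registered under that name, so `deserialize_keras_object` raises the ValueError. A minimal sketch of that kind of string-to-callable lookup (the `ACTIVATIONS` registry and `get_activation` helper here are hypothetical stand-ins for illustration, not the real `tensorflow.python.keras.activations` internals):

```python
# Hypothetical registry standing in for the Keras activation lookup table.
ACTIVATIONS = {
    "relu": lambda x: max(x, 0.0),
    "linear": lambda x: x,
}

def get_activation(identifier):
    """Resolve an activation the way Keras resolves string identifiers."""
    if callable(identifier):
        return identifier  # callables pass straight through
    try:
        return ACTIVATIONS[identifier]
    except KeyError:
        # Mirrors the failure path in deserialize_keras_object
        raise ValueError(f"Unknown activation function: {identifier}") from None

print(get_activation("relu")(-2.0))  # 0.0 -- a registered name resolves fine

try:
    get_activation("leakyrelu")      # unregistered name, as in the traceback
except ValueError as err:
    print(err)                       # Unknown activation function: leakyrelu
```

Since faceswap's `nn_blocks.py` passes `activation: leakyrelu` down to `Conv2D`, I suspect my faceswap checkout is newer than the TensorFlow build it is running against, so the name it expects just isn't registered there.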
The crash report is ...
Code:
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Added defaults: model.original
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.realface_defaults
01/31/2021 21:46:09 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Added defaults: model.realface
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.unbalanced_defaults
01/31/2021 21:46:09 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Added defaults: model.unbalanced
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.villain_defaults
01/31/2021 21:46:09 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Don't try to run this if you have a small GPU.\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Added defaults: model.villain
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.trainer.original_defaults
01/31/2021 21:46:09 MainProcess _training_0 config add_section DEBUG Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'disable_warp', datatype: '<class 'bool'>', default: 'False', info: 'Disable warp augmentation. Warping is integral to the Neural Network training. If you decide to disable warping, you should only do so towards the end of a model's training session.', rounding: 'None', min_max: None, choices: None, gui_radio: False, fixed: False, group: image augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
01/31/2021 21:46:09 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
01/31/2021 21:46:09 MainProcess _training_0 _config load_module DEBUG Added defaults: trainer.original
01/31/2021 21:46:09 MainProcess _training_0 config handle_config DEBUG Handling config
01/31/2021 21:46:09 MainProcess _training_0 config check_exists DEBUG Config file exists: 'D:\faceswap\config\train.ini'
01/31/2021 21:46:09 MainProcess _training_0 config load_config VERBOSE Loading config: 'D:\faceswap\config\train.ini'
01/31/2021 21:46:09 MainProcess _training_0 config validate_config DEBUG Validating config
01/31/2021 21:46:09 MainProcess _training_0 config check_config_change DEBUG Default config has not changed
01/31/2021 21:46:09 MainProcess _training_0 config check_config_choices DEBUG Checking config choices
01/31/2021 21:46:09 MainProcess _training_0 config check_config_choices DEBUG Checked config choices
01/31/2021 21:46:09 MainProcess _training_0 config validate_config DEBUG Validated config
01/31/2021 21:46:09 MainProcess _training_0 config handle_config DEBUG Handled config
01/31/2021 21:46:09 MainProcess _training_0 config __init__ DEBUG Initialized: Config
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global', option: 'learning_rate')
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'float'>, value: 5e-05)
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global', option: 'allow_growth')
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'bool'>, value: True)
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global', option: 'convert_batchsize')
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'int'>, value: 16)
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global.loss', option: 'eye_multiplier')
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'int'>, value: 3)
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global.loss', option: 'mouth_multiplier')
01/31/2021 21:46:09 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'int'>, value: 2)
01/31/2021 21:46:09 MainProcess _training_0 config changeable_items DEBUG Alterable for existing models: {'learning_rate': 5e-05, 'allow_growth': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initializing State: (model_dir: 'D:\Video Projects\MDazMod', model_name: 'realface', config_changeable_items: '{'learning_rate': 5e-05, 'allow_growth': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}', no_logs: False
01/31/2021 21:46:09 MainProcess _training_0 serializer get_serializer DEBUG <lib.serializer._JSONSerializer object at 0x0000016ECADB2CD0>
01/31/2021 21:46:09 MainProcess _training_0 _base _load DEBUG Loading State
01/31/2021 21:46:09 MainProcess _training_0 _base _load INFO No existing state file found. Generating.
01/31/2021 21:46:09 MainProcess _training_0 _base _new_session_id DEBUG 1
01/31/2021 21:46:09 MainProcess _training_0 _base _create_new_session DEBUG Creating new session. id: 1
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initialized State:
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initializing _Settings: (arguments: Namespace(alignments_path_a='D:\\Video Projects\\DazOut\\Dazzling blowjob with cum _alignments.fsa', alignments_path_b='D:\\Video Projects\\MOutD\\00003_alignments.fsa', batch_size=16, colab=False, configfile=None, distributed=False, exclude_gpus=None, func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x0000016EC26037F0>>, input_a='D:\\Video Projects\\DazOut', input_b='D:\\Video Projects\\MOutD', iterations=1000000, logfile=None, loglevel='INFO', model_dir='D:\\Video Projects\\MDazMod', no_augment_color=False, no_flip=False, no_logs=False, preview=False, preview_scale=100, redirect_gui=True, save_interval=250, snapshot_interval=25000, timelapse_input_a=None, timelapse_input_b=None, timelapse_output=None, trainer='realface', warp_to_landmarks=False, write_image=False), mixed_precision: True, allow_growth: True, is_predict: False)
01/31/2021 21:46:09 MainProcess _training_0 _base _set_tf_settings DEBUG Setting Tensorflow 'allow_growth' option
01/31/2021 21:46:09 MainProcess _training_0 _base _set_tf_settings INFO Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
01/31/2021 21:46:09 MainProcess _training_0 _base _set_tf_settings DEBUG Set Tensorflow 'allow_growth' option
01/31/2021 21:46:09 MainProcess _training_0 _base _set_keras_mixed_precision DEBUG use_mixed_precision: True, exclude_gpus: False
01/31/2021 21:46:09 MainProcess _training_0 _base _set_keras_mixed_precision INFO Enabling Mixed Precision Training.
01/31/2021 21:46:09 MainProcess _training_0 device_compatibility_check _log_device_compatibility_check INFO Mixed precision compatibility check (mixed_float16): OK\nYour GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: GeForce RTX 2080 Ti, compute capability 7.5
01/31/2021 21:46:09 MainProcess _training_0 _base _set_keras_mixed_precision DEBUG Enabled mixed precision. (Compute dtype: float16, variable_dtype: float32)
01/31/2021 21:46:09 MainProcess _training_0 _base _get_strategy DEBUG Using strategy: <tensorflow.python.distribute.distribute_lib._DefaultDistributionStrategy object at 0x0000016ECAE33C40>
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initialized _Settings
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initializing _Loss
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initialized: _Loss
01/31/2021 21:46:09 MainProcess _training_0 _base __init__ DEBUG Initialized ModelBase (Model)
01/31/2021 21:46:09 MainProcess _training_0 realface check_input_output DEBUG Input and output sizes are valid
01/31/2021 21:46:09 MainProcess _training_0 realface get_dense_width_upscalers_numbers DEBUG dense_width: 4, upscalers_no: 5
01/31/2021 21:46:09 MainProcess _training_0 _base strategy_scope DEBUG Using strategy scope: <tensorflow.python.distribute.distribute_lib._DefaultDistributionContext object at 0x0000016ECAE552E0>
01/31/2021 21:46:09 MainProcess _training_0 _base _get_inputs DEBUG Getting inputs
01/31/2021 21:46:09 MainProcess _training_0 _base _get_inputs DEBUG inputs: [<tf.Tensor 'face_in_a:0' shape=(None, 64, 64, 3) dtype=float32>, <tf.Tensor 'face_in_b:0' shape=(None, 64, 64, 3) dtype=float32>]
01/31/2021 21:46:09 MainProcess _training_0 nn_blocks _get_name DEBUG Generating block name: conv_128_0
01/31/2021 21:46:09 MainProcess _training_0 nn_blocks __init__ DEBUG name: conv_128_0, filters: 128, kernel_size: 5, strides: 2, padding: same, normalization: None, activation: leakyrelu, use_depthwise: False, kwargs: {})
01/31/2021 21:46:09 MainProcess _training_0 nn_blocks _get_default_initializer DEBUG Set default kernel_initializer: <tensorflow.python.keras.initializers.initializers_v2.HeUniform object at 0x0000016ECAEFAB20>
01/31/2021 21:46:09 MainProcess _training_0 nn_blocks _get_name DEBUG Generating block name: residual_128_0
01/31/2021 21:46:09 MainProcess _training_0 nn_blocks __init__ DEBUG name: residual_128_0, filters: 128, kernel_size: 3, padding: same, kwargs: {'use_bias': True, 'activation': 'leakyrelu'})
01/31/2021 21:46:09 MainProcess _training_0 nn_blocks _get_default_initializer DEBUG Set default kernel_initializer: <tensorflow.python.keras.initializers.initializers_v2.HeUniform object at 0x0000016ECAF26A60>
01/31/2021 21:46:09 MainProcess _training_0 multithreading run DEBUG Error in thread (_training_0): Unknown activation function: leakyrelu
01/31/2021 21:46:10 MainProcess MainThread train _monitor DEBUG Thread error detected
01/31/2021 21:46:10 MainProcess MainThread train _monitor DEBUG Closed Monitor
01/31/2021 21:46:10 MainProcess MainThread train _end_thread DEBUG Ending Training thread
01/31/2021 21:46:10 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
01/31/2021 21:46:10 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
01/31/2021 21:46:10 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training_0'
01/31/2021 21:46:10 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training_0'
Traceback (most recent call last):
File "D:\faceswap\lib\cli\launcher.py", line 182, in execute_script
process.process()
File "D:\faceswap\scripts\train.py", line 170, in process
self._end_thread(thread, err)
File "D:\faceswap\scripts\train.py", line 210, in _end_thread
thread.join()
File "D:\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "D:\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "D:\faceswap\scripts\train.py", line 232, in _training
raise err
File "D:\faceswap\scripts\train.py", line 220, in _training
model = self._load_model()
File "D:\faceswap\scripts\train.py", line 248, in _load_model
model.build()
File "D:\faceswap\plugins\train\model\_base.py", line 267, in build
self._model = self.build_model(inputs)
File "D:\faceswap\plugins\train\model\realface.py", line 69, in build_model
encoder = self.encoder()
File "D:\faceswap\plugins\train\model\realface.py", line 87, in encoder
var_x = ResidualBlock(encoder_complexity * 2**idx,
File "D:\faceswap\lib\model\nn_blocks.py", line 604, in __call__
var_x = Conv2D(self._filters,
File "D:\faceswap\lib\model\nn_blocks.py", line 116, in __init__
super().__init__(*args, padding=padding, kernel_initializer=initializer, **kwargs)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 655, in __init__
activation=activations.get(activation),
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\activations.py", line 529, in get
return deserialize(identifier)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\activations.py", line 488, in deserialize
return deserialize_keras_object(
File "C:\Users\Torkya\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 377, in deserialize_keras_object
raise ValueError(
ValueError: Unknown activation function: leakyrelu
============ System Information ============
encoding: cp1252
git_branch: master
git_commits: a62fddf lib.model - Maintenance - Add Depthwise option to Conv Block - Add Swish Activation function - remove res_block_follows and add_instance_norm_args - Add explicit normalization and activation args - Add K.resize_images layer plugins.train.model - Inference creation bugfix
gpu_cuda: 11.2
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: GeForce RTX 2080 Ti
gpu_devices_active: GPU_0
gpu_driver: 461.40
gpu_vram: GPU_0: 11264MB
os_machine: AMD64
os_platform: Windows-10-10.0.18362-SP0
os_release: 10
py_command: D:\faceswap\faceswap.py train -A D:/Video Projects/DazOut -ala D:/Video Projects/DazOut/Dazzling blowjob with cum _alignments.fsa -B D:/Video Projects/MOutD -alb D:/Video Projects/MOutD/00003_alignments.fsa -m D:/Video Projects/MDazMod -t realface -bs 16 -it 1000000 -s 250 -ss 25000 -ps 100 -L INFO -gui
py_conda_version: conda 4.9.2
py_implementation: CPython
py_version: 3.8.5
py_virtual_env: True
sys_cores: 24
sys_processor: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram: Total: 32677MB, Available: 25054MB, Used: 7622MB, Free: 25054MB
=============== Pip Packages ===============
absl-py @ file:///tmp/build/80754af9/absl-py_1607439979954/work
aiohttp @ file:///C:/ci/aiohttp_1607109697839/work
astunparse==1.6.3
async-timeout==3.0.1
attrs @ file:///tmp/build/80754af9/attrs_1604765588209/work
blinker==1.4
brotlipy==0.7.0
cachetools @ file:///tmp/build/80754af9/cachetools_1611600262290/work
certifi==2020.12.5
cffi @ file:///C:/ci/cffi_1606255208697/work
chardet @ file:///C:/ci/chardet_1605303225733/work
click @ file:///home/linux1/recipes/ci/click_1610990599742/work
cryptography==2.9.2
cycler==0.10.0
fastcluster==1.1.26
ffmpy==0.2.3
gast @ file:///tmp/build/80754af9/gast_1597433534803/work
google-auth @ file:///tmp/build/80754af9/google-auth_1607969906642/work
google-auth-oauthlib @ file:///tmp/build/80754af9/google-auth-oauthlib_1603929124518/work
google-pasta==0.2.0
grpcio @ file:///C:/ci/grpcio_1597406462198/work
h5py==2.10.0
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1609799311556/work
importlib-metadata @ file:///tmp/build/80754af9/importlib-metadata_1602276842396/work
joblib @ file:///tmp/build/80754af9/joblib_1607970656719/work
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing==1.1.0
kiwisolver @ file:///C:/ci/kiwisolver_1604014703538/work
Markdown @ file:///C:/ci/markdown_1605111189761/work
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.2.0
mkl-random==1.1.1
mkl-service==2.3.0
multidict @ file:///C:/ci/multidict_1600456481656/work
numpy @ file:///C:/ci/numpy_and_numpy_base_1603466732592/work
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.5.1.48
opt-einsum==3.1.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1609786840597/work
protobuf==3.13.0
psutil @ file:///C:/ci/psutil_1598370330503/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
PyJWT @ file:///C:/ci/pyjwt_1610893382614/work
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing @ file:///home/linux1/recipes/ci/pyparsing_1610983426697/work
pyreadline==2.1
PySocks @ file:///C:/ci/pysocks_1605287845585/work
python-dateutil @ file:///home/ktietz/src/ci/python-dateutil_1611928101742/work
pywin32==227
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
requests-oauthlib==1.3.0
rsa @ file:///tmp/build/80754af9/rsa_1610483308194/work
scikit-learn @ file:///C:/ci/scikit-learn_1598377018496/work
scipy @ file:///C:/ci/scipy_1604596260408/work
sip==4.19.13
six @ file:///C:/ci/six_1605187374963/work
tensorboard @ file:///home/builder/ktietz/conda/conda-bld/tensorboard_1604313476433/work/tmp_pip_dir
tensorboard-plugin-wit==1.6.0
tensorflow==2.3.0
tensorflow-estimator @ file:///tmp/build/80754af9/tensorflow-estimator_1599136169057/work/whl_temp/tensorflow_estimator-2.3.0-py2.py3-none-any.whl
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado @ file:///C:/ci/tornado_1606942392901/work
tqdm @ file:///tmp/build/80754af9/tqdm_1611857934208/work
typing-extensions @ file:///tmp/build/80754af9/typing_extensions_1611751222202/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1611694770489/work
Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306167264/work
wincertstore==0.2
wrapt==1.12.1
yarl @ file:///C:/ci/yarl_1598045274898/work
zipp @ file:///tmp/build/80754af9/zipp_1604001098328/work
============== Conda Packages ==============
# packages in environment at C:\Users\Torkya\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.3.0 gpu
absl-py 0.11.0 pyhd3eb1b0_1
aiohttp 3.7.3 py38h2bbff1b_1
astunparse 1.6.3 py_0
async-timeout 3.0.1 py38haa95532_0
attrs 20.3.0 pyhd3eb1b0_0
blas 1.0 mkl
blinker 1.4 py38haa95532_0
brotlipy 0.7.0 py38h2bbff1b_1003
ca-certificates 2021.1.19 haa95532_0
cachetools 4.2.1 pyhd3eb1b0_0
certifi 2020.12.5 py38haa95532_0
cffi 1.14.4 py38hcd4344a_0
chardet 3.0.4 py38haa95532_1003
click 7.1.2 pyhd3eb1b0_0
cryptography 2.9.2 py38h7a1dbc1_0
cudatoolkit 10.1.243 h74a9793_0
cudnn 7.6.5 cuda10.1_0
cycler 0.10.0 py38_0
fastcluster 1.1.26 py38h251f6bf_2 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.4 hd328e21_0
gast 0.4.0 py_0
git 2.23.0 h6bb4b03_0
google-auth 1.24.0 pyhd3eb1b0_0
google-auth-oauthlib 0.4.2 pyhd3eb1b0_2
google-pasta 0.2.0 py_0
grpcio 1.31.0 py38he7da953_0
h5py 2.10.0 py38h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 pyhd3eb1b0_0
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.3 pyhd8ed1ab_0 conda-forge
importlib-metadata 2.0.0 py_1
intel-openmp 2020.2 254
joblib 1.0.0 pyhd3eb1b0_0
jpeg 9b hb83a4c4_2
keras-applications 1.0.8 py_1
keras-preprocessing 1.1.0 py_1
kiwisolver 1.3.0 py38hd77b12b_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.13.0.1 h200bbdf_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.3 h2bbff1b_0
markdown 3.3.3 py38haa95532_0
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py38h64f37c6_0
mkl 2020.2 256
mkl-service 2.3.0 py38h196d8e1_0
mkl_fft 1.2.0 py38h45dec08_0
mkl_random 1.1.1 py38h47e9c7a_0
multidict 4.7.6 py38he774522_1
numpy 1.19.2 py38hadc3359_0
numpy-base 1.19.2 py38ha3acd2a_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py_0
opencv-python 4.5.1.48 pypi_0 pypi
openssl 1.1.1i h2bbff1b_0
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py_1
pillow 8.1.0 py38h4fa10fc_0
pip 20.3.3 py38haa95532_0
protobuf 3.13.0.1 py38ha925a31_1
psutil 5.7.2 py38he774522_0
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.8 py_0
pycparser 2.20 py_2
pyjwt 2.0.1 py38haa95532_0
pyopenssl 20.0.1 pyhd3eb1b0_1
pyparsing 2.4.7 pyhd3eb1b0_0
pyqt 5.9.2 py38ha925a31_4
pyreadline 2.1 py38_1
pysocks 1.7.1 py38haa95532_0
python 3.8.5 h5fd99cc_1
python-dateutil 2.8.1 pyhd3eb1b0_0
python_abi 3.8 1_cp38 conda-forge
pywin32 227 py38he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.25.1 pyhd3eb1b0_0
requests-oauthlib 1.3.0 py_0
rsa 4.7 pyhd3eb1b0_1
scikit-learn 0.23.2 py38h47e9c7a_0
scipy 1.5.2 py38h14eb087_0
setuptools 52.0.0 py38haa95532_0
sip 4.19.13 py38ha925a31_0
six 1.15.0 py38haa95532_0
sqlite 3.33.0 h2a8f88b_0
tensorboard 2.3.0 pyh4dce500_0
tensorboard-plugin-wit 1.6.0 py_0
tensorflow 2.3.0 mkl_py38h1fcfbd6_0
tensorflow-base 2.3.0 gpu_py38h7339f5a_0
tensorflow-estimator 2.3.0 pyheb71bc4_0
tensorflow-gpu 2.3.0 he13fc11_0
termcolor 1.1.0 py38_1
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 he774522_0
tornado 6.1 py38h2bbff1b_0
tqdm 4.56.0 pyhd3eb1b0_0
typing-extensions 3.7.4.3 hd3eb1b0_0
typing_extensions 3.7.4.3 pyh06a4308_0
urllib3 1.26.3 pyhd3eb1b0_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 1.0.1 pyhd3eb1b0_0
wheel 0.36.2 pyhd3eb1b0_0
win_inet_pton 1.1.0 py38haa95532_0
wincertstore 0.2 py38_0
wrapt 1.12.1 py38he774522_1
xz 5.2.5 h62dcd97_0
yarl 1.5.1 py38he774522_0
zipp 3.4.0 pyhd3eb1b0_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.5 h04227a9_0
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
centering: face
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
reflect_padding: False
allow_growth: True
mixed_precision: True
convert_batchsize: 16
[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfaker]
output_size: 128
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
disable_warp: False
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
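Since the traceback in the log fires right after "Enabling Mixed Precision Training.", one common debugging step is to flip `mixed_precision` (and/or `allow_growth`) off in the `[global]` section of train.ini and see whether training starts. The normal route is through the faceswap GUI's settings panel, but the file can also be edited programmatically. A minimal sketch using Python's stdlib `configparser`, with an inline fragment standing in for the real train.ini (the fragment and its values mirror the dump above; the round-trip shown is illustrative, not faceswap's own config code):

```python
import configparser
from io import StringIO

# Fragment mirroring part of the [global] section of train.ini shown above
TRAIN_INI = """\
[global]
centering: face
coverage: 68.75
optimizer: adam
learning_rate: 5e-05
allow_growth: True
mixed_precision: True
"""

parser = configparser.ConfigParser()
parser.read_string(TRAIN_INI)

# Flip mixed_precision off to test whether the crash persists without it
parser.set("global", "mixed_precision", "False")

# Serialize back out (with a real file you would write to train.ini's path)
buf = StringIO()
parser.write(buf)

print(parser.getboolean("global", "mixed_precision"))  # False
```

With the real file you would pass its path to `parser.read(...)` and write back with `parser.write(open(path, "w"))`; faceswap rereads train.ini when training starts, so the change takes effect on the next run.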
Any assistance would be appreciated.
Thanks, Torkya