I tried to start training, but a critical error was caught while it was starting up: the training thread dies while the model is being built, with "TypeError: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor" raised from the Convolutional Aware initializer. Any idea what's going on here?
Here's the crash report:
Code:
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.dlight_defaults
11/08/2019 03:44:03 MainProcess training_0 config add_section DEBUG Add section: (title: 'model.dlight', info: 'A lightweight, high resolution Dfaker variant (Adapted from https://github.com/dfaker/df)\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.dlight', title: 'features', datatype: '<class 'str'>', default: 'best', info: 'Higher settings will allow learning more features such as tatoos, piercing,\nand wrinkles.\nStrongly affects VRAM usage.', rounding: 'None', min_max: None, choices: ['lowmem', 'fair', 'best'], gui_radio: True, fixed: True, group: None)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.dlight', title: 'details', datatype: '<class 'str'>', default: 'good', info: 'Defines detail fidelity. Lower setting can appear 'rugged' while 'good' might take onger time to train.\nAffects VRAM usage.', rounding: 'None', min_max: None, choices: ['fast', 'good'], gui_radio: True, fixed: True, group: None)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.dlight', title: 'output_size', datatype: '<class 'int'>', default: '256', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be either 128, 256, or 384.', rounding: '128', min_max: (128, 384), choices: [], gui_radio: False, fixed: True, group: None)
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Added defaults: model.dlight
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Adding defaults: (filename: original_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.original_defaults
11/08/2019 03:44:03 MainProcess training_0 config add_section DEBUG Add section: (title: 'model.original', info: 'Original Faceswap Model.\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.original', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Added defaults: model.original
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.realface_defaults
11/08/2019 03:44:03 MainProcess training_0 config add_section DEBUG Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Added defaults: model.realface
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.unbalanced_defaults
11/08/2019 03:44:03 MainProcess training_0 config add_section DEBUG Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Added defaults: model.unbalanced
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.villain_defaults
11/08/2019 03:44:03 MainProcess training_0 config add_section DEBUG Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Full model requires 9GB+ for batchsize 16\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Added defaults: model.villain
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Importing defaults module: plugins.train.trainer.original_defaults
11/08/2019 03:44:03 MainProcess training_0 config add_section DEBUG Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
11/08/2019 03:44:03 MainProcess training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/08/2019 03:44:03 MainProcess training_0 _config load_module DEBUG Added defaults: trainer.original
11/08/2019 03:44:03 MainProcess training_0 config handle_config DEBUG Handling config
11/08/2019 03:44:03 MainProcess training_0 config check_exists DEBUG Config file exists: 'C:\Users\KB\faceswap\config\train.ini'
11/08/2019 03:44:03 MainProcess training_0 config load_config VERBOSE Loading config: 'C:\Users\KB\faceswap\config\train.ini'
11/08/2019 03:44:03 MainProcess training_0 config validate_config DEBUG Validating config
11/08/2019 03:44:03 MainProcess training_0 config check_config_change DEBUG Default config has not changed
11/08/2019 03:44:03 MainProcess training_0 config check_config_choices DEBUG Checking config choices
11/08/2019 03:44:03 MainProcess training_0 config check_config_choices DEBUG Checked config choices
11/08/2019 03:44:03 MainProcess training_0 config validate_config DEBUG Validated config
11/08/2019 03:44:03 MainProcess training_0 config handle_config DEBUG Handled config
11/08/2019 03:44:03 MainProcess training_0 config __init__ DEBUG Initialized: Config
11/08/2019 03:44:03 MainProcess training_0 config get DEBUG Getting config item: (section: 'global', option: 'learning_rate')
11/08/2019 03:44:03 MainProcess training_0 config get DEBUG Returning item: (type: <class 'float'>, value: 5e-05)
11/08/2019 03:44:03 MainProcess training_0 config changeable_items DEBUG Alterable for existing models: {'learning_rate': 5e-05}
11/08/2019 03:44:03 MainProcess training_0 _base __init__ DEBUG Initializing State: (model_dir: 'C:\Users\KB\Desktop\Jingle Jam 2019\deepfake\faceswap\Yogcats\Lewis\Models\Model 1', model_name: 'dfaker', config_changeable_items: '{'learning_rate': 5e-05}', no_logs: False, pingpong: False, training_image_size: '256'
11/08/2019 03:44:03 MainProcess training_0 serializer get_serializer DEBUG <lib.serializer._JSONSerializer object at 0x0000026594EB33C8>
11/08/2019 03:44:03 MainProcess training_0 _base load DEBUG Loading State
11/08/2019 03:44:03 MainProcess training_0 _base load INFO No existing state file found. Generating.
11/08/2019 03:44:03 MainProcess training_0 _base new_session_id DEBUG 1
11/08/2019 03:44:03 MainProcess training_0 _base create_new_session DEBUG Creating new session. id: 1
11/08/2019 03:44:03 MainProcess training_0 _base __init__ DEBUG Initialized State:
11/08/2019 03:44:03 MainProcess training_0 nn_blocks __init__ DEBUG Initializing NNBlocks: (use_subpixel: False, use_icnr_init: True, use_convaware_init: True, use_reflect_padding: False, first_run: True)
11/08/2019 03:44:03 MainProcess training_0 nn_blocks __init__ INFO Using Convolutional Aware Initialization. Model generation will take a few minutes...
11/08/2019 03:44:03 MainProcess training_0 nn_blocks __init__ DEBUG Initialized NNBlocks
11/08/2019 03:44:03 MainProcess training_0 _base name DEBUG model name: 'dfaker'
11/08/2019 03:44:03 MainProcess training_0 _base load_state_info DEBUG Loading Input Shape from State file
11/08/2019 03:44:03 MainProcess training_0 _base load_state_info DEBUG No input shapes saved. Using model config
11/08/2019 03:44:03 MainProcess training_0 _base multiple_models_in_folder DEBUG model_files: [], retval: False
11/08/2019 03:44:03 MainProcess training_0 original add_networks DEBUG Adding networks
11/08/2019 03:44:03 MainProcess training_0 nn_blocks upscale DEBUG inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 512), filters: 512, kernel_size: 3, use_instance_norm: False, kwargs: {})
11/08/2019 03:44:03 MainProcess training_0 nn_blocks get_name DEBUG Generating block name: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0
11/08/2019 03:44:03 MainProcess training_0 nn_blocks set_default_initializer DEBUG Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000026594EBC278>
11/08/2019 03:44:03 MainProcess training_0 nn_blocks switch_kernel_initializer DEBUG Switched kernel_initializer from <lib.model.initializers.ConvolutionAware object at 0x0000026594EBC278> to <lib.model.initializers.ICNR object at 0x0000026594EBC320>
11/08/2019 03:44:03 MainProcess training_0 nn_blocks conv2d DEBUG inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 512), filters: 2048, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d', 'kernel_initializer': <lib.model.initializers.ICNR object at 0x0000026594EBC320>})
11/08/2019 03:44:03 MainProcess training_0 nn_blocks set_default_initializer DEBUG Using model specified initializer: <lib.model.initializers.ICNR object at 0x0000026594EBC320>
11/08/2019 03:44:03 MainProcess training_0 initializers __call__ INFO Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
11/08/2019 03:44:03 MainProcess training_0 library _logger_callback INFO Opening device "opencl_amd_ellesmere.0"
11/08/2019 03:44:04 MainProcess training_0 multithreading run DEBUG Error in thread (training_0): Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 512, 512). Consider casting elements to a supported type.
11/08/2019 03:44:04 MainProcess MainThread train monitor DEBUG Thread error detected
11/08/2019 03:44:04 MainProcess MainThread train monitor DEBUG Closed Monitor
11/08/2019 03:44:04 MainProcess MainThread train end_thread DEBUG Ending Training thread
11/08/2019 03:44:04 MainProcess MainThread train end_thread CRITICAL Error caught! Exiting...
11/08/2019 03:44:04 MainProcess MainThread multithreading join DEBUG Joining Threads: 'training'
11/08/2019 03:44:04 MainProcess MainThread multithreading join DEBUG Joining Thread: 'training_0'
11/08/2019 03:44:04 MainProcess MainThread multithreading join ERROR Caught exception in thread: 'training_0'
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools initialize DEBUG PlaidML already initialized
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools get_supported_devices DEBUG [<plaidml._DeviceConfig object at 0x000002658AAB5390>]
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools get_all_devices DEBUG Experimental Devices: [<plaidml._DeviceConfig object at 0x000002658AAB5A20>]
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools get_all_devices DEBUG [<plaidml._DeviceConfig object at 0x000002658AAB5A20>, <plaidml._DeviceConfig object at 0x000002658AAB5390>]
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools __init__ DEBUG Initialized: PlaidMLStats
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools supported_indices DEBUG [1]
11/08/2019 03:44:04 MainProcess MainThread plaidml_tools supported_indices DEBUG [1]
Traceback (most recent call last):
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 558, in make_tensor_proto
str_values = [compat.as_bytes(x) for x in proto_values]
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 558, in <listcomp>
str_values = [compat.as_bytes(x) for x in proto_values]
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got <tile.Value upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 512, 512)>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\KB\faceswap\lib\cli.py", line 128, in execute_script
process.process()
File "C:\Users\KB\faceswap\scripts\train.py", line 109, in process
self.end_thread(thread, err)
File "C:\Users\KB\faceswap\scripts\train.py", line 135, in end_thread
thread.join()
File "C:\Users\KB\faceswap\lib\multithreading.py", line 117, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\KB\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\KB\faceswap\scripts\train.py", line 160, in training
raise err
File "C:\Users\KB\faceswap\scripts\train.py", line 148, in training
model = self.load_model()
File "C:\Users\KB\faceswap\scripts\train.py", line 183, in load_model
predict=False)
File "C:\Users\KB\faceswap\plugins\train\model\dfaker.py", line 21, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\KB\faceswap\plugins\train\model\original.py", line 25, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\KB\faceswap\plugins\train\model\_base.py", line 115, in __init__
self.build()
File "C:\Users\KB\faceswap\plugins\train\model\_base.py", line 240, in build
self.add_networks()
File "C:\Users\KB\faceswap\plugins\train\model\original.py", line 31, in add_networks
self.add_network("decoder", "a", self.decoder(), is_output=True)
File "C:\Users\KB\faceswap\plugins\train\model\dfaker.py", line 29, in decoder
var_x = self.blocks.upscale(var_x, 512, res_block_follows=True)
File "C:\Users\KB\faceswap\lib\model\nn_blocks.py", line 137, in upscale
**kwargs)
File "C:\Users\KB\faceswap\lib\model\nn_blocks.py", line 90, in conv2d
**kwargs)(inp)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
self.build(unpack_singleton(input_shapes))
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional.py", line 141, in build
constraint=self.kernel_constraint)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
weight = K.variable(initializer(shape),
File "C:\Users\KB\faceswap\lib\model\initializers.py", line 67, in __call__
var_x = tf.transpose(var_x, perm=[2, 0, 1, 3])
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1738, in transpose
ret = transpose_fn(a, perm, name=name)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 11045, in transpose
"Transpose", x=x, perm=perm, name=name)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 530, in _apply_op_helper
raise err
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 527, in _apply_op_helper
preferred_dtype=default_dtype)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\constant_op.py", line 305, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\constant_op.py", line 246, in constant
allow_broadcast=True)
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\constant_op.py", line 284, in _constant_impl
allow_broadcast=allow_broadcast))
File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 562, in make_tensor_proto
"supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 512, 512). Consider casting elements to a supported type.
============ System Information ============
encoding: cp1252
git_branch: Not Found
git_commits: Not Found
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: Advanced Micro Devices, Inc. - Ellesmere (experimental), GPU_1: Advanced Micro Devices, Inc. - Ellesmere (supported)
gpu_devices_active: GPU_0, GPU_1
gpu_driver: ['2906.10', '2906.10']
gpu_vram: GPU_0: 8192MB, GPU_1: 8192MB
os_machine: AMD64
os_platform: Windows-10-10.0.18362-SP0
os_release: 10
py_command: C:\Users\KB\faceswap\faceswap.py train -A C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/DF images -ala C:/Users/KB/Videos/Lewis template_alignments.fsa -B C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/CATS/Cats DF sorted -alb C:/Users/KB/Videos/Lewis CAT_alignments.fsa -m C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/Models/Model 1 -t dfaker -bs 18 -it 1000000 -s 100 -ss 25000 -tia C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/DF images -tib C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/CATS/Cats DF sorted -to C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/time lapses/TL 1 -ps 50 -wl -L INFO -gui
py_conda_version: conda 4.7.12
py_implementation: CPython
py_version: 3.6.9
py_virtual_env: True
sys_cores: 4
sys_processor: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
sys_ram: Total: 16246MB, Available: 8645MB, Used: 7601MB, Free: 8645MB
=============== Pip Packages ===============
absl-py==0.8.0
astor==0.8.0
certifi==2019.9.11
cffi==1.13.2
cloudpickle==1.2.2
cycler==0.10.0
cytoolz==0.10.0
dask==2.6.0
decorator==4.4.1
enum34==1.1.6
fastcluster==1.1.25
ffmpy==0.2.2
gast==0.3.2
grpcio==1.16.1
h5py==2.9.0
imageio==2.5.0
imageio-ffmpeg==0.3.0
joblib==0.13.2
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==2.2.2
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.16.2
nvidia-ml-py3==7.352.1
olefile==0.46
opencv-python==4.1.1.26
pathlib==1.0.1
Pillow==6.1.0
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.9.2
psutil==5.6.3
pycparser==2.19
pyparsing==2.4.2
pyreadline==2.1
python-dateutil==2.8.0
pytz==2019.3
PyWavelets==1.1.1
pywin32==223
PyYAML==5.1.2
scikit-image==0.15.0
scikit-learn==0.21.3
scipy==1.3.1
six==1.12.0
tensorboard==1.14.0
tensorflow==1.14.0
tensorflow-estimator==1.14.0
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.3
tqdm==4.36.1
Werkzeug==0.16.0
wincertstore==0.2
wrapt==1.11.2
============== Conda Packages ==============
# packages in environment at C:\Users\KB\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.3.0 mkl
absl-py 0.8.0 py36_0
astor 0.8.0 py36_0
blas 1.0 mkl
ca-certificates 2019.10.16 0
certifi 2019.9.11 py36_0
cffi 1.13.2 pypi_0 pypi
cloudpickle 1.2.2 py_0
cycler 0.10.0 py36h009560c_0
cytoolz 0.10.0 py36he774522_0
dask-core 2.6.0 py_0
decorator 4.4.1 py_0
enum34 1.1.6 pypi_0 pypi
fastcluster 1.1.25 py36he350917_1000 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.3.2 py_0
grpcio 1.16.1 py36h351948d_1
h5py 2.9.0 py36h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
imageio 2.5.0 py36_0
imageio-ffmpeg 0.3.0 pypi_0 pypi
intel-openmp 2019.4 245
joblib 0.13.2 py36_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py36ha925a31_0
libmklml 2019.0.5 0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.9.2 h7bd577a_0
libtiff 4.0.10 hb898794_2
markdown 3.1.1 py36_0
matplotlib 2.2.2 py36had4c4a9_2
mkl 2019.4 245
mkl-service 2.3.0 py36hb782905_0
mkl_fft 1.0.15 py36h14836fe_0
mkl_random 1.1.0 py36h675688f_0
networkx 2.4 py_0
numpy 1.16.2 py36h19fb1c0_0
numpy-base 1.16.2 py36hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
olefile 0.46 py36_0
opencv-python 4.1.1.26 pypi_0 pypi
openssl 1.1.1d he774522_3
pathlib 1.0.1 py36_1
pillow 6.1.0 py36hdc69c19_0
pip 19.3.1 py36_0
plaidml 0.6.4 pypi_0 pypi
plaidml-keras 0.6.4 pypi_0 pypi
protobuf 3.9.2 py36h33f27b4_0
psutil 5.6.3 py36he774522_0
pycparser 2.19 pypi_0 pypi
pyparsing 2.4.2 py_0
pyqt 5.9.2 py36h6538335_2
pyreadline 2.1 py36_1
python 3.6.9 h5500b2f_0
python-dateutil 2.8.0 py36_0
pytz 2019.3 py_0
pywavelets 1.1.1 py36he774522_0
pywin32 223 py36hfa6e2cd_1
pyyaml 5.1.2 py36he774522_0
qt 5.9.7 vc14h73c81de_0
scikit-image 0.15.0 py36ha925a31_0
scikit-learn 0.21.3 py36h6288b17_0
scipy 1.3.1 py36h29ff71c_0
setuptools 41.6.0 py36_0
sip 4.19.8 py36h6538335_0
six 1.12.0 py36_0
sqlite 3.30.1 he774522_0
tensorboard 1.14.0 py36he3c9ec2_0
tensorflow 1.14.0 mkl_py36hb88db5b_0
tensorflow-base 1.14.0 mkl_py36ha978198_0
tensorflow-estimator 1.14.0 py_0
termcolor 1.1.0 py36_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.3 py36he774522_0
tqdm 4.36.1 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_0
werkzeug 0.16.0 py_0
wheel 0.33.6 py36_0
wincertstore 0.2 py36h7fe50ca_0
wrapt 1.11.2 py36he774522_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0
================= Configs ==================
--------- .faceswap ---------
backend: amd
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
radius: 3.0
passes: 4
erosion: 0.0
[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 50
batch-size: 4
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
font: default
font_size: 9
--------- train.ini ---------
[global]
coverage: 68.75
mask_type: extended
mask_blur: False
icnr_init: True
conv_aware_init: True
subpixel_upscaling: False
reflect_padding: False
penalized_mask_loss: True
loss_function: ssim
learning_rate: 5e-05
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
Thanks.