Error while training: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.


Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor while training

Post by KBP »

I tried to start training, but a critical error was caught as it was starting up. Any idea what's going on here?
Here's the crash report:

Code:

11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Importing defaults module: plugins.train.model.dlight_defaults
11/08/2019 03:44:03 MainProcess     training_0      config          add_section               DEBUG    Add section: (title: 'model.dlight', info: 'A lightweight, high resolution Dfaker variant (Adapted from https://github.com/dfaker/df)\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'features', datatype: '<class 'str'>', default: 'best', info: 'Higher settings will allow learning more features such as tatoos, piercing,\nand wrinkles.\nStrongly affects VRAM usage.', rounding: 'None', min_max: None, choices: ['lowmem', 'fair', 'best'], gui_radio: True, fixed: True, group: None)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'details', datatype: '<class 'str'>', default: 'good', info: 'Defines detail fidelity. Lower setting can appear 'rugged' while 'good' might take onger time to train.\nAffects VRAM usage.', rounding: 'None', min_max: None, choices: ['fast', 'good'], gui_radio: True, fixed: True, group: None)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'output_size', datatype: '<class 'int'>', default: '256', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be either 128, 256, or 384.', rounding: '128', min_max: (128, 384), choices: [], gui_radio: False, fixed: True, group: None)
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Added defaults: model.dlight
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Importing defaults module: plugins.train.model.original_defaults
11/08/2019 03:44:03 MainProcess     training_0      config          add_section               DEBUG    Add section: (title: 'model.original', info: 'Original Faceswap Model.\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.original', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Added defaults: model.original
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Importing defaults module: plugins.train.model.realface_defaults
11/08/2019 03:44:03 MainProcess     training_0      config          add_section               DEBUG    Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Added defaults: model.realface
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
11/08/2019 03:44:03 MainProcess     training_0      config          add_section               DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Added defaults: model.unbalanced
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Importing defaults module: plugins.train.model.villain_defaults
11/08/2019 03:44:03 MainProcess     training_0      config          add_section               DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Full model requires 9GB+ for batchsize 16\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Added defaults: model.villain
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
11/08/2019 03:44:03 MainProcess     training_0      config          add_section               DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
11/08/2019 03:44:03 MainProcess     training_0      config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/08/2019 03:44:03 MainProcess     training_0      _config         load_module               DEBUG    Added defaults: trainer.original
11/08/2019 03:44:03 MainProcess     training_0      config          handle_config             DEBUG    Handling config
11/08/2019 03:44:03 MainProcess     training_0      config          check_exists              DEBUG    Config file exists: 'C:\Users\KB\faceswap\config\train.ini'
11/08/2019 03:44:03 MainProcess     training_0      config          load_config               VERBOSE  Loading config: 'C:\Users\KB\faceswap\config\train.ini'
11/08/2019 03:44:03 MainProcess     training_0      config          validate_config           DEBUG    Validating config
11/08/2019 03:44:03 MainProcess     training_0      config          check_config_change       DEBUG    Default config has not changed
11/08/2019 03:44:03 MainProcess     training_0      config          check_config_choices      DEBUG    Checking config choices
11/08/2019 03:44:03 MainProcess     training_0      config          check_config_choices      DEBUG    Checked config choices
11/08/2019 03:44:03 MainProcess     training_0      config          validate_config           DEBUG    Validated config
11/08/2019 03:44:03 MainProcess     training_0      config          handle_config             DEBUG    Handled config
11/08/2019 03:44:03 MainProcess     training_0      config          __init__                  DEBUG    Initialized: Config
11/08/2019 03:44:03 MainProcess     training_0      config          get                       DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
11/08/2019 03:44:03 MainProcess     training_0      config          get                       DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
11/08/2019 03:44:03 MainProcess     training_0      config          changeable_items          DEBUG    Alterable for existing models: {'learning_rate': 5e-05}
11/08/2019 03:44:03 MainProcess     training_0      _base           __init__                  DEBUG    Initializing State: (model_dir: 'C:\Users\KB\Desktop\Jingle Jam 2019\deepfake\faceswap\Yogcats\Lewis\Models\Model 1', model_name: 'dfaker', config_changeable_items: '{'learning_rate': 5e-05}', no_logs: False, pingpong: False, training_image_size: '256'
11/08/2019 03:44:03 MainProcess     training_0      serializer      get_serializer            DEBUG    <lib.serializer._JSONSerializer object at 0x0000026594EB33C8>
11/08/2019 03:44:03 MainProcess     training_0      _base           load                      DEBUG    Loading State
11/08/2019 03:44:03 MainProcess     training_0      _base           load                      INFO     No existing state file found. Generating.
11/08/2019 03:44:03 MainProcess     training_0      _base           new_session_id            DEBUG    1
11/08/2019 03:44:03 MainProcess     training_0      _base           create_new_session        DEBUG    Creating new session. id: 1
11/08/2019 03:44:03 MainProcess     training_0      _base           __init__                  DEBUG    Initialized State:
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       __init__                  DEBUG    Initializing NNBlocks: (use_subpixel: False, use_icnr_init: True, use_convaware_init: True, use_reflect_padding: False, first_run: True)
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       __init__                  INFO     Using Convolutional Aware Initialization. Model generation will take a few minutes...
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       __init__                  DEBUG    Initialized NNBlocks
11/08/2019 03:44:03 MainProcess     training_0      _base           name                      DEBUG    model name: 'dfaker'
11/08/2019 03:44:03 MainProcess     training_0      _base           load_state_info           DEBUG    Loading Input Shape from State file
11/08/2019 03:44:03 MainProcess     training_0      _base           load_state_info           DEBUG    No input shapes saved. Using model config
11/08/2019 03:44:03 MainProcess     training_0      _base           multiple_models_in_folder DEBUG    model_files: [], retval: False
11/08/2019 03:44:03 MainProcess     training_0      original        add_networks              DEBUG    Adding networks
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       upscale                   DEBUG    inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 512), filters: 512, kernel_size: 3, use_instance_norm: False, kwargs: {})
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       get_name                  DEBUG    Generating block name: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x0000026594EBC278>
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       switch_kernel_initializer DEBUG    Switched kernel_initializer from <lib.model.initializers.ConvolutionAware object at 0x0000026594EBC278> to <lib.model.initializers.ICNR object at 0x0000026594EBC320>
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       conv2d                    DEBUG    inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 512), filters: 2048, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d', 'kernel_initializer': <lib.model.initializers.ICNR object at 0x0000026594EBC320>})
11/08/2019 03:44:03 MainProcess     training_0      nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ICNR object at 0x0000026594EBC320>
11/08/2019 03:44:03 MainProcess     training_0      initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: [3, 3, 512, 512]
11/08/2019 03:44:03 MainProcess     training_0      library         _logger_callback          INFO     Opening device "opencl_amd_ellesmere.0"
11/08/2019 03:44:04 MainProcess     training_0      multithreading  run                       DEBUG    Error in thread (training_0): Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 512, 512). Consider casting elements to a supported type.
11/08/2019 03:44:04 MainProcess     MainThread      train           monitor                   DEBUG    Thread error detected
11/08/2019 03:44:04 MainProcess     MainThread      train           monitor                   DEBUG    Closed Monitor
11/08/2019 03:44:04 MainProcess     MainThread      train           end_thread                DEBUG    Ending Training thread
11/08/2019 03:44:04 MainProcess     MainThread      train           end_thread                CRITICAL Error caught! Exiting...
11/08/2019 03:44:04 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: 'training'
11/08/2019 03:44:04 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: 'training_0'
11/08/2019 03:44:04 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: 'training_0'
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   initialize                DEBUG    PlaidML already initialized
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   get_supported_devices     DEBUG    [<plaidml._DeviceConfig object at 0x000002658AAB5390>]
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   get_all_devices           DEBUG    Experimental Devices: [<plaidml._DeviceConfig object at 0x000002658AAB5A20>]
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   get_all_devices           DEBUG    [<plaidml._DeviceConfig object at 0x000002658AAB5A20>, <plaidml._DeviceConfig object at 0x000002658AAB5390>]
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   __init__                  DEBUG    Initialized: PlaidMLStats
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   supported_indices         DEBUG    [1]
11/08/2019 03:44:04 MainProcess     MainThread      plaidml_tools   supported_indices         DEBUG    [1]
Traceback (most recent call last):
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 558, in make_tensor_proto
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 558, in <listcomp>
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\compat.py", line 65, in as_bytes
    (bytes_or_text,))
TypeError: Expected binary or unicode string, got <tile.Value upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 512, 512)>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\KB\faceswap\lib\cli.py", line 128, in execute_script
    process.process()
  File "C:\Users\KB\faceswap\scripts\train.py", line 109, in process
    self.end_thread(thread, err)
  File "C:\Users\KB\faceswap\scripts\train.py", line 135, in end_thread
    thread.join()
  File "C:\Users\KB\faceswap\lib\multithreading.py", line 117, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\KB\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\KB\faceswap\scripts\train.py", line 160, in training
    raise err
  File "C:\Users\KB\faceswap\scripts\train.py", line 148, in training
    model = self.load_model()
  File "C:\Users\KB\faceswap\scripts\train.py", line 183, in load_model
    predict=False)
  File "C:\Users\KB\faceswap\plugins\train\model\dfaker.py", line 21, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\KB\faceswap\plugins\train\model\original.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\KB\faceswap\plugins\train\model\_base.py", line 115, in __init__
    self.build()
  File "C:\Users\KB\faceswap\plugins\train\model\_base.py", line 240, in build
    self.add_networks()
  File "C:\Users\KB\faceswap\plugins\train\model\original.py", line 31, in add_networks
    self.add_network("decoder", "a", self.decoder(), is_output=True)
  File "C:\Users\KB\faceswap\plugins\train\model\dfaker.py", line 29, in decoder
    var_x = self.blocks.upscale(var_x, 512, res_block_follows=True)
  File "C:\Users\KB\faceswap\lib\model\nn_blocks.py", line 137, in upscale
    **kwargs)
  File "C:\Users\KB\faceswap\lib\model\nn_blocks.py", line 90, in conv2d
    **kwargs)(inp)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional.py", line 141, in build
    constraint=self.kernel_constraint)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
    weight = K.variable(initializer(shape),
  File "C:\Users\KB\faceswap\lib\model\initializers.py", line 67, in __call__
    var_x = tf.transpose(var_x, perm=[2, 0, 1, 3])
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1738, in transpose
    ret = transpose_fn(a, perm, name=name)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 11045, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 530, in _apply_op_helper
    raise err
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 527, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 1224, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\constant_op.py", line 305, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\constant_op.py", line 246, in constant
    allow_broadcast=True)
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\constant_op.py", line 284, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "C:\Users\KB\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 562, in make_tensor_proto
    "supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 512, 512). Consider casting elements to a supported type.

============ System Information ============
encoding:            cp1252
git_branch:          Not Found
git_commits:         Not Found
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - Ellesmere (experimental), GPU_1: Advanced Micro Devices, Inc. - Ellesmere (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['2906.10', '2906.10']
gpu_vram:            GPU_0: 8192MB, GPU_1: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.18362-SP0
os_release:          10
py_command:          C:\Users\KB\faceswap\faceswap.py train -A C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/DF images -ala C:/Users/KB/Videos/Lewis template_alignments.fsa -B C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/CATS/Cats DF sorted -alb C:/Users/KB/Videos/Lewis CAT_alignments.fsa -m C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/Models/Model 1 -t dfaker -bs 18 -it 1000000 -s 100 -ss 25000 -tia C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/DF images -tib C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/CATS/Cats DF sorted -to C:/Users/KB/Desktop/Jingle Jam 2019/deepfake/faceswap/Yogcats/Lewis/time lapses/TL 1 -ps 50 -wl -L INFO -gui
py_conda_version:    conda 4.7.12
py_implementation:   CPython
py_version:          3.6.9
py_virtual_env:      True
sys_cores:           4
sys_processor:       Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
sys_ram:             Total: 16246MB, Available: 8645MB, Used: 7601MB, Free: 8645MB

=============== Pip Packages ===============
absl-py==0.8.0
astor==0.8.0
certifi==2019.9.11
cffi==1.13.2
cloudpickle==1.2.2
cycler==0.10.0
cytoolz==0.10.0
dask==2.6.0
decorator==4.4.1
enum34==1.1.6
fastcluster==1.1.25
ffmpy==0.2.2
gast==0.3.2
grpcio==1.16.1
h5py==2.9.0
imageio==2.5.0
imageio-ffmpeg==0.3.0
joblib==0.13.2
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==2.2.2
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.16.2
nvidia-ml-py3==7.352.1
olefile==0.46
opencv-python==4.1.1.26
pathlib==1.0.1
Pillow==6.1.0
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.9.2
psutil==5.6.3
pycparser==2.19
pyparsing==2.4.2
pyreadline==2.1
python-dateutil==2.8.0
pytz==2019.3
PyWavelets==1.1.1
pywin32==223
PyYAML==5.1.2
scikit-image==0.15.0
scikit-learn==0.21.3
scipy==1.3.1
six==1.12.0
tensorboard==1.14.0
tensorflow==1.14.0
tensorflow-estimator==1.14.0
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.3
tqdm==4.36.1
Werkzeug==0.16.0
wincertstore==0.2
wrapt==1.11.2

============== Conda Packages ==============
# packages in environment at C:\Users\KB\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
_tflow_select             2.3.0                       mkl  
absl-py 0.8.0 py36_0
astor 0.8.0 py36_0
blas 1.0 mkl
ca-certificates 2019.10.16 0
certifi 2019.9.11 py36_0
cffi 1.13.2 pypi_0 pypi
cloudpickle 1.2.2 py_0
cycler 0.10.0 py36h009560c_0
cytoolz 0.10.0 py36he774522_0
dask-core 2.6.0 py_0
decorator 4.4.1 py_0
enum34 1.1.6 pypi_0 pypi
fastcluster 1.1.25 py36he350917_1000 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.3.2 py_0
grpcio 1.16.1 py36h351948d_1
h5py 2.9.0 py36h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
imageio 2.5.0 py36_0
imageio-ffmpeg 0.3.0 pypi_0 pypi
intel-openmp 2019.4 245
joblib 0.13.2 py36_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py36ha925a31_0
libmklml 2019.0.5 0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.9.2 h7bd577a_0
libtiff 4.0.10 hb898794_2
markdown 3.1.1 py36_0
matplotlib 2.2.2 py36had4c4a9_2
mkl 2019.4 245
mkl-service 2.3.0 py36hb782905_0
mkl_fft 1.0.15 py36h14836fe_0
mkl_random 1.1.0 py36h675688f_0
networkx 2.4 py_0
numpy 1.16.2 py36h19fb1c0_0
numpy-base 1.16.2 py36hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
olefile 0.46 py36_0
opencv-python 4.1.1.26 pypi_0 pypi
openssl 1.1.1d he774522_3
pathlib 1.0.1 py36_1
pillow 6.1.0 py36hdc69c19_0
pip 19.3.1 py36_0
plaidml 0.6.4 pypi_0 pypi
plaidml-keras 0.6.4 pypi_0 pypi
protobuf 3.9.2 py36h33f27b4_0
psutil 5.6.3 py36he774522_0
pycparser 2.19 pypi_0 pypi
pyparsing 2.4.2 py_0
pyqt 5.9.2 py36h6538335_2
pyreadline 2.1 py36_1
python 3.6.9 h5500b2f_0
python-dateutil 2.8.0 py36_0
pytz 2019.3 py_0
pywavelets 1.1.1 py36he774522_0
pywin32 223 py36hfa6e2cd_1
pyyaml 5.1.2 py36he774522_0
qt 5.9.7 vc14h73c81de_0
scikit-image 0.15.0 py36ha925a31_0
scikit-learn 0.21.3 py36h6288b17_0
scipy 1.3.1 py36h29ff71c_0
setuptools 41.6.0 py36_0
sip 4.19.8 py36h6538335_0
six 1.12.0 py36_0
sqlite 3.30.1 he774522_0
tensorboard 1.14.0 py36he3c9ec2_0
tensorflow 1.14.0 mkl_py36hb88db5b_0
tensorflow-base 1.14.0 mkl_py36ha978198_0
tensorflow-estimator 1.14.0 py_0
termcolor 1.1.0 py36_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.3 py36he774522_0
tqdm 4.36.1 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_0
werkzeug 0.16.0 py_0
wheel 0.33.6 py36_0
wincertstore 0.2 py36h7fe50ca_0
wrapt 1.11.2 py36he774522_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0

================= Configs ==================
--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
radius: 3.0
passes: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 50
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
font: default
font_size: 9

--------- train.ini ---------

[global]
coverage: 68.75
mask_type: extended
mask_blur: False
icnr_init: True
conv_aware_init: True
subpixel_upscaling: False
reflect_padding: False
penalized_mask_loss: True
loss_function: ssim
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

Thanks.


Re: Crash before training starts

Post by kilroythethird »

ICNR currently doesn't work for AMD users.
Please go to Settings->Training->Global and uncheck "ICNR init".
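
If you would rather edit the config file directly, the same option lives in faceswap's config/train.ini under [global] (the full config dump in the first post shows it as icnr_init: True). A minimal sketch of the change, assuming the default file layout:

Code:

[global]
icnr_init: False

NB: as the config notes say, initializer options only take effect when creating a new model, so start from a fresh model folder if the crash already generated files.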

that amd guy
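
For the curious, the traceback pinpoints the cause: the ICNR initializer in lib/model/initializers.py calls tf.transpose on the new weight tensor, and under the PlaidML backend that tensor is a plaidml.tile.Value, which TensorFlow cannot convert. A minimal sketch of the failing pattern and a backend-neutral alternative, assuming plaidml-keras 0.6.x is installed (illustrative only, not the faceswap source):

Code:

# Sketch: why tf.transpose crashes under PlaidML, and a portable alternative.
import plaidml.keras
plaidml.keras.install_backend()  # route Keras through PlaidML (AMD/OpenCL)

from keras import backend as K

x = K.ones((3, 3, 512, 512))     # a plaidml.tile.Value, not a tf.Tensor

# Failing pattern (as seen in the traceback):
#   import tensorflow as tf
#   tf.transpose(x, perm=[2, 0, 1, 3])  # TypeError: cannot convert tile.Value
# Backend-agnostic equivalent that runs on TensorFlow and PlaidML alike:
y = K.permute_dimensions(x, (2, 0, 1, 3))
print(K.int_shape(y))            # (512, 3, 3, 512)

Until that code path is made backend-agnostic upstream, unchecking "ICNR init" is the practical workaround on AMD.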


Error while training: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor

Post by ScrtAgentX »

Hey! I'm new to the Faceswap community and was working on a simple test project. Everything seemed to be going okay until I put in the files for training. I have tried switching trainers and changing the batch size, with no change. I have checked that the program is up to date. Any help is appreciated. Thanks in advance!

Code:

03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'features', datatype: '<class 'str'>', default: 'best', info: 'Higher settings will allow learning more features such as tatoos, piercing,\nand wrinkles.\nStrongly affects VRAM usage.', rounding: 'None', min_max: None, choices: ['lowmem', 'fair', 'best'], gui_radio: True, fixed: True, group: None)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'details', datatype: '<class 'str'>', default: 'good', info: 'Defines detail fidelity. Lower setting can appear 'rugged' while 'good' might take onger time to train.\nAffects VRAM usage.', rounding: 'None', min_max: None, choices: ['fast', 'good'], gui_radio: True, fixed: True, group: None)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'output_size', datatype: '<class 'int'>', default: '256', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be either 128, 256, or 384.', rounding: '128', min_max: (128, 384), choices: [], gui_radio: False, fixed: True, group: None)
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.dlight
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.model, plugin_type: model
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.original_defaults
03/10/2020 19:11:56 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.original', info: 'Original Faceswap Model.\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.original', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.original
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.realface_defaults
03/10/2020 19:11:56 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.realface
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
03/10/2020 19:11:56 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.unbalanced
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.villain_defaults
03/10/2020 19:11:56 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Full model requires 9GB+ for batchsize 16\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.villain
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
03/10/2020 19:11:56 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
03/10/2020 19:11:56 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: trainer.original
03/10/2020 19:11:56 MainProcess     _training_0     config          handle_config             DEBUG    Handling config
03/10/2020 19:11:56 MainProcess     _training_0     config          check_exists              DEBUG    Config file exists: 'C:\Users\natha\faceswap\config\train.ini'
03/10/2020 19:11:56 MainProcess     _training_0     config          load_config               VERBOSE  Loading config: 'C:\Users\natha\faceswap\config\train.ini'
03/10/2020 19:11:56 MainProcess     _training_0     config          validate_config           DEBUG    Validating config
03/10/2020 19:11:56 MainProcess     _training_0     config          check_config_change       DEBUG    Default config has not changed
03/10/2020 19:11:56 MainProcess     _training_0     config          check_config_choices      DEBUG    Checking config choices
03/10/2020 19:11:56 MainProcess     _training_0     config          check_config_choices      DEBUG    Checked config choices
03/10/2020 19:11:56 MainProcess     _training_0     config          validate_config           DEBUG    Validated config
03/10/2020 19:11:56 MainProcess     _training_0     config          handle_config             DEBUG    Handled config
03/10/2020 19:11:56 MainProcess     _training_0     config          __init__                  DEBUG    Initialized: Config
03/10/2020 19:11:56 MainProcess     _training_0     config          get                       DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
03/10/2020 19:11:56 MainProcess     _training_0     config          get                       DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
03/10/2020 19:11:56 MainProcess     _training_0     config          changeable_items          DEBUG    Alterable for existing models: {'learning_rate': 5e-05}
03/10/2020 19:11:56 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing State: (model_dir: 'F:\FS\Faces\Extracted\Project 1\Trained', model_name: 'villain', config_changeable_items: '{'learning_rate': 5e-05}', no_logs: False, pingpong: False, training_image_size: '256'
03/10/2020 19:11:56 MainProcess     _training_0     serializer      get_serializer            DEBUG    <lib.serializer._JSONSerializer object at 0x00000206ACCB8C88>
03/10/2020 19:11:56 MainProcess     _training_0     _base           load                      DEBUG    Loading State
03/10/2020 19:11:56 MainProcess     _training_0     _base           load                      INFO     No existing state file found. Generating.
03/10/2020 19:11:56 MainProcess     _training_0     _base           new_session_id            DEBUG    1
03/10/2020 19:11:56 MainProcess     _training_0     _base           create_new_session        DEBUG    Creating new session. id: 1
03/10/2020 19:11:56 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized State:
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       __init__                  DEBUG    Initializing NNBlocks: (use_subpixel: False, use_icnr_init: True, use_convaware_init: True, use_reflect_padding: True, first_run: True)
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       __init__                  INFO     Using Convolutional Aware Initialization. Model generation will take a few minutes...
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       __init__                  DEBUG    Initialized NNBlocks
03/10/2020 19:11:56 MainProcess     _training_0     _base           name                      DEBUG    model name: 'villain'
03/10/2020 19:11:56 MainProcess     _training_0     _base           load_state_info           DEBUG    Loading Input Shape from State file
03/10/2020 19:11:56 MainProcess     _training_0     _base           load_state_info           DEBUG    No input shapes saved. Using model config
03/10/2020 19:11:56 MainProcess     _training_0     _base           calculate_coverage_ratio  DEBUG    Requested coverage_ratio: 0.6875
03/10/2020 19:11:56 MainProcess     _training_0     _base           calculate_coverage_ratio  DEBUG    Final coverage_ratio: 0.6875
03/10/2020 19:11:56 MainProcess     _training_0     _base           __init__                  DEBUG    training_opts: {'alignments': {'a': 'F:\\FS\\Faces\\Extracted\\Project 1\\AM Face\\alignments.fsa', 'b': 'F:\\FS\\Faces\\Extracted\\Project 1\\Haley Face\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}
03/10/2020 19:11:56 MainProcess     _training_0     _base           multiple_models_in_folder DEBUG    model_files: [], retval: False
03/10/2020 19:11:56 MainProcess     _training_0     original        add_networks              DEBUG    Adding networks
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       upscale                   DEBUG    inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 16, 16, 512), filters: 512, kernel_size: 3, use_instance_norm: False, kwargs: {'kernel_initializer': <keras.initializers.RandomNormal object at 0x00000206ACC57988>})
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale_(<tile.Value SymbolicDim UINT64()>, 16, 16, 512)_0
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <keras.initializers.RandomNormal object at 0x00000206ACC57988>
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       switch_kernel_initializer DEBUG    Switched kernel_initializer from <keras.initializers.RandomNormal object at 0x00000206ACC57988> to <lib.model.initializers.ICNR object at 0x00000206ACD665C8>
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: ReflectionPadding FLOAT32(<tile.Value SymbolicDim UINT64()>, 18, 18, 512), filters: 2048, kernel_size: 3, strides: (1, 1), padding: valid, kwargs: {'name': 'upscale_(<tile.Value SymbolicDim UINT64()>, 16, 16, 512)_0_conv2d', 'kernel_initializer': <lib.model.initializers.ICNR object at 0x00000206ACD665C8>})
03/10/2020 19:11:56 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ICNR object at 0x00000206ACD665C8>
03/10/2020 19:11:56 MainProcess     _training_0     library         _logger_callback          INFO     Opening device "opencl_amd_gfx900.0"
03/10/2020 19:11:56 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: RevMul FLOAT32(3, 3, 512, 512). Consider casting elements to a supported type.
03/10/2020 19:11:57 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
03/10/2020 19:11:57 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
03/10/2020 19:11:57 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
03/10/2020 19:11:57 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
03/10/2020 19:11:57 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
03/10/2020 19:11:57 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
03/10/2020 19:11:57 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   initialize                DEBUG    PlaidML already initialized
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   get_supported_devices     DEBUG    [<plaidml._DeviceConfig object at 0x00000206AD66DC88>]
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   get_all_devices           DEBUG    Experimental Devices: [<plaidml._DeviceConfig object at 0x00000206AD69FFC8>]
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   get_all_devices           DEBUG    [<plaidml._DeviceConfig object at 0x00000206AD69FFC8>, <plaidml._DeviceConfig object at 0x00000206AD66DC88>]
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   __init__                  DEBUG    Initialized: PlaidMLStats
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   supported_indices         DEBUG    [1]
03/10/2020 19:11:57 MainProcess     MainThread      plaidml_tools   supported_indices         DEBUG    [1]
Traceback (most recent call last):
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\tensor_util.py", line 541, in make_tensor_proto
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\tensor_util.py", line 541, in <listcomp>
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\util\compat.py", line 71, in as_bytes
    (bytes_or_text,))
TypeError: Expected binary or unicode string, got <tile.Value RevMul FLOAT32(3, 3, 512, 512)>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\natha\faceswap\lib\cli.py", line 128, in execute_script
    process.process()
  File "C:\Users\natha\faceswap\scripts\train.py", line 159, in process
    self._end_thread(thread, err)
  File "C:\Users\natha\faceswap\scripts\train.py", line 199, in _end_thread
    thread.join()
  File "C:\Users\natha\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\natha\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\natha\faceswap\scripts\train.py", line 224, in _training
    raise err
  File "C:\Users\natha\faceswap\scripts\train.py", line 212, in _training
    model = self._load_model()
  File "C:\Users\natha\faceswap\scripts\train.py", line 253, in _load_model
    predict=False)
  File "C:\Users\natha\faceswap\plugins\train\model\villain.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\natha\faceswap\plugins\train\model\original.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\natha\faceswap\plugins\train\model\_base.py", line 126, in __init__
    self.build()
  File "C:\Users\natha\faceswap\plugins\train\model\_base.py", line 244, in build
    self.add_networks()
  File "C:\Users\natha\faceswap\plugins\train\model\original.py", line 31, in add_networks
    self.add_network("decoder", "a", self.decoder(), is_output=True)
  File "C:\Users\natha\faceswap\plugins\train\model\villain.py", line 68, in decoder
    var_x = self.blocks.upscale(var_x, 512, res_block_follows=True, **kwargs)
  File "C:\Users\natha\faceswap\lib\model\nn_blocks.py", line 137, in upscale
    **kwargs)
  File "C:\Users\natha\faceswap\lib\model\nn_blocks.py", line 90, in conv2d
    **kwargs)(inp)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\keras\layers\convolutional.py", line 141, in build
    constraint=self.kernel_constraint)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
    weight = K.variable(initializer(shape),
  File "C:\Users\natha\faceswap\lib\model\initializers.py", line 67, in __call__
    var_x = tf.transpose(var_x, perm=[2, 0, 1, 3])
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 1859, in transpose
    ret = transpose_fn(a, perm, name=name)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 11452, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 531, in _apply_op_helper
    raise err
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 528, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1297, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 286, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 265, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "C:\Users\natha\Anaconda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\tensor_util.py", line 545, in make_tensor_proto
    "supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: RevMul FLOAT32(3, 3, 512, 512). Consider casting elements to a supported type.

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         5ccf241 Merge branch 'staging'
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - gfx900 (experimental), GPU_1: Advanced Micro Devices, Inc. - gfx900 (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['3004.8 (PAL,HSAIL)', '3004.8 (PAL,HSAIL)']
gpu_vram:            GPU_0: 8176MB, GPU_1: 8176MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.18362-SP0
os_release:          10
py_command:          C:\Users\natha\faceswap\faceswap.py train -A F:/FS/Faces/Extracted/Project 1/AM Face -B F:/FS/Faces/Extracted/Project 1/Haley Face -m F:/FS/Faces/Extracted/Project 1/Trained -t villain -bs 10 -it 1000000 -s 100 -ss 25000 -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.2
py_implementation:   CPython
py_version:          3.7.6
py_virtual_env:      True
sys_cores:           24
sys_processor:       AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram:             Total: 32678MB, Available: 25934MB, Used: 6744MB, Free: 25934MB

=============== Pip Packages ===============
absl-py==0.9.0
asn1crypto==1.3.0
astor==0.8.0
blinker==1.4
cachetools==3.1.1
certifi==2019.11.28
cffi==1.14.0
chardet==3.0.4
Click==7.0
cloudpickle==1.3.0
cryptography==2.8
cycler==0.10.0
cytoolz==0.10.1
dask==2.11.0
decorator==4.4.1
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.2
gast==0.2.2
google-auth==1.11.2
google-auth-oauthlib==0.4.1
google-pasta==0.1.8
grpcio==1.27.2
h5py==2.9.0
idna==2.8
imageio==2.6.1
imageio-ffmpeg==0.4.1
joblib==0.14.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==3.1.3
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.17.4
nvidia-ml-py3==7.352.1
oauthlib==3.1.0
olefile==0.46
opencv-python==4.1.2.30
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==6.2.1
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.11.4
psutil==5.6.7
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser==2.19
PyJWT==1.7.1
pyOpenSSL==19.1.0
pyparsing==2.4.6
pyreadline==2.1
PySocks==1.7.1
python-dateutil==2.8.1
pytz==2019.3
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3
requests==2.22.0
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn==0.22.1
scipy==1.4.1
six==1.14.0
tensorboard==2.1.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.3
tqdm==4.42.1
urllib3==1.25.8
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.11.2

============== Conda Packages ==============
# packages in environment at C:\Users\natha\Anaconda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
_tflow_select             2.2.0                     eigen  
absl-py 0.9.0 py37_0
asn1crypto 1.3.0 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
blinker 1.4 py37_0
ca-certificates 2020.1.1 0
cachetools 3.1.1 py_0
certifi 2019.11.28 py37_0
cffi 1.14.0 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
click 7.0 py37_0
cloudpickle 1.3.0 py_0
cryptography 2.8 py37h7a1dbc1_0
cycler 0.10.0 py37_0
cytoolz 0.10.1 py37he774522_0
dask-core 2.11.0 py_0
decorator 4.4.1 py_0
enum34 1.1.10 pypi_0 pypi
fastcluster 1.1.26 py37he350917_0 conda-forge
ffmpeg 4.2 h6538335_0 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
google-auth 1.11.2 py_0
google-auth-oauthlib 0.4.1 py_2
google-pasta 0.1.8 py_0
grpcio 1.27.2 py37h351948d_0
h5py 2.9.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
idna 2.8 py37_0
imageio 2.6.1 py37_0
imageio-ffmpeg 0.4.1 py_0 conda-forge
intel-openmp 2020.0 166
joblib 0.14.1 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py37ha925a31_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.11.4 h7bd577a_0
libtiff 4.1.0 h56a325e_0
markdown 3.1.1 py37_0
matplotlib 3.1.1 py37hc8f65d3_0
matplotlib-base 3.1.3 py37h64f37c6_0
mkl 2020.0 166
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.0.15 py37h14836fe_0
mkl_random 1.1.0 py37h675688f_0
networkx 2.4 py_0
numpy 1.17.4 py37h4320e6b_0
numpy-base 1.17.4 py37hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py37_0
opencv-python 4.1.2.30 pypi_0 pypi
openssl 1.1.1d he774522_4
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py37_1
pillow 6.2.1 py37hdc69c19_0
pip 20.0.2 py37_1
plaidml 0.6.4 pypi_0 pypi
plaidml-keras 0.6.4 pypi_0 pypi
protobuf 3.11.4 py37h33f27b4_0
psutil 5.6.7 py37he774522_0
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.7 py_0
pycparser 2.19 py37_0
pyjwt 1.7.1 py37_0
pyopenssl 19.1.0 py37_0
pyparsing 2.4.6 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pysocks 1.7.1 py37_0
python 3.7.6 h60c2a47_2
python-dateutil 2.8.1 py_0
pytz 2019.3 py_0
pywavelets 1.1.1 py37he774522_0
pywin32 227 py37he774522_1
pyyaml 5.3 py37he774522_0
qt 5.9.7 vc14h73c81de_0
requests 2.22.0 py37_1
requests-oauthlib 1.3.0 py_0
rsa 4.0 py_0
scikit-image 0.16.2 py37h47e9c7a_0
scikit-learn 0.22.1 py37h6288b17_0
scipy 1.4.1 py37h9439919_0
setuptools 45.2.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.14.0 py37_0
sqlite 3.31.1 he774522_0
tensorboard 2.1.0 py3_0
tensorflow 1.15.0 eigen_py37h9f89a44_0
tensorflow-base 1.15.0 eigen_py37h07d2309_0
tensorflow-estimator 1.15.1 pyh2649769_0
termcolor 1.1.0 py37_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.3 py37he774522_3
tqdm 4.42.1 py_0
urllib3 1.25.8 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_1
werkzeug 0.16.1 py_0
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.11.2 py37he774522_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0

================= Configs ==================

--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
coverage: 68.75
mask_type: none
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
icnr_init: True
conv_aware_init: True
subpixel_upscaling: False
reflect_padding: True
penalized_mask_loss: True
loss_function: ssim
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
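The traceback pinpoints the failure: lib\model\initializers.py line 67 calls tf.transpose() on the ICNR kernel, but under the PlaidML backend every Keras tensor is a plaidml.tile.Value, which TensorFlow ops cannot consume; hence "Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor". Below is a minimal sketch of the failure mode and of the backend-agnostic transpose (K.permute_dimensions), assuming plaidml-keras 0.6.4 as in the report above. This is an illustration only, not an official faceswap patch.

Code: Select all

# Minimal sketch, assuming plaidml-keras 0.6.4 is installed as above.
import plaidml.keras
plaidml.keras.install_backend()  # route keras.backend through PlaidML

from keras import backend as K

# With PlaidML selected, Keras tensors are plaidml.tile.Value objects.
# The shape matches the kernel in the traceback: FLOAT32(3, 3, 512, 512).
kernel = K.random_normal((3, 3, 512, 512))

# lib\model\initializers.py line 67 does the equivalent of:
#     tf.transpose(kernel, perm=[2, 0, 1, 3])
# which raises the TypeError above, because tf.* ops accept only
# TensorFlow tensors, never PlaidML tile values.

# A backend-agnostic transpose goes through the Keras backend API instead:
kernel = K.permute_dimensions(kernel, (2, 0, 1, 3))
print(K.int_shape(kernel))  # (512, 3, 3, 512)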
User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: Error while training: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor

Post by torzdf »

My word is final

User avatar
locomanos
Posts: 7
Joined: Tue Mar 24, 2020 5:32 am
Has thanked: 5 times

Critical Error on Training Initialisation "Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor"

Post by locomanos »

Hello all,

I have encountered a very strange issue when initialising training. After the console displays the line "Opening device", it says:

"CRITICAL Error caught! Exiting...
ERROR Caught exception in thread: '_training_0'
ERROR Got Exception on main handler:"

I have tried different training algorithms and different masks (even no mask) with low batch sizes... no change.

Please find the log below:

Code: Select all

03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'details', datatype: '<class 'str'>', default: 'good', info: 'Defines detail fidelity. Lower setting can appear 'rugged' while 'good' might take onger time to train.\nAffects VRAM usage.', rounding: 'None', min_max: None, choices: ['fast', 'good'], gui_radio: True, fixed: True, group: None)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'output_size', datatype: '<class 'int'>', default: '256', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be either 128, 256, or 384.', rounding: '128', min_max: (128, 384), choices: [], gui_radio: False, fixed: True, group: None)
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.dlight
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.model, plugin_type: model
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.original_defaults
03/24/2020 00:49:42 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.original', info: 'Original Faceswap Model.\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.original', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.original
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.realface_defaults
03/24/2020 00:49:42 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.realface
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
03/24/2020 00:49:42 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.unbalanced
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.villain_defaults
03/24/2020 00:49:42 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Full model requires 9GB+ for batchsize 16\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.villain
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
03/24/2020 00:49:42 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
03/24/2020 00:49:42 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: trainer.original
03/24/2020 00:49:42 MainProcess     _training_0     config          handle_config             DEBUG    Handling config
03/24/2020 00:49:42 MainProcess     _training_0     config          check_exists              DEBUG    Config file exists: 'C:\Users\Jim\faceswap\config\train.ini'
03/24/2020 00:49:42 MainProcess     _training_0     config          load_config               VERBOSE  Loading config: 'C:\Users\Jim\faceswap\config\train.ini'
03/24/2020 00:49:42 MainProcess     _training_0     config          validate_config           DEBUG    Validating config
03/24/2020 00:49:42 MainProcess     _training_0     config          check_config_change       DEBUG    Default config has not changed
03/24/2020 00:49:42 MainProcess     _training_0     config          check_config_choices      DEBUG    Checking config choices
03/24/2020 00:49:42 MainProcess     _training_0     config          check_config_choices      DEBUG    Checked config choices
03/24/2020 00:49:42 MainProcess     _training_0     config          validate_config           DEBUG    Validated config
03/24/2020 00:49:42 MainProcess     _training_0     config          handle_config             DEBUG    Handled config
03/24/2020 00:49:42 MainProcess     _training_0     config          __init__                  DEBUG    Initialized: Config
03/24/2020 00:49:42 MainProcess     _training_0     config          get                       DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
03/24/2020 00:49:42 MainProcess     _training_0     config          get                       DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
03/24/2020 00:49:42 MainProcess     _training_0     config          changeable_items          DEBUG    Alterable for existing models: {'learning_rate': 5e-05}
03/24/2020 00:49:42 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing State: (model_dir: 'C:\Users\Jim\Documents\DF\Models\Nat Cherie X S Johansson', model_name: 'lightweight', config_changeable_items: '{'learning_rate': 5e-05}', no_logs: False, pingpong: False, training_image_size: '256'
03/24/2020 00:49:42 MainProcess     _training_0     serializer      get_serializer            DEBUG    <lib.serializer._JSONSerializer object at 0x000001C7CC30F808>
03/24/2020 00:49:42 MainProcess     _training_0     _base           load                      DEBUG    Loading State
03/24/2020 00:49:42 MainProcess     _training_0     _base           load                      INFO     No existing state file found. Generating.
03/24/2020 00:49:42 MainProcess     _training_0     _base           new_session_id            DEBUG    1
03/24/2020 00:49:42 MainProcess     _training_0     _base           create_new_session        DEBUG    Creating new session. id: 1
03/24/2020 00:49:42 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized State:
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       __init__                  DEBUG    Initializing NNBlocks: (use_subpixel: False, use_icnr_init: True, use_convaware_init: True, use_reflect_padding: False, first_run: True)
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       __init__                  INFO     Using Convolutional Aware Initialization. Model generation will take a few minutes...
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       __init__                  DEBUG    Initialized NNBlocks
03/24/2020 00:49:42 MainProcess     _training_0     _base           name                      DEBUG    model name: 'lightweight'
03/24/2020 00:49:42 MainProcess     _training_0     _base           load_state_info           DEBUG    Loading Input Shape from State file
03/24/2020 00:49:42 MainProcess     _training_0     _base           load_state_info           DEBUG    No input shapes saved. Using model config
03/24/2020 00:49:42 MainProcess     _training_0     _base           calculate_coverage_ratio  DEBUG    Requested coverage_ratio: 0.75
03/24/2020 00:49:42 MainProcess     _training_0     _base           calculate_coverage_ratio  DEBUG    Final coverage_ratio: 0.75
03/24/2020 00:49:42 MainProcess     _training_0     _base           __init__                  DEBUG    training_opts: {'alignments': {'a': 'C:\\Users\\Jim\\Documents\\DF\\Nat Cherie\\Nat Cherie Faceset\\Natalie_Cherie_Alignments.fsa', 'b': 'C:\\Users\\Jim\\Documents\\DF\\S Johansson\\S Johanson Faceset\\Scarlett_Johansson_Alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.75, 'mask_type': 'vgg-obstructed', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': True}
03/24/2020 00:49:42 MainProcess     _training_0     _base           multiple_models_in_folder DEBUG    model_files: [], retval: False
03/24/2020 00:49:42 MainProcess     _training_0     original        add_networks              DEBUG    Adding networks
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       upscale                   DEBUG    inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 256), filters: 512, kernel_size: 3, use_instance_norm: False, kwargs: {})
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       get_name                  DEBUG    Generating block name: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 256)_0
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Set default kernel_initializer to: <lib.model.initializers.ConvolutionAware object at 0x000001C7CC4A7C08>
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       switch_kernel_initializer DEBUG    Switched kernel_initializer from <lib.model.initializers.ConvolutionAware object at 0x000001C7CC4A7C08> to <lib.model.initializers.ICNR object at 0x000001C7CC255D48>
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    inp: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 256), filters: 2048, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 256)_0_conv2d', 'kernel_initializer': <lib.model.initializers.ICNR object at 0x000001C7CC255D48>})
03/24/2020 00:49:42 MainProcess     _training_0     nn_blocks       set_default_initializer   DEBUG    Using model specified initializer: <lib.model.initializers.ICNR object at 0x000001C7CC255D48>
03/24/2020 00:49:42 MainProcess     _training_0     initializers    __call__                  INFO     Calculating Convolution Aware Initializer for shape: [3, 3, 256, 512]
03/24/2020 00:49:43 MainProcess     _training_0     library         _logger_callback          INFO     Opening device "opencl_amd_gfx900.0"
03/24/2020 00:49:43 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 256)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 256, 512). Consider casting elements to a supported type.
03/24/2020 00:49:43 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
03/24/2020 00:49:43 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
03/24/2020 00:49:43 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
03/24/2020 00:49:43 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
03/24/2020 00:49:43 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
03/24/2020 00:49:43 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
03/24/2020 00:49:43 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   initialize                DEBUG    PlaidML already initialized
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   get_supported_devices     DEBUG    [<plaidml._DeviceConfig object at 0x000001C7CC5D3408>]
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   get_all_devices           DEBUG    Experimental Devices: [<plaidml._DeviceConfig object at 0x000001C7CC5002C8>]
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   get_all_devices           DEBUG    [<plaidml._DeviceConfig object at 0x000001C7CC5002C8>, <plaidml._DeviceConfig object at 0x000001C7CC5D3408>]
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   __init__                  DEBUG    Initialized: PlaidMLStats
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   supported_indices         DEBUG    [1]
03/24/2020 00:49:43 MainProcess     MainThread      plaidml_tools   supported_indices         DEBUG    [1]
Traceback (most recent call last):
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\tensor_util.py", line 541, in make_tensor_proto
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\tensor_util.py", line 541, in <listcomp>
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\util\compat.py", line 71, in as_bytes
    (bytes_or_text,))
TypeError: Expected binary or unicode string, got <tile.Value upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 256)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 256, 512)>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Jim\faceswap\lib\cli.py", line 128, in execute_script
    process.process()
  File "C:\Users\Jim\faceswap\scripts\train.py", line 159, in process
    self._end_thread(thread, err)
  File "C:\Users\Jim\faceswap\scripts\train.py", line 199, in _end_thread
    thread.join()
  File "C:\Users\Jim\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\Jim\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Jim\faceswap\scripts\train.py", line 224, in _training
    raise err
  File "C:\Users\Jim\faceswap\scripts\train.py", line 212, in _training
    model = self._load_model()
  File "C:\Users\Jim\faceswap\scripts\train.py", line 253, in _load_model
    predict=False)
  File "C:\Users\Jim\faceswap\plugins\train\model\lightweight.py", line 20, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\Jim\faceswap\plugins\train\model\original.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\Jim\faceswap\plugins\train\model\_base.py", line 126, in __init__
    self.build()
  File "C:\Users\Jim\faceswap\plugins\train\model\_base.py", line 244, in build
    self.add_networks()
  File "C:\Users\Jim\faceswap\plugins\train\model\original.py", line 31, in add_networks
    self.add_network("decoder", "a", self.decoder(), is_output=True)
  File "C:\Users\Jim\faceswap\plugins\train\model\lightweight.py", line 40, in decoder
    var_x = self.blocks.upscale(var_x, 512)
  File "C:\Users\Jim\faceswap\lib\model\nn_blocks.py", line 137, in upscale
    **kwargs)
  File "C:\Users\Jim\faceswap\lib\model\nn_blocks.py", line 90, in conv2d
    **kwargs)(inp)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\keras\layers\convolutional.py", line 141, in build
    constraint=self.kernel_constraint)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
    weight = K.variable(initializer(shape),
  File "C:\Users\Jim\faceswap\lib\model\initializers.py", line 67, in __call__
    var_x = tf.transpose(var_x, perm=[2, 0, 1, 3])
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 1859, in transpose
    ret = transpose_fn(a, perm, name=name)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 11452, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 531, in _apply_op_helper
    raise err
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 528, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1297, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 286, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 265, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "C:\Users\Jim\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\tensor_util.py", line 545, in make_tensor_proto
    "supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'plaidml.tile.Value'> to Tensor. Contents: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 256)_0_conv2d/conv_aware Tensor FLOAT32(3, 3, 256, 512). Consider casting elements to a supported type.

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         924d537 Core updates (#982)
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - gfx900 (experimental), GPU_1: Advanced Micro Devices, Inc. - gfx900 (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['3004.8 (PAL,HSAIL)', '3004.8 (PAL,HSAIL)']
gpu_vram:            GPU_0: 8176MB, GPU_1: 8176MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.18362-SP0
os_release:          10
py_command:          C:\Users\Jim\faceswap\faceswap.py train -A C:/Users/Jim/Documents/DF/A/A Faceset -ala C:/Users/Jim/Documents/DF/A/A Faceset/A_Alignments.fsa -B C:/Users/Jim/Documents/DF/B/B Faceset -alb C:/Users/Jim/Documents/DF/B/B Faceset/B_Alignments.fsa -m C:/Users/Jim/Documents/DF/Models/AB -t lightweight -bs 10 -it 1000000 -s 100 -ss 25000 -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.3
py_implementation:   CPython
py_version:          3.7.6
py_virtual_env:      True
sys_cores:           8
sys_processor:       Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
sys_ram:             Total: 16344MB, Available: 3214MB, Used: 13130MB, Free: 3214MB

=============== Pip Packages ===============
absl-py==0.9.0
asn1crypto==1.3.0
astor==0.8.0
blinker==1.4
cachetools==3.1.1
certifi==2019.11.28
cffi==1.14.0
chardet==3.0.4
click==7.1.1
cloudpickle==1.3.0
cryptography==2.8
cycler==0.10.0
cytoolz==0.10.1
dask==2.12.0
decorator==4.4.2
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.2
gast==0.2.2
google-auth==1.11.2
google-auth-oauthlib==0.4.1
google-pasta==0.1.8
grpcio==1.27.2
h5py==2.9.0
idna==2.9
imageio==2.6.1
imageio-ffmpeg==0.4.1
joblib==0.14.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==3.1.3
mkl-fft==1.0.15
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.4
numpy==1.17.4
nvidia-ml-py3==7.352.1
oauthlib==3.1.0
olefile==0.46
opencv-python==4.1.2.30
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==6.2.1
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.11.4
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser==2.20
PyJWT==1.7.1
pyOpenSSL==19.1.0
pyparsing==2.4.6
pyreadline==2.1
PySocks==1.7.1
python-dateutil==2.8.1
pytz==2019.3
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn==0.22.1
scipy==1.4.1
six==1.14.0
tensorboard==2.1.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.4
tqdm==4.43.0
urllib3==1.25.8
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\Jim\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
_tflow_select             2.2.0                     eigen  
absl-py 0.9.0 py37_0
asn1crypto 1.3.0 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
blinker 1.4 py37_0
ca-certificates 2020.1.1 0
cachetools 3.1.1 py_0
certifi 2019.11.28 py37_0
cffi 1.14.0 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
click 7.1.1 py_0
cloudpickle 1.3.0 py_0
cryptography 2.8 py37h7a1dbc1_0
cycler 0.10.0 py37_0
cytoolz 0.10.1 py37he774522_0
dask-core 2.12.0 py_0
decorator 4.4.2 py_0
enum34 1.1.10 pypi_0 pypi
fastcluster 1.1.26 py37he350917_0 conda-forge
ffmpeg 4.2 h6538335_0 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
google-auth 1.11.2 py_0
google-auth-oauthlib 0.4.1 py_2
google-pasta 0.1.8 py_0
grpcio 1.27.2 py37h351948d_0
h5py 2.9.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
idna 2.9 py_1
imageio 2.6.1 py37_0
imageio-ffmpeg 0.4.1 py_0 conda-forge
intel-openmp 2020.0 166
joblib 0.14.1 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py37ha925a31_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.11.4 h7bd577a_0
libtiff 4.1.0 h56a325e_0
markdown 3.1.1 py37_0
matplotlib 3.1.1 py37hc8f65d3_0
matplotlib-base 3.1.3 py37h64f37c6_0
mkl 2020.0 166
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.0.15 py37h14836fe_0
mkl_random 1.1.0 py37h675688f_0
networkx 2.4 py_0
numpy 1.17.4 py37h4320e6b_0
numpy-base 1.17.4 py37hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py37_0
opencv-python 4.1.2.30 pypi_0 pypi
openssl 1.1.1e he774522_0
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py37_1
pillow 6.2.1 py37hdc69c19_0
pip 20.0.2 py37_1
plaidml 0.6.4 pypi_0 pypi
plaidml-keras 0.6.4 pypi_0 pypi
protobuf 3.11.4 py37h33f27b4_0
psutil 5.7.0 py37he774522_0
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.7 py_0
pycparser 2.20 py_0
pyjwt 1.7.1 py37_0
pyopenssl 19.1.0 py37_0
pyparsing 2.4.6 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pysocks 1.7.1 py37_0
python 3.7.6 h60c2a47_2
python-dateutil 2.8.1 py_0
python_abi 3.7 1_cp37m conda-forge
pytz 2019.3 py_0
pywavelets 1.1.1 py37he774522_0
pywin32 227 py37he774522_1
pyyaml 5.3 py37he774522_0
qt 5.9.7 vc14h73c81de_0
requests 2.23.0 py37_0
requests-oauthlib 1.3.0 py_0
rsa 4.0 py_0
scikit-image 0.16.2 py37h47e9c7a_0
scikit-learn 0.22.1 py37h6288b17_0
scipy 1.4.1 py37h9439919_0
setuptools 46.0.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.14.0 py37_0
sqlite 3.31.1 he774522_0
tensorboard 2.1.0 py3_0
tensorflow 1.15.0 eigen_py37h9f89a44_0
tensorflow-base 1.15.0 eigen_py37h07d2309_0
tensorflow-estimator 1.15.1 pyh2649769_0
termcolor 1.1.0 py37_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.4 py37he774522_1
tqdm 4.43.0 py_0
urllib3 1.25.8 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_1
werkzeug 0.16.1 py_0
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.12.1 py37he774522_1
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0

================= Configs ==================
--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
coverage: 75.0
mask_type: vgg-obstructed
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
icnr_init: True
conv_aware_init: True
subpixel_upscaling: False
reflect_padding: False
penalized_mask_loss: True
loss_function: mae
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

Thank you in advance for your help.

Last edited by torzdf on Tue Mar 24, 2020 10:31 am, edited 1 time in total.
Reason: Title clarity