ValueError: Layer encoder was called with an input that isn't a symbolic tensor

Getting errors or found a bug when converting faces from a trained model? Post about them here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Convert process. If you want to get tips, or better understand the Convert process, then you should look in the Convert Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

PhantomAfiq
Posts: 1
Joined: Sun Feb 07, 2021 2:00 am

ValueError: Layer encoder was called with an input that isn't a symbolic tensor

Post by PhantomAfiq »

I started a new project and successfully trained it with RealFace for 33 hours. However, when I try to convert, a critical error pops up and the process crashes. I have used the same method multiple times before and it worked without any problems. No idea why it's acting up now.

Code:

02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Added defaults: model.realface
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_section                    DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Added defaults: model.unbalanced
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Importing defaults module: plugins.train.model.villain_defaults
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_section                    DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Don't try to run this if you have a small GPU.\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Added defaults: model.villain
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_section                    DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'disable_warp', datatype: '<class 'bool'>', default: 'False', info: 'Disable warp augmentation. Warping is integral to the Neural Network training. If you decide to disable warping, you should only do so towards the end of a model's training session.', rounding: 'None', min_max: None, choices: None, gui_radio: False, fixed: False, group: image augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
02/07/2021 09:51:30 MainProcess     MainThread                     _config         load_module                    DEBUG    Added defaults: trainer.original
02/07/2021 09:51:30 MainProcess     MainThread                     config          handle_config                  DEBUG    Handling config
02/07/2021 09:51:30 MainProcess     MainThread                     config          check_exists                   DEBUG    Config file exists: 'C:\Users\phant\faceswap\config\train.ini'
02/07/2021 09:51:30 MainProcess     MainThread                     config          load_config                    VERBOSE  Loading config: 'C:\Users\phant\faceswap\config\train.ini'
02/07/2021 09:51:30 MainProcess     MainThread                     config          validate_config                DEBUG    Validating config
02/07/2021 09:51:30 MainProcess     MainThread                     config          check_config_change            DEBUG    Default config has not changed
02/07/2021 09:51:30 MainProcess     MainThread                     config          check_config_choices           DEBUG    Checking config choices
02/07/2021 09:51:30 MainProcess     MainThread                     config          check_config_choices           DEBUG    Checked config choices
02/07/2021 09:51:30 MainProcess     MainThread                     config          validate_config                DEBUG    Validated config
02/07/2021 09:51:30 MainProcess     MainThread                     config          handle_config                  DEBUG    Handled config
02/07/2021 09:51:30 MainProcess     MainThread                     config          __init__                       DEBUG    Initialized: Config
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Getting config item: (section: 'global', option: 'allow_growth')
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Returning item: (type: <class 'bool'>, value: False)
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Getting config item: (section: 'global', option: 'convert_batchsize')
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 16)
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Getting config item: (section: 'global.loss', option: 'eye_multiplier')
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 3)
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Getting config item: (section: 'global.loss', option: 'mouth_multiplier')
02/07/2021 09:51:30 MainProcess     MainThread                     config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 2)
02/07/2021 09:51:30 MainProcess     MainThread                     config          changeable_items               DEBUG    Alterable for existing models: {'learning_rate': 5e-05, 'allow_growth': False, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initializing State: (model_dir: 'G:\Movies\Motion\FS\Illaishah\Project 1\RF', model_name: 'realface', config_changeable_items: '{'learning_rate': 5e-05, 'allow_growth': False, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}', no_logs: False
02/07/2021 09:51:30 MainProcess     MainThread                     serializer      get_serializer                 DEBUG    <lib.serializer._JSONSerializer object at 0x000002CD5D9A4F70>
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _load                          DEBUG    Loading State
02/07/2021 09:51:30 MainProcess     MainThread                     serializer      load                           DEBUG    filename: G:\Movies\Motion\FS\Illaishah\Project 1\RF\realface_state.json
02/07/2021 09:51:30 MainProcess     MainThread                     serializer      load                           DEBUG    stored data type: <class 'bytes'>
02/07/2021 09:51:30 MainProcess     MainThread                     serializer      unmarshal                      DEBUG    data type: <class 'bytes'>
02/07/2021 09:51:30 MainProcess     MainThread                     serializer      unmarshal                      DEBUG    returned data type: <class 'dict'>
02/07/2021 09:51:30 MainProcess     MainThread                     serializer      load                           DEBUG    data type: <class 'dict'>
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _load                          DEBUG    Loaded state: {'name': 'realface', 'sessions': {'1': {'timestamp': 1612536265.3767107, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 84750, 'config': {'learning_rate': 5e-05, 'allow_growth': False, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '2': {'timestamp': 1612662383.8888512, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 42, 'config': {'learning_rate': 5e-05, 'allow_growth': False, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}}, 'lowest_avg_loss': {'a': 0.02553022165596485, 'b': 0.022043966341763734}, 'iterations': 84792, 'config': {'centering': 'face', 'coverage': 68.75, 'optimizer': 'adam', 'learning_rate': 5e-05, 'allow_growth': False, 'mixed_precision': False, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'input_size': 64, 'output_size': 128, 'dense_nodes': 1536, 'complexity_encoder': 128, 'complexity_decoder': 512}}
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _update_legacy_config          DEBUG    Checking for legacy state file update
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _update_legacy_config          DEBUG    Legacy item 'dssim_loss' not in config. Skipping update
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _update_legacy_config          DEBUG    State file updated for legacy config: False
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _replace_config                DEBUG    Replacing config. Old config: {'centering': 'face', 'coverage': 68.75, 'optimizer': 'adam', 'learning_rate': 5e-05, 'allow_growth': False, 'mixed_precision': False, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'input_size': 64, 'output_size': 128, 'dense_nodes': 1536, 'complexity_encoder': 128, 'complexity_decoder': 512}
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _replace_config                DEBUG    Replaced config. New config: {'centering': 'face', 'coverage': 68.75, 'optimizer': 'adam', 'learning_rate': 5e-05, 'allow_growth': False, 'mixed_precision': False, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'input_size': 64, 'output_size': 128, 'dense_nodes': 1536, 'complexity_encoder': 128, 'complexity_decoder': 512}
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _replace_config                INFO     Using configuration saved in state file
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _new_session_id                DEBUG    3
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _create_new_session            DEBUG    Creating new session. id: 3
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initialized State:
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initializing _Settings: (arguments: Namespace(alignments_path='G:\\Movies\\Motion\\FS\\Illaishah\\Source\\Scene\\China student 6. Full bit.lyfanxxxchina - XVIDEOS.fsa', colab=False, color_adjustment='avg-color', configfile=None, exclude_gpus=None, filter=None, frame_ranges=None, func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x000002CD4EED0BB0>>, input_aligned_dir=None, input_dir='G:\\Movies\\Motion\\FS\\Illaishah\\Source\\Scene\\China student 6. Full bit.lyfanxxxchina - XVIDEOS.COM.mp4', jobs=0, keep_unchanged=False, logfile=None, loglevel='INFO', mask_type='extended', model_dir='G:\\Movies\\Motion\\FS\\Illaishah\\Project 1\\RF', nfilter=None, on_the_fly=False, output_dir='G:\\Movies\\Motion\\FS\\Illaishah\\Project 1', output_scale=100, redirect_gui=True, ref_threshold=0.4, reference_video='G:\\Movies\\Motion\\FS\\Illaishah\\Source\\Her\\B.mp4', singleprocess=False, swap_model=True, trainer='realface', writer='ffmpeg'), mixed_precision: False, allow_growth: False, is_predict: True)
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _set_keras_mixed_precision     DEBUG    use_mixed_precision: False, exclude_gpus: False
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _set_keras_mixed_precision     DEBUG    Not enabling 'mixed_precision' (backend: amd, use_mixed_precision: False)
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _get_strategy                  DEBUG    Using strategy: None
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initialized _Settings
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initializing _Loss
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initialized: _Loss
02/07/2021 09:51:30 MainProcess     MainThread                     _base           __init__                       DEBUG    Initialized ModelBase (Model)
02/07/2021 09:51:30 MainProcess     MainThread                     realface        check_input_output             DEBUG    Input and output sizes are valid
02/07/2021 09:51:30 MainProcess     MainThread                     realface        get_dense_width_upscalers_numbers DEBUG    dense_width: 4, upscalers_no: 5
02/07/2021 09:51:30 MainProcess     MainThread                     _base           strategy_scope                 DEBUG    Using strategy scope: <contextlib.nullcontext object at 0x000002CD5F3550D0>
02/07/2021 09:51:30 MainProcess     MainThread                     _base           _load                          DEBUG    Loading model: G:\Movies\Motion\FS\Illaishah\Project 1\RF\realface.h5
02/07/2021 09:51:30 MainProcess     MainThread                     library         _logger_callback               INFO     Opening device "opencl_amd_gfx900.0"
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _load                          INFO     Loaded model from disk: 'G:\Movies\Motion\FS\Illaishah\Project 1\RF\realface.h5'
02/07/2021 09:51:33 MainProcess     MainThread                     _base           __init__                       DEBUG    Initializing: _Inference (saved_model: <keras.engine.training.Model object at 0x000002CD5F355F40>, switch_sides: True)
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _make_inference_model          DEBUG    Compiling inference model. saved_model: <keras.engine.training.Model object at 0x000002CD5F355F40>
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _get_filtered_structure        DEBUG    Model structure: OrderedDict([('decoder_a', [('encoder', 0)]), ('encoder', [('face_in_b', 0)]), ('face_in_b', [])])
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _get_inputs                    DEBUG    model inputs: [<tile.Value face_in_a Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>, <tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>], input_split: 1, start_idx: 1, inference_inputs: [<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _make_inference_model          DEBUG    Skipping unused layer: 'face_in_a'
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _make_inference_model          DEBUG    Processing layer 'face_in_b': (layer: <keras.engine.input_layer.InputLayer object at 0x000002CD5F355D90>, inbound_nodes: [])
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _make_inference_model          DEBUG    Adding model inputs face_in_b: [<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]
02/07/2021 09:51:33 MainProcess     MainThread                     _base           KerasModel                     DEBUG    Flattening inputs ([<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]) and outputs ([<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]) for AMD
02/07/2021 09:51:33 MainProcess     MainThread                     _base           KerasModel                     DEBUG    Flattened inputs ([<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]) and outputs ([<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>])
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _make_inference_model          DEBUG    Processing layer 'encoder': (layer: <keras.engine.training.Model object at 0x000002CD60B7EFD0>, inbound_nodes: [('face_in_b', 0)])
02/07/2021 09:51:33 MainProcess     MainThread                     _base           _make_inference_model          DEBUG    Compiling layer 'encoder': layer inputs: [[<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]]
Traceback (most recent call last):
  File "C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 279, in assert_input_compatibility
    K.is_keras_tensor(x)
  File "C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\plaidml\keras\backend.py", line 59, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\plaidml\keras\backend.py", line 927, in is_keras_tensor
    raise ValueError('Unexpectedly found an instance of type `' + str(type(x)) + '`. '
ValueError: Unexpectedly found an instance of type `<class 'list'>`. Expected a symbolic tensor instance.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\phant\faceswap\lib\cli\launcher.py", line 181, in execute_script
    process = script(arguments)
  File "C:\Users\phant\faceswap\scripts\convert.py", line 65, in __init__
    self._predictor = Predict(self._disk_io.load_queue, self._queue_size, arguments)
  File "C:\Users\phant\faceswap\scripts\convert.py", line 661, in __init__
    self._model = self._load_model()
  File "C:\Users\phant\faceswap\scripts\convert.py", line 746, in _load_model
    model.build()
  File "C:\Users\phant\faceswap\plugins\train\model\_base.py", line 260, in build
    inference = _Inference(model, self._args.swap_model)
  File "C:\Users\phant\faceswap\plugins\train\model\_base.py", line 1377, in __init__
    self._model = self._make_inference_model(saved_model)
  File "C:\Users\phant\faceswap\plugins\train\model\_base.py", line 1447, in _make_inference_model
    model = layer(layer_inputs)
  File "C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 440, in __call__
    self.assert_input_compatibility(inputs)
  File "C:\Users\phant\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\base_layer.py", line 281, in assert_input_compatibility
    raise ValueError('Layer ' + self.name + ' was called with '
ValueError: Layer encoder was called with an input that isn't a symbolic tensor. Received type: <class 'list'>. Full input: [[<tile.Value face_in_b Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 64, 64, 3)>]]. All inputs to the layer should be tensors.

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         cd24f25 Extract - Always output faces as .png. daaf6d4 Merge branch 'master' into staging. f32d714 lib.model.nn_blocks - Maintenance   - Add additional activation functions   - Add custom upscale block. 27a7adb Update README.md. 1d8c3c4 Merge branch 'master' into staging
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - gfx900 (experimental), GPU_1: Advanced Micro Devices, Inc. - gfx900 (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['3188.4 (PAL,HSAIL)', '3188.4 (PAL,HSAIL)']
gpu_vram:            GPU_0: 8176MB, GPU_1: 8176MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\phant\faceswap\faceswap.py convert -i G:/Movies/Motion/FS/Illaishah/Source/Scene/China student 6. Full bit.lyfanxxxchina - XVIDEOS.COM.mp4 -o G:/Movies/Motion/FS/Illaishah/Project 1 -al G:/Movies/Motion/FS/Illaishah/Source/Scene/China student 6. Full bit.lyfanxxxchina - XVIDEOS.fsa -ref G:/Movies/Motion/FS/Illaishah/Source/Her/B.mp4 -m G:/Movies/Motion/FS/Illaishah/Project 1/RF -c avg-color -M extended -w ffmpeg -osc 100 -l 0.4 -j 0 -t realface -s -L INFO -gui
py_conda_version:    conda 4.9.2
py_implementation:   CPython
py_version:          3.8.5
py_virtual_env:      True
sys_cores:           16
sys_processor:       AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram:             Total: 16334MB, Available: 9823MB, Used: 6511MB, Free: 9823MB

=============== Pip Packages ===============
absl-py==0.11.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.11.8
cffi==1.14.4
chardet==3.0.4
cycler==0.10.0
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.3.3
google-auth==1.23.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.34.0
h5py==2.10.0
idna==2.10
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1601912903842/work
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1604014703538/work
Markdown==3.3.3
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.2.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.4.0.46
opt-einsum==3.3.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1603823068645/work
plaidml==0.7.0
plaidml-keras==0.7.0
protobuf==3.14.0
psutil @ file:///C:/ci/psutil_1598370330503/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
python-dateutil==2.8.1
pywin32==227
PyYAML==5.3.1
requests==2.25.0
requests-oauthlib==1.3.0
rsa==4.6
scikit-learn @ file:///C:/ci/scikit-learn_1598377018496/work
scipy @ file:///C:/ci/scipy_1604596260408/work
sip==4.19.13
six @ file:///C:/ci/six_1605187374963/work
tensorboard==2.2.2
tensorboard-plugin-wit==1.7.0
tensorflow==2.2.1
tensorflow-estimator==2.2.0
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado @ file:///C:/ci/tornado_1606942392901/work
tqdm @ file:///tmp/build/80754af9/tqdm_1606938474023/work
urllib3==1.26.2
Werkzeug==1.0.1
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\phant\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   0.11.0                   pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
ca-certificates 2020.10.14 0
cachetools 4.1.1 pypi_0 pypi
certifi 2020.11.8 py38haa95532_0
cffi 1.14.4 pypi_0 pypi
chardet 3.0.4 pypi_0 pypi
cycler 0.10.0 py38_0
enum34 1.1.10 pypi_0 pypi
fastcluster 1.1.26 py38h251f6bf_2 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.4 hd328e21_0
gast 0.3.3 pypi_0 pypi
git 2.23.0 h6bb4b03_0
google-auth 1.23.0 pypi_0 pypi
google-auth-oauthlib 0.4.2 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.34.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 pypi_0 pypi
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.2 py_0 conda-forge
intel-openmp 2020.2 254
joblib 0.17.0 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 pypi_0 pypi
keras-applications 1.0.8 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.3.0 py38hd77b12b_0
libpng 1.6.37 h2a8f88b_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.2 hf4a77e7_3
markdown 3.3.3 pypi_0 pypi
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py38h64f37c6_0
mkl 2020.2 256
mkl-service 2.3.0 py38h196d8e1_0
mkl_fft 1.2.0 py38h45dec08_0
mkl_random 1.1.1 py38h47e9c7a_0
numpy 1.18.5 pypi_0 pypi
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py_0
opencv-python 4.4.0.46 pypi_0 pypi
openssl 1.1.1h he774522_0
opt-einsum 3.3.0 pypi_0 pypi
pathlib 1.0.1 py_1
pillow 8.0.1 py38h4fa10fc_0
pip 20.3 py38haa95532_0
plaidml 0.7.0 pypi_0 pypi
plaidml-keras 0.7.0 pypi_0 pypi
protobuf 3.14.0 pypi_0 pypi
psutil 5.7.2 py38he774522_0
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.20 pypi_0 pypi
pyparsing 2.4.7 py_0
pyqt 5.9.2 py38ha925a31_4
python 3.8.5 h5fd99cc_1
python-dateutil 2.8.1 py_0
python_abi 3.8 1_cp38 conda-forge
pywin32 227 py38he774522_1
pyyaml 5.3.1 pypi_0 pypi
qt 5.9.7 vc14h73c81de_0
requests 2.25.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.6 pypi_0 pypi
scikit-learn 0.23.2 py38h47e9c7a_0
scipy 1.5.2 py38h14eb087_0
setuptools 50.3.2 py38haa95532_2
sip 4.19.13 py38ha925a31_0
six 1.15.0 py38haa95532_0
sqlite 3.33.0 h2a8f88b_0
tensorboard 2.2.2 pypi_0 pypi
tensorboard-plugin-wit 1.7.0 pypi_0 pypi
tensorflow 2.2.1 pypi_0 pypi
tensorflow-estimator 2.2.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 he774522_0
tornado 6.1 py38h2bbff1b_0
tqdm 4.54.0 pyhd3eb1b0_0
urllib3 1.26.2 pypi_0 pypi
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_3
werkzeug 1.0.1 pypi_0 pypi
wheel 0.36.0 pyhd3eb1b0_0
wincertstore 0.2 py38_0
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h62dcd97_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.5 h04227a9_0

================= Configs ==================
--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: gaussian
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: gaussian
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 0
preset: veryslow
tune: none
profile: auto
level: auto
skip_mux: False

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
centering: face
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
reflect_padding: False
allow_growth: False
mixed_precision: False
convert_batchsize: 16

[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False

[model.dfaker]
output_size: 128

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
disable_warp: False
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
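
The key part of the log above is the '_make_inference_model' step: the encoder sub-model is called with a nested list ([[<tile.Value face_in_b ...>]]) rather than a symbolic tensor, and Keras' input check rejects it. As a rough illustration only (a minimal sketch assuming standalone Keras 2.2.x, as in the pip list above, with made-up layer names; this is not faceswap's own code), the same class of error can be reproduced like this:

Code:

# Minimal sketch, assuming standalone Keras 2.2.x; the "encoder" model here is
# a stand-in, not faceswap's encoder.
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(64, 64, 3), name="face_in_b")
encoder = Model(inp, Dense(16)(inp), name="encoder")

new_in = Input(shape=(64, 64, 3))
encoder(new_in)      # fine: called with a symbolic tensor
encoder([new_in])    # fine: a flat list of tensors
encoder([[new_in]])  # ValueError: Layer encoder was called with an input
                     # that isn't a symbolic tensor (a nested list slipped in)

In this thread the nested list comes from faceswap's inference-model builder on the AMD/plaidml path (see plugins\train\model\_base.py in the traceback), not from any user setting, so there is nothing to change in the model configuration; see the replies below.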

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 623 times

Re: Critical: An unexpected crash occurred.

Post by torzdf » Sun Feb 07, 2021 2:19 am

This is an AMD specific bug that I am looking to fix over the weekend.

My word is final

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 623 times

Re: Critical: An unexpected crash occurred.

Post by torzdf »

This should now be fixed. Please update.

My word is final
