Crash while trying to train - OSError: Unable to open file (truncated file)

If training fails to start and you are not receiving an error message telling you what to do, tell us about it here.


85lesbian
Posts: 7
Joined: Wed Feb 10, 2021 3:58 am


Post by 85lesbian

Code:

06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'freeze_layers', datatype: '<class 'list'>', default: 'keras_encoder', info: 'If the command line option 'freeze-weights' is enabled, then the layers indicated here will be frozen the next time the model starts up. NB: Not all architectures contain all of the layers listed here, so any layers marked for freezing that are not within your chosen architecture will be ignored. EG:\n If 'split fc' has been selected, then 'fc_a' and 'fc_b' are available for freezing. If it has not been selected then 'fc_both' is available for freezing.', rounding: 'None', min_max: None, choices: ['encoder', 'keras_encoder', 'fc_a', 'fc_b', 'fc_both', 'fc_shared', 'fc_gblock', 'g_block_a', 'g_block_b', 'g_block_both', 'decoder_a', 'decoder_b', 'decoder_both'], gui_radio: False, fixed: False, group: weights)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'load_layers', datatype: '<class 'list'>', default: 'encoder', info: 'If the command line option 'load-weights' is populated, then the layers indicated here will be loaded from the given weights file if starting a new model. NB Not all architectures contain all of the layers listed here, so any layers marked for loading that are not within your chosen architecture will be ignored. EG:\n If 'split fc' has been selected, then 'fc_a' and 'fc_b' are available for loading. If it has not been selected then 'fc_both' is available for loading.', rounding: 'None', min_max: None, choices: ['encoder', 'fc_a', 'fc_b', 'fc_both', 'fc_shared', 'fc_gblock', 'g_block_a', 'g_block_b', 'g_block_both', 'decoder_a', 'decoder_b', 'decoder_both'], gui_radio: False, fixed: True, group: weights)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'fs_original_depth', datatype: '<class 'int'>', default: '4', info: 'Faceswap Encoder only: The number of convolutions to perform within the encoder.', rounding: '1', min_max: (2, 10), choices: None, gui_radio: False, fixed: True, group: faceswap encoder configuration)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'fs_original_min_filters', datatype: '<class 'int'>', default: '128', info: 'Faceswap Encoder only: The minumum number of filters to use for encoder convolutions. (i.e. the number of filters to use for the first encoder layer).', rounding: '64', min_max: (64, 2048), choices: None, gui_radio: False, fixed: True, group: faceswap encoder configuration)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'fs_original_max_filters', datatype: '<class 'int'>', default: '1024', info: 'Faceswap Encoder only: The maximum number of filters to use for encoder convolutions. (i.e. the number of filters to use for the final encoder layer).', rounding: '128', min_max: (256, 8192), choices: None, gui_radio: False, fixed: True, group: faceswap encoder configuration)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'mobilenet_width', datatype: '<class 'float'>', default: '1.0', info: 'The width multiplier for mobilenet encoders. Controls the width of the network. Values less than 1.0 proportionally decrease the number of filters within each layer. Values greater than 1.0 proportionally increase the number of filters within each layer. 1.0 is the default number of layers used within the paper.\nNB: This option is ignored for any non-mobilenet encoders.\nNB: If loading ImageNet weights, then for mobilenet v1 only values of '0.25', '0.5', '0.75' or '1.0 can be selected. For mobilenet v2 only values of '0.35', '0.50', '0.75', '1.0', '1.3' or '1.4' can be selected', rounding: '2', min_max: (0.1, 2.0), choices: None, gui_radio: False, fixed: True, group: mobilenet encoder configuration)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'mobilenet_depth', datatype: '<class 'int'>', default: '1', info: 'The depth multiplier for mobilenet v1 encoder. This is the depth multiplier for depthwise convolution (known as the resolution multiplier within the original paper).\nNB: This option is only used for mobilenet v1 and is ignored for all other encoders.\nNB: If loading ImageNet weights, this must be set to 1.', rounding: '1', min_max: (1, 10), choices: None, gui_radio: False, fixed: True, group: mobilenet encoder configuration)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.phaze_a', title: 'mobilenet_dropout', datatype: '<class 'float'>', default: '0.001', info: 'The dropout rate for for mobilenet v1 encoder.\nNB: This option is only used for mobilenet v1 and is ignored for all other encoders.\nNB: If loading ImageNet weights, this must be set to 1.0.', rounding: '2', min_max: (0.1, 2.0), choices: None, gui_radio: False, fixed: True, group: mobilenet encoder configuration)
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.phaze_a
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.model.realface_defaults
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n')
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.realface
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n')
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.unbalanced
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.model.villain_defaults
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Don't try to run this if you have a small GPU.\n')
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.villain
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
06/17/2021 09:09:30 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: trainer.original
06/17/2021 09:09:30 MainProcess     _training_0                    config          handle_config                  DEBUG    Handling config: (section: model.original, configfile: 'C:\Users\denye\faceswap\config\train.ini')
06/17/2021 09:09:30 MainProcess     _training_0                    config          check_exists                   DEBUG    Config file exists: 'C:\Users\denye\faceswap\config\train.ini'
06/17/2021 09:09:30 MainProcess     _training_0                    config          load_config                    VERBOSE  Loading config: 'C:\Users\denye\faceswap\config\train.ini'
06/17/2021 09:09:30 MainProcess     _training_0                    config          validate_config                DEBUG    Validating config
06/17/2021 09:09:30 MainProcess     _training_0                    config          check_config_change            DEBUG    Default config has not changed
06/17/2021 09:09:30 MainProcess     _training_0                    config          check_config_choices           DEBUG    Checking config choices
06/17/2021 09:09:30 MainProcess     _training_0                    config          _parse_list                    DEBUG    Processed raw option 'keras_encoder' to list ['keras_encoder'] for section 'model.phaze_a', option 'freeze_layers'
06/17/2021 09:09:30 MainProcess     _training_0                    config          _parse_list                    DEBUG    Processed raw option 'encoder' to list ['encoder'] for section 'model.phaze_a', option 'load_layers'
06/17/2021 09:09:30 MainProcess     _training_0                    config          check_config_choices           DEBUG    Checked config choices
06/17/2021 09:09:30 MainProcess     _training_0                    config          validate_config                DEBUG    Validated config
06/17/2021 09:09:30 MainProcess     _training_0                    config          handle_config                  DEBUG    Handled config
06/17/2021 09:09:30 MainProcess     _training_0                    config          __init__                       DEBUG    Initialized: Config
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'epsilon_exponent')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: -7)
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'allow_growth')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'bool'>, value: False)
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'nan_protection')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'bool'>, value: True)
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'convert_batchsize')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 16)
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global.loss', option: 'eye_multiplier')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 3)
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global.loss', option: 'mouth_multiplier')
06/17/2021 09:09:30 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 2)
06/17/2021 09:09:30 MainProcess     _training_0                    config          changeable_items               DEBUG    Alterable for existing models: {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing State: (model_dir: 'C:\Users\denye\Videos\Captures\%\ModelfaceAB', model_name: 'original', config_changeable_items: '{'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}', no_logs: False
06/17/2021 09:09:30 MainProcess     _training_0                    serializer      get_serializer                 DEBUG    <lib.serializer._JSONSerializer object at 0x000002A860D54070>
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _load                          DEBUG    Loading State
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _load                          INFO     No existing state file found. Generating.
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _new_session_id                DEBUG    1
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _create_new_session            DEBUG    Creating new session. id: 1
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized State:
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Settings: (arguments: Namespace(batch_size=16, colab=False, configfile=None, distributed=False, exclude_gpus=None, freeze_weights=False, func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x000002A859758B80>>, input_a='C:\\Users\\denye\\Videos\\Captures\\%\\Face a', input_b='C:\\Users\\denye\\Videos\\Captures\\%\\FaceCC', iterations=1000000, load_weights=None, logfile=None, loglevel='INFO', model_dir='C:\\Users\\denye\\Videos\\Captures\\%\\ModelfaceAB', no_augment_color=False, no_flip=False, no_logs=False, no_warp=False, preview=True, preview_scale=100, redirect_gui=True, save_interval=10, snapshot_interval=5000, summary=False, timelapse_input_a='C:\\Users\\denye\\Videos\\Captures\\%\\Face a', timelapse_input_b='C:\\Users\\denye\\Videos\\Captures\\%\\FaceCC', timelapse_output='C:\\Users\\denye\\Videos\\Captures\\%\\TimelapseAB', trainer='original', warp_to_landmarks=False, write_image=True), mixed_precision: False, allow_growth: False, is_predict: False)
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _set_tf_settings               VERBOSE  Hiding GPUs from Tensorflow
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _set_keras_mixed_precision     DEBUG    use_mixed_precision: False, exclude_gpus: False
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _set_keras_mixed_precision     DEBUG    Not enabling 'mixed_precision' (backend: cpu, use_mixed_precision: False)
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _get_strategy                  DEBUG    Using strategy: None
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Settings
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Loss
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized: _Loss
06/17/2021 09:09:30 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized ModelBase (Model)
06/17/2021 09:09:30 MainProcess     _training_0                    _base           strategy_scope                 DEBUG    Using strategy scope: <contextlib.nullcontext object at 0x000002A860E6D220>
06/17/2021 09:09:30 MainProcess     _training_0                    _base           _load                          DEBUG    Loading model: C:\Users\denye\Videos\Captures\%\ModelfaceAB\original.h5
06/17/2021 09:09:30 MainProcess     _training_0                    multithreading  run                            DEBUG    Error in thread (_training_0): Unable to open file (truncated file: eof = 152043520, sblock->base_addr = 0, stored_eof = 328304912)
06/17/2021 09:09:32 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
06/17/2021 09:09:32 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
06/17/2021 09:09:32 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
06/17/2021 09:09:32 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
06/17/2021 09:09:32 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
06/17/2021 09:09:32 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training_0'
06/17/2021 09:09:32 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\denye\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\denye\faceswap\scripts\train.py", line 190, in process
    self._end_thread(thread, err)
  File "C:\Users\denye\faceswap\scripts\train.py", line 230, in _end_thread
    thread.join()
  File "C:\Users\denye\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\denye\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\denye\faceswap\scripts\train.py", line 252, in _training
    raise err
  File "C:\Users\denye\faceswap\scripts\train.py", line 240, in _training
    model = self._load_model()
  File "C:\Users\denye\faceswap\scripts\train.py", line 268, in _load_model
    model.build()
  File "C:\Users\denye\faceswap\plugins\train\model\_base.py", line 286, in build
    model = self._io._load()  # pylint:disable=protected-access
  File "C:\Users\denye\faceswap\plugins\train\model\_base.py", line 556, in _load
    model = load_model(self._filename, compile=False)
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\saving\save.py", line 182, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 166, in load_model_from_hdf5
    f = h5py.File(filepath, mode='r')
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\files.py", line 406, in __init__
    fid = make_fid(name, mode, userblock_size,
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (truncated file: eof = 152043520, sblock->base_addr = 0, stored_eof = 328304912)

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         55bb723 New Model: Phaze-A. 0775245 Bugfix - Manual Tool   - Fix bug when adding new face with "misaligned" filter applied. 104a549 Bugfix - Manual Tool   - Fix bug when changing filter modes from a filter with no matches. fb6f576 Bugfix - Manual Tool   - Fix non-appearing landmark annotations in face viewer. cbf64de Typofix
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         
gpu_devices_active:
gpu_driver:          No Nvidia driver found
gpu_vram:            
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\denye\faceswap\faceswap.py train -A C:/Users/denye/Videos/Captures/%/Face a -B C:/Users/denye/Videos/Captures/%/FaceCC -m C:/Users/denye/Videos/Captures/%/ModelfaceAB -t original -bs 16 -it 1000000 -s 10 -ss 5000 -tia C:/Users/denye/Videos/Captures/%/Face a -tib C:/Users/denye/Videos/Captures/%/FaceCC -to C:/Users/denye/Videos/Captures/%/TimelapseAB -ps 100 -p -w -L INFO -gui
py_conda_version:    conda 4.9.2
py_implementation:   CPython
py_version:          3.8.8
py_virtual_env:      True
sys_cores:           4
sys_processor:       Intel64 Family 6 Model 61 Stepping 4, GenuineIntel
sys_ram:             Total: 8105MB, Available: 3318MB, Used: 4787MB, Free: 3318MB

=============== Pip Packages ===============
absl-py @ file:///C:/ci/absl-py_1615411229697/work
aiohttp @ file:///C:/ci/aiohttp_1614361024229/work
astunparse==1.6.3
async-timeout==3.0.1
attrs @ file:///tmp/build/80754af9/attrs_1604765588209/work
blinker==1.4
brotlipy==0.7.0
cachetools @ file:///tmp/build/80754af9/cachetools_1611600262290/work
certifi==2020.12.5
cffi @ file:///C:/ci/cffi_1613247279197/work
chardet @ file:///C:/ci/chardet_1605303225733/work
click @ file:///home/linux1/recipes/ci/click_1610990599742/work
coverage @ file:///C:/ci/coverage_1614615074147/work
cryptography @ file:///C:/ci/cryptography_1616769344312/work
cycler==0.10.0
Cython @ file:///C:/ci/cython_1614014958194/work
fastcluster==1.1.26
ffmpy==0.2.3
gast @ file:///tmp/build/80754af9/gast_1597433534803/work
google-auth @ file:///tmp/build/80754af9/google-auth_1616008050444/work
google-auth-oauthlib @ file:///tmp/build/80754af9/google-auth-oauthlib_1617120569401/work
google-pasta==0.2.0
grpcio @ file:///C:/ci/grpcio_1614884412260/work
h5py==2.10.0
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1609799311556/work
importlib-metadata @ file:///C:/ci/importlib-metadata_1615900494248/work
joblib @ file:///tmp/build/80754af9/joblib_1613502643832/work
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing @ file:///tmp/build/80754af9/keras-preprocessing_1612283640596/work
kiwisolver @ file:///C:/ci/kiwisolver_1612282606037/work
Markdown @ file:///C:/ci/markdown_1614364121613/work
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.3.0
mkl-random==1.1.1
mkl-service==2.3.0
multidict @ file:///C:/ci/multidict_1607362065515/work
numpy @ file:///C:/ci/numpy_and_numpy_base_1603466732592/work
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.5.1.48
opt-einsum==3.1.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1615224342392/work
protobuf==3.14.0
psutil @ file:///C:/ci/psutil_1612298324802/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
PyJWT==1.7.1
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing @ file:///home/linux1/recipes/ci/pyparsing_1610983426697/work
pyreadline==2.1
PySocks @ file:///C:/ci/pysocks_1605287845585/work
python-dateutil @ file:///home/ktietz/src/ci/python-dateutil_1611928101742/work
pywin32==227
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
requests-oauthlib==1.3.0
rsa @ file:///tmp/build/80754af9/rsa_1614366226499/work
scikit-learn @ file:///C:/ci/scikit-learn_1614446896245/work
scipy @ file:///C:/ci/scipy_1616703433439/work
sip==4.19.13
six @ file:///C:/ci/six_1605187374963/work
tensorboard @ file:///home/builder/ktietz/aggregate/tensorflow_recipes/ci_te/tensorboard_1614593728657/work/tmp_pip_dir
tensorboard-plugin-wit==1.6.0
tensorflow==2.3.0
tensorflow-estimator @ file:///tmp/build/80754af9/tensorflow-estimator_1599136169057/work/whl_temp/tensorflow_estimator-2.3.0-py2.py3-none-any.whl
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado @ file:///C:/ci/tornado_1606942392901/work
tqdm @ file:///tmp/build/80754af9/tqdm_1615925068909/work
typing-extensions @ file:///tmp/build/80754af9/typing_extensions_1611751222202/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1615837158687/work
Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306167264/work
wincertstore==0.2
wrapt==1.12.1
yarl @ file:///C:/ci/yarl_1606940076464/work
zipp @ file:///tmp/build/80754af9/zipp_1615904174917/work

============== Conda Packages ==============
# packages in environment at C:\Users\denye\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.3.0 eigen
absl-py 0.12.0 py38haa95532_0
aiohttp 3.7.4 py38h2bbff1b_1
astunparse 1.6.3 py_0
async-timeout 3.0.1 py38haa95532_0
attrs 20.3.0 pyhd3eb1b0_0
blas 1.0 mkl
blinker 1.4 py38haa95532_0
brotlipy 0.7.0 py38h2bbff1b_1003
ca-certificates 2021.1.19 haa95532_1
cachetools 4.2.1 pyhd3eb1b0_0
certifi 2020.12.5 py38haa95532_0
cffi 1.14.5 py38hcd4344a_0
chardet 3.0.4 py38haa95532_1003
click 7.1.2 pyhd3eb1b0_0
coverage 5.5 py38h2bbff1b_2
cryptography 3.4.7 py38h71e12ea_0
cycler 0.10.0 py38_0
cython 0.29.22 py38hd77b12b_0
fastcluster 1.1.26 py38h251f6bf_2 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.4 hd328e21_0
gast 0.4.0 py_0
git 2.23.0 h6bb4b03_0
google-auth 1.28.0 pyhd3eb1b0_0
google-auth-oauthlib 0.4.4 pyhd3eb1b0_0
google-pasta 0.2.0 py_0
grpcio 1.36.1 py38hc60d5dd_1
h5py 2.10.0 py38h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 pyhd3eb1b0_0
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.3 pyhd8ed1ab_0 conda-forge
importlib-metadata 3.7.3 py38haa95532_1
intel-openmp 2020.2 254
joblib 1.0.1 pyhd3eb1b0_0
jpeg 9b hb83a4c4_2
keras-applications 1.0.8 py_1
keras-preprocessing 1.1.2 pyhd3eb1b0_0
kiwisolver 1.3.1 py38hd77b12b_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.14.0 h23ce68f_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.3 h2bbff1b_0
markdown 3.3.4 py38haa95532_0
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py38h64f37c6_0
mkl 2020.2 256
mkl-service 2.3.0 py38h196d8e1_0
mkl_fft 1.3.0 py38h46781fe_0
mkl_random 1.1.1 py38h47e9c7a_0
multidict 5.1.0 py38h2bbff1b_2
numpy 1.19.2 py38hadc3359_0
numpy-base 1.19.2 py38ha3acd2a_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py_0
opencv-python 4.5.1.48 pypi_0 pypi
openssl 1.1.1k h2bbff1b_0
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py_1
pillow 8.1.2 py38h4fa10fc_0
pip 21.0.1 py38haa95532_0
protobuf 3.14.0 py38hd77b12b_1
psutil 5.8.0 py38h2bbff1b_1
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.8 py_0
pycparser 2.20 py_2
pyjwt 1.7.1 py38_0
pyopenssl 20.0.1 pyhd3eb1b0_1
pyparsing 2.4.7 pyhd3eb1b0_0
pyqt 5.9.2 py38ha925a31_4
pyreadline 2.1 py38_1
pysocks 1.7.1 py38haa95532_0
python 3.8.8 hdbf39b2_4
python-dateutil 2.8.1 pyhd3eb1b0_0
python_abi 3.8 1_cp38 conda-forge
pywin32 227 py38he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.25.1 pyhd3eb1b0_0
requests-oauthlib 1.3.0 py_0
rsa 4.7.2 pyhd3eb1b0_1
scikit-learn 0.24.1 py38hf11a4ad_0
scipy 1.6.2 py38h14eb087_0
setuptools 52.0.0 py38haa95532_0
sip 4.19.13 py38ha925a31_0
six 1.15.0 py38haa95532_0
sqlite 3.35.3 h2bbff1b_0
tensorboard 2.4.0 pyhc547734_0
tensorboard-plugin-wit 1.6.0 py_0
tensorflow 2.3.0 mkl_py38h8c0d9a2_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.3.0 pyheb71bc4_0
termcolor 1.1.0 py38haa95532_1
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 he774522_0
tornado 6.1 py38h2bbff1b_0
tqdm 4.59.0 pyhd3eb1b0_1
typing-extensions 3.7.4.3 hd3eb1b0_0
typing_extensions 3.7.4.3 pyh06a4308_0
urllib3 1.26.4 pyhd3eb1b0_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 1.0.1 pyhd3eb1b0_0
wheel 0.36.2 pyhd3eb1b0_0
win_inet_pton 1.1.0 py38haa95532_0
wincertstore 0.2 py38_0
wrapt 1.12.1 py38he774522_1
xz 5.2.5 h62dcd97_0
yarl 1.6.3 py38h2bbff1b_0
zipp 3.4.1 pyhd3eb1b0_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.9 h19a0ad4_0

================= Configs ==================
--------- .faceswap ---------
backend: cpu

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
centering: face
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
reflect_padding: False
allow_growth: False
mixed_precision: False
nan_protection: True
convert_batchsize: 16

[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False

[model.dfaker]
output_size: 128

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.phaze_a]
output_size: 128
shared_fc: none
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: fs_original
enc_scaling: 40
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: none
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: subpixel
dec_norm: none
dec_min_filters: 64
dec_max_filters: 512
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4

I tried restoring the model, but it said the model does not exist in the given file.
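For what it's worth, the eof / stored_eof figures in the error message show how much of the model file actually made it to disk. A quick sketch in plain Python (the message string is copied verbatim from the traceback above):

```python
import re

# The OSError from the traceback, copied verbatim.
msg = ("OSError: Unable to open file (truncated file: eof = 152043520, "
       "sblock->base_addr = 0, stored_eof = 328304912)")

# eof = bytes actually on disk; stored_eof = bytes the HDF5 superblock expects.
on_disk, expected = (int(n) for n in re.findall(r"(?:stored_)?eof = (\d+)", msg))
print(f"{on_disk / expected:.0%} of the expected file is present")  # 46%
```

So less than half of the roughly 313 MB model was written before the save was cut off.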


User avatar
torzdf
Posts: 1495
Joined: Fri Jul 12, 2019 12:53 am
Answers: 127
Has thanked: 51 times
Been thanked: 287 times

Re: Crash while trying to train - OSError: Unable to open file (truncated file)

Post by torzdf »

This is a corrupted model file. It can happen if you run out of disk space, or if the process terminates unexpectedly while saving.

If the restore tool doesn't work, then just roll back to an earlier snapshot.
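A minimal, stdlib-only sketch for anyone who wants to sanity-check a saved .h5 before pointing the restore tool at it. Note that a truncated file can still pass this check, because the signature sits at byte 0, so a clean h5py open remains the definitive test:

```python
# Check that a file at least begins with the 8-byte HDF5 superblock
# signature. Stdlib only; the path you pass is your own model file.
HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"

def has_hdf5_signature(path):
    """Return True if `path` starts with the HDF5 signature bytes."""
    with open(path, "rb") as fh:
        return fh.read(len(HDF5_SIGNATURE)) == HDF5_SIGNATURE
```

If this returns False, the file never even got a valid header and a snapshot rollback is the only option.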

My word is final


User avatar
85lesbian
Posts: 7
Joined: Wed Feb 10, 2021 3:58 am

Failed to start training

Post by 85lesbian »

Code: Select all

07/22/2021 22:20:27 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n')
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.realface
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n')
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.unbalanced
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.model.villain_defaults
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Don't try to run this if you have a small GPU.\n')
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: model.villain
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_section                    DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          add_item                       DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
07/22/2021 22:20:27 MainProcess     _training_0                    config          _load_defaults_from_module     DEBUG    Added defaults: trainer.original
07/22/2021 22:20:27 MainProcess     _training_0                    config          handle_config                  DEBUG    Handling config: (section: model.original, configfile: 'C:\Users\denye\faceswap\config\train.ini')
07/22/2021 22:20:27 MainProcess     _training_0                    config          check_exists                   DEBUG    Config file exists: 'C:\Users\denye\faceswap\config\train.ini'
07/22/2021 22:20:27 MainProcess     _training_0                    config          load_config                    VERBOSE  Loading config: 'C:\Users\denye\faceswap\config\train.ini'
07/22/2021 22:20:27 MainProcess     _training_0                    config          validate_config                DEBUG    Validating config
07/22/2021 22:20:27 MainProcess     _training_0                    config          check_config_change            DEBUG    Default config has not changed
07/22/2021 22:20:27 MainProcess     _training_0                    config          check_config_choices           DEBUG    Checking config choices
07/22/2021 22:20:27 MainProcess     _training_0                    config          _parse_list                    DEBUG    Processed raw option 'keras_encoder' to list ['keras_encoder'] for section 'model.phaze_a', option 'freeze_layers'
07/22/2021 22:20:27 MainProcess     _training_0                    config          _parse_list                    DEBUG    Processed raw option 'encoder' to list ['encoder'] for section 'model.phaze_a', option 'load_layers'
07/22/2021 22:20:27 MainProcess     _training_0                    config          check_config_choices           DEBUG    Checked config choices
07/22/2021 22:20:27 MainProcess     _training_0                    config          validate_config                DEBUG    Validated config
07/22/2021 22:20:27 MainProcess     _training_0                    config          handle_config                  DEBUG    Handled config
07/22/2021 22:20:27 MainProcess     _training_0                    config          __init__                       DEBUG    Initialized: Config
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'epsilon_exponent')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: -7)
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'allow_growth')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'bool'>, value: False)
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'nan_protection')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'bool'>, value: True)
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global', option: 'convert_batchsize')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 16)
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global.loss', option: 'eye_multiplier')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 3)
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Getting config item: (section: 'global.loss', option: 'mouth_multiplier')
07/22/2021 22:20:27 MainProcess     _training_0                    config          get                            DEBUG    Returning item: (type: <class 'int'>, value: 2)
07/22/2021 22:20:27 MainProcess     _training_0                    config          changeable_items               DEBUG    Alterable for existing models: {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing State: (model_dir: 'C:\Users\denye\Videos\Captures\Solar\ModelFaceAB', model_name: 'original', config_changeable_items: '{'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}', no_logs: False
07/22/2021 22:20:27 MainProcess     _training_0                    serializer      get_serializer                 DEBUG    <lib.serializer._JSONSerializer object at 0x000001AAA3CC46D0>
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _load                          DEBUG    Loading State
07/22/2021 22:20:27 MainProcess     _training_0                    serializer      load                           DEBUG    filename: C:\Users\denye\Videos\Captures\Solar\ModelFaceAB\original_state.json
07/22/2021 22:20:27 MainProcess     _training_0                    serializer      load                           DEBUG    stored data type: <class 'bytes'>
07/22/2021 22:20:27 MainProcess     _training_0                    serializer      unmarshal                      DEBUG    data type: <class 'bytes'>
07/22/2021 22:20:27 MainProcess     _training_0                    serializer      unmarshal                      DEBUG    returned data type: <class 'dict'>
07/22/2021 22:20:27 MainProcess     _training_0                    serializer      load                           DEBUG    data type: <class 'dict'>
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _load                          DEBUG    Loaded state: {'name': 'original', 'sessions': {'1': {'timestamp': 1624208052.830708, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2265, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '2': {'timestamp': 1624294872.001325, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2403, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '3': {'timestamp': 1624466693.656511, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2442, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '4': {'timestamp': 1624554948.3509147, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2493, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '5': {'timestamp': 1624640645.3112664, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2391, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '6': {'timestamp': 1624727911.989967, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2570, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 
'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '7': {'timestamp': 1624814967.424246, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2410, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '8': {'timestamp': 1624901682.5992787, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2274, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '9': {'timestamp': 1624986119.4277413, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 880, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '10': {'timestamp': 1625108888.1690295, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 3, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '11': {'timestamp': 1625160435.104646, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2361, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '12': {'timestamp': 1625245827.3960488, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2418, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '13': {'timestamp': 
1625333057.26791, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2865, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '14': {'timestamp': 1625418142.2556713, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2472, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '15': {'timestamp': 1625505004.8080537, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2145, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '16': {'timestamp': 1625590017.235631, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2571, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '17': {'timestamp': 1625678310.0310345, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 500, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '18': {'timestamp': 1625763843.3335555, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2318, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '19': {'timestamp': 1625848990.578944, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 
'iterations': 2605, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '20': {'timestamp': 1625936232.4417107, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 3400, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '21': {'timestamp': 1626022766.2270312, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2317, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '22': {'timestamp': 1626109268.3799677, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2562, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '23': {'timestamp': 1626195839.1711514, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2595, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '24': {'timestamp': 1626284419.512272, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 1110, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '25': {'timestamp': 1626367525.050002, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2644, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': 
False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '26': {'timestamp': 1626431533.7603688, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 244, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '27': {'timestamp': 1626454650.3116822, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2533, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '28': {'timestamp': 1626542039.993974, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 3841, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '29': {'timestamp': 1626628736.0994782, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2304, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '30': {'timestamp': 1626716424.2215889, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 491, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, '31': {'timestamp': 1626800744.886527, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2891, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}, 
'32': {'timestamp': 1626886710.8271956, 'no_logs': False, 'loss_names': ['total', 'face_a', 'face_b'], 'batchsize': 16, 'iterations': 2515, 'config': {'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'nan_protection': True, 'convert_batchsize': 16, 'eye_multiplier': 3, 'mouth_multiplier': 2}}}, 'lowest_avg_loss': {'a': 0.029950972646474838, 'b': 0.026551460847258568}, 'iterations': 69833, 'config': {'centering': 'face', 'coverage': 68.75, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'lowmem': False}}
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _update_legacy_config          DEBUG    Checking for legacy state file update
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _update_legacy_config          DEBUG    Legacy item 'dssim_loss' not in config. Skipping update
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _update_legacy_config          DEBUG    State file updated for legacy config: False
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _replace_config                DEBUG    Replacing config. Old config: {'centering': 'face', 'coverage': 68.75, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'lowmem': False}
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _replace_config                DEBUG    Replaced config. New config: {'centering': 'face', 'coverage': 68.75, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'lowmem': False}
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _replace_config                INFO     Using configuration saved in state file
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _new_session_id                DEBUG    33
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _create_new_session            DEBUG    Creating new session. id: 33
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized State:
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Settings: (arguments: Namespace(batch_size=16, colab=False, configfile=None, distributed=False, exclude_gpus=None, freeze_weights=False, func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x000001AA9C748B80>>, input_a='C:\\Users\\denye\\Videos\\Captures\\Solar\\Face a', input_b='C:\\Users\\denye\\Videos\\Captures\\Solar\\Face b', iterations=1000000, load_weights=None, logfile=None, loglevel='INFO', model_dir='C:\\Users\\denye\\Videos\\Captures\\Solar\\ModelFaceAB', no_augment_color=False, no_flip=False, no_logs=False, no_warp=False, preview=False, preview_scale=100, redirect_gui=True, save_interval=10, snapshot_interval=5000, summary=False, timelapse_input_a='C:\\Users\\denye\\Videos\\Captures\\Solar\\Face a', timelapse_input_b='C:\\Users\\denye\\Videos\\Captures\\Solar\\Face b', timelapse_output='C:\\Users\\denye\\Videos\\Captures\\Solar\\TimelapseAb', trainer='original', warp_to_landmarks=False, write_image=False), mixed_precision: False, allow_growth: False, is_predict: False)
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _set_tf_settings               VERBOSE  Hiding GPUs from Tensorflow
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _set_keras_mixed_precision     DEBUG    use_mixed_precision: False, exclude_gpus: False
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _set_keras_mixed_precision     DEBUG    Not enabling 'mixed_precision' (backend: cpu, use_mixed_precision: False)
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _get_strategy                  DEBUG    Using strategy: None
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized _Settings
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initializing _Loss
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized: _Loss
07/22/2021 22:20:27 MainProcess     _training_0                    _base           __init__                       DEBUG    Initialized ModelBase (Model)
07/22/2021 22:20:27 MainProcess     _training_0                    _base           strategy_scope                 DEBUG    Using strategy scope: <contextlib.nullcontext object at 0x000001AAA3E5B3A0>
07/22/2021 22:20:27 MainProcess     _training_0                    _base           _load                          DEBUG    Loading model: C:\Users\denye\Videos\Captures\Solar\ModelFaceAB\original.h5
07/22/2021 22:20:27 MainProcess     _training_0                    multithreading  run                            DEBUG    Error in thread (_training_0): Unable to open file (bad object header version number)
07/22/2021 22:20:28 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
07/22/2021 22:20:28 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
07/22/2021 22:20:28 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
07/22/2021 22:20:28 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
07/22/2021 22:20:28 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
07/22/2021 22:20:28 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training_0'
07/22/2021 22:20:28 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\denye\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\denye\faceswap\scripts\train.py", line 190, in process
    self._end_thread(thread, err)
  File "C:\Users\denye\faceswap\scripts\train.py", line 230, in _end_thread
    thread.join()
  File "C:\Users\denye\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\denye\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\denye\faceswap\scripts\train.py", line 252, in _training
    raise err
  File "C:\Users\denye\faceswap\scripts\train.py", line 240, in _training
    model = self._load_model()
  File "C:\Users\denye\faceswap\scripts\train.py", line 268, in _load_model
    model.build()
  File "C:\Users\denye\faceswap\plugins\train\model\_base.py", line 286, in build
    model = self._io._load()  # pylint:disable=protected-access
  File "C:\Users\denye\faceswap\plugins\train\model\_base.py", line 556, in _load
    model = load_model(self._filename, compile=False)
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\saving\save.py", line 182, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 166, in load_model_from_hdf5
    f = h5py.File(filepath, mode='r')
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\files.py", line 406, in __init__
    fid = make_fid(name, mode, userblock_size,
  File "C:\Users\denye\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (bad object header version number)
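[Editor's note] The traceback above ends with h5py refusing to open the saved model file, which usually means the `.h5` save was truncated or corrupted on disk (e.g. by a crash or shutdown mid-save). As a quick first check, a stdlib-only sketch like the following can confirm whether the file still begins with the HDF5 signature. The path is copied from the log purely for illustration; note that a file can pass this check and still be internally corrupted, in which case restoring the model from a backup copy (if one exists alongside it) is the usual fix.

```python
# Hedged diagnostic sketch (stdlib only): every valid HDF5 file starts with
# the 8-byte signature b"\x89HDF\r\n\x1a\n". A missing signature means the
# .h5 save is not a readable HDF5 file at all; an intact signature does NOT
# guarantee the rest of the file is undamaged.
from pathlib import Path

HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path: str) -> bool:
    """Return True if the file exists, is non-empty, and carries the HDF5 signature."""
    p = Path(path)
    if not p.is_file() or p.stat().st_size < len(HDF5_SIGNATURE):
        return False
    with p.open("rb") as fh:
        return fh.read(len(HDF5_SIGNATURE)) == HDF5_SIGNATURE

# Illustrative path taken from the log above:
print(looks_like_hdf5(r"C:\Users\denye\Videos\Captures\Solar\ModelFaceAB\original.h5"))
```

If this prints False, the save file is definitively broken; if it prints True but h5py still raises "bad object header version number", the corruption is deeper inside the file and the save cannot be recovered directly.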

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         ac22d40 extract: mask - Delete any mask from outside of frame boundaries. 55bb723 New Model: Phaze-A. 0775245 Bugfix - Manual Tool   - Fix bug when adding new face with "misaligned" filter applied. 104a549 Bugfix - Manual Tool   - Fix bug when changing filter modes from a filter with no matches. fb6f576 Bugfix - Manual Tool   - Fix non-appearing landmark annotations in face viewer
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         
gpu_devices_active:  
gpu_driver:          No Nvidia driver found
gpu_vram:            
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\denye\faceswap\faceswap.py train -A C:/Users/denye/Videos/Captures/Solar/Face a -B C:/Users/denye/Videos/Captures/Solar/Face b -m C:/Users/denye/Videos/Captures/Solar/ModelFaceAB -t original -bs 16 -it 1000000 -s 10 -ss 5000 -tia C:/Users/denye/Videos/Captures/Solar/Face a -tib C:/Users/denye/Videos/Captures/Solar/Face b -to C:/Users/denye/Videos/Captures/Solar/TimelapseAb -ps 100 -L INFO -gui
py_conda_version:    conda 4.9.2
py_implementation:   CPython
py_version:          3.8.8
py_virtual_env:      True
sys_cores:           4
sys_processor:       Intel64 Family 6 Model 61 Stepping 4, GenuineIntel
sys_ram:             Total: 8105MB, Available: 4154MB, Used: 3951MB, Free: 4154MB

=============== Pip Packages ===============
absl-py @ file:///C:/ci/absl-py_1615411229697/work
aiohttp @ file:///C:/ci/aiohttp_1614361024229/work
astunparse==1.6.3
async-timeout==3.0.1
attrs @ file:///tmp/build/80754af9/attrs_1604765588209/work
blinker==1.4
brotlipy==0.7.0
cachetools @ file:///tmp/build/80754af9/cachetools_1611600262290/work
certifi==2020.12.5
cffi @ file:///C:/ci/cffi_1613247279197/work
chardet @ file:///C:/ci/chardet_1605303225733/work
click @ file:///home/linux1/recipes/ci/click_1610990599742/work
coverage @ file:///C:/ci/coverage_1614615074147/work
cryptography @ file:///C:/ci/cryptography_1616769344312/work
cycler==0.10.0
Cython @ file:///C:/ci/cython_1614014958194/work
fastcluster==1.1.26
ffmpy==0.2.3
gast @ file:///tmp/build/80754af9/gast_1597433534803/work
google-auth @ file:///tmp/build/80754af9/google-auth_1616008050444/work
google-auth-oauthlib @ file:///tmp/build/80754af9/google-auth-oauthlib_1617120569401/work
google-pasta==0.2.0
grpcio @ file:///C:/ci/grpcio_1614884412260/work
h5py==2.10.0
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1609799311556/work
importlib-metadata @ file:///C:/ci/importlib-metadata_1615900494248/work
joblib @ file:///tmp/build/80754af9/joblib_1613502643832/work
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing @ file:///tmp/build/80754af9/keras-preprocessing_1612283640596/work
kiwisolver @ file:///C:/ci/kiwisolver_1612282606037/work
Markdown @ file:///C:/ci/markdown_1614364121613/work
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.3.0
mkl-random==1.1.1
mkl-service==2.3.0
multidict @ file:///C:/ci/multidict_1607362065515/work
numpy @ file:///C:/ci/numpy_and_numpy_base_1603466732592/work
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.5.1.48
opt-einsum==3.1.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1615224342392/work
protobuf==3.14.0
psutil @ file:///C:/ci/psutil_1612298324802/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
PyJWT==1.7.1
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing @ file:///home/linux1/recipes/ci/pyparsing_1610983426697/work
pyreadline==2.1
PySocks @ file:///C:/ci/pysocks_1605287845585/work
python-dateutil @ file:///home/ktietz/src/ci/python-dateutil_1611928101742/work
pywin32==227
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
requests-oauthlib==1.3.0
rsa @ file:///tmp/build/80754af9/rsa_1614366226499/work
scikit-learn @ file:///C:/ci/scikit-learn_1614446896245/work
scipy @ file:///C:/ci/scipy_1616703433439/work
sip==4.19.13
six @ file:///C:/ci/six_1605187374963/work
tensorboard @ file:///home/builder/ktietz/aggregate/tensorflow_recipes/ci_te/tensorboard_1614593728657/work/tmp_pip_dir
tensorboard-plugin-wit==1.6.0
tensorflow==2.3.0
tensorflow-estimator @ file:///tmp/build/80754af9/tensorflow-estimator_1599136169057/work/whl_temp/tensorflow_estimator-2.3.0-py2.py3-none-any.whl
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado @ file:///C:/ci/tornado_1606942392901/work
tqdm @ file:///tmp/build/80754af9/tqdm_1615925068909/work
typing-extensions @ file:///tmp/build/80754af9/typing_extensions_1611751222202/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1615837158687/work
Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306167264/work
wincertstore==0.2
wrapt==1.12.1
yarl @ file:///C:/ci/yarl_1606940076464/work
zipp @ file:///tmp/build/80754af9/zipp_1615904174917/work

============== Conda Packages ==============
# packages in environment at C:\Users\denye\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
_tflow_select             2.3.0                     eigen  
absl-py                   0.12.0           py38haa95532_0  
aiohttp                   3.7.4            py38h2bbff1b_1  
astunparse                1.6.3                      py_0  
async-timeout             3.0.1            py38haa95532_0  
attrs                     20.3.0             pyhd3eb1b0_0  
blas                      1.0                         mkl  
blinker                   1.4              py38haa95532_0  
brotlipy                  0.7.0           py38h2bbff1b_1003  
ca-certificates           2021.1.19            haa95532_1  
cachetools                4.2.1              pyhd3eb1b0_0  
certifi                   2020.12.5        py38haa95532_0  
cffi                      1.14.5           py38hcd4344a_0  
chardet                   3.0.4           py38haa95532_1003  
click                     7.1.2              pyhd3eb1b0_0  
coverage                  5.5              py38h2bbff1b_2  
cryptography              3.4.7            py38h71e12ea_0  
cycler                    0.10.0                   py38_0  
cython                    0.29.22          py38hd77b12b_0  
fastcluster               1.1.26           py38h251f6bf_2    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.2.3                    pypi_0    pypi
freetype                  2.10.4               hd328e21_0  
gast                      0.4.0                      py_0  
git                       2.23.0               h6bb4b03_0  
google-auth               1.28.0             pyhd3eb1b0_0  
google-auth-oauthlib      0.4.4              pyhd3eb1b0_0  
google-pasta              0.2.0                      py_0  
grpcio                    1.36.1           py38hc60d5dd_1  
h5py                      2.10.0           py38h5e291fa_0  
hdf5                      1.10.4               h7ebc959_0  
icc_rt                    2019.0.0             h0cc432a_1  
icu                       58.2                 ha925a31_3  
idna                      2.10               pyhd3eb1b0_0  
imageio                   2.9.0                      py_0  
imageio-ffmpeg            0.4.3              pyhd8ed1ab_0    conda-forge
importlib-metadata        3.7.3            py38haa95532_1  
intel-openmp              2020.2                      254  
joblib                    1.0.1              pyhd3eb1b0_0  
jpeg                      9b                   hb83a4c4_2  
keras-applications        1.0.8                      py_1  
keras-preprocessing       1.1.2              pyhd3eb1b0_0  
kiwisolver                1.3.1            py38hd77b12b_0  
libpng                    1.6.37               h2a8f88b_0  
libprotobuf               3.14.0               h23ce68f_0  
libtiff                   4.1.0                h56a325e_1  
lz4-c                     1.9.3                h2bbff1b_0  
markdown                  3.3.4            py38haa95532_0  
matplotlib                3.2.2                         0  
matplotlib-base           3.2.2            py38h64f37c6_0  
mkl                       2020.2                      256  
mkl-service               2.3.0            py38h196d8e1_0  
mkl_fft                   1.3.0            py38h46781fe_0  
mkl_random                1.1.1            py38h47e9c7a_0  
multidict                 5.1.0            py38h2bbff1b_2  
numpy                     1.19.2           py38hadc3359_0  
numpy-base                1.19.2           py38ha3acd2a_0  
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.1.0                      py_0  
olefile                   0.46                       py_0  
opencv-python             4.5.1.48                 pypi_0    pypi
openssl                   1.1.1k               h2bbff1b_0  
opt_einsum                3.1.0                      py_0  
pathlib                   1.0.1                      py_1  
pillow                    8.1.2            py38h4fa10fc_0  
pip                       21.0.1           py38haa95532_0  
protobuf                  3.14.0           py38hd77b12b_1  
psutil                    5.8.0            py38h2bbff1b_1  
pyasn1                    0.4.8                      py_0  
pyasn1-modules            0.2.8                      py_0  
pycparser                 2.20                       py_2  
pyjwt                     1.7.1                    py38_0  
pyopenssl                 20.0.1             pyhd3eb1b0_1  
pyparsing                 2.4.7              pyhd3eb1b0_0  
pyqt                      5.9.2            py38ha925a31_4  
pyreadline                2.1                      py38_1  
pysocks                   1.7.1            py38haa95532_0  
python                    3.8.8                hdbf39b2_4  
python-dateutil           2.8.1              pyhd3eb1b0_0  
python_abi                3.8                      1_cp38    conda-forge
pywin32                   227              py38he774522_1  
qt                        5.9.7            vc14h73c81de_0  
requests                  2.25.1             pyhd3eb1b0_0  
requests-oauthlib         1.3.0                      py_0  
rsa                       4.7.2              pyhd3eb1b0_1  
scikit-learn              0.24.1           py38hf11a4ad_0  
scipy                     1.6.2            py38h14eb087_0  
setuptools                52.0.0           py38haa95532_0  
sip                       4.19.13          py38ha925a31_0  
six                       1.15.0           py38haa95532_0  
sqlite                    3.35.3               h2bbff1b_0  
tensorboard               2.4.0              pyhc547734_0  
tensorboard-plugin-wit    1.6.0                      py_0  
tensorflow                2.3.0           mkl_py38h8c0d9a2_0  
tensorflow-base           2.3.0           eigen_py38h75a453f_0  
tensorflow-estimator      2.3.0              pyheb71bc4_0  
termcolor                 1.1.0            py38haa95532_1  
threadpoolctl             2.1.0              pyh5ca1d4c_0  
tk                        8.6.10               he774522_0  
tornado                   6.1              py38h2bbff1b_0  
tqdm                      4.59.0             pyhd3eb1b0_1  
typing-extensions         3.7.4.3              hd3eb1b0_0  
typing_extensions         3.7.4.3            pyh06a4308_0  
urllib3                   1.26.4             pyhd3eb1b0_0  
vc                        14.2                 h21ff451_1  
vs2015_runtime            14.27.29016          h5e58377_2  
werkzeug                  1.0.1              pyhd3eb1b0_0  
wheel                     0.36.2             pyhd3eb1b0_0  
win_inet_pton             1.1.0            py38haa95532_0  
wincertstore              0.2                      py38_0  
wrapt                     1.12.1           py38he774522_1  
xz                        5.2.5                h62dcd97_0  
yarl                      1.6.3            py38h2bbff1b_0  
zipp                      3.4.1              pyhd3eb1b0_0  
zlib                      1.2.11               h62dcd97_4  
zstd                      1.4.9                h19a0ad4_0  

=============== State File =================
{
  "name": "original",
  "sessions": {
    "1": {
      "timestamp": 1624208052.830708,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2265,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "2": {
      "timestamp": 1624294872.001325,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2403,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "3": {
      "timestamp": 1624466693.656511,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2442,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "4": {
      "timestamp": 1624554948.3509147,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2493,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "5": {
      "timestamp": 1624640645.3112664,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2391,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "6": {
      "timestamp": 1624727911.989967,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2570,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "7": {
      "timestamp": 1624814967.424246,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2410,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "8": {
      "timestamp": 1624901682.5992787,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2274,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "9": {
      "timestamp": 1624986119.4277413,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 880,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "10": {
      "timestamp": 1625108888.1690295,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "11": {
      "timestamp": 1625160435.104646,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2361,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "12": {
      "timestamp": 1625245827.3960488,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2418,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "13": {
      "timestamp": 1625333057.26791,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2865,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "14": {
      "timestamp": 1625418142.2556713,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2472,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "15": {
      "timestamp": 1625505004.8080537,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2145,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "16": {
      "timestamp": 1625590017.235631,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2571,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "17": {
      "timestamp": 1625678310.0310345,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 500,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "18": {
      "timestamp": 1625763843.3335555,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2318,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "19": {
      "timestamp": 1625848990.578944,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2605,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "20": {
      "timestamp": 1625936232.4417107,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3400,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "21": {
      "timestamp": 1626022766.2270312,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2317,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "22": {
      "timestamp": 1626109268.3799677,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2562,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "23": {
      "timestamp": 1626195839.1711514,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2595,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "24": {
      "timestamp": 1626284419.512272,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 1110,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "25": {
      "timestamp": 1626367525.050002,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2644,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "26": {
      "timestamp": 1626431533.7603688,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 244,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "27": {
      "timestamp": 1626454650.3116822,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2533,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "28": {
      "timestamp": 1626542039.993974,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 3841,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "29": {
      "timestamp": 1626628736.0994782,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2304,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "30": {
      "timestamp": 1626716424.2215889,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 491,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "31": {
      "timestamp": 1626800744.886527,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2891,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    },
    "32": {
      "timestamp": 1626886710.8271956,
      "no_logs": false,
      "loss_names": [
        "total",
        "face_a",
        "face_b"
      ],
      "batchsize": 16,
      "iterations": 2515,
      "config": {
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "eye_multiplier": 3,
        "mouth_multiplier": 2
      }
    }
  },
  "lowest_avg_loss": {
    "a": 0.029950972646474838,
    "b": 0.026551460847258568
  },
  "iterations": 69833,
  "config": {
    "centering": "face",
    "coverage": 68.75,
    "optimizer": "adam",
    "learning_rate": 5e-05,
    "epsilon_exponent": -7,
    "allow_growth": false,
    "mixed_precision": false,
    "nan_protection": true,
    "convert_batchsize": 16,
    "loss_function": "ssim",
    "mask_loss_function": "mse",
    "l2_reg_term": 100,
    "eye_multiplier": 3,
    "mouth_multiplier": 2,
    "penalized_mask_loss": true,
    "mask_type": "extended",
    "mask_blur_kernel": 3,
    "mask_threshold": 4,
    "learn_mask": false,
    "lowmem": false
  }
}

================= Configs ==================
--------- .faceswap ---------
backend:                  cpu

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.box_blend]
type:                     gaussian
distance:                 11.0
radius:                   5.0
passes:                   1

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0

[scaling.sharpen]
method:                   none
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto
skip_mux:                 False

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
scalefactor:              0.709
batch-size:               8
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.bisenet_fp]
batch-size:               8
include_ears:             False
include_hair:             False
include_glasses:          True

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
centering:                face
coverage:                 68.75
icnr_init:                False
conv_aware_init:          False
optimizer:                adam
learning_rate:            5e-05
epsilon_exponent:         -7
reflect_padding:          False
allow_growth:             False
mixed_precision:          False
nan_protection:           True
convert_batchsize:        16

[global.loss]
loss_function:            ssim
mask_loss_function:       mse
l2_reg_term:              100
eye_multiplier:           3
mouth_multiplier:         2
penalized_mask_loss:      True
mask_type:                extended
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False

[model.dfaker]
output_size:              128

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
clipnorm:                 True
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.phaze_a]
output_size:              128
shared_fc:                none
enable_gblock:            True
split_fc:                 True
split_gblock:             False
split_decoders:           False
enc_architecture:         fs_original
enc_scaling:              40
enc_load_weights:         True
bottleneck_type:          dense
bottleneck_norm:          none
bottleneck_size:          1024
bottleneck_in_encoder:    True
fc_depth:                 1
fc_min_filters:           1024
fc_max_filters:           1024
fc_dimensions:            4
fc_filter_slope:          -0.5
fc_dropout:               0.0
fc_upsampler:             upsample2d
fc_upsamples:             1
fc_upsample_filters:      512
fc_gblock_depth:          3
fc_gblock_min_nodes:      512
fc_gblock_max_nodes:      512
fc_gblock_filter_slope:   -0.5
fc_gblock_dropout:        0.0
dec_upscale_method:       subpixel
dec_norm:                 none
dec_min_filters:          64
dec_max_filters:          512
dec_filter_slope:         -0.45
dec_res_blocks:           1
dec_output_kernel:        5
dec_gaussian:             True
dec_skip_last_residual:   True
freeze_layers:            keras_encoder
load_layers:              encoder
fs_original_depth:        4
fs_original_min_filters:  128
fs_original_max_filters:  1024
mobilenet_width:          1.0
mobilenet_depth:          1
mobilenet_dropout:        0.001

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
clipnorm:                 True
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4

User avatar
torzdf
Posts: 1495
Joined: Fri Jul 12, 2019 12:53 am
Answers: 127
Has thanked: 51 times
Been thanked: 287 times

Re: Crash while trying to train - OSError: Unable to open file (truncated file)

Post by torzdf »

I was going to say "please search prior to posting support requests". But.... This question was already asked... by you... just a month ago.

My word is final
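
For anyone else landing on this thread: "OSError: Unable to open file (truncated file)" generally means the model's .h5 weights file was corrupted, typically by a crash or power loss mid-save. As a quick sanity check (a minimal sketch, assuming h5py is available in your environment; the helper name is mine, not part of faceswap), you can test whether an .h5 file still opens before pointing the trainer at it:

```python
import h5py


def is_readable_h5(path):
    """Return True if the HDF5 file opens cleanly, False if corrupt/truncated."""
    try:
        with h5py.File(path, "r"):
            return True
    except OSError:
        # h5py raises OSError for truncated or otherwise unreadable HDF5 files
        return False
```

If the check fails, restore the model file from a backup if you have one rather than retraining from scratch.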
