I keep getting a crash; the full crash log is below.
I've been training on my Shadow PC. If I leave the house, I have to log in to Shadow on my phone to keep the session active. It looks like Shadow closed during one of my training sessions. That happens all the time, but this time I keep getting this crash.
It's happened before, and I was able to use the model recovery tool. But when I try the model recovery tool this time, it says "No Model Found in directory" (and I know I picked the correct directory).
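For anyone who wants to sanity-check the file itself: the error at the bottom of the log ("bad object header version number") comes from h5py failing to parse the saved model. A quick stdlib-only sketch like the one below (not part of faceswap; the `HDF5_MAGIC` constant is the standard HDF5 signature, and the path is just the model file from my log) can confirm whether the .h5 was truncated or overwritten, which would also explain why the recovery tool sees no model:

```python
# Hedged sketch: check whether a .h5 file still begins with the standard
# 8-byte HDF5 superblock signature. A truncated or partially written file
# (e.g. the save was interrupted when Shadow closed) will fail this check.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"  # standard HDF5 format signature

def looks_like_hdf5(path):
    """Return True if the file exists and starts with the HDF5 signature."""
    try:
        with open(path, "rb") as fh:
            return fh.read(8) == HDF5_MAGIC
    except OSError:
        return False

# Usage (path taken from the log below):
# looks_like_hdf5(r"C:\Users\Shadow\Desktop\faceswap project"
#                 r"\stephanie_fire_model\realface.h5")
```

If this returns False, the header itself is gone and the file is almost certainly unrecoverable, so restoring from a snapshot or backup would be the way forward.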
Here is the error log:
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.dfl_sae', title: 'decoder_dims', datatype: '<class 'int'>', default: '21', info: 'Decoder dimensions per channel. Higher number of decoder dimensions will help the model to improve details, but will require more VRAM.', rounding: '1', min_max: (10, 85), choices: None, gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.dfl_sae', title: 'multiscale_decoder', datatype: '<class 'bool'>', default: 'False', info: 'Multiscale decoder can help to obtain better details.', rounding: 'None', min_max: None, choices: None, gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: model.dfl_sae
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: dlight_defaults.py, module_path: plugins.train.model, plugin_type: model
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.dlight_defaults
11/16/2020 17:47:52 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.dlight', info: 'A lightweight, high resolution Dfaker variant (Adapted from https://github.com/dfaker/df)\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.dlight', title: 'features', datatype: '<class 'str'>', default: 'best', info: 'Higher settings will allow learning more features such as tatoos, piercing,\nand wrinkles.\nStrongly affects VRAM usage.', rounding: 'None', min_max: None, choices: ['lowmem', 'fair', 'best'], gui_radio: True, fixed: True, group: None)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.dlight', title: 'details', datatype: '<class 'str'>', default: 'good', info: 'Defines detail fidelity. Lower setting can appear 'rugged' while 'good' might take onger time to train.\nAffects VRAM usage.', rounding: 'None', min_max: None, choices: ['fast', 'good'], gui_radio: True, fixed: True, group: None)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.dlight', title: 'output_size', datatype: '<class 'int'>', default: '256', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be either 128, 256, or 384.', rounding: '128', min_max: (128, 384), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: model.dlight
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: original_defaults.py, module_path: plugins.train.model, plugin_type: model
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.original_defaults
11/16/2020 17:47:52 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.original', info: 'Original Faceswap Model.\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.original', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: model.original
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.realface_defaults
11/16/2020 17:47:52 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: model.realface
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.unbalanced_defaults
11/16/2020 17:47:52 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: model.unbalanced
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.model.villain_defaults
11/16/2020 17:47:52 MainProcess _training_0 config add_section DEBUG Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Don't try to run this if you have a small GPU.\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: model.villain
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Importing defaults module: plugins.train.trainer.original_defaults
11/16/2020 17:47:52 MainProcess _training_0 config add_section DEBUG Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'disable_warp', datatype: '<class 'bool'>', default: 'False', info: 'Disable warp augmentation. Warping is integral to the Neural Network training. If you decide to disable warping, you should only do so towards the end of a model's training session.', rounding: 'None', min_max: None, choices: None, gui_radio: False, fixed: False, group: image augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
11/16/2020 17:47:52 MainProcess _training_0 config add_item DEBUG Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
11/16/2020 17:47:52 MainProcess _training_0 _config load_module DEBUG Added defaults: trainer.original
11/16/2020 17:47:52 MainProcess _training_0 config handle_config DEBUG Handling config
11/16/2020 17:47:52 MainProcess _training_0 config check_exists DEBUG Config file exists: 'C:\Users\Shadow\faceswap\config\train.ini'
11/16/2020 17:47:52 MainProcess _training_0 config load_config VERBOSE Loading config: 'C:\Users\Shadow\faceswap\config\train.ini'
11/16/2020 17:47:52 MainProcess _training_0 config validate_config DEBUG Validating config
11/16/2020 17:47:52 MainProcess _training_0 config check_config_change DEBUG Default config has not changed
11/16/2020 17:47:52 MainProcess _training_0 config check_config_choices DEBUG Checking config choices
11/16/2020 17:47:52 MainProcess _training_0 config check_config_choices DEBUG Checked config choices
11/16/2020 17:47:52 MainProcess _training_0 config validate_config DEBUG Validated config
11/16/2020 17:47:52 MainProcess _training_0 config handle_config DEBUG Handled config
11/16/2020 17:47:52 MainProcess _training_0 config __init__ DEBUG Initialized: Config
11/16/2020 17:47:52 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global', option: 'learning_rate')
11/16/2020 17:47:52 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'float'>, value: 5e-05)
11/16/2020 17:47:52 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global', option: 'allow_growth')
11/16/2020 17:47:52 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'bool'>, value: False)
11/16/2020 17:47:52 MainProcess _training_0 config get DEBUG Getting config item: (section: 'global', option: 'convert_batchsize')
11/16/2020 17:47:52 MainProcess _training_0 config get DEBUG Returning item: (type: <class 'int'>, value: 16)
11/16/2020 17:47:52 MainProcess _training_0 config changeable_items DEBUG Alterable for existing models: {'learning_rate': 5e-05, 'allow_growth': False, 'convert_batchsize': 16}
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initializing State: (model_dir: 'C:\Users\Shadow\Desktop\faceswap project\stephanie_fire_model', model_name: 'realface', config_changeable_items: '{'learning_rate': 5e-05, 'allow_growth': False, 'convert_batchsize': 16}', no_logs: False, training_image_size: '256'
11/16/2020 17:47:52 MainProcess _training_0 serializer get_serializer DEBUG <lib.serializer._JSONSerializer object at 0x000002903472FFA0>
11/16/2020 17:47:52 MainProcess _training_0 _base _load DEBUG Loading State
11/16/2020 17:47:52 MainProcess _training_0 _base _load INFO No existing state file found. Generating.
11/16/2020 17:47:52 MainProcess _training_0 _base _new_session_id DEBUG 1
11/16/2020 17:47:52 MainProcess _training_0 _base _create_new_session DEBUG Creating new session. id: 1
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initialized State:
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initializing _Settings: (arguments: Namespace(alignments_path_a='C:\\Users\\Shadow\\Desktop\\faceswap project\\a_reba_fire_intro_cut_alignments.fsa', alignments_path_b='C:\\Users\\Shadow\\Desktop\\faceswap project\\stephanie_new_alignments.fsa', batch_size=16, colab=False, configfile=None, distributed=False, exclude_gpus=None, func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x000002901E7078B0>>, input_a='C:\\Users\\Shadow\\Desktop\\faceswap project\\a_reba_fire_intro_faces - girl', input_b='C:\\Users\\Shadow\\Desktop\\faceswap project\\stephanie_new_faces', iterations=1000000, logfile=None, loglevel='INFO', model_dir='C:\\Users\\Shadow\\Desktop\\faceswap project\\stephanie_fire_model', no_augment_color=False, no_flip=False, no_logs=False, preview=False, preview_scale=50, redirect_gui=True, save_interval=250, snapshot_interval=25000, timelapse_input_a='C:\\Users\\Shadow\\Desktop\\faceswap project\\a_reba_fire_intro_faces - girl', timelapse_input_b='C:\\Users\\Shadow\\Desktop\\faceswap project\\stephanie_new_faces', timelapse_output='C:\\Users\\Shadow\\Desktop\\faceswap project\\stephanie_fire_timelapse', trainer='realface', warp_to_landmarks=False, write_image=False), mixed_precision: False, allow_growth: False, is_predict: False)
11/16/2020 17:47:52 MainProcess _training_0 _base _set_tf_settings DEBUG Not setting any specific Tensorflow settings
11/16/2020 17:47:52 MainProcess _training_0 _base _set_keras_mixed_precision DEBUG use_mixed_precision: False, skip_check: False
11/16/2020 17:47:52 MainProcess _training_0 _base _set_keras_mixed_precision DEBUG Not enabling 'mixed_precision' (backend: nvidia, use_mixed_precision: False)
11/16/2020 17:47:52 MainProcess _training_0 _base _get_strategy DEBUG Using strategy: <tensorflow.python.distribute.distribute_lib._DefaultDistributionStrategy object at 0x0000029034770880>
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initialized _Settings
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initializing _Loss
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initialized: _Loss
11/16/2020 17:47:52 MainProcess _training_0 _base __init__ DEBUG Initialized ModelBase (Model)
11/16/2020 17:47:52 MainProcess _training_0 realface check_input_output DEBUG Input and output sizes are valid
11/16/2020 17:47:52 MainProcess _training_0 realface get_dense_width_upscalers_numbers DEBUG dense_width: 4, upscalers_no: 4
11/16/2020 17:47:52 MainProcess _training_0 _base strategy_scope DEBUG Using strategy scope: <tensorflow.python.distribute.distribute_lib._DefaultDistributionContext object at 0x00000290346C8F40>
11/16/2020 17:47:52 MainProcess _training_0 _base _load DEBUG Loading model: C:\Users\Shadow\Desktop\faceswap project\stephanie_fire_model\realface.h5
11/16/2020 17:47:52 MainProcess _training_0 multithreading run DEBUG Error in thread (_training_0): Unable to open file (bad object header version number)
11/16/2020 17:47:53 MainProcess MainThread train _monitor DEBUG Thread error detected
11/16/2020 17:47:53 MainProcess MainThread train _monitor DEBUG Closed Monitor
11/16/2020 17:47:53 MainProcess MainThread train _end_thread DEBUG Ending Training thread
11/16/2020 17:47:53 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
11/16/2020 17:47:53 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
11/16/2020 17:47:53 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training_0'
11/16/2020 17:47:53 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training_0'
Traceback (most recent call last):
File "C:\Users\Shadow\faceswap\lib\cli\launcher.py", line 182, in execute_script
process.process()
File "C:\Users\Shadow\faceswap\scripts\train.py", line 180, in process
self._end_thread(thread, err)
File "C:\Users\Shadow\faceswap\scripts\train.py", line 220, in _end_thread
thread.join()
File "C:\Users\Shadow\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\Shadow\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Shadow\faceswap\scripts\train.py", line 242, in _training
raise err
File "C:\Users\Shadow\faceswap\scripts\train.py", line 230, in _training
model = self._load_model()
File "C:\Users\Shadow\faceswap\scripts\train.py", line 259, in _load_model
model.build()
File "C:\Users\Shadow\faceswap\plugins\train\model\_base.py", line 260, in build
model = self._io._load() # pylint:disable=protected-access
File "C:\Users\Shadow\faceswap\plugins\train\model\_base.py", line 521, in _load
model = load_model(self._filename, compile=False)
File "C:\Users\Shadow\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\saving\save.py", line 184, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "C:\Users\Shadow\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 166, in load_model_from_hdf5
f = h5py.File(filepath, mode='r')
File "C:\Users\Shadow\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\files.py", line 406, in __init__
fid = make_fid(name, mode, userblock_size,
File "C:\Users\Shadow\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\files.py", line 173, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (bad object header version number)
============ System Information ============
encoding: cp1252
git_branch: master
git_commits: c24bf2b GUI - Revert Conda default font fix
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: Quadro P5000
gpu_devices_active: GPU_0
gpu_driver: 442.92
gpu_vram: GPU_0: 16384MB
os_machine: AMD64
os_platform: Windows-10-10.0.18362-SP0
os_release: 10
py_command: C:\Users\Shadow\faceswap\faceswap.py train -A C:/Users/Shadow/Desktop/faceswap project/a_reba_fire_intro_faces - girl -ala C:/Users/Shadow/Desktop/faceswap project/a_reba_fire_intro_cut_alignments.fsa -B C:/Users/Shadow/Desktop/faceswap project/stephanie_new_faces -alb C:/Users/Shadow/Desktop/faceswap project/stephanie_new_alignments.fsa -m C:/Users/Shadow/Desktop/faceswap project/stephanie_fire_model -t realface -bs 16 -it 1000000 -s 250 -ss 25000 -tia C:/Users/Shadow/Desktop/faceswap project/a_reba_fire_intro_faces - girl -tib C:/Users/Shadow/Desktop/faceswap project/stephanie_new_faces -to C:/Users/Shadow/Desktop/faceswap project/stephanie_fire_timelapse -ps 50 -L INFO -gui
py_conda_version: conda 4.9.1
py_implementation: CPython
py_version: 3.8.5
py_virtual_env: True
sys_cores: 8
sys_processor: Intel64 Family 6 Model 63 Stepping 2, GenuineIntel
sys_ram: Total: 12286MB, Available: 9247MB, Used: 3039MB, Free: 9247MB
=============== Pip Packages ===============
absl-py==0.11.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
cycler==0.10.0
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.3.3
google-auth==1.23.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.33.2
h5py==2.10.0
idna==2.10
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1601912903842/work
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1604014703538/work
Markdown==3.3.3
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.2.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.4.0.46
opt-einsum==3.3.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1603823068645/work
protobuf==3.13.0
psutil @ file:///C:/ci/psutil_1598370330503/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pywin32==227
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
scikit-learn @ file:///C:/ci/scikit-learn_1598377018496/work
scipy @ file:///C:/ci/scipy_1604596260408/work
sip==4.19.13
six==1.15.0
tensorboard==2.2.2
tensorboard-plugin-wit==1.7.0
tensorflow-gpu==2.2.1
tensorflow-gpu-estimator==2.2.0
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1602185206534/work
urllib3==1.25.11
Werkzeug==1.0.1
wincertstore==0.2
wrapt==1.12.1
============== Conda Packages ==============
# packages in environment at C:\Users\Shadow\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
absl-py 0.11.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
blas 1.0 mkl
ca-certificates 2020.10.14 0
cachetools 4.1.1 pypi_0 pypi
certifi 2020.6.20 pyhd3eb1b0_3
chardet 3.0.4 pypi_0 pypi
cudatoolkit 10.1.243 h74a9793_0
cudnn 7.6.5 cuda10.1_0
cycler 0.10.0 py38_0
fastcluster 1.1.26 py38h251f6bf_2 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.4 hd328e21_0
gast 0.3.3 pypi_0 pypi
git 2.23.0 h6bb4b03_0
google-auth 1.23.0 pypi_0 pypi
google-auth-oauthlib 0.4.2 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.33.2 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 pypi_0 pypi
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.2 py_0 conda-forge
intel-openmp 2020.2 254
joblib 0.17.0 py_0
jpeg 9b hb83a4c4_2
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.3.0 py38hd77b12b_0
libpng 1.6.37 h2a8f88b_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.2 hf4a77e7_3
markdown 3.3.3 pypi_0 pypi
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py38h64f37c6_0
mkl 2020.2 256
mkl-service 2.3.0 py38hb782905_0
mkl_fft 1.2.0 py38h45dec08_0
mkl_random 1.1.1 py38h47e9c7a_0
numpy 1.18.5 pypi_0 pypi
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py_0
opencv-python 4.4.0.46 pypi_0 pypi
openssl 1.1.1h he774522_0
opt-einsum 3.3.0 pypi_0 pypi
pathlib 1.0.1 py_1
pillow 8.0.1 py38h4fa10fc_0
pip 20.2.4 py38haa95532_0
protobuf 3.13.0 pypi_0 pypi
psutil 5.7.2 py38he774522_0
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 2.4.7 py_0
pyqt 5.9.2 py38ha925a31_4
python 3.8.5 h5fd99cc_1
python-dateutil 2.8.1 py_0
python_abi 3.8 1_cp38 conda-forge
pywin32 227 py38he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.24.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.6 pypi_0 pypi
scikit-learn 0.23.2 py38h47e9c7a_0
scipy 1.5.2 py38h14eb087_0
setuptools 50.3.1 py38haa95532_1
sip 4.19.13 py38ha925a31_0
six 1.15.0 py_0
sqlite 3.33.0 h2a8f88b_0
tensorboard 2.2.2 pypi_0 pypi
tensorboard-plugin-wit 1.7.0 pypi_0 pypi
tensorflow-gpu 2.2.1 pypi_0 pypi
tensorflow-gpu-estimator 2.2.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 he774522_0
tornado 6.0.4 py38he774522_1
tqdm 4.50.2 py_0
urllib3 1.25.11 pypi_0 pypi
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_3
werkzeug 1.0.1 pypi_0 pypi
wheel 0.35.1 py_0
wincertstore 0.2 py38_0
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h62dcd97_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.5 h04227a9_0
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
reflect_padding: False
allow_growth: False
mixed_precision: False
convert_batchsize: 16
[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.realface]
input_size: 64
output_size: 64
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
disable_warp: False
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4