Caught exception in thread: '_training_0' Error: OSError: exception: access violation reading 0x0000000000000010

Training your model
sam.21
Posts: 1
Joined: Thu Jun 25, 2020 5:28 pm


Post by sam.21 »

Specs:
Processor: AMD Ryzen 5 3550H with Radeon Vega Mobile Gfx (8 CPUs), ~2.1 GHz
RAM: 8 GB
Card name: Radeon RX 560X Series (4 GB)

Error message:
.
.
.
06/25/2020 22:55:17 INFO Loading data, this may take a while...
06/25/2020 22:55:17 INFO Loading Model from Original plugin...
06/25/2020 22:55:18 INFO No existing state file found. Generating.
06/25/2020 22:55:18 INFO Opening device "opencl_amd_gfx902.0"
06/25/2020 22:55:18 CRITICAL Error caught! Exiting...
06/25/2020 22:55:18 ERROR Caught exception in thread: '_training_0'
06/25/2020 22:55:21 ERROR Got Exception on main handler:

.
.
.

Code:

06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dfl_sae', title: 'decoder_dims', datatype: '<class 'int'>', default: '21', info: 'Decoder dimensions per channel. Higher number of decoder dimensions will help the model to improve details, but will require more VRAM.', rounding: '1', min_max: (10, 85), choices: None, gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dfl_sae', title: 'multiscale_decoder', datatype: '<class 'bool'>', default: 'False', info: 'Multiscale decoder can help to obtain better details.', rounding: 'None', min_max: None, choices: None, gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.dfl_sae
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: dlight_defaults.py, module_path: plugins.train.model, plugin_type: model
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.dlight_defaults
06/25/2020 22:55:18 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.dlight', info: 'A lightweight, high resolution Dfaker variant (Adapted from https://github.com/dfaker/df)\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'features', datatype: '<class 'str'>', default: 'best', info: 'Higher settings will allow learning more features such as tatoos, piercing,\nand wrinkles.\nStrongly affects VRAM usage.', rounding: 'None', min_max: None, choices: ['lowmem', 'fair', 'best'], gui_radio: True, fixed: True, group: None)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'details', datatype: '<class 'str'>', default: 'good', info: 'Defines detail fidelity. Lower setting can appear 'rugged' while 'good' might take onger time to train.\nAffects VRAM usage.', rounding: 'None', min_max: None, choices: ['fast', 'good'], gui_radio: True, fixed: True, group: None)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.dlight', title: 'output_size', datatype: '<class 'int'>', default: '256', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be either 128, 256, or 384.', rounding: '128', min_max: (128, 384), choices: [], gui_radio: False, fixed: True, group: None)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.dlight
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.model, plugin_type: model
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.original_defaults
06/25/2020 22:55:18 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.original', info: 'Original Faceswap Model.\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.original', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.original
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: realface_defaults.py, module_path: plugins.train.model, plugin_type: model
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.realface_defaults
06/25/2020 22:55:18 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.realface', info: 'An extra detailed variant of Original model.\nIncorporates ideas from Bryanlyon and inspiration from the Villain model.\nRequires about 6GB-8GB of VRAM (batchsize 8-16).\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'input_size', datatype: '<class 'int'>', default: '64', info: 'Resolution (in pixels) of the input image to train on.\nBE AWARE Larger resolution will dramatically increase VRAM requirements.\nHigher resolutions may increase prediction accuracy, but does not effect the resulting output size.\nMust be between 64 and 128 and be divisible by 16.', rounding: '16', min_max: (64, 128), choices: [], gui_radio: False, fixed: True, group: size)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'output_size', datatype: '<class 'int'>', default: '128', info: 'Output image resolution (in pixels).\nBe aware that larger resolution will increase VRAM requirements.\nNB: Must be between 64 and 256 and be divisible by 16.', rounding: '16', min_max: (64, 256), choices: [], gui_radio: False, fixed: True, group: size)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'dense_nodes', datatype: '<class 'int'>', default: '1536', info: 'Number of nodes for decoder. Might affect your model's ability to learn in general.\nNote that: Lower values will affect the ability to predict details.', rounding: '64', min_max: (768, 2048), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 150.', rounding: '4', min_max: (96, 160), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.realface', title: 'complexity_decoder', datatype: '<class 'int'>', default: '512', info: 'Decoder Complexity.', rounding: '4', min_max: (512, 544), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.realface
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: unbalanced_defaults.py, module_path: plugins.train.model, plugin_type: model
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.unbalanced_defaults
06/25/2020 22:55:18 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.unbalanced', info: 'An unbalanced model with adjustable input size options.\nThis is an unbalanced model so b>a swaps may not work well\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'input_size', datatype: '<class 'int'>', default: '128', info: 'Resolution (in pixels) of the image to train on.\nBE AWARE Larger resolution will dramatically increaseVRAM requirements.\nMake sure your resolution is divisible by 64 (e.g. 64, 128, 256 etc.).\nNB: Your faceset must be at least 1.6x larger than your required input size.\n(e.g. 160 is the maximum input size for a 256x256 faceset).', rounding: '64', min_max: (64, 512), choices: [], gui_radio: False, fixed: True, group: size)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.\nNB: lowmem will override cutom nodes and complexity settings.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'clipnorm', datatype: '<class 'bool'>', default: 'True', info: 'Controls gradient clipping of the optimizer. Can prevent model corruption at the expense of VRAM.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'nodes', datatype: '<class 'int'>', default: '1024', info: 'Number of nodes for decoder. Don't change this unless you know what you are doing!', rounding: '64', min_max: (512, 4096), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_encoder', datatype: '<class 'int'>', default: '128', info: 'Encoder Convolution Layer Complexity. sensible ranges: 128 to 160.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_a', datatype: '<class 'int'>', default: '384', info: 'Decoder A Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.unbalanced', title: 'complexity_decoder_b', datatype: '<class 'int'>', default: '512', info: 'Decoder B Complexity.', rounding: '16', min_max: (64, 1024), choices: [], gui_radio: False, fixed: True, group: network)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.unbalanced
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: villain_defaults.py, module_path: plugins.train.model, plugin_type: model
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.model.villain_defaults
06/25/2020 22:55:18 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'model.villain', info: 'A Higher resolution version of the Original Model by VillainGuy.\nExtremely VRAM heavy. Full model requires 9GB+ for batchsize 16\n\nNB: Unless specifically stated, values changed here will only take effect when creating a new model.')
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'model.villain', title: 'lowmem', datatype: '<class 'bool'>', default: 'False', info: 'Lower memory mode. Set to 'True' if having issues with VRAM useage.\nNB: Models with a changed lowmem mode are not compatible with each other.', rounding: 'None', min_max: None, choices: [], gui_radio: False, fixed: True, group: settings)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: model.villain
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Adding defaults: (filename: original_defaults.py, module_path: plugins.train.trainer, plugin_type: trainer
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Importing defaults module: plugins.train.trainer.original_defaults
06/25/2020 22:55:18 MainProcess     _training_0     config          add_section               DEBUG    Add section: (title: 'trainer.original', info: 'Original Trainer Options.\nWARNING: The defaults for augmentation will be fine for 99.9% of use cases. Only change them if you absolutely know what you are doing!')
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'preview_images', datatype: '<class 'int'>', default: '14', info: 'Number of sample faces to display for each side in the preview when training.', rounding: '2', min_max: (2, 16), choices: None, gui_radio: False, fixed: True, group: evaluation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'zoom_amount', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly zoom each training image in and out.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'rotation_range', datatype: '<class 'int'>', default: '10', info: 'Percentage amount to randomly rotate each training image.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'shift_range', datatype: '<class 'int'>', default: '5', info: 'Percentage amount to randomly shift each training image horizontally and vertically.', rounding: '1', min_max: (0, 25), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'flip_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to randomly flip each training image horizontally.\nNB: This is ignored if the 'no-flip' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: image augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_lightness', datatype: '<class 'int'>', default: '30', info: 'Percentage amount to randomly alter the lightness of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: True, group: color augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_ab', datatype: '<class 'int'>', default: '8', info: 'Percentage amount to randomly alter the 'a' and 'b' colors of the L*a*b* color space of each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 50), choices: None, gui_radio: False, fixed: True, group: color augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_chance', datatype: '<class 'int'>', default: '50', info: 'Percentage chance to perform Contrast Limited Adaptive Histogram Equalization on each training image.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (0, 75), choices: None, gui_radio: False, fixed: False, group: color augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     config          add_item                  DEBUG    Add item: (section: 'trainer.original', title: 'color_clahe_max_size', datatype: '<class 'int'>', default: '4', info: 'The grid size dictates how much Contrast Limited Adaptive Histogram Equalization is performed on any training image selected for clahe. Contrast will be applied randomly with a gridsize of 0 up to the maximum. This value is a multiplier calculated from the training image size.\nNB: This is ignored if the 'no-augment-color' option is enabled', rounding: '1', min_max: (1, 8), choices: None, gui_radio: False, fixed: True, group: color augmentation)
06/25/2020 22:55:18 MainProcess     _training_0     _config         load_module               DEBUG    Added defaults: trainer.original
06/25/2020 22:55:18 MainProcess     _training_0     config          handle_config             DEBUG    Handling config
06/25/2020 22:55:18 MainProcess     _training_0     config          check_exists              DEBUG    Config file exists: 'C:\Users\samee\faceswap\config\train.ini'
06/25/2020 22:55:18 MainProcess     _training_0     config          load_config               VERBOSE  Loading config: 'C:\Users\samee\faceswap\config\train.ini'
06/25/2020 22:55:18 MainProcess     _training_0     config          validate_config           DEBUG    Validating config
06/25/2020 22:55:18 MainProcess     _training_0     config          check_config_change       DEBUG    Default config has not changed
06/25/2020 22:55:18 MainProcess     _training_0     config          check_config_choices      DEBUG    Checking config choices
06/25/2020 22:55:18 MainProcess     _training_0     config          check_config_choices      DEBUG    Checked config choices
06/25/2020 22:55:18 MainProcess     _training_0     config          validate_config           DEBUG    Validated config
06/25/2020 22:55:18 MainProcess     _training_0     config          handle_config             DEBUG    Handled config
06/25/2020 22:55:18 MainProcess     _training_0     config          __init__                  DEBUG    Initialized: Config
06/25/2020 22:55:18 MainProcess     _training_0     config          get                       DEBUG    Getting config item: (section: 'global', option: 'learning_rate')
06/25/2020 22:55:18 MainProcess     _training_0     config          get                       DEBUG    Returning item: (type: <class 'float'>, value: 5e-05)
06/25/2020 22:55:18 MainProcess     _training_0     config          changeable_items          DEBUG    Alterable for existing models: {'learning_rate': 5e-05}
06/25/2020 22:55:18 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing State: (model_dir: 'D:\faceswap\faces\modeldir', model_name: 'original', config_changeable_items: '{'learning_rate': 5e-05}', no_logs: False, pingpong: False, training_image_size: '256'
06/25/2020 22:55:18 MainProcess     _training_0     serializer      get_serializer            DEBUG    <lib.serializer._JSONSerializer object at 0x0000019401BDD5C8>
06/25/2020 22:55:18 MainProcess     _training_0     _base           load                      DEBUG    Loading State
06/25/2020 22:55:18 MainProcess     _training_0     _base           load                      INFO     No existing state file found. Generating.
06/25/2020 22:55:18 MainProcess     _training_0     _base           new_session_id            DEBUG    1
06/25/2020 22:55:18 MainProcess     _training_0     _base           create_new_session        DEBUG    Creating new session. id: 1
06/25/2020 22:55:18 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized State:
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       __init__                  DEBUG    Initializing NNBlocks: (use_icnr_init: False, use_convaware_init: False, use_reflect_padding: False, first_run: True)
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       __init__                  DEBUG    Initialized NNBlocks
06/25/2020 22:55:18 MainProcess     _training_0     _base           name                      DEBUG    model name: 'original'
06/25/2020 22:55:18 MainProcess     _training_0     _base           rename_legacy             DEBUG    Renaming legacy files
06/25/2020 22:55:18 MainProcess     _training_0     _base           name                      DEBUG    model name: 'original'
06/25/2020 22:55:18 MainProcess     _training_0     _base           rename_legacy             DEBUG    No legacy files to rename
06/25/2020 22:55:18 MainProcess     _training_0     _base           load_state_info           DEBUG    Loading Input Shape from State file
06/25/2020 22:55:18 MainProcess     _training_0     _base           load_state_info           DEBUG    No input shapes saved. Using model config
06/25/2020 22:55:18 MainProcess     _training_0     _base           calculate_coverage_ratio  DEBUG    Requested coverage_ratio: 0.6875
06/25/2020 22:55:18 MainProcess     _training_0     _base           calculate_coverage_ratio  DEBUG    Final coverage_ratio: 0.6875
06/25/2020 22:55:18 MainProcess     _training_0     _base           __init__                  DEBUG    training_opts: {'alignments': {'a': 'D:\\faceswap\\faces\\Aloona\\alignments.fsa', 'b': 'D:\\faceswap\\faces\\Brad\\alignments.fsa'}, 'preview_scaling': 0.5, 'warp_to_landmarks': False, 'augment_color': True, 'no_flip': False, 'pingpong': False, 'snapshot_interval': 25000, 'training_size': 256, 'no_logs': False, 'coverage_ratio': 0.6875, 'mask_type': None, 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'penalized_mask_loss': False}
06/25/2020 22:55:18 MainProcess     _training_0     _base           multiple_models_in_folder DEBUG    model_files: [], retval: False
06/25/2020 22:55:18 MainProcess     _training_0     original        add_networks              DEBUG    Adding networks
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       upscale                   DEBUG    input_tensor: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 512), filters: 256, kernel_size: 3, use_instance_norm: False, kwargs: {})
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       _get_name                 DEBUG    Generating block name: upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       _set_default_initializer  DEBUG    Set default kernel_initializer to: <keras.initializers.VarianceScaling object at 0x0000019401B33808>
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       conv2d                    DEBUG    input_tensor: input_1 Placeholder FLOAT32(<tile.Value SymbolicDim UINT64()>, 8, 8, 512), filters: 1024, kernel_size: 3, strides: (1, 1), padding: same, kwargs: {'name': 'upscale_(<tile.Value SymbolicDim UINT64()>, 8, 8, 512)_0_conv2d', 'kernel_initializer': <keras.initializers.VarianceScaling object at 0x0000019401B33808>})
06/25/2020 22:55:18 MainProcess     _training_0     nn_blocks       _set_default_initializer  DEBUG    Using model specified initializer: <keras.initializers.VarianceScaling object at 0x0000019401B33808>
06/25/2020 22:55:18 MainProcess     _training_0     library         _logger_callback          INFO     Opening device "opencl_amd_gfx902.0"
06/25/2020 22:55:18 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): exception: access violation reading 0x0000000000000010
06/25/2020 22:55:18 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
06/25/2020 22:55:18 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
06/25/2020 22:55:18 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
06/25/2020 22:55:18 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
06/25/2020 22:55:18 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
06/25/2020 22:55:18 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
06/25/2020 22:55:18 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\samee\faceswap\lib\cli\launcher.py", line 155, in execute_script
    process.process()
  File "C:\Users\samee\faceswap\scripts\train.py", line 161, in process
    self._end_thread(thread, err)
  File "C:\Users\samee\faceswap\scripts\train.py", line 201, in _end_thread
    thread.join()
  File "C:\Users\samee\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\samee\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\samee\faceswap\scripts\train.py", line 226, in _training
    raise err
  File "C:\Users\samee\faceswap\scripts\train.py", line 214, in _training
    model = self._load_model()
  File "C:\Users\samee\faceswap\scripts\train.py", line 255, in _load_model
    predict=False)
  File "C:\Users\samee\faceswap\plugins\train\model\original.py", line 25, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\samee\faceswap\plugins\train\model\_base.py", line 125, in __init__
    self.build()
  File "C:\Users\samee\faceswap\plugins\train\model\_base.py", line 243, in build
    self.add_networks()
  File "C:\Users\samee\faceswap\plugins\train\model\original.py", line 31, in add_networks
    self.add_network("decoder", "a", self.decoder(), is_output=True)
  File "C:\Users\samee\faceswap\plugins\train\model\original.py", line 66, in decoder
    var_x = self.blocks.upscale(var_x, 256)
  File "C:\Users\samee\faceswap\lib\model\nn_blocks.py", line 301, in upscale
    **kwargs)
  File "C:\Users\samee\faceswap\lib\model\nn_blocks.py", line 182, in conv2d
    **kwargs)(input_tensor)
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\keras\engine\base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\keras\layers\convolutional.py", line 141, in build
    constraint=self.kernel_constraint)
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\keras\engine\base_layer.py", line 249, in add_weight
    weight = K.variable(initializer(shape),
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\keras\initializers.py", line 218, in __call__
    dtype=dtype, seed=self.seed)
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\plaidml\keras\backend.py", line 1160, in random_uniform
    rng_state = _make_rng_state(seed)
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\plaidml\keras\backend.py", line 191, in _make_rng_state
    rng_state = variable(rng_init, dtype='uint32')
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\plaidml\keras\backend.py", line 1735, in variable
    with tensor.mmap_discard(_ctx) as view:
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "C:\Users\samee\anaconda3\envs\faceswap2\lib\site-packages\plaidml\__init__.py", line 1252, in mmap_discard
    mapping = _lib().plaidml_map_buffer_discard(ctx, self.buffer)
OSError: exception: access violation reading 0x0000000000000010
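Since the failing line is buried under hundreds of DEBUG entries, here is a small stdlib-only helper (hypothetical, not part of faceswap) that scans a pasted log like the one above and reports the first line at each failure severity, which makes it easier to trim a long paste down to the part that matters:

```python
def first_failures(log_text, levels=("CRITICAL", "ERROR")):
    """Return the first log line seen for each requested severity level."""
    found = {}
    for line in log_text.splitlines():
        tokens = line.split()
        for level in levels:
            # faceswap logs put the severity as a standalone token on the line
            if level in tokens and level not in found:
                found[level] = line.strip()
    return found


# Sample lines taken from the log above
sample = """\
06/25/2020 22:55:18 INFO Opening device "opencl_amd_gfx902.0"
06/25/2020 22:55:18 CRITICAL Error caught! Exiting...
06/25/2020 22:55:18 ERROR Caught exception in thread: '_training_0'
"""

for level, line in first_failures(sample).items():
    print(level, "->", line)
```

This only does a crude token match on the severity word, so it can misfire if a log message itself contains "ERROR" as a bare word, but for isolating the interesting lines in a crash report it is usually enough.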

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         127d3db Dependencies update (#1028)
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - gfx902 (experimental)
gpu_devices_active:  GPU_0
gpu_driver:          ['3004.8 (PAL,HSAIL)']
gpu_vram:            GPU_0: 4136MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.17763-SP0
os_release:          10
py_command:          C:\Users\samee\faceswap\faceswap.py train -A D:/faceswap/faces/Aloona -B D:/faceswap/faces/Brad -m D:/faceswap/faces/modeldir -t original -bs 64 -it 1000000 -s 100 -ss 25000 -tia D:/faceswap/faces/Aloona -tib D:/faceswap/faces/Brad -to D:/faceswap/faces/timelapse output -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.3
py_implementation:   CPython
py_version:          3.7.7
py_virtual_env:      True
sys_cores:           8
sys_processor:       AMD64 Family 23 Model 24 Stepping 1, AuthenticAMD
sys_ram:             Total: 6082MB, Available: 1524MB, Used: 4558MB, Free: 1524MB

=============== Pip Packages ===============
absl-py==0.9.0
astor==0.8.0
blinker==1.4
brotlipy==0.7.0
cachetools==4.1.0
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cloudpickle==1.4.1
cryptography==2.9.2
cycler==0.10.0
cytoolz==0.10.1
dask @ file:///tmp/build/80754af9/dask-core_1592842333140/work
decorator==4.4.2
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.2.2
google-auth==1.14.1
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.27.2
h5py==2.10.0
idna==2.9
imageio==2.8.0
imageio-ffmpeg==0.4.2
joblib==0.15.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib @ file:///C:/ci/matplotlib-base_1592846084747/work
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
networkx==2.4
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.2.0.34
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==7.1.2
plaidml==0.6.4
plaidml-keras==0.6.4
protobuf==3.12.3
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser==2.20
PyJWT==1.7.1
pyOpenSSL==19.1.0
pyparsing==2.4.7
pyreadline==2.1
PySocks==1.7.1
python-dateutil==2.8.1
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3.1
requests @ file:///tmp/build/80754af9/requests_1592841827918/work
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn @ file:///C:/ci/scikit-learn_1592847564598/work
scipy @ file:///C:/ci/scipy_1592916958183/work
six==1.15.0
tensorboard==1.15.0
tensorboard-plugin-wit==1.6.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
toolz==0.10.0
toposort==1.5
tornado==6.0.4
tqdm==4.46.1
urllib3==1.25.9
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\samee\anaconda3\envs\faceswap2:
#
# Name                    Version                   Build  Channel
_tflow_select             2.2.0                     eigen  
absl-py                   0.9.0                    py37_0  
astor                     0.8.0                    py37_0  
blas                      1.0                         mkl  
blinker                   1.4                      py37_0  
brotlipy                  0.7.0           py37he774522_1000  
ca-certificates           2020.1.1                      0  
cachetools                4.1.0                      py_1  
certifi                   2020.6.20                py37_0  
cffi                      1.14.0           py37h7a1dbc1_0  
chardet                   3.0.4                 py37_1003  
click                     7.1.2                      py_0  
cloudpickle               1.4.1                      py_0  
cryptography              2.9.2            py37h7a1dbc1_0  
cycler                    0.10.0                   py37_0  
cytoolz                   0.10.1           py37he774522_0  
dask-core                 2.19.0                     py_0  
decorator                 4.4.2                      py_0  
enum34                    1.1.10                   pypi_0    pypi
fastcluster               1.1.26                   pypi_0    pypi
ffmpy                     0.2.3                    pypi_0    pypi
freetype                  2.10.2               hd328e21_0  
gast                      0.2.2                    py37_0  
google-auth               1.14.1                     py_0  
google-auth-oauthlib      0.4.1                      py_2  
google-pasta              0.2.0                      py_0  
grpcio                    1.27.2           py37h351948d_0  
h5py                      2.10.0           py37h5e291fa_0  
hdf5                      1.10.4               h7ebc959_0  
icc_rt                    2019.0.0             h0cc432a_1  
icu                       58.2                 ha925a31_3  
idna                      2.9                        py_1  
imageio                   2.8.0                      py_0  
imageio-ffmpeg            0.4.2                    pypi_0    pypi
intel-openmp              2020.1                      216  
joblib                    0.15.1                     py_0  
jpeg                      9b                   hb83a4c4_2  
keras                     2.2.4                         0  
keras-applications        1.0.8                      py_0  
keras-base                2.2.4                    py37_0  
keras-preprocessing       1.1.0                      py_1  
kiwisolver                1.2.0            py37h74a9793_0  
libpng                    1.6.37               h2a8f88b_0  
libprotobuf               3.12.3               h7bd577a_0  
libtiff                   4.1.0                h56a325e_1  
lz4-c                     1.9.2                h62dcd97_0  
markdown                  3.1.1                    py37_0  
matplotlib                3.2.2                         0  
matplotlib-base           3.2.2            py37h64f37c6_0  
mkl                       2020.1                      216  
mkl-service               2.3.0            py37hb782905_0  
mkl_fft                   1.1.0            py37h45dec08_0  
mkl_random                1.1.1            py37h47e9c7a_0  
networkx                  2.4                        py_0  
numpy                     1.18.5           py37h6530119_0  
numpy-base                1.18.5           py37hc3f5095_0  
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.1.0                      py_0  
olefile                   0.46                     py37_0  
opencv-python             4.2.0.34                 pypi_0    pypi
openssl                   1.1.1g               he774522_0  
opt_einsum                3.1.0                      py_0  
pathlib                   1.0.1                    py37_1  
pillow                    7.1.2            py37hcc1f983_0  
pip                       20.1.1                   py37_1  
plaidml                   0.6.4                    pypi_0    pypi
plaidml-keras             0.6.4                    pypi_0    pypi
protobuf                  3.12.3           py37h33f27b4_0  
psutil                    5.7.0            py37he774522_0  
pyasn1                    0.4.8                      py_0  
pyasn1-modules            0.2.7                      py_0  
pycparser                 2.20                       py_0  
pyjwt                     1.7.1                    py37_0  
pyopenssl                 19.1.0                   py37_0  
pyparsing                 2.4.7                      py_0  
pyqt                      5.9.2            py37h6538335_2  
pyreadline                2.1                      py37_1  
pysocks                   1.7.1                    py37_0  
python                    3.7.7                h81c818b_4  
python-dateutil           2.8.1                      py_0  
pywavelets                1.1.1            py37he774522_0  
pywin32                   227              py37he774522_1  
pyyaml                    5.3.1            py37he774522_0  
qt                        5.9.7            vc14h73c81de_0  
requests                  2.24.0                     py_0  
requests-oauthlib         1.3.0                      py_0  
rsa                       4.0                        py_0  
scikit-image              0.16.2           py37h47e9c7a_0  
scikit-learn              0.23.1           py37h25d0782_0  
scipy                     1.5.0            py37h9439919_0  
setuptools                47.3.1                   py37_0  
sip                       4.19.8           py37h6538335_0  
six                       1.15.0                     py_0  
sqlite                    3.32.3               h2a8f88b_0  
tensorboard               1.15.0                   pypi_0    pypi
tensorboard-plugin-wit    1.6.0                      py_0  
tensorflow                1.15.0          eigen_py37h9f89a44_0  
tensorflow-base           1.15.0          eigen_py37h07d2309_0  
tensorflow-estimator      1.15.1             pyh2649769_0  
termcolor                 1.1.0                    py37_1  
threadpoolctl             2.1.0              pyh5ca1d4c_0  
tk                        8.6.10               he774522_0  
toolz                     0.10.0                     py_0  
toposort                  1.5                      pypi_0    pypi
tornado                   6.0.4            py37he774522_1  
tqdm                      4.46.1                     py_0  
urllib3                   1.25.9                     py_0  
vc                        14.1                 h0510ff6_4  
vs2015_runtime            14.16.27012          hf0eaf9b_2  
werkzeug                  0.16.1                     py_0  
wheel                     0.34.2                   py37_0  
win_inet_pton             1.1.0                    py37_0  
wincertstore              0.2                      py37_0  
wrapt                     1.12.1           py37he774522_1  
xz                        5.2.5                h62dcd97_0  
yaml                      0.1.7                hc54c509_2  
zlib                      1.2.11               h62dcd97_4  
zstd                      1.4.4                ha9fde0e_3  

================= Configs ==================
--------- .faceswap ---------
backend:                  amd

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.box_blend]
type:                     gaussian
distance:                 11.0
radius:                   5.0
passes:                   1

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0

[scaling.sharpen]
method:                   unsharp_mask
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             False

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7
scalefactor:              0.709
batch-size:               8

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- gui.ini ---------

[global]
fullscreen:               False
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
coverage:                 68.75
mask_type:                none
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False
icnr_init:                False
conv_aware_init:          False
reflect_padding:          False
penalized_mask_loss:      True
loss_function:            mae
learning_rate:            5e-05

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
clipnorm:                 True
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
clipnorm:                 True
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           14
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4

torzdf
Posts: 718
Joined: Fri Jul 12, 2019 12:53 am
Answers: 101
Has thanked: 19 times
Been thanked: 146 times

Re: Caught exception in thread: '_training_0' Error: OSError: exception: access violation reading 0x0000000000000010

Post by torzdf »

Most likely you are out of GPU memory (https://github.com/plaidml/plaidml/issues/225).

Try lowering the batch size.
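The `py_command` in your system information shows you are training with `-bs 64`. As a sketch (the exact batch size you can fit is a guess; halve again if it still crashes), the same invocation with a much smaller batch size would look like:

```shell
# Same training command as reported in py_command above,
# with only the batch size reduced from 64 to 16.
python faceswap.py train \
    -A D:/faceswap/faces/Aloona \
    -B D:/faceswap/faces/Brad \
    -m D:/faceswap/faces/modeldir \
    -t original \
    -bs 16 \
    -it 1000000 -s 100 -ss 25000 \
    -tia D:/faceswap/faces/Aloona \
    -tib D:/faceswap/faces/Brad \
    -to D:/faceswap/faces/timelapse
```

On a 4GB card running through PlaidML, batch sizes of 8 to 16 are usually a safer starting point than 64.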
My word is final

calipheron
Posts: 15
Joined: Thu May 14, 2020 7:39 pm
Has thanked: 1 time

Re: Caught exception in thread: '_training_0' Error: OSError: exception: access violation reading 0x0000000000000010

Post by calipheron »

Also consider sticking with simpler models such as Lightweight, Original, or DFaker. Anything more complex tends to be too demanding for AMD Polaris GPUs.
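For example, switching the trainer to `lightweight` (note: changing trainers requires starting with a fresh model directory, so `modeldir_lw` below is a hypothetical new folder, not your existing one):

```shell
# Hypothetical retrain with the lighter-weight trainer and a small batch size;
# a new, empty model directory is needed when switching trainer types.
python faceswap.py train \
    -A D:/faceswap/faces/Aloona \
    -B D:/faceswap/faces/Brad \
    -m D:/faceswap/faces/modeldir_lw \
    -t lightweight \
    -bs 16
```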
