You do not have enough GPU memory

Is the Extraction process failing on you without returning an error with clear instructions? Tell us about it here.


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Extraction process. If you want to get tips, or better understand the Extract process, then you should look in the Extract Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Locked
k1s
Posts: 4
Joined: Mon Nov 16, 2020 7:55 pm


Post by k1s »

Hi, new here and trying faceswap for the first time. I have 4 GB of VRAM, no other applications open, and I'm trying to extract from a folder containing 7 small JPEGs. I can't believe I'm actually out of VRAM, yet that is apparently what the app is saying.

I've pasted some of the log below (the full log exceeds the posting limit). Any guidance welcome. Thanks!

Code:

11/16/2020 19:53:52 MainProcess     MainThread      logger          log_setup                 INFO     Log level set to: DEBUG
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Initializing GPUStats
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    OS is not macOS. Trying pynvml
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Device count: 1
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Active GPU Devices: [0]
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Handles found: 1
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Driver: 451.48
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Devices: ['GeForce GTX 1650']
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU VRAM: [4096.0]
11/16/2020 19:53:52 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Initialized GPUStats
11/16/2020 19:53:52 MainProcess     MainThread      launcher        _configure_backend        DEBUG    Executing: extract. PID: 2920
11/16/2020 19:53:52 MainProcess     MainThread      tpu_cluster_resolver <module>                  DEBUG    Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
11/16/2020 19:53:53 MainProcess     MainThread      launcher        _test_for_tf_version      DEBUG    Installed Tensorflow Version: 2.2
11/16/2020 19:53:53 MainProcess     MainThread      queue_manager   __init__                  DEBUG    Initializing QueueManager
11/16/2020 19:53:53 MainProcess     MainThread      queue_manager   __init__                  DEBUG    Initialized QueueManager
...
11/16/2020 19:53:53 MainProcess     MainThread      image           _get_fps                  DEBUG    25.0
11/16/2020 19:53:53 MainProcess     MainThread      utils           get_image_paths           DEBUG    Scanned Folder contains 8 files
11/16/2020 19:53:53 MainProcess     MainThread      utils           get_image_paths           DEBUG    Returning 7 images
11/16/2020 19:53:53 MainProcess     MainThread      image           _get_count_and_filelist   DEBUG    count: 7
...
11/16/2020 19:53:53 MainProcess     MainThread      serializer      get_serializer            DEBUG    <lib.serializer._PickleSerializer object at 0x0000023B6F9174C0>
11/16/2020 19:53:53 MainProcess     MainThread      serializer      get_serializer            DEBUG    <lib.serializer._CompressedSerializer object at 0x0000023B49AAB490>
11/16/2020 19:53:53 MainProcess     MainThread      alignments      _get_location             DEBUG    Getting location: (folder: 'C:\Users\ks\Desktop\faces', filename: 'alignments')
...
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Initializing GPUStats
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    OS is not macOS. Trying pynvml
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Device count: 1
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Active GPU Devices: [0]
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Handles found: 1
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Driver: 451.48
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Devices: ['GeForce GTX 1650']
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU VRAM: [4096.0]
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Initialized GPUStats
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    OS is not macOS. Trying pynvml
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Device count: 1
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Active GPU Devices: [0]
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU Handles found: 1
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    GPU VRAM free: [3504.90234375]
11/16/2020 19:53:53 MainProcess     MainThread      gpu_stats       _log                      DEBUG    Active GPU Card with most free VRAM: {'card_id': 0, 'device': 'GeForce GTX 1650', 'free': 3504.90234375, 'total': 4096.0}
11/16/2020 19:53:53 MainProcess     MainThread      pipeline        _get_vram_stats           DEBUG    {'count': 1, 'device': 'GeForce GTX 1650', 'vram_free': 3248, 'vram_total': 4096}
11/16/2020 19:53:53 MainProcess     MainThread      pipeline        _load_detect              DEBUG    Loading Detector: 's3fd'
11/16/2020 19:53:53 MainProcess     MainThread      plugin_loader   _import                   INFO     Loading Detect from S3Fd plugin...
11/16/2020 19:53:53 MainProcess     MainThread      utils           find_spec                 DEBUG    Importing 'tf.keras' as keras for backend: 'nvidia'
11/16/2020 19:53:53 MainProcess     MainThread      utils           find_spec                 DEBUG    Scanning: 'C:\Users\ks\faceswap\tensorflow_core\python\keras\api\_v2' for 'keras'
...
11/16/2020 19:53:53 MainProcess     MainThread      utils           find_spec                 DEBUG    Scanning: 'C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\api\_v2' for 'keras'
11/16/2020 19:53:53 MainProcess     MainThread      utils           find_spec                 DEBUG    Found spec: ModuleSpec(name='keras', loader=<_frozen_importlib_external.SourceFileLoader object at 0x0000023B6FBCB100>, origin='C:\\Users\\ks\\MiniConda3\\envs\\faceswap\\lib\\site-packages\\tensorflow\\python\\keras\\api\\_v2\\keras\\__init__.py', submodule_search_locations=['C:\\Users\\ks\\MiniConda3\\envs\\faceswap\\lib\\site-packages\\tensorflow\\python\\keras\\api\\_v2\\keras'])
11/16/2020 19:53:53 MainProcess     MainThread      _base           __init__                  DEBUG    Initializing Detect: (rotation: None, min_size: 20)
11/16/2020 19:53:53 MainProcess     MainThread      _base           __init__                  DEBUG    Initializing Detect: (git_model_id: 11, model_filename: s3fd_keras_v2.h5, exclude_gpus: None, configfile: None, instance: 0, )
11/16/2020 19:53:53 MainProcess     MainThread      config          __init__                  DEBUG    Initializing: Config
11/16/2020 19:53:53 MainProcess     MainThread      config          get_config_file           DEBUG    Config File location: 'C:\Users\ks\faceswap\config\extract.ini'
11/16/2020 19:53:53 MainProcess     MainThread      _config         set_defaults              DEBUG    Setting defaults
11/16/2020 19:53:53 MainProcess     MainThread      _config         set_globals               DEBUG    Setting global config
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'global', info: 'Options that apply to all extraction plugins')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'global', title: 'allow_growth', datatype: '<class 'bool'>', default: 'False', info: '[Nvidia Only]. Enable the Tensorflow GPU `allow_growth` configuration option. This option prevents Tensorflow from allocating all of the GPU VRAM at launch but can lead to higher VRAM fragmentation and slower performance. Should only be enabled if you are having problems running extraction.', rounding: 'None', min_max: None, choices: None, gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: fan_defaults.py, module_path: plugins.extract.align, plugin_type: align
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.align.fan_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'align.fan', info: 'FAN Aligner options.\nFast on GPU, slow on CPU. Best aligner.')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'align.fan', title: 'batch-size', datatype: '<class 'int'>', default: '12', info: 'The batch size to use. To a point, higher batch sizes equal better performance, but setting it too high can harm performance.\n\n	Nvidia users: If the batchsize is set higher than the your GPU can accomodate then this will automatically be lowered.\n	AMD users: A batchsize of 8 requires about 4 GB vram.', rounding: '1', min_max: (1, 64), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: align.fan
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: cv2_dnn_defaults.py, module_path: plugins.extract.detect, plugin_type: detect
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.detect.cv2_dnn_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'detect.cv2_dnn', info: 'CV2 DNN Detector options.\nA CPU only extractor, is the least reliable, but uses least resources and runs fast on CPU. Use this if not using a GPU and time is important')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.cv2_dnn', title: 'confidence', datatype: '<class 'int'>', default: '50', info: 'The confidence level at which the detector has succesfully found a face.\nHigher levels will be more discriminating, lower levels will have more false positives.', rounding: '5', min_max: (25, 100), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: detect.cv2_dnn
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: mtcnn_defaults.py, module_path: plugins.extract.detect, plugin_type: detect
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.detect.mtcnn_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'detect.mtcnn', info: 'MTCNN Detector options.\nFast on GPU, slow on CPU. Uses fewer resources than other GPU detectors but can often return more false positives.')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.mtcnn', title: 'minsize', datatype: '<class 'int'>', default: '20', info: 'The minimum size of a face (in pixels) to be accepted as a positive match.\nLower values use significantly more VRAM and will detect more false positives.', rounding: '10', min_max: (20, 1000), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.mtcnn', title: 'threshold_1', datatype: '<class 'float'>', default: '0.6', info: 'First stage threshold for face detection. This stage obtains face candidates.', rounding: '2', min_max: (0.1, 0.9), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.mtcnn', title: 'threshold_2', datatype: '<class 'float'>', default: '0.7', info: 'Second stage threshold for face detection. This stage refines face candidates.', rounding: '2', min_max: (0.1, 0.9), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.mtcnn', title: 'threshold_3', datatype: '<class 'float'>', default: '0.7', info: 'Third stage threshold for face detection. This stage further refines face candidates.', rounding: '2', min_max: (0.1, 0.9), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.mtcnn', title: 'scalefactor', datatype: '<class 'float'>', default: '0.709', info: 'The scale factor for the image pyramid.', rounding: '3', min_max: (0.1, 0.9), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.mtcnn', title: 'batch-size', datatype: '<class 'int'>', default: '8', info: 'The batch size to use. To a point, higher batch sizes equal better performance, but setting it too high can harm performance.\n\n	Nvidia users: If the batchsize is set higher than the your GPU can accomodate then this will automatically be lowered.', rounding: '1', min_max: (1, 64), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: detect.mtcnn
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: s3fd_defaults.py, module_path: plugins.extract.detect, plugin_type: detect
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.detect.s3fd_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'detect.s3fd', info: 'S3FD Detector options.\nFast on GPU, slow on CPU. Can detect more faces and fewer false positives than other GPU detectors, but is a lot more resource intensive.')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.s3fd', title: 'confidence', datatype: '<class 'int'>', default: '70', info: 'The confidence level at which the detector has succesfully found a face.\nHigher levels will be more discriminating, lower levels will have more false positives.', rounding: '5', min_max: (25, 100), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'detect.s3fd', title: 'batch-size', datatype: '<class 'int'>', default: '4', info: 'The batch size to use. To a point, higher batch sizes equal better performance, but setting it too high can harm performance.\n\n	Nvidia users: If the batchsize is set higher than the your GPU can accomodate then this will automatically be lowered.\n	AMD users: A batchsize of 8 requires about 2 GB vram.', rounding: '1', min_max: (1, 64), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: detect.s3fd
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: unet_dfl_defaults.py, module_path: plugins.extract.mask, plugin_type: mask
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.mask.unet_dfl_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'mask.unet_dfl', info: 'UNET_DFL options. Mask designed to provide smart segmentation of mostly frontal faces.\nThe mask model has been trained by community members. Insert more commentary on testing here. Profile faces may result in sub-par performance.')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'mask.unet_dfl', title: 'batch-size', datatype: '<class 'int'>', default: '8', info: 'The batch size to use. To a point, higher batch sizes equal better performance, but setting it too high can harm performance.\n\n	Nvidia users: If the batchsize is set higher than the your GPU can accomodate then this will automatically be lowered.', rounding: '1', min_max: (1, 64), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: mask.unet_dfl
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: vgg_clear_defaults.py, module_path: plugins.extract.mask, plugin_type: mask
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.mask.vgg_clear_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'mask.vgg_clear', info: 'VGG_Clear options. Mask designed to provide smart segmentation of mostly frontal faces clear of obstructions.\nProfile faces and obstructions may result in sub-par performance.')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'mask.vgg_clear', title: 'batch-size', datatype: '<class 'int'>', default: '6', info: 'The batch size to use. To a point, higher batch sizes equal better performance, but setting it too high can harm performance.\n\n	Nvidia users: If the batchsize is set higher than the your GPU can accomodate then this will automatically be lowered.', rounding: '1', min_max: (1, 64), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: mask.vgg_clear
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Adding defaults: (filename: vgg_obstructed_defaults.py, module_path: plugins.extract.mask, plugin_type: mask
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Importing defaults module: plugins.extract.mask.vgg_obstructed_defaults
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'mask.vgg_obstructed', info: 'VGG_Obstructed options. Mask designed to provide smart segmentation of mostly frontal faces.\nThe mask model has been specifically trained to recognize some facial obstructions (hands and eyeglasses). Profile faces may result in sub-par performance.')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'mask.vgg_obstructed', title: 'batch-size', datatype: '<class 'int'>', default: '2', info: 'The batch size to use. To a point, higher batch sizes equal better performance, but setting it too high can harm performance.\n\n	Nvidia users: If the batchsize is set higher than the your GPU can accomodate then this will automatically be lowered.', rounding: '1', min_max: (1, 64), choices: [], gui_radio: False, fixed: True, group: None)
11/16/2020 19:53:53 MainProcess     MainThread      _config         load_module               DEBUG    Added defaults: mask.vgg_obstructed
11/16/2020 19:53:53 MainProcess     MainThread      config          handle_config             DEBUG    Handling config
11/16/2020 19:53:53 MainProcess     MainThread      config          check_exists              DEBUG    Config file exists: 'C:\Users\ks\faceswap\config\extract.ini'
11/16/2020 19:53:53 MainProcess     MainThread      config          load_config               VERBOSE  Loading config: 'C:\Users\ks\faceswap\config\extract.ini'
11/16/2020 19:53:53 MainProcess     MainThread      config          validate_config           DEBUG    Validating config
11/16/2020 19:53:53 MainProcess     MainThread      config          check_config_change       DEBUG    Default config has not changed
11/16/2020 19:53:53 MainProcess     MainThread      config          check_config_choices      DEBUG    Checking config choices
11/16/2020 19:53:53 MainProcess     MainThread      config          check_config_choices      DEBUG    Checked config choices
11/16/2020 19:53:53 MainProcess     MainThread      config          validate_config           DEBUG    Validated config
11/16/2020 19:53:53 MainProcess     MainThread      config          handle_config             DEBUG    Handled config
11/16/2020 19:53:53 MainProcess     MainThread      config          __init__                  DEBUG    Initialized: Config
11/16/2020 19:53:53 MainProcess     MainThread      config          get                       DEBUG    Getting config item: (section: 'global', option: 'allow_growth')
11/16/2020 19:53:53 MainProcess     MainThread      config          get                       DEBUG    Returning item: (type: <class 'bool'>, value: True)
11/16/2020 19:53:53 MainProcess     MainThread      config          get                       DEBUG    Getting config item: (section: 'detect.s3fd', option: 'confidence')
11/16/2020 19:53:53 MainProcess     MainThread      config          get                       DEBUG    Returning item: (type: <class 'int'>, value: 70)
11/16/2020 19:53:53 MainProcess     MainThread      config          get                       DEBUG    Getting config item: (section: 'detect.s3fd', option: 'batch-size')
11/16/2020 19:53:53 MainProcess     MainThread      config          get                       DEBUG    Returning item: (type: <class 'int'>, value: 1)
11/16/2020 19:53:53 MainProcess     MainThread      utils           _get                      DEBUG    Model exists: C:\Users\ks\faceswap\plugins\extract\detect\.cache\s3fd_keras_v2.h5
11/16/2020 19:53:53 MainProcess     MainThread      _base           __init__                  DEBUG    Initialized _base Detect
11/16/2020 19:53:53 MainProcess     MainThread      _base           _get_rotation_angles      DEBUG    Not setting rotation angles
11/16/2020 19:53:53 MainProcess     MainThread      _base           __init__                  DEBUG    Initialized _base Detect
11/16/2020 19:53:53 MainProcess     MainThread      pipeline        _load_align               DEBUG    Loading Aligner: 'fan'
11/16/2020 19:53:53 MainProcess     MainThread      plugin_loader   _import                   INFO     Loading Align from Fan plugin...
11/16/2020 19:53:53 MainProcess     MainThread      _base           __init__                  DEBUG    Initializing Align: (normalize_method: None)
11/16/2020 19:53:53 MainProcess     MainThread      _base           __init__                  DEBUG    Initializing Align: (git_model_id: 13, model_filename: face-alignment-network_2d4_keras_v2.h5, exclude_gpus: None, configfile: None, instance: 0, )
11/16/2020 19:53:53 MainProcess     MainThread      config          __init__                  DEBUG    Initializing: Config
11/16/2020 19:53:53 MainProcess     MainThread      config          get_config_file           DEBUG    Config File location: 'C:\Users\ks\faceswap\config\extract.ini'
11/16/2020 19:53:53 MainProcess     MainThread      _config         set_defaults              DEBUG    Setting defaults
11/16/2020 19:53:53 MainProcess     MainThread      _config         set_globals               DEBUG    Setting global config
11/16/2020 19:53:53 MainProcess     MainThread      config          add_section               DEBUG    Add section: (title: 'global', info: 'Options that apply to all extraction plugins')
11/16/2020 19:53:53 MainProcess     MainThread      config          add_item                  DEBUG    Add item: (section: 'global', title: 'allow_growth', datatype: '<class 'bool'>', default: 'False', info: '[Nvidia Only]. Enable the Tensorflow GPU `allow_growth` configuration ...
11/16/2020 19:53:53 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'detect_s3fd_input', thread_count: 1)
11/16/2020 19:53:53 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'detect_s3fd_input'
11/16/2020 19:53:53 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: detect_s3fd_input
11/16/2020 19:53:53 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: detect_s3fd_predict, function: <bound method Detector._predict of <plugins.extract.detect.s3fd.Detect object at 0x0000023B6F922040>>, in_queue: <queue.Queue object at 0x0000023B6FC93400>, out_queue: <queue.Queue object at 0x0000023B6FC656A0>)
11/16/2020 19:53:53 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'detect_s3fd_predict', thread_count: 1)
11/16/2020 19:53:53 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'detect_s3fd_predict'
11/16/2020 19:53:53 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: detect_s3fd_predict
11/16/2020 19:53:53 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: detect_s3fd_output, function: <bound method Detect.process_output of <plugins.extract.detect.s3fd.Detect object at 0x0000023B6F922040>>, in_queue: <queue.Queue object at 0x0000023B6FC656A0>, out_queue: <queue.Queue object at 0x0000023B6FC65160>)
11/16/2020 19:53:53 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'detect_s3fd_output', thread_count: 1)
11/16/2020 19:53:53 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'detect_s3fd_output'
11/16/2020 19:53:53 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: detect_s3fd_output
11/16/2020 19:53:53 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiled detect threads: [<lib.multithreading.MultiThread object at 0x0000023B6FC65340>, <lib.multithreading.MultiThread object at 0x0000023B6FC65370>, <lib.multithreading.MultiThread object at 0x0000023B6FC652B0>]
11/16/2020 19:53:53 MainProcess     MainThread      s3fd            __init__                  DEBUG    Initializing: S3fd: (model_path: 'C:\Users\ks\faceswap\plugins\extract\detect\.cache\s3fd_keras_v2.h5', model_kwargs: {'custom_objects': {'L2Norm': <class 'plugins.extract.detect.s3fd.L2Norm'>, 'SliceO2K': <class 'plugins.extract.detect.s3fd.SliceO2K'>}}, allow_growth: True, exclude_gpus: None, confidence: 0.7)
11/16/2020 19:53:54 MainProcess     MainThread      session         _set_session              INFO     Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
11/16/2020 19:53:55 MainProcess     MainThread      session         load_model_weights        VERBOSE  Initializing plugin model: S3FD
11/16/2020 19:53:55 MainProcess     MainThread      s3fd            __init__                  DEBUG    Initialized: S3fd
11/16/2020 19:53:55 MainProcess     MainThread      _base           initialize                INFO     Initialized S3FD (Detect) with batchsize of 1
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'detect_s3fd_input'
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'detect_s3fd_input_0'
11/16/2020 19:53:55 MainProcess     detect_s3fd_input_0 _base           _thread_process           DEBUG    threading: (function: 'process_input')
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'detect_s3fd_input': 1
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'detect_s3fd_predict'
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'detect_s3fd_predict_0'
11/16/2020 19:53:55 MainProcess     detect_s3fd_predict_0 _base           _thread_process           DEBUG    threading: (function: '_predict')
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'detect_s3fd_predict': 1
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'detect_s3fd_output'
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'detect_s3fd_output_0'
11/16/2020 19:53:55 MainProcess     detect_s3fd_output_0 _base           _thread_process           DEBUG    threading: (function: 'process_output')
11/16/2020 19:53:55 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'detect_s3fd_output': 1
11/16/2020 19:53:55 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    Launched detect plugin
11/16/2020 19:53:55 MainProcess     MainThread      pipeline        detected_faces            DEBUG    Running Detection. Phase: '['detect']'
11/16/2020 19:54:11 MainProcess     detect_s3fd_predict_0 multithreading  run                       DEBUG    Error in thread (detect_s3fd_predict_0): You do not have enough GPU memory available to run detection at the selected batch size. You can try a number of things:\n1) Close any other application that is using your GPU (web browsers are particularly bad for this).\n2) Lower the batchsize (the amount of images fed into the model) by editing the plugin settings (GUI: Settings > Configure extract settings, CLI: Edit the file faceswap/config/extract.ini).\n3) Enable 'Single Process' mode.
11/16/2020 19:54:11 MainProcess     MainThread      multithreading  check_and_raise_error     DEBUG    Thread error caught: [(<class 'lib.utils.FaceswapError'>, FaceswapError("You do not have enough GPU memory available to run detection at the selected batch size. You can try a number of things:\n1) Close any other application that is using your GPU (web browsers are particularly bad for this).\n2) Lower the batchsize (the amount of images fed into the model) by editing the plugin settings (GUI: Settings > Configure extract settings, CLI: Edit the file faceswap/config/extract.ini).\n3) Enable 'Single Process' mode."), <traceback object at 0x0000023B23F7C340>)]
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    You do not have enough GPU memory available to run detection at the selected batch size. You can try a number of things:
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    1) Close any other application that is using your GPU (web browsers are particularly bad for this).
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    2) Lower the batchsize (the amount of images fed into the model) by editing the plugin settings (GUI: Settings > Configure extract settings, CLI: Edit the file faceswap/config/extract.ini).
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    3) Enable 'Single Process' mode.
11/16/2020 19:54:11 MainProcess     MainThread      utils           safe_shutdown             DEBUG    Safely shutting down
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating all queues
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'ImagesLoader'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_detect_in'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_align_in'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_mask_0_in'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_mask_1_in'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_mask_2_in'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_mask_3_in'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'extract0_mask_3_out'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'ImagesSaver'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'detect0_predict_s3fd'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queue               DEBUG    QueueManager flushing: 'detect0_post_s3fd'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   flush_queues              DEBUG    QueueManager flushed all queues
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'ImagesLoader'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_detect_in'
11/16/2020 19:54:11 MainProcess     _load_0         image           load                      DEBUG    Closing Load Generator
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_align_in'
11/16/2020 19:54:11 MainProcess     _load_0         image           close                     DEBUG    Received Close
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_mask_0_in'
11/16/2020 19:54:11 MainProcess     _load_0         multithreading  join                      DEBUG    Joining Threads: 'ImagesLoader'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_mask_1_in'
11/16/2020 19:54:11 MainProcess     _load_0         multithreading  join                      DEBUG    Joining Thread: 'ImagesLoader_0'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_mask_2_in'
11/16/2020 19:54:11 MainProcess     _load_0         multithreading  join                      DEBUG    Joined all Threads: 'ImagesLoader'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_mask_3_in'
11/16/2020 19:54:11 MainProcess     _load_0         image           close                     DEBUG    Closed
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'extract0_mask_3_out'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'ImagesSaver'
11/16/2020 19:54:11 MainProcess     MainThread      queue_manager   terminate_queues          DEBUG    QueueManager terminating: 'detect0_predict_s3fd'


bryanlyon
Site Admin
Posts: 793
Joined: Fri Jul 12, 2019 12:49 am
Answers: 44
Location: San Francisco
Has thanked: 4 times
Been thanked: 218 times

Re: You do not have enough GPU memory

Post by bryanlyon »

Yes, that is exactly your problem. Try the steps included in that message.

Code: Select all

11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    You do not have enough GPU memory available to run detection at the selected batch size. You can try a number of things:
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    1) Close any other application that is using your GPU (web browsers are particularly bad for this).
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    2) Lower the batchsize (the amount of images fed into the model) by editing the plugin settings (GUI: Settings > Configure extract settings, CLI: Edit the file faceswap/config/extract.ini).
11/16/2020 19:54:11 MainProcess     MainThread      launcher        execute_script            ERROR    3) Enable 'Single Process' mode.
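For step 2, the batch size lives in faceswap/config/extract.ini. As a purely illustrative excerpt (the exact section and key names can differ between versions, so check your own file), it would look something like this:

```ini
# faceswap/config/extract.ini (illustrative excerpt -- names may vary by version)
[detect.s3fd]
# Number of images fed to the detector at once.
# Lower this to reduce VRAM use; 1 is the minimum.
batch-size = 1
```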
k1s
Posts: 4
Joined: Mon Nov 16, 2020 7:55 pm

Re: You do not have enough GPU memory

Post by k1s »

Thanks for your swift response.

I have indeed tried all those steps. Batch size is 1. Single Process mode enabled. No other applications running.

Does Faceswap really need more VRAM for 7 small JPEGs than PC games do? (PES 2020, Forza, etc. all run on this PC at good framerates.)

torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 623 times

Re: You do not have enough GPU memory

Post by torzdf »

S3FD shouldn't fail on 4GB of VRAM, but every PC is different, so who knows what is going on there.

Yes, S3FD is a very heavy model (it isn't Faceswap itself that takes a lot of VRAM, but the models it uses). Basically (without going into a full lesson on machine learning), a model holds a certain number of parameters in VRAM, depending on its size, and these can run into the millions or even billions.
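To put rough numbers on that: at 32-bit precision each parameter costs 4 bytes, so a single 3x3 convolution with 512 input and 512 output channels (a typical deep layer in a VGG-style network) already needs around 9 MiB just for its weights, before activations and framework overhead are counted. A back-of-envelope sketch — illustrative only, not Faceswap's actual code:

```python
# Rough VRAM cost of model weights at float32 (4 bytes per parameter).
# Real usage is higher: activations, workspace buffers and framework
# overhead all come on top of the weights themselves.

BYTES_PER_FLOAT32 = 4


def conv2d_params(kernel_h, kernel_w, in_channels, out_channels, bias=True):
    """Parameter count of a single 2D convolution layer."""
    params = kernel_h * kernel_w * in_channels * out_channels
    if bias:
        params += out_channels
    return params


# One 3x3 conv, 512 channels in and out: ~2.36M parameters, 9 MiB of weights
layer = conv2d_params(3, 3, 512, 512, bias=False)
print(layer, layer * BYTES_PER_FLOAT32 / 2**20)

# A model with 100M parameters needs ~381 MiB for the weights alone
print(100_000_000 * BYTES_PER_FLOAT32 / 2**20)
```

Stack a few dozen such layers and the weights alone eat a meaningful chunk of a 4GB card, which is why detector choice matters so much at this VRAM level.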

My suggestion to you is to use the MTCNN detector instead. It is a lot more lightweight, and still fairly reliable.

You may want to do some diagnosing in the meantime to see if anything is taking VRAM that shouldn't be. If you really want to maximize VRAM availability then you should use Linux, but I appreciate this is not a route everyone wants to go down.

My word is final

k1s
Posts: 4
Joined: Mon Nov 16, 2020 7:55 pm

Re: You do not have enough GPU memory

Post by k1s »

Thanks for your reply and suggestions.

I went with MTCNN. Here is the crash report:

Code: Select all

11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'mask0_post_extended'
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'mask0_post_extended', maxsize: 1)
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'mask0_post_extended')
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'mask0_post_extended'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiling mask threads
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: mask_extended_input, function: <bound method Mask.process_input of <plugins.extract.mask.extended.Mask object at 0x000001D3CAA88FD0>>, in_queue: <queue.Queue object at 0x000001D3CAE5F7F0>, out_queue: <queue.Queue object at 0x000001D4278BE820>)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'mask_extended_input', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'mask_extended_input'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: mask_extended_input
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: mask_extended_predict, function: <bound method Masker._predict of <plugins.extract.mask.extended.Mask object at 0x000001D3CAA88FD0>>, in_queue: <queue.Queue object at 0x000001D4278BE820>, out_queue: <queue.Queue object at 0x000001D4278BE190>)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'mask_extended_predict', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'mask_extended_predict'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: mask_extended_predict
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: mask_extended_output, function: <bound method Mask.process_output of <plugins.extract.mask.extended.Mask object at 0x000001D3CAA88FD0>>, in_queue: <queue.Queue object at 0x000001D4278BE190>, out_queue: <queue.Queue object at 0x000001D3CAE5FD90>)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'mask_extended_output', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'mask_extended_output'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: mask_extended_output
11/17/2020 08:30:44 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiled mask threads: [<lib.multithreading.MultiThread object at 0x000001D4278BE8E0>, <lib.multithreading.MultiThread object at 0x000001D4278AD430>, <lib.multithreading.MultiThread object at 0x000001D4278AD1C0>]
11/17/2020 08:30:44 MainProcess     MainThread      extended        init_model                DEBUG    No mask model to initialize
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  start                     DEBUG    Started all threads 'ImagesLoader': 1
11/17/2020 08:30:44 MainProcess     MainThread      _base           initialize                INFO     Initialized Extended (Mask) with batchsize of 1
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'mask_extended_input'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'mask_extended_input_0'
11/17/2020 08:30:44 MainProcess     mask_extended_input_0 _base           _thread_process           DEBUG    threading: (function: 'process_input')
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'mask_extended_input': 1
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'mask_extended_predict'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'mask_extended_predict_0'
11/17/2020 08:30:44 MainProcess     mask_extended_predict_0 _base           _thread_process           DEBUG    threading: (function: '_predict')
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'mask_extended_predict': 1
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'mask_extended_output'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'mask_extended_output_0'
11/17/2020 08:30:44 MainProcess     mask_extended_output_0 _base           _thread_process           DEBUG    threading: (function: 'process_output')
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'mask_extended_output': 1
11/17/2020 08:30:44 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    Launched mask_1 plugin
11/17/2020 08:30:44 MainProcess     MainThread      pipeline        detected_faces            DEBUG    Running Detection. Phase: '['mask_1']'
11/17/2020 08:30:44 MainProcess     _reload_0       image           load                      DEBUG    Closing Load Generator
11/17/2020 08:30:44 MainProcess     _reload_0       image           close                     DEBUG    Received Close
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  join                      DEBUG    Joining Threads: 'ImagesLoader'
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  join                      DEBUG    Joining Thread: 'ImagesLoader_0'
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  join                      DEBUG    Joined all Threads: 'ImagesLoader'
11/17/2020 08:30:44 MainProcess     _reload_0       image           close                     DEBUG    Closed
11/17/2020 08:30:44 MainProcess     _reload_0       extract         _reload                   DEBUG    Reload Images: Complete
11/17/2020 08:30:44 MainProcess     mask_extended_input_0 _base           _thread_process           DEBUG    Putting EOF
11/17/2020 08:30:44 MainProcess     mask_extended_predict_0 _base           _thread_process           DEBUG    Putting EOF
11/17/2020 08:30:44 MainProcess     mask_extended_output_0 _base           _thread_process           DEBUG    Putting EOF
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: 'mask_extended_input'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: 'mask_extended_input_0'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joined all Threads: 'mask_extended_input'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: 'mask_extended_predict'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: 'mask_extended_predict_0'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joined all Threads: 'mask_extended_predict'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: 'mask_extended_output'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: 'mask_extended_output_0'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  join                      DEBUG    Joined all Threads: 'mask_extended_output'
11/17/2020 08:30:44 MainProcess     MainThread      pipeline        detected_faces            DEBUG    Switching to phase: ['mask_2']
11/17/2020 08:30:44 MainProcess     MainThread      extract         _run_extraction           DEBUG    Reloading images
11/17/2020 08:30:44 MainProcess     MainThread      extract         _threaded_redirector      DEBUG    Threading task: (Task: 'reload')
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: '_reload', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: '_reload'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): '_reload'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: '_reload_0'
11/17/2020 08:30:44 MainProcess     _reload_0       extract         _reload                   DEBUG    Reload Images: Start. Detected Faces Count: 7
11/17/2020 08:30:44 MainProcess     _reload_0       image           load                      DEBUG    Initializing Load Generator
11/17/2020 08:30:44 MainProcess     _reload_0       image           _set_thread               DEBUG    Setting thread
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'ImagesLoader', thread_count: 1)
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  __init__                  DEBUG    Initialized MultiThread: 'ImagesLoader'
11/17/2020 08:30:44 MainProcess     _reload_0       image           _set_thread               DEBUG    Set thread: <lib.multithreading.MultiThread object at 0x000001D4278BE280>
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  start                     DEBUG    Starting thread(s): 'ImagesLoader'
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  start                     DEBUG    Starting thread 1 of 1: 'ImagesLoader_0'
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads '_reload': 1
11/17/2020 08:30:44 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    Launching mask_2 plugin
11/17/2020 08:30:44 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    in_qname: extract0_mask_2_in, out_qname: extract0_mask_3_in
11/17/2020 08:30:44 MainProcess     MainThread      _base           initialize                DEBUG    initialize Mask: (args: (), kwargs: {'in_queue': <queue.Queue object at 0x000001D3CAE5FD90>, 'out_queue': <queue.Queue object at 0x000001D3CAE5FF10>})
11/17/2020 08:30:44 MainProcess     MainThread      _base           initialize                INFO     Initializing VGG Clear (Mask)...
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'mask0_predict_vgg_clear'
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'mask0_predict_vgg_clear', maxsize: 1)
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'mask0_predict_vgg_clear')
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'mask0_predict_vgg_clear'
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'mask0_post_vgg_clear'
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'mask0_post_vgg_clear', maxsize: 1)
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'mask0_post_vgg_clear')
11/17/2020 08:30:44 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'mask0_post_vgg_clear'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiling mask threads
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: mask_vgg_clear_input, function: <bound method Mask.process_input of <plugins.extract.mask.vgg_clear.Mask object at 0x000001D3CAA88FA0>>, in_queue: <queue.Queue object at 0x000001D3CAE5FD90>, out_queue: <queue.Queue object at 0x000001D4278ADF10>)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'mask_vgg_clear_input', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'mask_vgg_clear_input'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: mask_vgg_clear_input
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: mask_vgg_clear_predict, function: <bound method Masker._predict of <plugins.extract.mask.vgg_clear.Mask object at 0x000001D3CAA88FA0>>, in_queue: <queue.Queue object at 0x000001D4278ADF10>, out_queue: <queue.Queue object at 0x000001D4278ADF40>)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'mask_vgg_clear_predict', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'mask_vgg_clear_predict'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: mask_vgg_clear_predict
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: mask_vgg_clear_output, function: <bound method Mask.process_output of <plugins.extract.mask.vgg_clear.Mask object at 0x000001D3CAA88FA0>>, in_queue: <queue.Queue object at 0x000001D4278ADF40>, out_queue: <queue.Queue object at 0x000001D3CAE5FF10>)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'mask_vgg_clear_output', thread_count: 1)
11/17/2020 08:30:44 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'mask_vgg_clear_output'
11/17/2020 08:30:44 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: mask_vgg_clear_output
11/17/2020 08:30:44 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiled mask threads: [<lib.multithreading.MultiThread object at 0x000001D4278ADD60>, <lib.multithreading.MultiThread object at 0x000001D4278AD610>, <lib.multithreading.MultiThread object at 0x000001D467764460>]
11/17/2020 08:30:44 MainProcess     MainThread      session         _set_session              INFO     Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
11/17/2020 08:30:44 MainProcess     ImagesLoader_0  image           _process                  DEBUG    Load iterator: <bound method ImagesLoader._from_folder of <lib.image.ImagesLoader object at 0x000001D3A4792520>>
11/17/2020 08:30:44 MainProcess     ImagesLoader_0  image           _from_folder              DEBUG    Loading frames from folder: 'C:\Users\ks\Desktop\faces'
11/17/2020 08:30:44 MainProcess     _reload_0       multithreading  start                     DEBUG    Started all threads 'ImagesLoader': 1
Traceback (most recent call last):
  File "C:\Users\ks\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\ks\faceswap\scripts\extract.py", line 118, in process
    self._run_extraction()
  File "C:\Users\ks\faceswap\scripts\extract.py", line 205, in _run_extraction
    self._extractor.launch()
  File "C:\Users\ks\faceswap\plugins\extract\pipeline.py", line 207, in launch
    self._launch_plugin(phase)
  File "C:\Users\ks\faceswap\plugins\extract\pipeline.py", line 560, in _launch_plugin
    plugin.initialize(**kwargs)
  File "C:\Users\ks\faceswap\plugins\extract\_base.py", line 344, in initialize
    self.init_model()
  File "C:\Users\ks\faceswap\plugins\extract\mask\vgg_clear.py", line 26, in init_model
    self.model = VGGClear(self.model_path,
  File "C:\Users\ks\faceswap\plugins\extract\mask\vgg_clear.py", line 86, in __init__
    self.define_model(self._model_definition)
  File "C:\Users\ks\faceswap\lib\model\session.py", line 176, in define_model
    self._model = Model(*function())
  File "C:\Users\ks\faceswap\plugins\extract\mask\vgg_clear.py", line 106, in _model_definition
    var_x = _ConvBlock(5, 512, 3)(pool4)
  File "C:\Users\ks\faceswap\plugins\extract\mask\vgg_clear.py", line 174, in __call__
    var_x = Conv2D(self._filters,
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 897, in __call__
    self._maybe_build(inputs)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2416, in _maybe_build
    self.build(input_shapes)  # pylint:disable=not-callable
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 156, in build
    self.kernel = self.add_weight(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 560, in add_weight
    variable = self._add_variable_with_custom_getter(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\training\tracking\base.py", line 738, in _add_variable_with_custom_getter
    new_variable = getter(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py", line 129, in make_variable
    return tf_variables.VariableV1(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\variables.py", line 259, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\variables.py", line 205, in _variable_v1_call
    return previous_getter(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\variables.py", line 198, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2584, in default_variable_creator
    return resource_variable_ops.ResourceVariable(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1423, in __init__
    self._init_from_args(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1567, in _init_from_args
    initial_value() if init_from_fn else initial_value,
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py", line 121, in <lambda>
    init_val = lambda: initializer(shape, dtype=dtype)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\init_ops_v2.py", line 558, in __call__
    return self._random_generator.random_uniform(shape, -limit, limit, dtype)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\init_ops_v2.py", line 1067, in random_uniform
    return op(
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\random_ops.py", line 301, in random_uniform
    result = math_ops.add(result * (maxval - minval), minval, name=name)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 343, in add
    _ops.raise_from_not_ok_status(e, name)
  File "C:\Users\ks\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 6653, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,512,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Add]

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         c24bf2b GUI - Revert Conda default font fix
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: GeForce GTX 1650
gpu_devices_active:  GPU_0
gpu_driver:          451.48
gpu_vram:            GPU_0: 4096MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\ks\faceswap\faceswap.py extract -i C:/Users/ks/Desktop/faces -o C:/Users/ks/Desktop/faces/Outputs -D mtcnn -A fan -M vgg-clear unet-dfl -nm none -min 20 -l 0.4 -een 1 -sz 256 -si 0 -sp -L VERBOSE -LF C:/Users/ks/Desktop/faces/Outputs/fslog.log -gui
py_conda_version:    conda 4.9.2
py_implementation:   CPython
py_version:          3.8.5
py_virtual_env:      True
sys_cores:           16
sys_processor:       Intel64 Family 6 Model 165 Stepping 5, GenuineIntel
sys_ram:             Total: 65457MB, Available: 56294MB, Used: 9162MB, Free: 56294MB

=============== Pip Packages ===============
absl-py==0.11.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
cycler==0.10.0
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.3.3
google-auth==1.23.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.33.2
h5py==2.10.0
idna==2.10
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1601912903842/work
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1604014703538/work
Markdown==3.3.3
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.2.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.4.0.46
opt-einsum==3.3.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1603823068645/work
protobuf==3.14.0
psutil @ file:///C:/ci/psutil_1598370330503/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pywin32==227
requests==2.25.0
requests-oauthlib==1.3.0
rsa==4.6
scikit-learn @ file:///C:/ci/scikit-learn_1598377018496/work
scipy @ file:///C:/ci/scipy_1604596260408/work
sip==4.19.13
six @ file:///C:/ci/six_1605187374963/work
tensorboard==2.2.2
tensorboard-plugin-wit==1.7.0
tensorflow-gpu==2.2.1
tensorflow-gpu-estimator==2.2.0
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1605303662894/work
urllib3==1.26.2
Werkzeug==1.0.1
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\ks\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   0.11.0                   pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
ca-certificates           2020.10.14               0
cachetools                4.1.1                    pypi_0    pypi
certifi                   2020.6.20                pyhd3eb1b0_3
chardet                   3.0.4                    pypi_0    pypi
cudatoolkit               10.1.243                 h74a9793_0
cudnn                     7.6.5                    cuda10.1_0
cycler                    0.10.0                   py38_0
fastcluster               1.1.26                   py38h251f6bf_2    conda-forge
ffmpeg                    4.3.1                    ha925a31_0    conda-forge
ffmpy                     0.2.3                    pypi_0    pypi
freetype                  2.10.4                   hd328e21_0
gast                      0.3.3                    pypi_0    pypi
git                       2.23.0                   h6bb4b03_0
google-auth               1.23.0                   pypi_0    pypi
google-auth-oauthlib      0.4.2                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.33.2                   pypi_0    pypi
h5py                      2.10.0                   pypi_0    pypi
icc_rt                    2019.0.0                 h0cc432a_1
icu                       58.2                     ha925a31_3
idna                      2.10                     pypi_0    pypi
imageio                   2.9.0                    py_0
imageio-ffmpeg            0.4.2                    py_0    conda-forge
intel-openmp              2020.2                   254
joblib                    0.17.0                   py_0
jpeg                      9b                       hb83a4c4_2
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.3.0                    py38hd77b12b_0
libpng                    1.6.37                   h2a8f88b_0
libtiff                   4.1.0                    h56a325e_1
lz4-c                     1.9.2                    hf4a77e7_3
markdown                  3.3.3                    pypi_0    pypi
matplotlib                3.2.2                    0
matplotlib-base           3.2.2                    py38h64f37c6_0
mkl                       2020.2                   256
mkl-service               2.3.0                    py38hb782905_0
mkl_fft                   1.2.0                    py38h45dec08_0
mkl_random                1.1.1                    py38h47e9c7a_0
numpy                     1.18.5                   pypi_0    pypi
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.1.0                    pypi_0    pypi
olefile                   0.46                     py_0
opencv-python             4.4.0.46                 pypi_0    pypi
openssl                   1.1.1h                   he774522_0
opt-einsum                3.3.0                    pypi_0    pypi
pathlib                   1.0.1                    py_1
pillow                    8.0.1                    py38h4fa10fc_0
pip                       20.2.4                   py38haa95532_0
protobuf                  3.14.0                   pypi_0    pypi
psutil                    5.7.2                    py38he774522_0
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pyparsing                 2.4.7                    py_0
pyqt                      5.9.2                    py38ha925a31_4
python                    3.8.5                    h5fd99cc_1
python-dateutil           2.8.1                    py_0
python_abi                3.8                      1_cp38    conda-forge
pywin32                   227                      py38he774522_1
qt                        5.9.7                    vc14h73c81de_0
requests                  2.25.0                   pypi_0    pypi
requests-oauthlib         1.3.0                    pypi_0    pypi
rsa                       4.6                      pypi_0    pypi
scikit-learn              0.23.2                   py38h47e9c7a_0
scipy                     1.5.2                    py38h14eb087_0
setuptools                50.3.1                   py38haa95532_1
sip                       4.19.13                  py38ha925a31_0
six                       1.15.0                   py38haa95532_0
sqlite                    3.33.0                   h2a8f88b_0
tensorboard               2.2.2                    pypi_0    pypi
tensorboard-plugin-wit    1.7.0                    pypi_0    pypi
tensorflow-gpu            2.2.1                    pypi_0    pypi
tensorflow-gpu-estimator  2.2.0                    pypi_0    pypi
termcolor                 1.1.0                    pypi_0    pypi
threadpoolctl             2.1.0                    pyh5ca1d4c_0
tk                        8.6.10                   he774522_0
tornado                   6.0.4                    py38he774522_1
tqdm                      4.51.0                   pyhd3eb1b0_0
urllib3                   1.26.2                   pypi_0    pypi
vc                        14.1                     h0510ff6_4
vs2015_runtime            14.16.27012              hf0eaf9b_3
werkzeug                  1.0.1                    pypi_0    pypi
wheel                     0.35.1                   pyhd3eb1b0_0
wincertstore              0.2                      py38_0
wrapt                     1.12.1                   pypi_0    pypi
xz                        5.2.5                    h62dcd97_0
zlib                      1.2.11                   h62dcd97_4
zstd                      1.4.5                    h04227a9_0

================= Configs ==================
--------- .faceswap ---------
backend: nvidia

--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------
[global]
allow_growth: True
[align.fan]
batch-size: 1
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 70
batch-size: 1
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 1
[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------
[global]
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
reflect_padding: False
allow_growth: False
mixed_precision: False
convert_batchsize: 16
[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
disable_warp: False
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
torzdf wrote: Mon Nov 16, 2020 11:32 pm

...You may want to do some diagnosing in the meantime to see if anything is taking VRAM that shouldn't be....

I can't see anything other than python / faceswap using the GPU memory:
Image
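For anyone else checking the same thing: faceswap's gpu_stats module already uses pynvml (the nvidia-ml-py3 package in the pip list above), so the same library can list which processes hold VRAM. A minimal sketch under that assumption; vram_holders and mib are hypothetical helper names, not faceswap functions:

```python
# Sketch: per-process VRAM usage on GPU 0 via pynvml (nvidia-ml-py3).
# Caveat: on Windows WDDM the driver may withhold per-process figures.

def mib(num_bytes):
    """Convert a byte count to MiB, one decimal place."""
    return round(num_bytes / 1048576, 1)

def vram_holders(device_index=0):
    """Return [(pid, used_mib)] for processes holding memory on a GPU."""
    import pynvml  # provided by the nvidia-ml-py3 package
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
        # usedGpuMemory can be None when the driver withholds the figure
        return [(proc.pid, mib(proc.usedGpuMemory or 0)) for proc in procs]
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    for pid, used in vram_holders():
        print(f"PID {pid}: {used} MiB")
```

If the only PID reported is the faceswap python process, then the 4GB card really is being exhausted by the extract models themselves.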

User avatar
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 623 times

Re: You do not have enough GPU memory

Post by torzdf »

There is something weird going on here. You really shouldn't be running out of VRAM when doing this :/

This is failing on the vgg-clear mask.

What I suggest, in lieu of finding the actual problem, is to not select any maskers when extracting.

Add masks in afterwards with the "mask" tool. At least this way, you should get an alignments file and you will have something to build from.
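The two-step workflow would look roughly like this (the mask tool's exact flags are assumptions on my part -- confirm with python tools.py mask -h before running):

```shell
# Step 1: extract with detector and aligner only, no GPU-heavy maskers
python faceswap.py extract -i C:/Users/ks/Desktop/faces -o C:/Users/ks/Desktop/faces/Outputs -D mtcnn -A fan

# Step 2: generate the masks afterwards against the alignments file
python tools.py mask -a C:/Users/ks/Desktop/faces/alignments.fsa -i C:/Users/ks/Desktop/faces/Outputs -M vgg-clear
```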

My word is final

User avatar
k1s
Posts: 4
Joined: Mon Nov 16, 2020 7:55 pm

Re: You do not have enough GPU memory

Post by k1s »

Thanks,

Here's what happened when I unselected the masks

Code: Select all

Loading...
Setting Faceswap backend to NVIDIA
11/17/2020 12:07:12 INFO     Log level set to: VERBOSE
2020-11-17 12:07:12.736832: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
11/17/2020 12:07:14 INFO     Output Directory: C:\Users\ks\Desktop\faces\Outputs
11/17/2020 12:07:14 VERBOSE  Alignments filepath: 'C:\Users\ks\Desktop\faces\alignments.fsa'
11/17/2020 12:07:14 INFO     Loading Detect from Mtcnn plugin...
11/17/2020 12:07:14 VERBOSE  Loading config: 'C:\Users\ks\faceswap\config\extract.ini'
11/17/2020 12:07:14 INFO     Loading Align from Fan plugin...
11/17/2020 12:07:14 VERBOSE  Loading config: 'C:\Users\ks\faceswap\config\extract.ini'
11/17/2020 12:07:14 INFO     Loading Mask from Components plugin...
11/17/2020 12:07:14 VERBOSE  Loading config: 'C:\Users\ks\faceswap\config\extract.ini'
11/17/2020 12:07:14 INFO     Loading Mask from Extended plugin...
11/17/2020 12:07:14 VERBOSE  Loading config: 'C:\Users\ks\faceswap\config\extract.ini'
11/17/2020 12:07:14 INFO     Starting, this may take a while...
11/17/2020 12:07:14 INFO     Initializing MTCNN (Detect)...
2020-11-17 12:07:14.297966: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-11-17 12:07:14.307074: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.68GHz coreCount: 14 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB/s
2020-11-17 12:07:14.307259: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-11-17 12:07:14.312511: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-11-17 12:07:14.315644: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-11-17 12:07:14.316905: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-11-17 12:07:14.320914: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-11-17 12:07:14.323106: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-11-17 12:07:14.336307: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
11/17/2020 12:07:14 INFO     Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
2020-11-17 12:07:14.336496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-11-17 12:07:14.343626: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-11-17 12:07:14.350095: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1e4fd1e6d60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-17 12:07:14.350308: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-11-17 12:07:14.350495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1650 computeCapability: 7.5
coreClock: 1.68GHz coreCount: 14 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 119.24GiB/s
2020-11-17 12:07:14.350649: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-11-17 12:07:14.350727: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-11-17 12:07:14.350803: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-11-17 12:07:14.350882: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-11-17 12:07:14.350957: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-11-17 12:07:14.351036: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-11-17 12:07:14.351327: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-11-17 12:07:14.351420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-11-17 12:07:14.705491: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-17 12:07:14.705632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-11-17 12:07:14.705688: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-11-17 12:07:14.705872: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2917 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-11-17 12:07:14.708163: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1e4a4f26480 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-11-17 12:07:14.708270: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1650, Compute Capability 7.5
11/17/2020 12:07:14 VERBOSE  Initializing plugin model: MTCNN-PNet
11/17/2020 12:07:14 INFO     Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
11/17/2020 12:07:14 VERBOSE  Initializing plugin model: MTCNN-RNet
11/17/2020 12:07:14 INFO     Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
11/17/2020 12:07:14 VERBOSE  Initializing plugin model: MTCNN-ONet
11/17/2020 12:07:14 INFO     Initialized MTCNN (Detect) with batchsize of 8

2020-11-17 12:07:15.186433: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-11-17 12:07:15.975908: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
2020-11-17 12:07:16.070876: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
11/17/2020 12:07:17 WARNING  6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001E4FE633F70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
11/17/2020 12:07:17 WARNING  7 out of the last 7 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001E4FE633F70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
11/17/2020 12:07:18 WARNING  8 out of the last 8 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001E4FE633F70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
11/17/2020 12:07:18 WARNING  9 out of the last 9 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001E4FE633F70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
11/17/2020 12:07:18 WARNING  10 out of the last 10 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001E4FE633F70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
11/17/2020 12:07:18 WARNING  11 out of the last 11 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001E4FE633F70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.

11/17/2020 12:07:20 INFO     Initializing FAN (Align)...
11/17/2020 12:07:20 INFO     Setting allow growth for GPU: PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
11/17/2020 12:07:20 VERBOSE  Initializing plugin model: FAN
2020-11-17 12:07:31.546480: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.00G (2147483648 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.744927: W tensorflow/core/common_runtime/bfc_allocator.cc:311] Garbage collection: deallocate free memory regions (i.e., allocations) so that we can re-allocate a larger region to avoid OOM due to memory fragmentation. If you see this message frequently, you are running near the threshold of the available device memory and re-allocation may incur great performance overhead. You may try smaller batch sizes to observe the performance impact. Set TF_ENABLE_GPU_GARBAGE_COLLECTION=false if you'd like to disable this feature.
2020-11-17 12:07:31.763435: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.763609: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.763808: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.763903: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.768747: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.768851: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 34.11MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.769014: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.769127: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 34.11MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.769293: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.769387: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 304.56MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.769551: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.769642: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 304.56MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.777889: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.777992: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 52.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.778172: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.778268: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 52.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.778454: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.778547: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.778708: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.778801: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2020-11-17 12:07:31.792629: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.792746: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.800067: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.800176: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.800297: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.800405: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.811156: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.811263: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.824824: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.825010: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.828933: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.829104: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.829281: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.829386: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.840248: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.840449: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.849834: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.850037: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.860023: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.860197: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.868460: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.868647: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.879031: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.879190: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.896596: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.896759: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.901005: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.901194: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.901304: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.901407: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.913071: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.913258: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.924560: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.924709: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.935069: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.935237: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.940161: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.940333: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.940437: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.940539: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.953137: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-17 12:07:31.953296: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
[... ~40 further identical "failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY" lines trimmed, timestamps 12:07:31.957 through 12:07:32.203 ...]
2020-11-17 12:07:32.203311: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.69G (2884055808 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
11/17/2020 12:07:32 INFO     Initialized FAN (Align) with batchsize of 1

11/17/2020 12:07:32 INFO     Initializing Components (Mask)...
11/17/2020 12:07:32 INFO     Initialized Components (Mask) with batchsize of 1

11/17/2020 12:07:32 INFO     Initializing Extended (Mask)...
11/17/2020 12:07:32 INFO     Initialized Extended (Mask) with batchsize of 1

11/17/2020 12:07:32 INFO     Writing alignments to: 'C:\Users\ks\Desktop\faces\alignments.fsa'
11/17/2020 12:07:33 INFO     -------------------------
11/17/2020 12:07:33 INFO     Images found:        7
11/17/2020 12:07:33 INFO     Faces detected:      7
11/17/2020 12:07:33 INFO     -------------------------
11/17/2020 12:07:33 INFO     Process Succesfully Completed. Shutting Down...
Process exited.

For the next step (Tools > Mask?), what should the output type be: Combined, Masked, or Mask?

(I'm trying to follow this guide, and I assume I need to create masked versions before sorting?)
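As a sanity check, the failing request in the log above is easy to put in perspective. The byte count and card size below are taken straight from the log; the point about other processes holding VRAM is a general observation, not a measurement from this machine:

```python
# Back-of-envelope check of the failing allocation in the log above.
# TensorFlow repeatedly asked for 2,884,055,808 bytes (reported as 2.69G)
# on a card with 4096 MiB of VRAM.
request_bytes = 2884055808
card_mib = 4096

request_mib = request_bytes / (1024 ** 2)
print(f"Requested {request_mib:.0f} MiB of {card_mib} MiB total")
# Requested 2750 MiB of 4096 MiB total

# On Windows, the desktop compositor and other processes typically hold
# several hundred MiB of VRAM, so a single ~2.7 GiB request can fail even
# when the card looks otherwise idle.
```

So the allocator is asking for roughly two-thirds of the card in one block, which is why a 4 GB card can report "out of memory" with only 7 small jpegs in the input folder.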

User avatar
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 623 times

Re: You do not have enough GPU memory

Post by torzdf »

I hope you're intending to train on more than 7 images? 7 images is nowhere near enough!

Also, you really shouldn't be running out of memory with FAN. At this point, I really don't know what is up with your setup :/

You don't need an output type for mask. You just need to specify the mask type to generate.

My word is final

Locked