CRIT ERROR -- ArrayMemoryError: Unable to allocate 64.0 MiB for an array...

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

doggydaddy
Posts: 6
Joined: Wed Jan 18, 2023 2:45 am
Has thanked: 3 times

CRIT ERROR -- ArrayMemoryError: Unable to allocate 64.0 MiB for an array...

Post by doggydaddy »

My training keeps crashing after an hour or two. It says it is unable to allocate 64.0 MiB for an array of shape (1024, 16384) with data type float32.

This has happened several times now. I start training and it works fine, but after an hour or two it crashes. I can recover and restart, so it could be worse, but I have no idea why this is happening. I have an RTX 2060 (6GB VRAM), 32GB RAM, and roughly 13GB free on my SSD.

I can only guess it's due to some setting I changed, but I thought I was keeping things pretty close to the defaults. Any help is appreciated.

Thank you.
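For what it's worth, I did a quick back-of-envelope check (my own script, not faceswap code), and the 64.0 MiB figure lines up exactly with the reported array shape, so the error message itself is consistent:

```python
# Sanity check: the failing array is shape (1024, 16384), dtype float32
# (4 bytes per element). Does that really come to 64.0 MiB?
rows, cols = 1024, 16384
bytes_per_float32 = 4

size_bytes = rows * cols * bytes_per_float32
size_mib = size_bytes / (1024 ** 2)

print(f"{size_bytes} bytes = {size_mib} MiB")  # 67108864 bytes = 64.0 MiB
```

So the save step is only asking for a modest buffer, which makes me suspect available system RAM (not VRAM) is exhausted at that moment — but that's just my guess.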

console:

Code:

01/18/2023 09:23:03 CRITICAL Error caught! Exiting...
01/18/2023 09:23:03 ERROR    Caught exception in thread: '_training'
01/18/2023 09:23:18 ERROR    Got Exception on main handler:
Traceback (most recent call last):
  File "C:\Users\doggy\faceswap\lib\cli\launcher.py", line 217, in execute_script
    process.process()
  File "C:\Users\doggy\faceswap\scripts\train.py", line 218, in process
    self._end_thread(thread, err)
  File "C:\Users\doggy\faceswap\scripts\train.py", line 258, in _end_thread
    thread.join()
  File "C:\Users\doggy\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\doggy\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\doggy\faceswap\scripts\train.py", line 280, in _training
    raise err
  File "C:\Users\doggy\faceswap\scripts\train.py", line 270, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\doggy\faceswap\scripts\train.py", line 372, in _run_training_cycle
    model.save(is_exit=False)
  File "C:\Users\doggy\faceswap\plugins\train\model\_base\model.py", line 436, in save
    self._io.save(is_exit=is_exit)
  File "C:\Users\doggy\faceswap\plugins\train\model\_base\io.py", line 207, in save
    self._plugin.model.save(self._filename, include_optimizer=include_optimizer)
  File "C:\Users\doggy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\doggy\MiniConda3\envs\faceswap\lib\site-packages\keras\backend.py", line 4240, in <listcomp>
    return [x.numpy() for x in tensors]
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 64.0 MiB for an array with shape (1024, 16384) and data type float32
01/18/2023 09:23:18 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\doggy\faceswap\crash_report.2023.01.18.092304010679.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
Process exited.

log:

Code:

01/18/2023 09:19:37 MainProcess     _training                      serializer      save                           DEBUG    filename: C:\Users\doggy\Desktop\fsdir\modelAB\original_state.json, data type: <class 'dict'>
01/18/2023 09:19:37 MainProcess     _training                      serializer      _check_extension               DEBUG    Original filename: 'C:\Users\doggy\Desktop\fsdir\modelAB\original_state.json', final filename: 'C:\Users\doggy\Desktop\fsdir\modelAB\original_state.json'
01/18/2023 09:19:37 MainProcess     _training                      serializer      marshal                        DEBUG    data type: <class 'dict'>
01/18/2023 09:19:37 MainProcess     _training                      serializer      marshal                        DEBUG    returned data type: <class 'bytes'>
01/18/2023 09:19:37 MainProcess     _training                      model           save                           DEBUG    Saved State
01/18/2023 09:19:37 MainProcess     _training                      io              save                           INFO     [Saved models] - Average loss since last save: face_a: 0.02840, face_b: 0.03068
01/18/2023 09:19:38 MainProcess     _training                      _base           generate_preview               DEBUG    Generating preview (is_timelapse: False)
01/18/2023 09:19:38 MainProcess     _training                      _base           generate_preview               DEBUG    Generated samples: is_timelapse: False, images: {'feed': {'a': (14, 64, 64, 3), 'b': (14, 64, 64, 3)}, 'samples': {'a': (14, 94, 94, 3), 'b': (14, 94, 94, 3)}, 'sides': {'a': (14, 64, 64, 1), 'b': (14, 64, 64, 1)}}
01/18/2023 09:19:38 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'a', samples: 14)
01/18/2023 09:19:38 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'b', samples: 14)
01/18/2023 09:19:38 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiled Samples: {'a': [(14, 64, 64, 3), (14, 94, 94, 3), (14, 64, 64, 1)], 'b': [(14, 64, 64, 3), (14, 94, 94, 3), (14, 64, 64, 1)]}
01/18/2023 09:19:38 MainProcess     _training                      _base           show_sample                    DEBUG    Showing sample
01/18/2023 09:19:38 MainProcess     _training                      _base           _get_predictions               DEBUG    Getting Predictions
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_predictions               DEBUG    Returning predictions: {'a_a': (14, 64, 64, 3), 'b_b': (14, 64, 64, 3), 'a_b': (14, 64, 64, 3), 'b_a': (14, 64, 64, 3)}
01/18/2023 09:19:39 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 64, 64, 3), (14, 64, 64, 3)])
01/18/2023 09:19:39 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 94, prediction_size: 64, color: (0.0, 0.0, 1.0)
01/18/2023 09:19:39 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _compile_masked                DEBUG    masked shapes: [(14, 64, 64, 3), (14, 64, 64, 3), (14, 64, 64, 3)]
01/18/2023 09:19:39 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    side: 'a', width: 94
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    height: 20, total_width: 282
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    texts: ['Original (A)', 'Original > Original', 'Original > Swap'], text_sizes: [(53, 7), (86, 7), (75, 7)], text_x: [20, 98, 197], text_y: 13
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    header_box.shape: (20, 282, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'b', number of sample arrays: 3, prediction.shapes: [(14, 64, 64, 3), (14, 64, 64, 3)])
01/18/2023 09:19:39 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 94, prediction_size: 64, color: (0.0, 0.0, 1.0)
01/18/2023 09:19:39 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _compile_masked                DEBUG    masked shapes: [(14, 64, 64, 3), (14, 64, 64, 3), (14, 64, 64, 3)]
01/18/2023 09:19:39 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    side: 'b', width: 94
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    height: 20, total_width: 282
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    texts: ['Swap (B)', 'Swap > Swap', 'Swap > Original'], text_sizes: [(44, 7), (64, 7), (75, 7)], text_x: [25, 109, 197], text_y: 13
01/18/2023 09:19:39 MainProcess     _training                      _base           _get_headers                   DEBUG    header_box.shape: (20, 282, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _duplicate_headers             DEBUG    side: a header.shape: (20, 282, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _duplicate_headers             DEBUG    side: b header.shape: (20, 282, 3)
01/18/2023 09:19:39 MainProcess     _training                      _base           _stack_images                  DEBUG    Stack images
01/18/2023 09:19:39 MainProcess     _training                      _base           get_transpose_axes             DEBUG    Even number of images to stack
01/18/2023 09:19:39 MainProcess     _training                      _base           _stack_images                  DEBUG    Stacked images
01/18/2023 09:19:39 MainProcess     _training                      _base           _compile_preview               DEBUG    Compiled sample
01/18/2023 09:19:39 MainProcess     _training                      train           _show                          DEBUG    Updating preview: (name: Training - 'S': Save Now. 'R': Refresh Preview. 'M': Toggle Mask. 'F': Toggle Screen Fit-Actual Size. 'ENTER': Save and Quit)
01/18/2023 09:19:39 MainProcess     _training                      train           _show                          DEBUG    Generating preview for GUI
01/18/2023 09:19:40 MainProcess     _training                      train           _show                          DEBUG    Generated preview for GUI: 'C:\Users\doggy\faceswap\lib\gui\.cache\preview\.gui_training_preview.png'
01/18/2023 09:19:40 MainProcess     _training                      train           _show                          DEBUG    Updated preview: (name: Training - 'S': Save Now. 'R': Refresh Preview. 'M': Toggle Mask. 'F': Toggle Screen Fit-Actual Size. 'ENTER': Save and Quit)
01/18/2023 09:19:40 MainProcess     _training                      train           _run_training_cycle            INFO     [Preview Updated]
01/18/2023 09:23:01 MainProcess     _training                      _base           output_timelapse               DEBUG    Ouputting time-lapse
01/18/2023 09:23:01 MainProcess     _training                      _base           output_timelapse               DEBUG    Getting time-lapse samples
01/18/2023 09:23:01 MainProcess     _training                      _base           generate_preview               DEBUG    Generating preview (is_timelapse: True)
01/18/2023 09:23:01 MainProcess     _training                      _base           generate_preview               DEBUG    Generated samples: is_timelapse: True, images: {'feed': {'a': (14, 64, 64, 3), 'b': (14, 64, 64, 3)}, 'samples': {'a': (14, 94, 94, 3), 'b': (14, 94, 94, 3)}, 'sides': {'a': (14, 64, 64, 1), 'b': (14, 64, 64, 1)}}
01/18/2023 09:23:01 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'a', samples: 14)
01/18/2023 09:23:01 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiling samples: (side: 'b', samples: 14)
01/18/2023 09:23:01 MainProcess     _training                      _base           compile_sample                 DEBUG    Compiled Samples: {'a': [(14, 64, 64, 3), (14, 94, 94, 3), (14, 64, 64, 1)], 'b': [(14, 64, 64, 3), (14, 94, 94, 3), (14, 64, 64, 1)]}
01/18/2023 09:23:01 MainProcess     _training                      _base           output_timelapse               DEBUG    Got time-lapse samples: {'a': 3, 'b': 3}
01/18/2023 09:23:01 MainProcess     _training                      _base           show_sample                    DEBUG    Showing sample
01/18/2023 09:23:01 MainProcess     _training                      _base           _get_predictions               DEBUG    Getting Predictions
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_predictions               DEBUG    Returning predictions: {'a_a': (14, 64, 64, 3), 'b_b': (14, 64, 64, 3), 'a_b': (14, 64, 64, 3), 'b_a': (14, 64, 64, 3)}
01/18/2023 09:23:02 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 64, 64, 3), (14, 64, 64, 3)])
01/18/2023 09:23:02 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 94, prediction_size: 64, color: (0.0, 0.0, 1.0)
01/18/2023 09:23:02 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _compile_masked                DEBUG    masked shapes: [(14, 64, 64, 3), (14, 64, 64, 3), (14, 64, 64, 3)]
01/18/2023 09:23:02 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    side: 'a', width: 94
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    height: 20, total_width: 282
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    texts: ['Original (A)', 'Original > Original', 'Original > Swap'], text_sizes: [(53, 7), (86, 7), (75, 7)], text_x: [20, 98, 197], text_y: 13
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    header_box.shape: (20, 282, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _to_full_frame                 DEBUG    side: 'b', number of sample arrays: 3, prediction.shapes: [(14, 64, 64, 3), (14, 64, 64, 3)])
01/18/2023 09:23:02 MainProcess     _training                      _base           _process_full                  DEBUG    full_size: 94, prediction_size: 64, color: (0.0, 0.0, 1.0)
01/18/2023 09:23:02 MainProcess     _training                      _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _compile_masked                DEBUG    masked shapes: [(14, 64, 64, 3), (14, 64, 64, 3), (14, 64, 64, 3)]
01/18/2023 09:23:02 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 94, 94, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    side: 'b', width: 94
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    height: 20, total_width: 282
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    texts: ['Swap (B)', 'Swap > Swap', 'Swap > Original'], text_sizes: [(44, 7), (64, 7), (75, 7)], text_x: [25, 109, 197], text_y: 13
01/18/2023 09:23:02 MainProcess     _training                      _base           _get_headers                   DEBUG    header_box.shape: (20, 282, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _duplicate_headers             DEBUG    side: a header.shape: (20, 282, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _duplicate_headers             DEBUG    side: b header.shape: (20, 282, 3)
01/18/2023 09:23:02 MainProcess     _training                      _base           _stack_images                  DEBUG    Stack images
01/18/2023 09:23:02 MainProcess     _training                      _base           get_transpose_axes             DEBUG    Even number of images to stack
01/18/2023 09:23:02 MainProcess     _training                      _base           _stack_images                  DEBUG    Stacked images
01/18/2023 09:23:02 MainProcess     _training                      _base           _compile_preview               DEBUG    Compiled sample
01/18/2023 09:23:02 MainProcess     _training                      _base           output_timelapse               DEBUG    Created time-lapse: 'C:\Users\doggy\Desktop\fsdir\timelapse_output\1674062582.jpg'
01/18/2023 09:23:02 MainProcess     _training                      train           _run_training_cycle            DEBUG    Saving (save_iterations: True, save_now: False) Iteration: (iteration: 9750)
01/18/2023 09:23:02 MainProcess     _training                      io              save                           DEBUG    Backing up and saving models
01/18/2023 09:23:02 MainProcess     _training                      io              _get_save_averages             DEBUG    Getting save averages
01/18/2023 09:23:02 MainProcess     _training                      io              _get_save_averages             DEBUG    Average losses since last save: [0.02846838898956776, 0.03124110671132803]
01/18/2023 09:23:02 MainProcess     _training                      io              _should_backup                 DEBUG    Should backup: False
01/18/2023 09:23:03 MainProcess     _training                      multithreading  run                            DEBUG    Error in thread (_training): Unable to allocate 64.0 MiB for an array with shape (1024, 16384) and data type float32
01/18/2023 09:23:03 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
01/18/2023 09:23:03 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
01/18/2023 09:23:03 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
01/18/2023 09:23:03 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
01/18/2023 09:23:03 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
01/18/2023 09:23:03 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training'
01/18/2023 09:23:03 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training'
Traceback (most recent call last):
  File "C:\Users\doggy\faceswap\lib\cli\launcher.py", line 217, in execute_script
    process.process()
  File "C:\Users\doggy\faceswap\scripts\train.py", line 218, in process
    self._end_thread(thread, err)
  File "C:\Users\doggy\faceswap\scripts\train.py", line 258, in _end_thread
    thread.join()
  File "C:\Users\doggy\faceswap\lib\multithreading.py", line 217, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\doggy\faceswap\lib\multithreading.py", line 96, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\doggy\faceswap\scripts\train.py", line 280, in _training
    raise err
  File "C:\Users\doggy\faceswap\scripts\train.py", line 270, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\doggy\faceswap\scripts\train.py", line 372, in _run_training_cycle
    model.save(is_exit=False)
  File "C:\Users\doggy\faceswap\plugins\train\model\_base\model.py", line 436, in save
    self._io.save(is_exit=is_exit)
  File "C:\Users\doggy\faceswap\plugins\train\model\_base\io.py", line 207, in save
    self._plugin.model.save(self._filename, include_optimizer=include_optimizer)
  File "C:\Users\doggy\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\doggy\MiniConda3\envs\faceswap\lib\site-packages\keras\backend.py", line 4240, in <listcomp>
    return [x.numpy() for x in tensors]
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 64.0 MiB for an array with shape (1024, 16384) and data type float32

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         bcef3b4 Merge branch 'staging'
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce RTX 2060
gpu_devices_active:  GPU_0
gpu_driver:          516.94
gpu_vram:            GPU_0: 6144MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19045-SP0
os_release:          10
py_command:          C:\Users\doggy\faceswap\faceswap.py train -A C:/Users/doggy/Desktop/fsdir/extract_A -B C:/Users/doggy/Desktop/fsdir/extract_B -m C:/Users/doggy/Desktop/fsdir/modelAB -t original -bs 8 -it 1000000 -D central-storage -s 250 -ss 25000 -tia C:/Users/doggy/Desktop/fsdir/extract_A -tib C:/Users/doggy/Desktop/fsdir/extract_B -to C:/Users/doggy/Desktop/fsdir/timelapse_output -nl -L INFO -gui
py_conda_version:    conda 22.11.1
py_implementation:   CPython
py_version:          3.9.15
py_virtual_env:      True
sys_cores:           12
sys_processor:       AMD64 Family 23 Model 1 Stepping 1, AuthenticAMD
sys_ram:             Total: 32718MB, Available: 12739MB, Used: 19979MB, Free: 12739MB

=============== Pip Packages ===============
absl-py @ file:///C:/b/abs_5babsu7y5x/croot/absl-py_1666362945682/work
astunparse==1.6.3
cachetools==5.2.0
certifi==2022.12.7
charset-normalizer==2.1.1
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
colorama @ file:///C:/Windows/TEMP/abs_9439aeb1-0254-449a-96f7-33ab5eb17fc8apleb4yn/croots/recipe/colorama_1657009099097/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
dm-tree @ file:///C:/b/abs_10z0iy5knj/croot/dm-tree_1671027465819/work
fastcluster @ file:///D:/bld/fastcluster_1649783471014/work
ffmpy==0.3.0
flatbuffers==22.12.6
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
fonttools==4.25.0
gast==0.4.0
google-auth==2.15.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.51.1
h5py==3.7.0
idna==3.4
imageio @ file:///C:/Windows/TEMP/abs_24c1b783-7540-4ca9-a1b1-0e8aa8e6ae64hb79ssux/croots/recipe/imageio_1658785038775/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==5.2.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1663332044897/work
keras==2.10.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1653292407425/work
libclang==14.0.6
Markdown==3.4.1
MarkupSafe==2.1.1
matplotlib @ file:///C:/ci/matplotlib-suite_1660169687702/work
mkl-fft==1.3.1
mkl-random @ file:///C:/ci/mkl_random_1626186184308/work
mkl-service==2.4.0
munkres==1.1.4
numexpr @ file:///C:/b/abs_a7kbak88hk/croot/numexpr_1668713882979/work
numpy @ file:///C:/b/abs_5ct9ex77k9/croot/numpy_and_numpy_base_1668593740598/work
nvidia-ml-py==11.515.75
oauthlib==3.2.2
opencv-python==4.6.0.66
opt-einsum==3.3.0
packaging @ file:///tmp/build/80754af9/packaging_1637314298585/work
Pillow==9.3.0
ply==3.11
protobuf==3.19.6
psutil @ file:///C:/Windows/Temp/abs_b2c2fd7f-9fd5-4756-95ea-8aed74d0039flsd9qufz/croots/recipe/psutil_1656431277748/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==305.1
pywinpty @ file:///C:/ci_310/pywinpty_1644230983541/work/target/wheels/pywinpty-2.0.2-cp39-none-win_amd64.whl
requests==2.28.1
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///D:/bld/scikit-learn_1659726281030/work
scipy @ file:///C:/bld/scipy_1658811088396/work
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.10.0
tensorflow-gpu==2.10.1
tensorflow-io-gcs-filesystem==0.29.0
tensorflow-probability @ file:///tmp/build/80754af9/tensorflow-probability_1633017132682/work
termcolor==2.1.1
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tornado @ file:///C:/ci/tornado_1662458743919/work
tqdm @ file:///C:/b/abs_0axbz66qik/croots/recipe/tqdm_1664392691071/work
typing_extensions @ file:///C:/b/abs_89eui86zuq/croot/typing_extensions_1669923792806/work
urllib3==1.26.13
Werkzeug==2.2.2
wincertstore==0.2
wrapt==1.14.1
zipp==3.11.0

============== Conda Packages ==============
# packages in environment at C:\Users\doggy\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   1.3.0            py39haa95532_0  
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
ca-certificates           2022.12.7            h5b45459_0    conda-forge
cachetools                5.2.0                    pypi_0    pypi
certifi                   2022.12.7          pyhd8ed1ab_0    conda-forge
charset-normalizer        2.1.1                    pypi_0    pypi
cloudpickle               2.0.0              pyhd3eb1b0_0
colorama 0.4.5 py39haa95532_0
cudatoolkit               11.2.2              h933977f_10    conda-forge
cudnn                     8.1.0.77             h3e0f4f4_0    conda-forge
cycler                    0.11.0             pyhd3eb1b0_0
decorator 5.1.1 pyhd3eb1b0_0
dm-tree 0.1.7 py39hd77b12b_1
fastcluster               1.2.6            py39h2e25243_1    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.3.0                    pypi_0    pypi
flatbuffers               22.12.6                  pypi_0    pypi
flit-core                 3.6.0              pyhd3eb1b0_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 ha860e81_0
gast                      0.4.0                    pypi_0    pypi
git                       2.34.1               haa95532_0
glib 2.69.1 h5dc1a3c_2
google-auth               2.15.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.51.1                   pypi_0    pypi
gst-plugins-base          1.18.5               h9e645db_0
gstreamer 1.18.5 hd78058f_0
h5py                      3.7.0                    pypi_0    pypi
icu                       58.2                 ha925a31_3
idna                      3.4                      pypi_0    pypi
imageio                   2.19.3           py39haa95532_0
imageio-ffmpeg            0.4.7              pyhd8ed1ab_0    conda-forge
importlib-metadata        5.2.0                    pypi_0    pypi
intel-openmp              2021.4.0          haa95532_3556
joblib                    1.2.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h2bbff1b_0
keras                     2.10.0                   pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.2            py39hd77b12b_0
lerc 3.0 hd77b12b_0
libblas                   3.9.0         1_h8933c1f_netlib    conda-forge
libbrotlicommon           1.0.9                h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libcblas                  3.9.0         5_hd5c7e75_netlib    conda-forge
libclang                  14.0.6                   pypi_0    pypi
libdeflate                1.8                  h2bbff1b_5
libffi 3.4.2 hd77b12b_6
libiconv 1.16 h2bbff1b_2
liblapack                 3.9.0         5_hd5c7e75_netlib    conda-forge
libogg                    1.3.5                h2bbff1b_1
libpng 1.6.37 h2a8f88b_0
libtiff 4.4.0 h8a3f274_2
libvorbis 1.3.7 he774522_0
libwebp 1.2.4 h2bbff1b_0
libwebp-base 1.2.4 h2bbff1b_0
libxml2 2.9.14 h0ad7f3c_0
libxslt 1.1.35 h2bbff1b_0
lz4-c 1.9.4 h2bbff1b_0
m2w64-gcc-libgfortran     5.3.0                         6    conda-forge
m2w64-gcc-libs            5.3.0                         7    conda-forge
m2w64-gcc-libs-core       5.3.0                         7    conda-forge
m2w64-gmp                 6.1.0                         2    conda-forge
m2w64-libwinpthread-git   5.0.0.4634.697f757            2    conda-forge
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.1                    pypi_0    pypi
matplotlib                3.5.2            py39haa95532_0
matplotlib-base 3.5.2 py39hd77b12b_0
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py39h2bbff1b_0
mkl_fft 1.3.1 py39h277e83a_0
mkl_random 1.2.2 py39hf11a4ad_0
msys2-conda-epoch         20160418                      1    conda-forge
munkres                   1.1.4                      py_0
numexpr 2.8.4 py39h5b0cc5e_0
numpy 1.23.4 py39h3b20f71_0
numpy-base 1.23.4 py39h4da318b_0
nvidia-ml-py              11.515.75                pypi_0    pypi
oauthlib                  3.2.2                    pypi_0    pypi
opencv-python             4.6.0.66                 pypi_0    pypi
openssl                   1.1.1s               h2bbff1b_0
opt-einsum                3.3.0                    pypi_0    pypi
packaging                 21.3               pyhd3eb1b0_0
pcre 8.45 hd77b12b_0
pillow 9.3.0 py39hdc2b20a_0
pip 22.3.1 py39haa95532_0
ply 3.11 py39haa95532_0
protobuf                  3.19.6                   pypi_0    pypi
psutil                    5.9.0            py39h2bbff1b_0
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pyparsing                 3.0.9            py39haa95532_0
pyqt 5.15.7 py39hd77b12b_0
pyqt5-sip 12.11.0 py39hd77b12b_0
python 3.9.15 h6244533_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python_abi                3.9                      2_cp39    conda-forge
pywin32                   305              py39h2bbff1b_0
pywinpty 2.0.2 py39h5da7b33_0
qt-main 5.15.2 he8e5bd7_7
qt-webengine 5.15.9 hb9a9bb5_4
qtwebkit 5.212 h3ad3cdb_4
requests                  2.28.1                   pypi_0    pypi
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-learn              1.1.2            py39hfd4428b_0    conda-forge
scipy                     1.8.1            py39h5567194_2    conda-forge
setuptools                65.5.0           py39haa95532_0
sip 6.6.2 py39hd77b12b_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.40.0 h2bbff1b_0
tensorboard               2.10.1                   pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow-estimator      2.10.0                   pypi_0    pypi
tensorflow-gpu            2.10.1                   pypi_0    pypi
tensorflow-io-gcs-filesystem 0.29.0                pypi_0    pypi
tensorflow-probability    0.14.0             pyhd3eb1b0_0
termcolor                 2.1.1                    pypi_0    pypi
threadpoolctl             3.1.0              pyh8a188c0_0    conda-forge
tk                        8.6.12               h2bbff1b_0
toml 0.10.2 pyhd3eb1b0_0
tornado 6.2 py39h2bbff1b_0
tqdm 4.64.1 py39haa95532_0
typing-extensions 4.4.0 py39haa95532_0
typing_extensions 4.4.0 py39haa95532_0
tzdata 2022g h04d1e81_0
urllib3 1.26.13 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
werkzeug 2.2.2 pypi_0 pypi
wheel 0.37.1 pyhd3eb1b0_0
wincertstore 0.2 py39haa95532_2
winpty 0.4.3 4
wrapt 1.14.1 pypi_0 pypi
xz 5.2.8 h8cc25b3_0
zipp 3.11.0 pypi_0 pypi
zlib 1.2.13 h8cc25b3_0
zstd 1.5.2 h19a0ad4_0

=============== State File =================
{
  "name": "original",
  "sessions": {
    "1": { "timestamp": 1673918226.3963852, "no_logs": false, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 16, "iterations": 1750, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } },
    "2": { "timestamp": 1673919681.2474973, "no_logs": false, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 16, "iterations": 12500, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } },
    "3": { "timestamp": 1673930399.201518, "no_logs": false, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 8, "iterations": 750, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } },
    "4": { "timestamp": 1673931037.1198366, "no_logs": false, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 8, "iterations": 6000, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } },
    "5": { "timestamp": 1673935120.988235, "no_logs": false, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 8, "iterations": 29000, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } },
    "6": { "timestamp": 1674023633.187923, "no_logs": false, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 8, "iterations": 20750, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } },
    "7": { "timestamp": 1674056024.2903256, "no_logs": true, "loss_names": [ "total", "face_a", "face_b" ], "batchsize": 8, "iterations": 9500, "config": { "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2 } }
  },
  "lowest_avg_loss": { "a": 0.027992103703320028, "b": 0.03091232803463936 },
  "iterations": 80250,
  "mixed_precision_layers": [],
  "config": { "centering": "legacy", "coverage": 67.5, "optimizer": "adam", "learning_rate": 5e-05, "epsilon_exponent": -7, "autoclip": false, "allow_growth": false, "mixed_precision": true, "nan_protection": true, "convert_batchsize": 16, "loss_function": "ssim", "loss_function_2": "mse", "loss_weight_2": 100, "loss_function_3": null, "loss_weight_3": 0, "loss_function_4": null, "loss_weight_4": 0, "mask_loss_function": "mse", "eye_multiplier": 3, "mouth_multiplier": 2, "penalized_mask_loss": true, "mask_type": "extended", "mask_blur_kernel": 3, "mask_threshold": 4, "learn_mask": false, "lowmem": false }
}

================= Configs ==================

--------- .faceswap ---------
backend: nvidia

--------- convert.ini ---------
[color.color_transfer] clip: True preserve_paper: True
[color.manual_balance] colorspace: HSV balance_1: 0.0 balance_2: 0.0 balance_3: 0.0 contrast: 0.0 brightness: 0.0
[color.match_hist] threshold: 99.0
[mask.mask_blend] type: normalized kernel_size: 3 passes: 4 threshold: 4 erosion: 0.0 erosion_top: 0.0 erosion_bottom: 0.0 erosion_left: 0.0 erosion_right: 0.0
[scaling.sharpen] method: none amount: 150 radius: 0.3 threshold: 5.0
[writer.ffmpeg] container: mp4 codec: libx264 crf: 23 preset: medium tune: none profile: auto level: auto skip_mux: False
[writer.gif] fps: 25 loop: 0 palettesize: 256 subrectangles: False
[writer.opencv] format: png draw_transparent: False separate_mask: False jpg_quality: 75 png_compress_level: 3
[writer.pillow] format: png draw_transparent: False separate_mask: False optimize: False gif_interlace: True jpg_quality: 75 png_compress_level: 3 tif_compression: tiff_deflate

--------- extract.ini ---------
[global] allow_growth: False aligner_min_scale: 0.07 aligner_max_scale: 2.0 aligner_distance: 22.5 aligner_roll: 45.0 aligner_features: True filter_refeed: True save_filtered: False realign_refeeds: True filter_realign: True
[align.fan] batch-size: 12
[detect.cv2_dnn] confidence: 50
[detect.mtcnn] minsize: 20 scalefactor: 0.709 batch-size: 8 cpu: True threshold_1: 0.6 threshold_2: 0.7 threshold_3: 0.7
[detect.s3fd] confidence: 70 batch-size: 4
[mask.bisenet_fp] batch-size: 8 cpu: False weights: faceswap include_ears: False include_hair: False include_glasses: True
[mask.custom] batch-size: 8 centering: face fill: False
[mask.unet_dfl] batch-size: 8
[mask.vgg_clear] batch-size: 6
[mask.vgg_obstructed] batch-size: 2
[recognition.vgg_face2] batch-size: 16 cpu: False

--------- gui.ini ---------
[global] fullscreen: False tab: extract options_panel_width: 30 console_panel_height: 20 icon_size: 14 font: default font_size: 9 autosave_last_session: prompt timeout: 120 auto_load_model_stats: True

--------- train.ini ---------
[global] centering: legacy coverage: 67.5 icnr_init: False conv_aware_init: False optimizer: adam learning_rate: 5e-05 epsilon_exponent: -7 autoclip: False reflect_padding: False allow_growth: False mixed_precision: True nan_protection: True convert_batchsize: 16
[global.loss] loss_function: ssim loss_function_2: mse loss_weight_2: 100 loss_function_3: None loss_weight_3: 0 loss_function_4: None loss_weight_4: 0 mask_loss_function: mse eye_multiplier: 3 mouth_multiplier: 2 penalized_mask_loss: True mask_type: extended mask_blur_kernel: 3 mask_threshold: 4 learn_mask: False
[model.dfaker] output_size: 128
[model.dfl_h128] lowmem: False
[model.dfl_sae] input_size: 128 architecture: df autoencoder_dims: 0 encoder_dims: 42 decoder_dims: 21 multiscale_decoder: False
[model.dlight] features: best details: good output_size: 256
[model.original] lowmem: False
[model.phaze_a] output_size: 128 shared_fc: None enable_gblock: True split_fc: True split_gblock: False split_decoders: False enc_architecture: fs_original enc_scaling: 7 enc_load_weights: True bottleneck_type: dense bottleneck_norm: None bottleneck_size: 1024 bottleneck_in_encoder: True fc_depth: 1 fc_min_filters: 1024 fc_max_filters: 1024 fc_dimensions: 4 fc_filter_slope: -0.5 fc_dropout: 0.0 fc_upsampler: upsample2d fc_upsamples: 1 fc_upsample_filters: 512 fc_gblock_depth: 3 fc_gblock_min_nodes: 512 fc_gblock_max_nodes: 512 fc_gblock_filter_slope: -0.5 fc_gblock_dropout: 0.0 dec_upscale_method: subpixel dec_upscales_in_fc: 0 dec_norm: None dec_min_filters: 64 dec_max_filters: 512 dec_slope_mode: full dec_filter_slope: -0.45 dec_res_blocks: 1 dec_output_kernel: 5 dec_gaussian: True dec_skip_last_residual: True freeze_layers: keras_encoder load_layers: encoder fs_original_depth: 4 fs_original_min_filters: 128 fs_original_max_filters: 1024 fs_original_use_alt: False mobilenet_width: 1.0 mobilenet_depth: 1 mobilenet_dropout: 0.001 mobilenet_minimalistic: False
[model.realface] input_size: 64 output_size: 128 dense_nodes: 1536 complexity_encoder: 128 complexity_decoder: 512
[model.unbalanced] input_size: 128 lowmem: False nodes: 1024 complexity_encoder: 128 complexity_decoder_a: 384 complexity_decoder_b: 512
[model.villain] lowmem: False
[trainer.original] preview_images: 14 zoom_amount: 5 rotation_range: 10 shift_range: 5 flip_chance: 50 color_lightness: 30 color_ab: 8 color_clahe_chance: 50 color_clahe_max_size: 4
User avatar
bryanlyon
Site Admin
Posts: 793
Joined: Fri Jul 12, 2019 12:49 am
Answers: 44
Location: San Francisco
Has thanked: 4 times
Been thanked: 218 times

Re: CRIT ERROR -- ArrayMemoryError: Unable to allocate 64.0 MiB for an array...

Post by bryanlyon »

User avatar
doggydaddy
Posts: 6
Joined: Wed Jan 18, 2023 2:45 am
Has thanked: 3 times

Re: CRIT ERROR -- ArrayMemoryError: Unable to allocate 64.0 MiB for an array...

Post by doggydaddy »

Thank you.

I thought I had fixed that earlier by setting both Mixed Precision and Central Storage. I'm also just using the Original trainer.
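As an aside, the saving that Mixed Precision gives can be illustrated with plain numpy (a generic sketch with an assumed activation shape, not faceswap's internals):

```python
import numpy as np

# Assumed shape for illustration only: one batch of 8 RGB 128x128 faces.
acts_fp32 = np.zeros((8, 128, 128, 3), dtype=np.float32)

# Mixed precision keeps most activations in float16 (2 bytes/element vs 4),
# so the footprint of these tensors halves.
acts_fp16 = acts_fp32.astype(np.float16)

print(acts_fp32.nbytes // 1024, acts_fp16.nbytes // 1024)  # KiB: 1536 768
```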

I felt that should have been a sufficiently low load for my hardware. I'm currently at a batch size of 8, but I will take that down to 4 and see what happens.
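For what it's worth, the 64.0 MiB in the error message is exactly the cost of the float32 array the traceback names (shape 1K x 16K), and per-batch memory scales linearly with batch size, so halving the batch should roughly halve the pressure. A quick back-of-the-envelope check (the helper name is mine, this is not faceswap code):

```python
import numpy as np

def array_mib(shape, dtype=np.float32):
    """Footprint of a dense array in MiB: product of dims * bytes per element."""
    return int(np.prod(shape)) * np.dtype(dtype).itemsize / 2**20

print(array_mib((1024, 16384)))     # 64.0 -> matches the failed allocation
print(array_mib((8, 1024, 16384)))  # stacking 8 of them costs 8x as much
```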

One thing I've noticed is that the basic performance tab shows my GPU very lightly loaded at any given time. I don't know if that's significant, but I can research it further.

Thanks again.

User avatar
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: CRIT ERROR -- ArrayMemoryError: Unable to allocate 64.0 MiB for an array...

Post by torzdf »

re: GPU usage, see here: app.php/faqpage#f0r3

My word is final

Locked