ValueError: could not broadcast input array from shape (512,512,3) into shape (512,512)

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


jbu
Posts: 2
Joined: Sat Aug 14, 2021 11:10 am


Post by jbu »

Hi, I am trying to create a video for a wedding, so it's very urgent. I get this error when training with default values:
ValueError: could not broadcast input array from shape (512,512,3) into shape (512,512)
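For context, this is the generic NumPy error raised when an array with a trailing channel dimension is assigned into a buffer that lacks one, e.g. a 3-channel 512x512 image written into a single-channel 512x512 slot. A minimal sketch that reproduces the same message (illustrative only, not the actual faceswap code path):

```python
import numpy as np

rgb = np.zeros((512, 512, 3), dtype=np.uint8)   # 3-channel (BGR/RGB) image
gray = np.zeros((512, 512), dtype=np.uint8)     # single-channel buffer

try:
    gray[:] = rgb  # shapes are incompatible: (512, 512, 3) -> (512, 512)
except ValueError as err:
    print(err)     # could not broadcast input array from shape (512,512,3) into shape (512,512)
```

In practice this kind of mismatch usually means one of the input images has a different channel count than the pipeline expects (for example a grayscale or 4-channel PNG mixed in with 3-channel faces).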

Full log:

Code:

08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
08/14/2021 14:04:47 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 14, side: 'a', do_shuffle: False)
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
08/14/2021 14:04:47 MainProcess     _training_0                    _base           _load_generator                DEBUG    Loading generator
08/14/2021 14:04:47 MainProcess     _training_0                    _base           _load_generator                DEBUG    input_size: 64, output_shapes: [(64, 64, 3)]
08/14/2021 14:04:47 MainProcess     _training_0                    generator       __init__                       DEBUG    Initializing TrainingDataGenerator: (model_input_size: 64, model_output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, color_order: bgr, augment_color: True, no_flip: False, no_warp: False, warp_to_landmarks: False, config: {'centering': 'face', 'coverage': 68.75, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/14/2021 14:04:47 MainProcess     _training_0                    generator       __init__                       DEBUG    Initialized TrainingDataGenerator
08/14/2021 14:04:47 MainProcess     _training_0                    generator       minibatch_ab                   DEBUG    Queue batches: (image_count: 14, batchsize: 14, side: 'b', do_shuffle: False, is_preview, False, is_timelapse: True)
08/14/2021 14:04:47 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initializing ImageAugmentation: (batchsize: 14, is_display: True, input_size: 64, output_shapes: [(64, 64, 3)], coverage_ratio: 0.6875, config: {'centering': 'face', 'coverage': 68.75, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'mask_loss_function': 'mse', 'l2_reg_term': 100, 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/14/2021 14:04:47 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Output sizes: [64]
08/14/2021 14:04:47 MainProcess     _training_0                    augmentation    __init__                       DEBUG    Initialized ImageAugmentation
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  __init__                       DEBUG    Initialized BackgroundGenerator: '_run'
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread(s): '_run'
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 1 of 2: '_run_0'
08/14/2021 14:04:47 MainProcess     _run_0                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 14, side: 'b', do_shuffle: False)
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  start                          DEBUG    Starting thread 2 of 2: '_run_1'
08/14/2021 14:04:47 MainProcess     _run_1                         generator       _minibatch                     DEBUG    Loading minibatch generator: (image_count: 14, side: 'b', do_shuffle: False)
08/14/2021 14:04:47 MainProcess     _training_0                    multithreading  start                          DEBUG    Started all threads '_run': 2
08/14/2021 14:04:47 MainProcess     _training_0                    _base           set_timelapse_feed             DEBUG    Set time-lapse feed: {'a': <generator object BackgroundGenerator.iterator at 0x0000024B32B3B510>, 'b': <generator object BackgroundGenerator.iterator at 0x0000024B0400F580>}
08/14/2021 14:04:47 MainProcess     _training_0                    _base           _setup                         DEBUG    Set up time-lapse
08/14/2021 14:04:47 MainProcess     _training_0                    _base           output_timelapse               DEBUG    Getting time-lapse samples
08/14/2021 14:04:47 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['.facebook_1517689826874_0.png', '20171020_173613_0.png', '20171020_173615_0.png', '20171020_173616_0.png', '20171020_173618(1)_0.png', '20171020_173618_0.png', '20171020_173623_0.png', '20171020_173625_0.png', '20191016_105025_0.png', '20191019_091247_0.png', '20191019_091249_0.png', '20191019_112043_0.png', '20191019_112044_0.png', '20191019_160201_0.png']
08/14/2021 14:04:47 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['Trinity _720P HD_000094_0.png', 'Trinity _720P HD_000095_0.png', 'Trinity _720P HD_000096_0.png', 'Trinity _720P HD_000097_0.png', 'Trinity _720P HD_000098_0.png', 'Trinity _720P HD_000099_0.png', 'Trinity _720P HD_000100_0.png', 'Trinity _720P HD_000101_0.png', 'Trinity _720P HD_000102_0.png', 'Trinity _720P HD_000112_0.png', 'Trinity _720P HD_000116_0.png', 'Trinity _720P HD_000117_0.png', 'Trinity _720P HD_000118_0.png', 'Trinity _720P HD_000119_0.png']
08/14/2021 14:04:47 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
08/14/2021 14:04:47 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(60, 324, None), 'warp_mapx': '[[[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 
324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]]', 'warp_mapy': '[[[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 
192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 
191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
08/14/2021 14:04:47 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initializing constants. training_size: 384
08/14/2021 14:04:47 MainProcess     _run_0                         augmentation    initialize                     DEBUG    Initialized constants: {'clahe_base_contrast': 3, 'tgt_slices': slice(60, 324, None), 'warp_mapx': '[[[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 
324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]\n\n [[ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]\n  [ 60. 126. 192. 258. 324.]]]', 'warp_mapy': '[[[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 
192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]\n\n [[ 60.  60.  60.  60.  60.]\n  [126. 126. 126. 126. 126.]\n  [192. 192. 192. 192. 192.]\n  [258. 258. 258. 258. 258.]\n  [324. 324. 324. 324. 324.]]]', 'warp_pad': 80, 'warp_slices': slice(8, -8, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 
191]]\n\n [[  0   0]\n  [  0 383]\n  [383 383]\n  [383   0]\n  [191   0]\n  [191 383]\n  [383 191]\n  [  0 191]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [381. 381. 381. ... 381. 381. 381.]\n  [382. 382. 382. ... 382. 382. 382.]\n  [383. 383. 383. ... 383. 383. 383.]]\n\n [[  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  ...\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]\n  [  0.   1.   2. ... 381. 382. 383.]]]'}
08/14/2021 14:04:47 MainProcess     _run_0                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['.facebook_1517689826874_0.png', '20171020_173613_0.png', '20171020_173615_0.png', '20171020_173616_0.png', '20171020_173618(1)_0.png', '20171020_173618_0.png', '20171020_173623_0.png', '20171020_173625_0.png', '20191016_105025_0.png', '20191019_091247_0.png', '20191019_091249_0.png', '20191019_112043_0.png', '20191019_112044_0.png', '20191019_160201_0.png']
08/14/2021 14:04:47 MainProcess     _run_0                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['Trinity _720P HD_000094_0.png', 'Trinity _720P HD_000095_0.png', 'Trinity _720P HD_000096_0.png', 'Trinity _720P HD_000097_0.png', 'Trinity _720P HD_000098_0.png', 'Trinity _720P HD_000099_0.png', 'Trinity _720P HD_000100_0.png', 'Trinity _720P HD_000101_0.png', 'Trinity _720P HD_000102_0.png', 'Trinity _720P HD_000112_0.png', 'Trinity _720P HD_000116_0.png', 'Trinity _720P HD_000117_0.png', 'Trinity _720P HD_000118_0.png', 'Trinity _720P HD_000119_0.png']
08/14/2021 14:04:47 MainProcess     _training_0                    _base           compile_sample                 DEBUG    Compiling samples: (side: 'a', samples: 14)
08/14/2021 14:04:47 MainProcess     _training_0                    _base           compile_sample                 DEBUG    Compiling samples: (side: 'b', samples: 14)
08/14/2021 14:04:47 MainProcess     _training_0                    _base           output_timelapse               DEBUG    Got time-lapse samples: {'a': 3, 'b': 3}
08/14/2021 14:04:47 MainProcess     _training_0                    _base           show_sample                    DEBUG    Showing sample
08/14/2021 14:04:47 MainProcess     _training_0                    _base           _get_predictions               DEBUG    Getting Predictions
08/14/2021 14:04:47 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['.facebook_1517689826874_0.png', '20171020_173613_0.png', '20171020_173615_0.png', '20171020_173616_0.png', '20171020_173618(1)_0.png', '20171020_173618_0.png', '20171020_173623_0.png', '20171020_173625_0.png', '20191016_105025_0.png', '20191019_091247_0.png', '20191019_091249_0.png', '20191019_112043_0.png', '20191019_112044_0.png', '20191019_160201_0.png']
08/14/2021 14:04:47 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['Trinity _720P HD_000094_0.png', 'Trinity _720P HD_000095_0.png', 'Trinity _720P HD_000096_0.png', 'Trinity _720P HD_000097_0.png', 'Trinity _720P HD_000098_0.png', 'Trinity _720P HD_000099_0.png', 'Trinity _720P HD_000100_0.png', 'Trinity _720P HD_000101_0.png', 'Trinity _720P HD_000102_0.png', 'Trinity _720P HD_000112_0.png', 'Trinity _720P HD_000116_0.png', 'Trinity _720P HD_000117_0.png', 'Trinity _720P HD_000118_0.png', 'Trinity _720P HD_000119_0.png']
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_predictions               DEBUG    Returning predictions: {'a_a': (14, 64, 64, 3), 'b_b': (14, 64, 64, 3), 'a_b': (14, 64, 64, 3), 'b_a': (14, 64, 64, 3)}
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _to_full_frame                 DEBUG    side: 'a', number of sample arrays: 3, prediction.shapes: [(14, 64, 64, 3), (14, 64, 64, 3)])
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _process_full                  DEBUG    full_size: 384, prediction_size: 64, color: (0, 0, 255)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _resize_sample                 DEBUG    Resizing sample: (side: 'a', sample.shape: (14, 384, 384, 3), target_size: 92, scale: 0.23958333333333334)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _resize_sample                 DEBUG    Resized sample: (side: 'a' shape: (14, 92, 92, 3))
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _compile_masked                DEBUG    masked shapes: [(14, 64, 64, 3), (14, 64, 64, 3), (14, 64, 64, 3)]
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    side: 'a', width: 92
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    height: 20, total_width: 276
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    texts: ['Original (A)', 'Original > Original', 'Original > Swap'], text_sizes: [(52, 7), (84, 7), (73, 7)], text_x: [20, 96, 193], text_y: 13
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    header_box.shape: (20, 276, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _to_full_frame                 DEBUG    side: 'b', number of sample arrays: 3, prediction.shapes: [(14, 64, 64, 3), (14, 64, 64, 3)])
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _process_full                  DEBUG    full_size: 384, prediction_size: 64, color: (0, 0, 255)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _resize_sample                 DEBUG    Resizing sample: (side: 'b', sample.shape: (14, 384, 384, 3), target_size: 92, scale: 0.23958333333333334)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _resize_sample                 DEBUG    Resized sample: (side: 'b' shape: (14, 92, 92, 3))
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _process_full                  DEBUG    Overlayed background. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _compile_masked                DEBUG    masked shapes: [(14, 64, 64, 3), (14, 64, 64, 3), (14, 64, 64, 3)]
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _overlay_foreground            DEBUG    Overlayed foreground. Shape: (14, 92, 92, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    side: 'b', width: 92
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    height: 20, total_width: 276
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    texts: ['Swap (B)', 'Swap > Swap', 'Swap > Original'], text_sizes: [(43, 7), (63, 7), (73, 7)], text_x: [24, 106, 193], text_y: 13
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_headers                   DEBUG    header_box.shape: (20, 276, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _duplicate_headers             DEBUG    side: a header.shape: (20, 276, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _duplicate_headers             DEBUG    side: b header.shape: (20, 276, 3)
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _stack_images                  DEBUG    Stack images
08/14/2021 14:04:48 MainProcess     _training_0                    _base           get_transpose_axes             DEBUG    Even number of images to stack
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _stack_images                  DEBUG    Stacked images
08/14/2021 14:04:48 MainProcess     _training_0                    _base           show_sample                    DEBUG    Compiled sample
08/14/2021 14:04:48 MainProcess     _training_0                    _base           output_timelapse               DEBUG    Created time-lapse: 'C:\Users\eladi\Downloads\orit\movie\Trinity _720P HD\Timelapse_neo_elad\1628939088.jpg'
08/14/2021 14:04:48 MainProcess     _training_0                    train           _run_training_cycle            DEBUG    Save Iteration: (iteration: 1
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _save                          DEBUG    Backing up and saving models
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_save_averages             DEBUG    Getting save averages
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _get_save_averages             DEBUG    Average losses since last save: [0.3277677595615387, 0.4694632589817047]
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _should_backup                 DEBUG    Set initial save iteration loss average for 'a': 0.3277677595615387
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _should_backup                 DEBUG    Set initial save iteration loss average for 'b': 0.4694632589817047
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _should_backup                 DEBUG    Updated lowest historical save iteration averages from: {'a': 0.3277677595615387, 'b': 0.4694632589817047} to: {'a': 0.3277677595615387, 'b': 0.4694632589817047}
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _should_backup                 DEBUG    Should backup: True
08/14/2021 14:04:48 MainProcess     _training_0                    _base           save                           DEBUG    Saving State
08/14/2021 14:04:48 MainProcess     _training_0                    serializer      save                           DEBUG    filename: C:\Users\eladi\Downloads\orit\movie\matrix_train\neo_to_elad_model\original_state.json, data type: <class 'dict'>
08/14/2021 14:04:48 MainProcess     _training_0                    serializer      _check_extension               DEBUG    Original filename: 'C:\Users\eladi\Downloads\orit\movie\matrix_train\neo_to_elad_model\original_state.json', final filename: 'C:\Users\eladi\Downloads\orit\movie\matrix_train\neo_to_elad_model\original_state.json'
08/14/2021 14:04:48 MainProcess     _training_0                    serializer      marshal                        DEBUG    data type: <class 'dict'>
08/14/2021 14:04:48 MainProcess     _training_0                    serializer      marshal                        DEBUG    returned data type: <class 'bytes'>
08/14/2021 14:04:48 MainProcess     _training_0                    _base           save                           DEBUG    Saved State
08/14/2021 14:04:48 MainProcess     _training_0                    _base           _save                          INFO     [Saved models] - Average loss since last save: face_a: 0.32777, face_b: 0.46946
08/14/2021 14:04:51 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['Trinity _720P HD_002985_0.png', 'Trinity _720P HD_000804_0.png', 'Trinity _720P HD_000629_1.png', 'Trinity _720P HD_000664_0.png', 'Trinity _720P HD_001898_0.png', 'Trinity _720P HD_002510_0.png', 'Trinity _720P HD_000585_1.png', 'Trinity _720P HD_000803_0.png', 'Trinity _720P HD_002295_0.png', 'Trinity _720P HD_003337_1.png', 'Trinity _720P HD_001676_1.png', 'Trinity _720P HD_001529_0.png', 'Trinity _720P HD_002507_0.png', 'Trinity _720P HD_002441_0.png', 'Trinity _720P HD_001522_0.png', 'Trinity _720P HD_002171_0.png']
08/14/2021 14:04:51 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['IMG_20181127_172451_0.png', 'IMG_20191020_090223_0.png', '20200627_144739_0.png', '_DSC3963-HDR-2_0.png', '20191019_160204(0)_0.png', 'IMG_20190906_112231_0.png', 'IMG_20191017_143405_0.png', '20200627_144708_0.png', 'IMG_20190728_194355_0.png', 'IMG_20171121_231840_0.png', '20200627_144714_0.png', 'IMG_20191018_124504_0.png', 'IMG_20181008_154640_0.png', 'IMG_20191018_173035_0.png', '_DSC4823_0.png', '_DSC4594_0.png']
08/14/2021 14:04:57 MainProcess     _run_1                         generator       cache_metadata                 DEBUG    All metadata already cached for: ['IMG_20190728_193931_0.png', 'Screenshot_2018-06-24-08-11-43-419_com.okcupid.okcupid_0.png', 'IMG_20190713_010624_0.png', 'IMG_20190728_194320_0.png', 'IMG_20191020_204213_0.png', '20171020_173618_0.png', 'IMG_20191020_204216_0.png', 'IMG_20191020_201234_0.png', 'IMG_20191018_133519_0.png', '_DSC4595_0.png', 'IMG-20210621-WA0080_0.png', 'IMG_20190728_194326_0.png', 'IMG_20190728_195623_0.png', 'IMG_20190906_112224_0.png', '20200717_190636_0.png', '_DSC3122-HDR-2_0.png']
08/14/2021 14:05:00 MainProcess     _run_0                         multithreading  run                            DEBUG    Error in thread (_run_0): could not broadcast input array from shape (512,512,3) into shape (512,512)
08/14/2021 14:05:03 MainProcess     _training_0                    multithreading  check_and_raise_error          DEBUG    Thread error caught: [(<class 'ValueError'>, ValueError('could not broadcast input array from shape (512,512,3) into shape (512,512)'), <traceback object at 0x0000024B3AE48680>)]
08/14/2021 14:05:03 MainProcess     _training_0                    multithreading  run                            DEBUG    Error in thread (_training_0): could not broadcast input array from shape (512,512,3) into shape (512,512)
08/14/2021 14:05:03 MainProcess     _run_1                         multithreading  run                            DEBUG    Error in thread (_run_1): could not broadcast input array from shape (512,512,3) into shape (512,512)
08/14/2021 14:05:03 MainProcess     MainThread                     train           _monitor                       DEBUG    Thread error detected
08/14/2021 14:05:03 MainProcess     MainThread                     train           _monitor                       DEBUG    Closed Monitor
08/14/2021 14:05:03 MainProcess     MainThread                     train           _end_thread                    DEBUG    Ending Training thread
08/14/2021 14:05:03 MainProcess     MainThread                     train           _end_thread                    CRITICAL Error caught! Exiting...
08/14/2021 14:05:03 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Threads: '_training'
08/14/2021 14:05:03 MainProcess     MainThread                     multithreading  join                           DEBUG    Joining Thread: '_training_0'
08/14/2021 14:05:03 MainProcess     MainThread                     multithreading  join                           ERROR    Caught exception in thread: '_training_0'
Traceback (most recent call last):
  File "C:\Users\eladi\faceswap\lib\cli\launcher.py", line 182, in execute_script
    process.process()
  File "C:\Users\eladi\faceswap\scripts\train.py", line 190, in process
    self._end_thread(thread, err)
  File "C:\Users\eladi\faceswap\scripts\train.py", line 230, in _end_thread
    thread.join()
  File "C:\Users\eladi\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\eladi\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\eladi\faceswap\scripts\train.py", line 252, in _training
    raise err
  File "C:\Users\eladi\faceswap\scripts\train.py", line 242, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\eladi\faceswap\scripts\train.py", line 327, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\eladi\faceswap\plugins\train\trainer\_base.py", line 191, in train_one_step
    model_inputs, model_targets = self._feeder.get_batch()
  File "C:\Users\eladi\faceswap\plugins\train\trainer\_base.py", line 416, in get_batch
    batch = next(self._feeds[side])
  File "C:\Users\eladi\faceswap\lib\multithreading.py", line 156, in iterator
    self.check_and_raise_error()
  File "C:\Users\eladi\faceswap\lib\multithreading.py", line 84, in check_and_raise_error
    raise error[1].with_traceback(error[2])
  File "C:\Users\eladi\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\eladi\faceswap\lib\multithreading.py", line 145, in _run
    for item in self.generator(*self._gen_args, **self._gen_kwargs):
  File "C:\Users\eladi\faceswap\lib\training\generator.py", line 598, in _minibatch
    yield self._process_batch(img_paths, side)
  File "C:\Users\eladi\faceswap\lib\training\generator.py", line 621, in _process_batch
    batch = self._face_cache.cache_metadata(filenames)
  File "C:\Users\eladi\faceswap\lib\training\generator.py", line 205, in cache_metadata
    batch, metadata = read_image_batch(filenames, with_metadata=True)
  File "C:\Users\eladi\faceswap\lib\image.py", line 359, in read_image_batch
    batch = np.array(batch)
ValueError: could not broadcast input array from shape (512,512,3) into shape (512,512)

============ System Information ============
encoding:            cp1252
git_branch:          Not Found
git_commits:         Not Found
gpu_cuda:            11.4
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: NVIDIA GeForce RTX 3060 Ti
gpu_devices_active:  GPU_0
gpu_driver:          471.41
gpu_vram:            GPU_0: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19043-SP0
os_release:          10
py_command:          C:\Users\eladi\faceswap\faceswap.py train -A C:/Users/eladi/Downloads/orit/movie/matrix_train/neo -B C:\Users\eladi\Downloads\orit\movie\elad_train -m C:/Users/eladi/Downloads/orit/movie/matrix_train/neo_to_elad_model -t original -bs 16 -it 1000000 -s 250 -ss 25000 -tia C:/Users/eladi/Downloads/orit/movie/matrix_train/neo -tib C:/Users/eladi/Downloads/orit/movie/elad_train -to C:/Users/eladi/Downloads/orit/movie/Trinity _720P HD/Timelapse_neo_elad -ps 100 -L INFO -gui
py_conda_version:    conda 4.10.3
py_implementation:   CPython
py_version:          3.8.11
py_virtual_env:      True
sys_cores:           16
sys_processor:       Intel64 Family 6 Model 165 Stepping 5, GenuineIntel
sys_ram:             Total: 32701MB, Available: 25396MB, Used: 7305MB, Free: 25396MB

=============== Pip Packages ===============
absl-py==0.13.0
astunparse==1.6.3
cachetools==4.2.2
certifi==2021.5.30
charset-normalizer==2.0.4
colorama==0.4.4
cycler==0.10.0
fastcluster==1.1.26
ffmpy==0.2.3
flatbuffers==1.12
gast==0.3.3
google-auth==1.34.0
google-auth-oauthlib==0.4.5
google-pasta==0.2.0
grpcio==1.32.0
h5py==2.10.0
idna==3.2
imageio @ file:///tmp/build/80754af9/imageio_1617700267927/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1621542018480/work
joblib @ file:///tmp/build/80754af9/joblib_1613502643832/work
Keras-Preprocessing==1.1.2
kiwisolver @ file:///C:/ci/kiwisolver_1612282606037/work
Markdown==3.3.4
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.3.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy @ file:///C:/ci/numpy_and_numpy_base_1603466732592/work
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.1
olefile==0.46
opencv-python==4.5.3.56
opt-einsum==3.3.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1625663293593/work
protobuf==3.17.3
psutil @ file:///C:/ci/psutil_1612298324802/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///home/linux1/recipes/ci/pyparsing_1610983426697/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
pywin32==228
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
scikit-learn @ file:///C:/ci/scikit-learn_1622739500535/work
scipy @ file:///C:/ci/scipy_1616703433439/work
sip==4.19.13
six==1.15.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow-estimator==2.4.0
tensorflow-gpu==2.4.3
termcolor==1.1.0
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1628668762525/work
tornado @ file:///C:/ci/tornado_1606942392901/work
tqdm @ file:///tmp/build/80754af9/tqdm_1627710282869/work
typing-extensions==3.7.4.3
urllib3==1.26.6
Werkzeug==2.0.1
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\eladi\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   0.13.0                   pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
ca-certificates 2021.7.5 haa95532_1
cachetools                4.2.2                    pypi_0    pypi
certifi                   2021.5.30        py38haa95532_0
charset-normalizer        2.0.4                    pypi_0    pypi
colorama                  0.4.4                    pypi_0    pypi
cycler                    0.10.0                   py38_0
fastcluster               1.1.26           py38h251f6bf_2    conda-forge
ffmpeg                    4.3.1                ha925a31_0    conda-forge
ffmpy                     0.2.3                    pypi_0    pypi
flatbuffers               1.12                     pypi_0    pypi
freetype                  2.10.4               hd328e21_0
gast                      0.3.3                    pypi_0    pypi
git                       2.23.0               h6bb4b03_0
google-auth               1.34.0                   pypi_0    pypi
google-auth-oauthlib      0.4.5                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.32.0                   pypi_0    pypi
h5py                      2.10.0                   pypi_0    pypi
icc_rt                    2019.0.0             h0cc432a_1
icu                       58.2                 ha925a31_3
idna                      3.2                      pypi_0    pypi
imageio                   2.9.0              pyhd3eb1b0_0
imageio-ffmpeg            0.4.4              pyhd8ed1ab_0    conda-forge
intel-openmp              2021.3.0          haa95532_3372
joblib                    1.0.1              pyhd3eb1b0_0
jpeg                      9b                   hb83a4c4_2
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.3.1            py38hd77b12b_0
libpng                    1.6.37               h2a8f88b_0
libtiff                   4.2.0                hd0e1b90_0
lz4-c                     1.9.3                h2bbff1b_1
markdown                  3.3.4                    pypi_0    pypi
matplotlib                3.2.2                         0
matplotlib-base           3.2.2            py38h64f37c6_0
mkl                       2020.2                      256
mkl-service               2.3.0            py38h196d8e1_0
mkl_fft                   1.3.0            py38h46781fe_0
mkl_random                1.1.1            py38h47e9c7a_0
numpy                     1.19.2           py38hadc3359_0
numpy-base                1.19.2           py38ha3acd2a_0
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.1.1                    pypi_0    pypi
olefile                   0.46                       py_0
opencv-python             4.5.3.56                 pypi_0    pypi
openssl                   1.1.1k               h2bbff1b_0
opt-einsum                3.3.0                    pypi_0    pypi
pathlib                   1.0.1                      py_1
pillow                    8.3.1            py38h4fa10fc_0
pip                       21.2.2           py38haa95532_0
protobuf                  3.17.3                   pypi_0    pypi
psutil                    5.8.0            py38h2bbff1b_1
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pyparsing                 2.4.7              pyhd3eb1b0_0
pyqt                      5.9.2            py38ha925a31_4
python                    3.8.11               h6244533_1
python-dateutil           2.8.2              pyhd3eb1b0_0
python_abi                3.8                      2_cp38    conda-forge
pywin32                   228              py38hbaba5e8_1
qt                        5.9.7            vc14h73c81de_0
requests                  2.26.0                   pypi_0    pypi
requests-oauthlib         1.3.0                    pypi_0    pypi
rsa                       4.7.2                    pypi_0    pypi
scikit-learn              0.24.2           py38hf11a4ad_1
scipy                     1.6.2            py38h14eb087_0
setuptools                52.0.0           py38haa95532_0
sip                       4.19.13          py38ha925a31_0
six                       1.15.0                   pypi_0    pypi
sqlite                    3.36.0               h2bbff1b_0
tensorboard               2.6.0                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.0                    pypi_0    pypi
tensorflow-estimator      2.4.0                    pypi_0    pypi
tensorflow-gpu            2.4.3                    pypi_0    pypi
termcolor                 1.1.0                    pypi_0    pypi
threadpoolctl             2.2.0              pyhbf3da8f_0
tk                        8.6.10               he774522_0
tornado                   6.1              py38h2bbff1b_0
tqdm                      4.62.0             pyhd3eb1b0_1
typing-extensions         3.7.4.3                  pypi_0    pypi
urllib3                   1.26.6                   pypi_0    pypi
vc                        14.2                 h21ff451_1
vs2015_runtime            14.27.29016          h5e58377_2
werkzeug                  2.0.1                    pypi_0    pypi
wheel                     0.36.2             pyhd3eb1b0_0
wincertstore              0.2                      py38_0
wrapt                     1.12.1                   pypi_0    pypi
xz                        5.2.5                h62dcd97_0
zlib                      1.2.11               h62dcd97_4
zstd                      1.4.9                h19a0ad4_0

=============== State File =================
{
    "name": "original",
    "sessions": {
        "1": {
            "timestamp": 1628939071.7473178,
            "no_logs": false,
            "loss_names": [
                "total",
                "face_a",
                "face_b"
            ],
            "batchsize": 16,
            "iterations": 1,
            "config": {
                "learning_rate": 5e-05,
                "epsilon_exponent": -7,
                "allow_growth": false,
                "nan_protection": true,
                "convert_batchsize": 16,
                "eye_multiplier": 3,
                "mouth_multiplier": 2
            }
        }
    },
    "lowest_avg_loss": {
        "a": 0.3277677595615387,
        "b": 0.4694632589817047
    },
    "iterations": 1,
    "config": {
        "centering": "face",
        "coverage": 68.75,
        "optimizer": "adam",
        "learning_rate": 5e-05,
        "epsilon_exponent": -7,
        "allow_growth": false,
        "mixed_precision": false,
        "nan_protection": true,
        "convert_batchsize": 16,
        "loss_function": "ssim",
        "mask_loss_function": "mse",
        "l2_reg_term": 100,
        "eye_multiplier": 3,
        "mouth_multiplier": 2,
        "penalized_mask_loss": true,
        "mask_type": "extended",
        "mask_blur_kernel": 3,
        "mask_threshold": 4,
        "learn_mask": false,
        "lowmem": false
    }
}

================= Configs ==================
--------- .faceswap ---------
backend: nvidia

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.bisenet_fp]
batch-size: 8
include_ears: False
include_hair: False
include_glasses: True

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
centering: face
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
reflect_padding: False
allow_growth: False
mixed_precision: False
nan_protection: True
convert_batchsize: 16

[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False

[model.dfaker]
output_size: 128

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.phaze_a]
output_size: 128
shared_fc: none
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: fs_original
enc_scaling: 40
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: none
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: subpixel
dec_norm: none
dec_min_filters: 64
dec_max_filters: 512
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: ValueError: could not broadcast input array from shape (512,512,3) into shape (512,512)

Post by torzdf »

If I were to guess, I'd say there are black and white images in your training set. This shouldn't matter, and I haven't seen this issue before, but it's worth checking. If you do find any, remove the black and white images from your training set and see if you can get it running.
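
If you want a quick way to check, something along these lines will do it. This is just a sketch of my own (not part of faceswap), and it assumes Pillow is installed, which it is in faceswap's conda environment:

```python
# Quick scan for extracted faces that are not plain 3-channel images.
# Assumes Pillow is available (the faceswap environment includes it).
import os
import sys
from PIL import Image

def find_non_rgb(folder):
    """Return (filename, mode) for every image that is not plain RGB."""
    bad = []
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        with Image.open(os.path.join(folder, name)) as img:
            # "L" = greyscale, "P" = palette, "LA"/"RGBA" = has alpha
            if img.mode != "RGB":
                bad.append((name, img.mode))
    return bad

if __name__ == "__main__" and len(sys.argv) > 1:
    for name, mode in find_non_rgb(sys.argv[1]):
        print(f"{name}: mode={mode}")
```

Run it against both your A and B face folders; anything it prints is a candidate for removal.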

My word is final

User avatar
jbu
Posts: 2
Joined: Sat Aug 14, 2021 11:10 am

Re: ValueError: could not broadcast input array from shape (512,512,3) into shape (512,512)

Post by jbu »

Thanks, I don't have any black and white images, and it fails the same way on other training sets.

User avatar
torzdf
Posts: 2651
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 129 times
Been thanked: 622 times

Re: ValueError: could not broadcast input array from shape (512,512,3) into shape (512,512)

Post by torzdf »

Please see this response here, as I believe it is also applicable to your situation:

viewtopic.php?p=6006#p6006
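
For what it's worth, the error itself just means the images read for one batch don't all have the same number of channels, so numpy can't stack them into a single array. You can reproduce it with numpy alone (the exact wording of the ValueError varies between numpy versions):

```python
import numpy as np

color = np.zeros((512, 512, 3))  # a normal 3-channel face image
grey = np.zeros((512, 512))      # a greyscale image has no channel axis

print(np.array([color, color]).shape)  # (2, 512, 512, 3) -- a clean batch

try:
    np.array([grey, color])  # mixed shapes cannot be stacked
except ValueError as err:
    print(err)
```

So somewhere an image is being loaded as 2-D (greyscale) alongside the 3-D (colour) ones, and that file is the thing to hunt down.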

My word is final

Locked