"ValueError: setting an array element with a sequence." Before Starting Training

If training fails to start and you are not receiving an error message telling you what to do, tell us about it here.


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want tips, or to better understand the Training process, look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Flimsy Fox
Posts: 5
Joined: Fri Aug 28, 2020 1:04 am
Been thanked: 1 time

"ValueError: setting an array element with a sequence." Before Starting Training

Post by Flimsy Fox »

I click the "Train" button. It gets through analyzing ops, then crashes with the message in the attached log.
I am using the Unbalanced model with a 512x512 pixel training set (mainly because minute facial details need to be preserved).
All images are the same size. Can someone please tell me what I'm doing wrong?
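
If it helps, a quick way to verify that claim on the B-side folder from my training command (shown in the log below) would be a throwaway check along these lines. This is my own sketch, not part of faceswap, and the folder path is just the one from my command:

Code:

import os
from collections import Counter

import cv2

# B-side input folder taken from the training command in the log below; adjust as needed.
folder = r"E:/Machine Learning Projects/Dame Da Nae Rexouium/Rexouium Input Set High Res"

shapes = Counter()
for name in os.listdir(folder):
    if not name.lower().endswith((".png", ".jpg", ".jpeg")):
        continue
    img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_UNCHANGED)
    if img is None:
        print("unreadable:", name)
        continue
    shapes[img.shape] += 1

# A single entry here means every readable image really is the same size.
print(shapes)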

Code:

08/27/2020 18:59:55 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator
08/27/2020 18:59:55 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 320, output_shapes: [(320, 320, 3)]
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Requested coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Final coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 320, model_output_shapes: [(320, 320, 3)], coverage_ratio: 0.6875, augment_color: True, no_flip: False, warp_to_landmarks: False, alignments: [], config: {'coverage': 68.75, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': True, 'conv_aware_init': True, 'reflect_padding': False, 'allow_growth': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 2, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
08/27/2020 18:59:55 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 1089, batchsize: 1, side: 'b', do_shuffle: True, is_preview, False, is_timelapse: False)
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 1, is_display: False, input_size: 320, output_shapes: [(320, 320, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': True, 'conv_aware_init': True, 'reflect_padding': False, 'allow_growth': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 2, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [320]
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
08/27/2020 18:59:55 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 1089, side: 'b', do_shuffle: True)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
08/27/2020 18:59:55 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 1089, side: 'b', do_shuffle: True)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
08/27/2020 18:59:55 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'a')
08/27/2020 18:59:55 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator
08/27/2020 18:59:55 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 320, output_shapes: [(320, 320, 3)]
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Requested coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Final coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 320, model_output_shapes: [(320, 320, 3)], coverage_ratio: 0.6875, augment_color: True, no_flip: False, warp_to_landmarks: False, alignments: [], config: {'coverage': 68.75, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': True, 'conv_aware_init': True, 'reflect_padding': False, 'allow_growth': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 2, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
08/27/2020 18:59:55 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 818, batchsize: 2, side: 'a', do_shuffle: True, is_preview, True, is_timelapse: False)
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 2, is_display: True, input_size: 320, output_shapes: [(320, 320, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': True, 'conv_aware_init': True, 'reflect_padding': False, 'allow_growth': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 2, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [320]
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
08/27/2020 18:59:55 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 818, side: 'a', do_shuffle: True)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
08/27/2020 18:59:55 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 818, side: 'a', do_shuffle: True)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
08/27/2020 18:59:55 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Setting preview feed: (side: 'b')
08/27/2020 18:59:55 MainProcess     _training_0     _base           _load_generator           DEBUG    Loading generator
08/27/2020 18:59:55 MainProcess     _training_0     _base           _load_generator           DEBUG    input_size: 320, output_shapes: [(320, 320, 3)]
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Requested coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Final coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing TrainingDataGenerator: (model_input_size: 320, model_output_shapes: [(320, 320, 3)], coverage_ratio: 0.6875, augment_color: True, no_flip: False, warp_to_landmarks: False, alignments: [], config: {'coverage': 68.75, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': True, 'conv_aware_init': True, 'reflect_padding': False, 'allow_growth': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 2, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized TrainingDataGenerator
08/27/2020 18:59:55 MainProcess     _training_0     training_data   minibatch_ab              DEBUG    Queue batches: (image_count: 1089, batchsize: 2, side: 'b', do_shuffle: True, is_preview, True, is_timelapse: False)
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initializing ImageAugmentation: (batchsize: 2, is_display: True, input_size: 320, output_shapes: [(320, 320, 3)], coverage_ratio: 0.6875, config: {'coverage': 68.75, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'icnr_init': True, 'conv_aware_init': True, 'reflect_padding': False, 'allow_growth': False, 'penalized_mask_loss': False, 'loss_function': 'mae', 'learning_rate': 5e-05, 'preview_images': 2, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Output sizes: [320]
08/27/2020 18:59:55 MainProcess     _training_0     training_data   __init__                  DEBUG    Initialized ImageAugmentation
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initializing BackgroundGenerator: (target: '_run', thread_count: 2)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  __init__                  DEBUG    Initialized BackgroundGenerator: '_run'
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread(s): '_run'
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 1 of 2: '_run_0'
08/27/2020 18:59:55 MainProcess     _run_0          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 1089, side: 'b', do_shuffle: True)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Starting thread 2 of 2: '_run_1'
08/27/2020 18:59:55 MainProcess     _run_1          training_data   _minibatch                DEBUG    Loading minibatch generator: (image_count: 1089, side: 'b', do_shuffle: True)
08/27/2020 18:59:55 MainProcess     _training_0     multithreading  start                     DEBUG    Started all threads '_run': 2
08/27/2020 18:59:55 MainProcess     _training_0     _base           _set_preview_feed         DEBUG    Set preview feed. Batchsize: 2
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized _Feeder:
08/27/2020 18:59:55 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Enabling TensorBoard Logging
08/27/2020 18:59:55 MainProcess     _training_0     _base           _set_tensorboard          DEBUG    Setting up TensorBoard Logging
08/27/2020 18:59:55 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 128
08/27/2020 18:59:55 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 1, 'tgt_slices': slice(20, 108, None), 'warp_mapx': '[[[ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]]]', 'warp_mapy': '[[[ 20.  20.  20.  20.  20.]\n  [ 42.  42.  42.  42.  42.]\n  [ 64.  64.  64.  64.  64.]\n  [ 86.  86.  86.  86.  86.]\n  [108. 108. 108. 108. 108.]]]', 'warp_pad': 400, 'warp_slices': slice(40, -40, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 127]\n  [127 127]\n  [127   0]\n  [ 63   0]\n  [ 63 127]\n  [127  63]\n  [  0  63]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [125. 125. 125. ... 125. 125. 125.]\n  [126. 126. 126. ... 126. 126. 126.]\n  [127. 127. 127. ... 127. 127. 127.]]\n\n [[  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]\n  ...\n  [  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]]]'}
08/27/2020 18:59:55 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 128
08/27/2020 18:59:55 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 512
08/27/2020 18:59:55 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 512
08/27/2020 18:59:55 MainProcess     _run_0          training_data   initialize                DEBUG    Initializing constants. training_size: 512
08/27/2020 18:59:55 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 1, 'tgt_slices': slice(20, 108, None), 'warp_mapx': '[[[ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]]\n\n [[ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]\n  [ 20.  42.  64.  86. 108.]]]', 'warp_mapy': '[[[ 20.  20.  20.  20.  20.]\n  [ 42.  42.  42.  42.  42.]\n  [ 64.  64.  64.  64.  64.]\n  [ 86.  86.  86.  86.  86.]\n  [108. 108. 108. 108. 108.]]\n\n [[ 20.  20.  20.  20.  20.]\n  [ 42.  42.  42.  42.  42.]\n  [ 64.  64.  64.  64.  64.]\n  [ 86.  86.  86.  86.  86.]\n  [108. 108. 108. 108. 108.]]]', 'warp_pad': 400, 'warp_slices': slice(40, -40, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 127]\n  [127 127]\n  [127   0]\n  [ 63   0]\n  [ 63 127]\n  [127  63]\n  [  0  63]]\n\n [[  0   0]\n  [  0 127]\n  [127 127]\n  [127   0]\n  [ 63   0]\n  [ 63 127]\n  [127  63]\n  [  0  63]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [125. 125. 125. ... 125. 125. 125.]\n  [126. 126. 126. ... 126. 126. 126.]\n  [127. 127. 127. ... 127. 127. 127.]]\n\n [[  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]\n  ...\n  [  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]\n  [  0.   1.   2. ... 125. 126. 127.]]]'}
08/27/2020 18:59:55 MainProcess     _run_1          multithreading  run                       DEBUG    Error in thread (_run_1): setting an array element with a sequence.
08/27/2020 18:59:55 MainProcess     _run_1          training_data   initialize                DEBUG    Initializing constants. training_size: 512
08/27/2020 18:59:55 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 4, 'tgt_slices': slice(80, 432, None), 'warp_mapx': '[[[ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]]]', 'warp_mapy': '[[[ 80.  80.  80.  80.  80.]\n  [168. 168. 168. 168. 168.]\n  [256. 256. 256. 256. 256.]\n  [344. 344. 344. 344. 344.]\n  [432. 432. 432. 432. 432.]]]', 'warp_pad': 400, 'warp_slices': slice(40, -40, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 511]\n  [511 511]\n  [511   0]\n  [255   0]\n  [255 511]\n  [511 255]\n  [  0 255]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [509. 509. 509. ... 509. 509. 509.]\n  [510. 510. 510. ... 510. 510. 510.]\n  [511. 511. 511. ... 511. 511. 511.]]\n\n [[  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  ...\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]]]'}
08/27/2020 18:59:55 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 4, 'tgt_slices': slice(80, 432, None), 'warp_mapx': '[[[ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]]]', 'warp_mapy': '[[[ 80.  80.  80.  80.  80.]\n  [168. 168. 168. 168. 168.]\n  [256. 256. 256. 256. 256.]\n  [344. 344. 344. 344. 344.]\n  [432. 432. 432. 432. 432.]]]', 'warp_pad': 400, 'warp_slices': slice(40, -40, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 511]\n  [511 511]\n  [511   0]\n  [255   0]\n  [255 511]\n  [511 255]\n  [  0 255]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [509. 509. 509. ... 509. 509. 509.]\n  [510. 510. 510. ... 510. 510. 510.]\n  [511. 511. 511. ... 511. 511. 511.]]\n\n [[  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  ...\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]]]'}
08/27/2020 18:59:55 MainProcess     _run_0          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 4, 'tgt_slices': slice(80, 432, None), 'warp_mapx': '[[[ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]]\n\n [[ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]]]', 'warp_mapy': '[[[ 80.  80.  80.  80.  80.]\n  [168. 168. 168. 168. 168.]\n  [256. 256. 256. 256. 256.]\n  [344. 344. 344. 344. 344.]\n  [432. 432. 432. 432. 432.]]\n\n [[ 80.  80.  80.  80.  80.]\n  [168. 168. 168. 168. 168.]\n  [256. 256. 256. 256. 256.]\n  [344. 344. 344. 344. 344.]\n  [432. 432. 432. 432. 432.]]]', 'warp_pad': 400, 'warp_slices': slice(40, -40, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 511]\n  [511 511]\n  [511   0]\n  [255   0]\n  [255 511]\n  [511 255]\n  [  0 255]]\n\n [[  0   0]\n  [  0 511]\n  [511 511]\n  [511   0]\n  [255   0]\n  [255 511]\n  [511 255]\n  [  0 255]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [509. 509. 509. ... 509. 509. 509.]\n  [510. 510. 510. ... 510. 510. 510.]\n  [511. 511. 511. ... 511. 511. 511.]]\n\n [[  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  ...\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]]]'}
08/27/2020 18:59:55 MainProcess     _training_0     _base           _set_tensorboard          INFO     Enabled TensorBoard Logging
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Requested coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Final coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing _Samples: model: '<plugins.train.model.unbalanced.Model object at 0x00000235D543A1C0>', coverage_ratio: 0.6875)
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized _Samples
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Requested coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           coverage_ratio            DEBUG    Final coverage_ratio: 0.6875
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing _Timelapse: model: <plugins.train.model.unbalanced.Model object at 0x00000235D543A1C0>, coverage_ratio: 0.6875, image_count: 2, feeder: '<plugins.train.trainer._base._Feeder object at 0x000002364EB2CBB0>')
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initializing _Samples: model: '<plugins.train.model.unbalanced.Model object at 0x00000235D543A1C0>', coverage_ratio: 0.6875)
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized _Samples
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized _Timelapse
08/27/2020 18:59:55 MainProcess     _training_0     _base           __init__                  DEBUG    Initialized Trainer
08/27/2020 18:59:55 MainProcess     _training_0     train           _load_trainer             DEBUG    Loaded Trainer
08/27/2020 18:59:55 MainProcess     _training_0     train           _run_training_cycle       DEBUG    Running Training Cycle
08/27/2020 18:59:55 MainProcess     _run_1          training_data   initialize                DEBUG    Initialized constants: {'clahe_base_contrast': 4, 'tgt_slices': slice(80, 432, None), 'warp_mapx': '[[[ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]]\n\n [[ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]\n  [ 80. 168. 256. 344. 432.]]]', 'warp_mapy': '[[[ 80.  80.  80.  80.  80.]\n  [168. 168. 168. 168. 168.]\n  [256. 256. 256. 256. 256.]\n  [344. 344. 344. 344. 344.]\n  [432. 432. 432. 432. 432.]]\n\n [[ 80.  80.  80.  80.  80.]\n  [168. 168. 168. 168. 168.]\n  [256. 256. 256. 256. 256.]\n  [344. 344. 344. 344. 344.]\n  [432. 432. 432. 432. 432.]]]', 'warp_pad': 400, 'warp_slices': slice(40, -40, None), 'warp_lm_edge_anchors': '[[[  0   0]\n  [  0 511]\n  [511 511]\n  [511   0]\n  [255   0]\n  [255 511]\n  [511 255]\n  [  0 255]]\n\n [[  0   0]\n  [  0 511]\n  [511 511]\n  [511   0]\n  [255   0]\n  [255 511]\n  [511 255]\n  [  0 255]]]', 'warp_lm_grids': '[[[  0.   0.   0. ...   0.   0.   0.]\n  [  1.   1.   1. ...   1.   1.   1.]\n  [  2.   2.   2. ...   2.   2.   2.]\n  ...\n  [509. 509. 509. ... 509. 509. 509.]\n  [510. 510. 510. ... 510. 510. 510.]\n  [511. 511. 511. ... 511. 511. 511.]]\n\n [[  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  ...\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]\n  [  0.   1.   2. ... 509. 510. 511.]]]'}
08/27/2020 18:59:59 MainProcess     _training_0     library         _logger_callback          INFO     Analyzing Ops: 220 of 1136 operations complete
08/27/2020 19:00:01 MainProcess     _training_0     library         _logger_callback          INFO     Analyzing Ops: 420 of 1136 operations complete
08/27/2020 19:00:03 MainProcess     _training_0     library         _logger_callback          INFO     Analyzing Ops: 1020 of 1136 operations complete
08/27/2020 19:00:29 MainProcess     _training_0     _base           generate_preview          DEBUG    Generating preview
08/27/2020 19:00:29 MainProcess     _training_0     multithreading  check_and_raise_error     DEBUG    Thread error caught: [(<class 'ValueError'>, ValueError('setting an array element with a sequence.'), <traceback object at 0x000002364FA17A80>)]
08/27/2020 19:00:29 MainProcess     _training_0     multithreading  run                       DEBUG    Error in thread (_training_0): setting an array element with a sequence.
08/27/2020 19:00:29 MainProcess     MainThread      train           _monitor                  DEBUG    Thread error detected
08/27/2020 19:00:29 MainProcess     MainThread      train           _monitor                  DEBUG    Closed Monitor
08/27/2020 19:00:29 MainProcess     MainThread      train           _end_thread               DEBUG    Ending Training thread
08/27/2020 19:00:29 MainProcess     MainThread      train           _end_thread               CRITICAL Error caught! Exiting...
08/27/2020 19:00:29 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Threads: '_training'
08/27/2020 19:00:29 MainProcess     MainThread      multithreading  join                      DEBUG    Joining Thread: '_training_0'
08/27/2020 19:00:29 MainProcess     MainThread      multithreading  join                      ERROR    Caught exception in thread: '_training_0'
TypeError: only size-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\flims\faceswap\lib\cli\launcher.py", line 156, in execute_script
    process.process()
  File "C:\Users\flims\faceswap\scripts\train.py", line 135, in process
    self._end_thread(thread, err)
  File "C:\Users\flims\faceswap\scripts\train.py", line 175, in _end_thread
    thread.join()
  File "C:\Users\flims\faceswap\lib\multithreading.py", line 121, in join
    raise thread.err[1].with_traceback(thread.err[2])
  File "C:\Users\flims\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\flims\faceswap\scripts\train.py", line 197, in _training
    raise err
  File "C:\Users\flims\faceswap\scripts\train.py", line 187, in _training
    self._run_training_cycle(model, trainer)
  File "C:\Users\flims\faceswap\scripts\train.py", line 268, in _run_training_cycle
    trainer.train_one_step(viewer, timelapse)
  File "C:\Users\flims\faceswap\plugins\train\trainer\_base.py", line 219, in train_one_step
    self._feeder.generate_preview(do_preview)
  File "C:\Users\flims\faceswap\plugins\train\trainer\_base.py", line 472, in generate_preview
    batch = next(self._display_feeds["preview"][side])
  File "C:\Users\flims\faceswap\lib\multithreading.py", line 156, in iterator
    self.check_and_raise_error()
  File "C:\Users\flims\faceswap\lib\multithreading.py", line 84, in check_and_raise_error
    raise error[1].with_traceback(error[2])
  File "C:\Users\flims\faceswap\lib\multithreading.py", line 37, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\flims\faceswap\lib\multithreading.py", line 145, in _run
    for item in self.generator(*self._gen_args, **self._gen_kwargs):
  File "C:\Users\flims\faceswap\lib\training_data.py", line 186, in _minibatch
    yield self._process_batch(img_paths, side)
  File "C:\Users\flims\faceswap\lib\training_data.py", line 222, in _process_batch
    processed["samples"] = batch[..., :3].astype("float32") / 255.0
ValueError: setting an array element with a sequence.

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         6b2aac6 Enable MTCNN for CPU extraction
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: Advanced Micro Devices, Inc. - gfx900 (experimental), GPU_1: Advanced Micro Devices, Inc. - gfx900 (supported)
gpu_devices_active:  GPU_0, GPU_1
gpu_driver:          ['3004.8 (PAL,HSAIL)', '3004.8 (PAL,HSAIL)']
gpu_vram:            GPU_0: 8176MB, GPU_1: 8176MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.19041-SP0
os_release:          10
py_command:          C:\Users\flims\faceswap\faceswap.py train -A E:/Machine Learning Projects/Dame Da Nae Rexouium/Dame Da Ne Input Set High Res -ala E:/Machine Learning Projects/Dame Da Nae Rexouium/Dame da ne template_alignments.fsa -B E:/Machine Learning Projects/Dame Da Nae Rexouium/Rexouium Input Set High Res -m E:/Machine Learning Projects/Dame Da Nae Rexouium/Training Data -t unbalanced -bs 1 -it 1000000 -s 50 -ss 25000 -ps 50 -L INFO -gui
py_conda_version:    conda 4.8.4
py_implementation:   CPython
py_version:          3.8.5
py_virtual_env:      True
sys_cores:           12
sys_processor:       AMD64 Family 23 Model 8 Stepping 2, AuthenticAMD
sys_ram:             Total: 16300MB, Available: 6242MB, Used: 10057MB, Free: 6242MB

=============== Pip Packages ===============
absl-py==0.9.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
cffi==1.14.1
chardet==3.0.4
cycler==0.10.0
enum34==1.1.10
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.3.3
google-auth==1.20.1
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.31.0
h5py==2.10.0
idna==2.10
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1594236160679/work
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver==1.2.0
Markdown==3.2.2
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy @ file:///C:/ci/numpy_and_numpy_base_1596215850360/work
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.4.0.40
opt-einsum==3.3.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1594298230227/work
plaidml==0.7.0
plaidml-keras==0.7.0
protobuf==3.13.0
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
python-dateutil==2.8.1
pywin32==227
PyYAML==5.3.1
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
scikit-learn @ file:///C:/ci/scikit-learn_1592853510272/work
scipy==1.4.1
sip==4.19.13
six==1.15.0
tensorboard==2.2.2
tensorboard-plugin-wit==1.7.0
tensorflow==2.2.0
tensorflow-estimator==2.2.0
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1596810128862/work
urllib3==1.25.10
Werkzeug==1.0.1
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
# packages in environment at C:\Users\flims\MiniConda3\envs\faceswap:
#
# Name                    Version                   Build  Channel
absl-py                   0.9.0                    pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
blas                      1.0                         mkl  
ca-certificates           2020.6.24                0
cachetools                4.1.1                    pypi_0    pypi
certifi                   2020.6.20                py38_0
cffi                      1.14.1                   pypi_0    pypi
chardet                   3.0.4                    pypi_0    pypi
cycler                    0.10.0                   py38_0
enum34                    1.1.10                   pypi_0    pypi
fastcluster               1.1.26                   py38hbe40bda_1    conda-forge
ffmpeg                    4.3.1                    ha925a31_0    conda-forge
ffmpy                     0.2.3                    pypi_0    pypi
freetype                  2.10.2                   hd328e21_0
gast                      0.3.3                    pypi_0    pypi
git                       2.23.0                   h6bb4b03_0
google-auth               1.20.1                   pypi_0    pypi
google-auth-oauthlib      0.4.1                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.31.0                   pypi_0    pypi
h5py                      2.10.0                   pypi_0    pypi
icc_rt                    2019.0.0                 h0cc432a_1
icu                       58.2                     ha925a31_3
idna                      2.10                     pypi_0    pypi
imageio                   2.9.0                    py_0
imageio-ffmpeg            0.4.2                    py_0    conda-forge
intel-openmp              2020.1                   216
joblib                    0.16.0                   py_0
jpeg                      9b                       hb83a4c4_2
keras                     2.2.4                    pypi_0    pypi
keras-applications        1.0.8                    pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.2.0                    py38h74a9793_0
libpng                    1.6.37                   h2a8f88b_0
libtiff                   4.1.0                    h56a325e_1
lz4-c                     1.9.2                    h62dcd97_1
markdown                  3.2.2                    pypi_0    pypi
matplotlib                3.2.2                    0
matplotlib-base           3.2.2                    py38h64f37c6_0
mkl                       2020.1                   216
mkl-service               2.3.0                    py38hb782905_0
mkl_fft                   1.1.0                    py38h45dec08_0
mkl_random                1.1.1                    py38h47e9c7a_0
numpy                     1.19.1                   py38h5510c5b_0
numpy-base                1.19.1                   py38ha3acd2a_0
nvidia-ml-py3             7.352.1                  pypi_0    pypi
oauthlib                  3.1.0                    pypi_0    pypi
olefile                   0.46                     py_0
opencv-python             4.4.0.40                 pypi_0    pypi
openssl                   1.1.1g                   he774522_1
opt-einsum                3.3.0                    pypi_0    pypi
pathlib                   1.0.1                    py_1
pillow                    7.2.0                    py38hcc1f983_0
pip                       20.2.2                   py38_0
plaidml                   0.7.0                    pypi_0    pypi
plaidml-keras             0.7.0                    pypi_0    pypi
protobuf                  3.13.0                   pypi_0    pypi
psutil                    5.7.0                    py38he774522_0
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pycparser                 2.20                     pypi_0    pypi
pyparsing                 2.4.7                    py_0
pyqt                      5.9.2                    py38ha925a31_4
python                    3.8.5                    he1778fa_0
python-dateutil           2.8.1                    py_0
python_abi                3.8                      1_cp38    conda-forge
pywin32                   227                      py38he774522_1
pyyaml                    5.3.1                    pypi_0    pypi
qt                        5.9.7                    vc14h73c81de_0
requests                  2.24.0                   pypi_0    pypi
requests-oauthlib         1.3.0                    pypi_0    pypi
rsa                       4.6                      pypi_0    pypi
scikit-learn              0.23.1                   py38h25d0782_0
scipy                     1.4.1                    pypi_0    pypi
setuptools                49.6.0                   py38_0
sip                       4.19.13                  py38ha925a31_0
six                       1.15.0                   py_0
sqlite                    3.32.3                   h2a8f88b_0
tensorboard               2.2.2                    pypi_0    pypi
tensorboard-plugin-wit    1.7.0                    pypi_0    pypi
tensorflow                2.2.0                    pypi_0    pypi
tensorflow-estimator      2.2.0                    pypi_0    pypi
termcolor                 1.1.0                    pypi_0    pypi
threadpoolctl             2.1.0                    pyh5ca1d4c_0
tk                        8.6.10                   he774522_0
tornado                   6.0.4                    py38he774522_1
tqdm                      4.48.2                   py_0
urllib3                   1.25.10                  pypi_0    pypi
vc                        14.1                     h0510ff6_4
vs2015_runtime            14.16.27012              hf0eaf9b_3
werkzeug                  1.0.1                    pypi_0    pypi
wheel                     0.34.2                   py38_0
wincertstore              0.2                      py38_0
wrapt                     1.12.1                   pypi_0    pypi
xz                        5.2.5                    h62dcd97_0
zlib                      1.2.11                   h62dcd97_4
zstd                      1.4.5                    h04227a9_0

================= Configs ==================
--------- .faceswap ---------
backend: amd

--------- convert.ini ---------

[color.color_transfer]
clip: True
preserve_paper: True

[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0

[color.match_hist]
threshold: 99.0

[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1

[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0

[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0

[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False

[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False

[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3

[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate

--------- extract.ini ---------

[global]
allow_growth: False

[align.fan]
batch-size: 12

[detect.cv2_dnn]
confidence: 50

[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8

[detect.s3fd]
confidence: 70
batch-size: 4

[mask.unet_dfl]
batch-size: 8

[mask.vgg_clear]
batch-size: 6

[mask.vgg_obstructed]
batch-size: 2

--------- gui.ini ---------

[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True

--------- train.ini ---------

[global]
coverage: 68.75
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
icnr_init: True
conv_aware_init: True
reflect_padding: False
allow_growth: False
penalized_mask_loss: False
loss_function: mae
learning_rate: 5e-05

[model.dfl_h128]
lowmem: False

[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False

[model.dlight]
features: best
details: good
output_size: 256

[model.original]
lowmem: False

[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512

[model.unbalanced]
input_size: 320
lowmem: True
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512

[model.villain]
lowmem: False

[trainer.original]
preview_images: 2
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
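
For reference, the line it dies on is processed["samples"] = batch[..., :3].astype("float32") / 255.0, and as far as I can tell that message is what NumPy raises when it is asked to build one array out of items with different shapes, so it ends up with a 1-D object array instead of a 4-D batch. A minimal sketch that reproduces the same error (my guess at the mechanism, not the actual faceswap code path):

Code:

import numpy as np

# Two "images" of different sizes, e.g. if one face in a folder were a
# different resolution or failed to decode cleanly.
img_a = np.zeros((512, 512, 4), dtype="uint8")
img_b = np.zeros((256, 256, 4), dtype="uint8")

# With mismatched shapes NumPy can only build a 1-D object array, not a 4-D batch.
batch = np.array([img_a, img_b], dtype=object)

# The slice-and-cast then fails because each element is a whole array rather
# than a number (the exact exception chain varies by NumPy version):
# "ValueError: setting an array element with a sequence."
samples = batch[..., :3].astype("float32") / 255.0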
torzdf
Posts: 2649
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 128 times
Been thanked: 622 times

Re: "ValueError: setting an array element with a sequence." Before Starting Training

Post by torzdf »

Can you update to the latest code and see if the problem persists?

My word is final

Flimsy Fox
Posts: 5
Joined: Fri Aug 28, 2020 1:04 am
Been thanked: 1 time

Re: "ValueError: setting an array element with a sequence." Before Starting Training

Post by Flimsy Fox »

torzdf wrote: Fri Aug 28, 2020 8:34 am

Can you update to the latest code and see if the problem persists?

I updated, and it worked. Thank you!
