Convert + Alignments - Common Problems MegaThread

Got questions or tips about the Conversion process? This is the place to discuss them.


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for discussing tips and understanding the process involved with Converting faces from your trained model.

If you are having issues with the Convert process not working as you would expect, then you should post in the Convert Support forum.

Please mark any answers that fixed your problems so others can find the solutions.

augustfr
Posts: 1
Joined: Mon Jun 01, 2020 9:01 pm

ERROR No alignments file found.

Post by augustfr »

Hey!

I'm trying to do this entirely from the Linux terminal, so keep in mind that I don't have access to the GUI. I've run about 20k iterations, but when I try the convert command I always get: ERROR No alignments file found. Yet when I go and check the folders where my source videos are, one of them has a data_dst_alignments.fsa file and the other has a data_src_alignments.fsa file.

Do I need to do something different?


torzdf
Posts: 1438
Joined: Fri Jul 12, 2019 12:53 am
Answers: 125
Has thanked: 47 times
Been thanked: 278 times

Re: ERROR No alignments file found.

Post by torzdf »

The video that you are trying to convert for needs an alignments file generated so that it knows where the faces are.
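
Since you're working from the terminal, here is a rough sketch of the two steps involved. The paths are placeholders, and the flag names should be verified against `python faceswap.py extract -h` and `python faceswap.py convert -h` for your version:

```shell
# 1. Generate an alignments file for the video you are converting.
#    It is written alongside the input video as <video_name>_alignments.fsa
python faceswap.py extract -i /path/to/dst_video.mp4 -o /path/to/dst_faces

# 2. Point convert at the same video and the alignments file from step 1
python faceswap.py convert -i /path/to/dst_video.mp4 \
    -o /path/to/output \
    -al /path/to/dst_video_alignments.fsa \
    -m /path/to/model_dir
```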

My word is final


Mr_Dalek
Posts: 1
Joined: Tue Jun 09, 2020 12:11 am
Has thanked: 1 time

Where is my alignment folder?

Post by Mr_Dalek »

I have looked for the alignments file for the video I am trying to convert (the .fsa), but can only come up with "Original State" .fsa files.
What should I do/look for instead?


torzdf

Re: Where is my alignment folder?

Post by torzdf »

The .fsa file is generated at the extract stage and will be placed with the video that you extracted from.

My word is final


CarlosGrether
Posts: 14
Joined: Wed May 20, 2020 11:28 pm

almost got it

Post by CarlosGrether »

OK, the program worked fine once, but I'm not sure how I did it, because I can't make it work again... so let me see if I understand the "convert" part.

I have 2 videos: video "X" is the video I want to extract faces from, and video "Y" is the video where I want to replace faces. So, after doing the training: in Input Dir I put video "Y", in Output Dir I put the directory where I want to place the result, and in Alignments I put the alignments file that resulted from the training. In Reference Video do I put nothing, or video "Y" again? And in Model Dir I put the model directory with the most iterations, right?

Then in Color Adjustment I leave it on Avg Color.

What do I put in Mask Type? I think I picked something in the past, because it's on Extended, but I did nothing with masks before.

In Scaling do I put Sharpen?

In Writer do I put Opencv?

  • Output Scale is at 100
  • Frame Ranges has nothing
  • Keep Unchanged is not marked
  • Filter Processing has nothing
  • Ref Threshold is at 0.4
  • Settings: Jobs is at 0, GPUs is at 1
  • Trainer has nothing
  • Allow Growth, On-The-Fly, Swap Model and Singleprocess are not marked
  • Global Options: Config File has nothing, Log Level is INFO, Logfile has nothing

Please tell me what is wrong with it.


CarlosGrether

Re: almost got it

Post by CarlosGrether »

By the way, I created a video with many images of myself modified with FaceApp; that's how I created input B.


torzdf

Re: almost got it

Post by torzdf »

Hopefully this should get you to where you want to be:

viewtopic.php?p=1989#p1989

If you want the final output to be a video, use the ffmpeg writer.

Sharpen/mask settings should be set after using the preview tool to see/modify their effects.

My word is final


CarlosGrether

Re: almost got it

Post by CarlosGrether »

OK, but in Masker I have only 3 options: Unet-DFL, VGG Clear, and VGG Obstructed.


torzdf

Re: almost got it

Post by torzdf »

For extract?

The Components and Extended masks are automatically generated.

My word is final


fixer2sonic
Posts: 4
Joined: Tue Jun 16, 2020 11:25 pm
Location: Seattle, Wa
Has thanked: 2 times
Been thanked: 1 time

How to Convert

Post by fixer2sonic »

Hi! I'm a noob and feel lucky I have made it this far but hopefully someone can get me over the hump to the finish line. I have finished training my model to about 14000 iterations. Now I want to swap the faces in the final video by doing a convert. Please tell me what I've done wrong here and how to fix it if you'd all be so kind.

  • input dir: to the original video I wish to swap my face onto.
  • output dir: to a new folder I created for the final finished video to output to
  • alignments: (I can't figure out if I point this to the alignments file created in my source video dir. after I cleaned it? Because I didn't know which one to use, I went ahead and tried that file.)
  • reference video: I have seen posts saying not to put anything here and posts saying to put something here, so I'm not sure what to put
  • model dir: I used Villain and pointed it to the "villain logs" dir. Is this right?

I then left most everything else alone except I changed mask type to "none" as I didn't use a mask.
And I changed the writer to Ffmpeg.

When I hit Convert, it reads the alignments file I specified above, loads the ffmpeg plugin and then says "process exited", with a status at the bottom of the window that says: "Failed - convert.py. Return Code: 1"

Also, I just want to make sure I'm understanding something else correctly. I read that before I convert, I need to clean my alignments file. I already cleaned it in the previous step for extraction so am I supposed to do this again after training or is the one cleaned one I created before I trained the one I should use.

I have read all the FAQs on this site and pored through the "convert" part of the forum, but I can't find the answers I need to get the convert process to start. Any help is greatly appreciated.

Thanks!


bryanlyon
Site Admin
Posts: 622
Joined: Fri Jul 12, 2019 12:49 am
Answers: 40
Location: San Francisco
Has thanked: 3 times
Been thanked: 159 times
Contact:

Re: How to Convert

Post by bryanlyon »

You need to use the alignments file that corresponds to the video you're converting. If you followed our recommendations during extract for training, you probably used Extract Every N (EEN) to skip some frames; for convert you can't skip frames, so you likely need to create a new alignments file with all the faces in it. If you did extract every face for training, then you already have the full alignments file and should use that one.
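
The key for a convert-ready alignments file is extracting with Extract Every N set to 1. A sketch of that command follows; the flag name is an assumption, so verify it with `python faceswap.py extract -h` on your version:

```shell
# Re-extract WITHOUT skipping any frames, so every frame of the video
# ends up in the alignments file that convert will use
python faceswap.py extract -i /path/to/video_A.mp4 \
    -o /path/to/video_A_faces \
    --extract-every-n 1
```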


fixer2sonic

Re: How to Convert

Post by fixer2sonic »

Thanks, bryanlyon, I appreciate the help! And I did follow your recommendations during extract for training and skipped some faces.

So, just so I understand what you're saying: once training both faces is done, I need to go back and do a new extract on both videos (or just on the video I wish to put a new face on?), and this time I extract ALL faces. After the faces are extracted, do I then go back through and remove all the unwanted faces (blurry, hand in front of face, other faces in the scene I don't want, etc.) like I did the first time, and then do a cleanup of the alignments file?

Sorry if that's a stupid question, I'm a noob and I am doing my best to understand why I'm doing what I'm doing in addition to just following a guide and doing it.

Thanks. :)

Time is the fire in which we burn.................

bryanlyon

Re: How to Convert

Post by bryanlyon »

Yes, this is covered in the Extract guide, but I know it can be confusing.

We generally recommend just doing one extract, but thinking ahead if it's going to be used for convert or just for training.

From the guide:

If you are extracting for convert, or you are extracting for convert AND will be using some of the faces for training, then leave this on 1 (i.e. extract from every frame)

Further, you can read about cleaning your alignments for convert here:
viewtopic.php?f=5&t=27#manual_conv

However unlike what you said, instead of removing blurry images, you want to keep them since you are converting here and want every frame to be covered.


fixer2sonic

Re: How to Convert

Post by fixer2sonic »

Right on, thanks. Yeah, your extract guide is what I have been using, thank you.

To clarify, I have a video of a celebrity (Video A) that I want to put my face on. The video contains several other celebrities in it, too, that I don't want to put a face on. I only want to put my face on the one specific celebrity in that video.

I also have a video of myself (Video B) talking for about 20 mins. turning my head this way and that, different expressions, etc. that I used for training to be able to put on that celebrity in the other video.

So, if I'm understanding you correctly, I will run the extraction on both video A and video B and clean both alignment files and leave ALL faces extracted for both videos, right or when you said; "We generally recommend just doing one extract" you meant I only extract on Video A (destination video I want to put my face onto)? Then after the alignment file is cleaned, then I can do my convert?

Time is the fire in which we burn.................

torzdf

Re: How to Convert

Post by torzdf »

You just need to generate an alignments file for video A. You WILL need to clean this file.

This file tells the conversion process where, in the frame, the faces are to be swapped. If you don't clean the alignments file, then all faces in the video will be swapped.

Video B (i.e. the face you are swapping ONTO A) has no bearing in the convert process. All of this information is held within the model.
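
If you're cleaning via the command line, one common workflow is: delete the unwanted face images from the faces folder, then prune the alignments file to match using the alignments tool. Job and flag names may differ by version, so check `python tools.py alignments -h`:

```shell
# Remove alignments entries whose face images you deleted from the faces
# folder, so only the remaining (target) faces get swapped at convert time
python tools.py alignments -j remove-faces \
    -a /path/to/video_A_alignments.fsa \
    -fc /path/to/video_A_faces
```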

My word is final


fixer2sonic

Re: How to Convert

Post by fixer2sonic »

Thanks a bunch, man, I really appreciate the help! :)

Time is the fire in which we burn.................

Antipope1
Posts: 35
Joined: Thu Jul 02, 2020 10:51 am
Has thanked: 6 times
Been thanked: 3 times

Another missing alignments post...

Post by Antipope1 »

I know this has been a problem for a lot of people so I have read through a lot of the previous posts to try and fix my issue.

I'm getting a list of "alignment missing for Xxxx.png" errors and cannot convert.

I have tried using all the alignment files I have for my video A to no avail.

I fear I may have deleted my original cleaned alignments file for video A, so I have tried to re-extract from my original video A using the original extract process, and then using the alignments file generated, but I still get the same error.
My video A is barely 30 seconds long, as this is only for testing to make sure I can use the program properly, so it only has 260 frames. This is what came out of my newly generated extract, so I'd guess the alignments file is okay; the preview from my training is looking really good!

What Am I missing?
I'm not sure how else to create the alignments needed!

Edit: I have also tried running the missing alignments tool, amongst any others that seemed like they could be relevant.

The beginning of the end.

torzdf

Re: Another missing alignments post...

Post by torzdf »

Most likely frames missing from the alignments file. Are you sure you haven't used an alignments file where "Extract Every N" was set to a number higher than 1?

My word is final


bryanlyon

Re: Another missing alignments post...

Post by bryanlyon »

It's also important to note that if you re-extract, the generated .fsa will almost definitely not work with files that you had previously extracted. That's just not how the tool works. You need to use the files that were extracted all together.


Antipope1

Re: Another missing alignments post...

Post by Antipope1 »

So I tried again this evening, and I've had different issues...

Initially I could not extract from the original video again; I only managed to get it to do so with Single Process enabled.

Once I had managed to extract, I changed to the new alignments file in Convert. It seemed to recognize the alignments this time, however it keeps getting 2% in and crashing.

Code:

07/10/2020 00:06:26 MainProcess     _load_0         multithreading  start                     DEBUG    Starting thread 1 of 1: 'ImagesLoader_0'
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads '_load': 1
07/10/2020 00:06:26 MainProcess     MainThread      image           __init__                  DEBUG    Initializing ImagesSaver: (path: I:\Faceswap Resources\cl1\FaceSwap\InputA\Video, queue_size: 8, as_bytes: True)
07/10/2020 00:06:26 MainProcess     MainThread      image           __init__                  DEBUG    Initializing ImagesSaver: (path: I:\Faceswap Resources\cl1\FaceSwap\InputA\Video, queue_size: 8, args: None)
07/10/2020 00:06:26 MainProcess     ImagesLoader_0  image           _process                  DEBUG    Load iterator: <bound method ImagesLoader._from_video of <lib.image.ImagesLoader object at 0x000002141A88C508>>
07/10/2020 00:06:26 MainProcess     ImagesLoader_0  image           _from_video               DEBUG    Loading frames from video: 'I:\Faceswap Resources\v1\clchkshrt.mp4'
07/10/2020 00:06:26 MainProcess     _load_0         multithreading  start                     DEBUG    Started all threads 'ImagesLoader': 1
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'ImagesSaver'
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'ImagesSaver', maxsize: 8)
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'ImagesSaver')
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'ImagesSaver'
07/10/2020 00:06:26 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    Launching detect plugin
07/10/2020 00:06:26 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    in_qname: extract0_detect_in, out_qname: extract0_align_in
07/10/2020 00:06:26 MainProcess     MainThread      _base           initialize                DEBUG    initialize Detect: (args: (), kwargs: {'in_queue': <queue.Queue object at 0x000002141AB5A488>, 'out_queue': <queue.Queue object at 0x000002141A8B7A08>})
07/10/2020 00:06:26 MainProcess     MainThread      _base           initialize                INFO     Initializing MTCNN (Detect)...
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'detect0_predict_mtcnn'
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'detect0_predict_mtcnn', maxsize: 1)
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'detect0_predict_mtcnn')
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'detect0_predict_mtcnn'
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'detect0_post_mtcnn'
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'detect0_post_mtcnn', maxsize: 1)
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'detect0_post_mtcnn')
07/10/2020 00:06:26 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'detect0_post_mtcnn'
07/10/2020 00:06:26 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiling detect threads
07/10/2020 00:06:26 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: detect_mtcnn_input, function: <bound method Detect.process_input of <plugins.extract.detect.mtcnn.Detect object at 0x000002141AB5A088>>, in_queue: <queue.Queue object at 0x000002141AB5A488>, out_queue: <queue.Queue object at 0x000002141ADA6448>)
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'detect_mtcnn_input', thread_count: 1)
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'detect_mtcnn_input'
07/10/2020 00:06:26 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: detect_mtcnn_input
07/10/2020 00:06:26 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: detect_mtcnn_predict, function: <bound method Detector._predict of <plugins.extract.detect.mtcnn.Detect object at 0x000002141AB5A088>>, in_queue: <queue.Queue object at 0x000002141ADA6448>, out_queue: <queue.Queue object at 0x000002141ADA6E08>)
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'detect_mtcnn_predict', thread_count: 1)
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'detect_mtcnn_predict'
07/10/2020 00:06:26 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: detect_mtcnn_predict
07/10/2020 00:06:26 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: detect_mtcnn_output, function: <bound method Detect.process_output of <plugins.extract.detect.mtcnn.Detect object at 0x000002141AB5A088>>, in_queue: <queue.Queue object at 0x000002141ADA6E08>, out_queue: <queue.Queue object at 0x000002141A8B7A08>)
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'detect_mtcnn_output', thread_count: 1)
07/10/2020 00:06:26 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'detect_mtcnn_output'
07/10/2020 00:06:26 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: detect_mtcnn_output
07/10/2020 00:06:26 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiled detect threads: [<lib.multithreading.MultiThread object at 0x000002141ADA6948>, <lib.multithreading.MultiThread object at 0x000002141ADA6B88>, <lib.multithreading.MultiThread object at 0x000002141ADA29C8>]
07/10/2020 00:06:26 MainProcess     MainThread      mtcnn           __init__                  DEBUG    Initializing: MTCNN: (model_path: '['C:\\Users\\Matt\\faceswap\\plugins\\extract\\detect\\.cache\\mtcnn_det_v2.1.h5', 'C:\\Users\\Matt\\faceswap\\plugins\\extract\\detect\\.cache\\mtcnn_det_v2.2.h5', 'C:\\Users\\Matt\\faceswap\\plugins\\extract\\detect\\.cache\\mtcnn_det_v2.3.h5']', allow_growth: True, minsize: 20, threshold: [0.6, 0.7, 0.7], factor: 0.709)
07/10/2020 00:06:26 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\faceswap\lib\model\session.py:112: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n
07/10/2020 00:06:26 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\faceswap\lib\model\session.py:116: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      session         _set_session              DEBUG    Created tf.session: (graph: <tensorflow.python.framework.ops.Graph object at 0x000002141AD79688>, session: <tensorflow.python.client.session.Session object at 0x000002141ADABB88>, config: gpu_options {\n  allow_growth: true\n}\n)
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      session         load_model_weights        VERBOSE  Initializing plugin model: MTCNN-PNet
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n
07/10/2020 00:06:29 MainProcess     MainThread      session         _set_session              DEBUG    Created tf.session: (graph: <tensorflow.python.framework.ops.Graph object at 0x000002141AF35A08>, session: <tensorflow.python.client.session.Session object at 0x000002141AF3DB48>, config: gpu_options {\n  allow_growth: true\n}\n)
07/10/2020 00:06:29 MainProcess     MainThread      session         load_model_weights        VERBOSE  Initializing plugin model: MTCNN-RNet
07/10/2020 00:06:29 MainProcess     MainThread      session         _set_session              DEBUG    Created tf.session: (graph: <tensorflow.python.framework.ops.Graph object at 0x0000021422E7CC08>, session: <tensorflow.python.client.session.Session object at 0x0000021422E91848>, config: gpu_options {\n  allow_growth: true\n}\n)
07/10/2020 00:06:30 MainProcess     MainThread      session         load_model_weights        VERBOSE  Initializing plugin model: MTCNN-ONet
07/10/2020 00:06:30 MainProcess     MainThread      mtcnn           __init__                  DEBUG    Initialized: MTCNN
07/10/2020 00:06:30 MainProcess     MainThread      _base           initialize                INFO     Initialized MTCNN (Detect) with batchsize of 8
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'detect_mtcnn_input'
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'detect_mtcnn_input_0'
07/10/2020 00:06:30 MainProcess     detect_mtcnn_input_0 _base           _thread_process           DEBUG    threading: (function: 'process_input')
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'detect_mtcnn_input': 1
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'detect_mtcnn_predict'
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'detect_mtcnn_predict_0'
07/10/2020 00:06:30 MainProcess     detect_mtcnn_predict_0 _base           _thread_process           DEBUG    threading: (function: '_predict')
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'detect_mtcnn_predict': 1
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread(s): 'detect_mtcnn_output'
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Starting thread 1 of 1: 'detect_mtcnn_output_0'
07/10/2020 00:06:30 MainProcess     detect_mtcnn_output_0 _base           _thread_process           DEBUG    threading: (function: 'process_output')
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  start                     DEBUG    Started all threads 'detect_mtcnn_output': 1
07/10/2020 00:06:30 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    Launched detect plugin
07/10/2020 00:06:30 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    Launching align plugin
07/10/2020 00:06:30 MainProcess     MainThread      pipeline        _launch_plugin            DEBUG    in_qname: extract0_align_in, out_qname: extract0_mask_0_in
07/10/2020 00:06:30 MainProcess     MainThread      _base           initialize                DEBUG    initialize Align: (args: (), kwargs: {'in_queue': <queue.Queue object at 0x000002141A8B7A08>, 'out_queue': <queue.Queue object at 0x000002141ADAF748>})
07/10/2020 00:06:30 MainProcess     MainThread      _base           initialize                INFO     Initializing FAN (Align)...
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'align0_predict_fan'
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'align0_predict_fan', maxsize: 1)
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'align0_predict_fan')
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'align0_predict_fan'
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager getting: 'align0_post_fan'
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager adding: (name: 'align0_post_fan', maxsize: 1)
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   add_queue                 DEBUG    QueueManager added: (name: 'align0_post_fan')
07/10/2020 00:06:30 MainProcess     MainThread      queue_manager   get_queue                 DEBUG    QueueManager got: 'align0_post_fan'
07/10/2020 00:06:30 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiling align threads
07/10/2020 00:06:30 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: align_fan_input, function: <bound method Align.process_input of <plugins.extract.align.fan.Align object at 0x000002141AB5A248>>, in_queue: <queue.Queue object at 0x000002141A8B7A08>, out_queue: <queue.Queue object at 0x0000021424438E88>)
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'align_fan_input', thread_count: 1)
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'align_fan_input'
07/10/2020 00:06:30 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: align_fan_input
07/10/2020 00:06:30 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: align_fan_predict, function: <bound method Aligner._predict of <plugins.extract.align.fan.Align object at 0x000002141AB5A248>>, in_queue: <queue.Queue object at 0x0000021424438E88>, out_queue: <queue.Queue object at 0x0000021424438BC8>)
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'align_fan_predict', thread_count: 1)
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'align_fan_predict'
07/10/2020 00:06:30 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: align_fan_predict
07/10/2020 00:06:30 MainProcess     MainThread      _base           _add_thread               DEBUG    Adding thread: (name: align_fan_output, function: <bound method Align.process_output of <plugins.extract.align.fan.Align object at 0x000002141AB5A248>>, in_queue: <queue.Queue object at 0x0000021424438BC8>, out_queue: <queue.Queue object at 0x000002141ADAF748>)
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initializing MultiThread: (target: 'align_fan_output', thread_count: 1)
07/10/2020 00:06:30 MainProcess     MainThread      multithreading  __init__                  DEBUG    Initialized MultiThread: 'align_fan_output'
07/10/2020 00:06:30 MainProcess     MainThread      _base           _add_thread               DEBUG    Added thread: align_fan_output
07/10/2020 00:06:30 MainProcess     MainThread      _base           _compile_threads          DEBUG    Compiled align threads: [<lib.multithreading.MultiThread object at 0x0000021424489208>, <lib.multithreading.MultiThread object at 0x000002142448B488>, <lib.multithreading.MultiThread object at 0x000002142443F648>]
07/10/2020 00:06:30 MainProcess     MainThread      session         _set_session              DEBUG    Created tf.session: (graph: <tensorflow.python.framework.ops.Graph object at 0x00000214244948C8>, session: <tensorflow.python.client.session.Session object at 0x0000021424494CC8>, config: gpu_options {\n  allow_growth: true\n}\n)
07/10/2020 00:06:30 MainProcess     MainThread      session         load_model                VERBOSE  Initializing plugin model: FAN
07/10/2020 00:06:31 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:3980: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n
07/10/2020 00:06:32 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.\n
07/10/2020 00:06:48 MainProcess     MainThread      module_wrapper  _tfmw_add_deprecation_warning DEBUG    From C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n
Traceback (most recent call last):
  File "C:\Users\Matt\faceswap\lib\cli\launcher.py", line 155, in execute_script
    process.process()
  File "C:\Users\Matt\faceswap\scripts\extract.py", line 116, in process
    self._run_extraction()
  File "C:\Users\Matt\faceswap\scripts\extract.py", line 202, in _run_extraction
    self._extractor.launch()
  File "C:\Users\Matt\faceswap\plugins\extract\pipeline.py", line 203, in launch
    self._launch_plugin(phase)
  File "C:\Users\Matt\faceswap\plugins\extract\pipeline.py", line 553, in _launch_plugin
    plugin.initialize(**kwargs)
  File "C:\Users\Matt\faceswap\plugins\extract\_base.py", line 341, in initialize
    self.init_model()
  File "C:\Users\Matt\faceswap\plugins\extract\align\fan.py", line 41, in init_model
    self.model.predict(placeholder)
  File "C:\Users\Matt\faceswap\lib\model\session.py", line 68, in predict
    return self._model.predict(feed, batch_size=batch_size)
  File "C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1169, in predict
    steps=steps)
  File "C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\training_arrays.py", line 294, in predict_loop
    batch_outs = f(ins_batch)
  File "C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "C:\Users\Matt\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\client\session.py", line 1472, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.InternalError: Could not allocate ndarray

============ System Information ============
encoding:            cp1252
git_branch:          master
git_commits:         03d2d17 scripts.extract - Save jpg thumbnails to alignments file. ab21033 argparse bugfix. 5f9d8fa lib.gui.custom_widgets - Force popup progress bar to top. f634f52 lib.gui.utils - Spelling fixes lib.gui.custom_widgets - popup progressbar. 59ade74 lib.alignments - Pad pts timestamps when video does not start on a keyframe
gpu_cuda:            No global version found. Check Conda packages for Conda Cuda
gpu_cudnn:           No global version found. Check Conda packages for Conda cuDNN
gpu_devices:         GPU_0: GeForce RTX 2060 SUPER
gpu_devices_active:  GPU_0
gpu_driver:          451.48
gpu_vram:            GPU_0: 8192MB
os_machine:          AMD64
os_platform:         Windows-10-10.0.18362-SP0
os_release:          10
py_command:          C:\Users\Matt\faceswap\faceswap.py extract -i I:/Faceswap Resources/v1/clchkshrt.mp4 -o I:/Faceswap Resources/cl1/FaceSwap/InputA/Video -D mtcnn -A fan -nm hist -min 0 -l 0.4 -een 1 -sz 256 -si 0 -L INFO -gui
py_conda_version:    conda 4.8.3
py_implementation:   CPython
py_version:          3.7.7
py_virtual_env:      True
sys_cores:           4
sys_processor:       AMD64 Family 23 Model 17 Stepping 0, AuthenticAMD
sys_ram:             Total: 8139MB, Available: 3038MB, Used: 5100MB, Free: 3038MB

=============== Pip Packages ===============
absl-py==0.9.0
astor==0.8.0
blinker==1.4
brotlipy==0.7.0
cachetools==4.1.0
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cloudpickle==1.4.1
cryptography==2.9.2
cycler==0.10.0
cytoolz==0.10.1
dask @ file:///tmp/build/80754af9/dask-core_1592842333140/work
decorator==4.4.2
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.2.2
google-auth==1.14.1
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.27.2
h5py==2.10.0
idna @ file:///tmp/build/80754af9/idna_1593446292537/work
imageio==2.8.0
imageio-ffmpeg==0.4.2
joblib==0.15.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.2
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib @ file:///C:/ci/matplotlib-base_1592846084747/work
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
networkx==2.4
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.2.0.34
opt-einsum==3.1.0
pathlib==1.0.1
Pillow==7.1.2
protobuf==3.12.3
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser==2.20
PyJWT==1.7.1
pyOpenSSL==19.1.0
pyparsing==2.4.7
pyreadline==2.1
PySocks==1.7.1
python-dateutil==2.8.1
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3.1
requests @ file:///tmp/build/80754af9/requests_1592841827918/work
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn @ file:///C:/ci/scikit-learn_1592847564598/work
scipy @ file:///C:/ci/scipy_1592916958183/work
six==1.15.0
tensorboard==2.2.1
tensorboard-plugin-wit==1.6.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
toolz==0.10.0
toposort==1.5
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1593446365756/work
urllib3==1.25.9
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1

============== Conda Packages ==============
Could not get package list

================= Configs ==================
--------- .faceswap ---------
backend:                  nvidia

--------- convert.ini ---------

[color.color_transfer]
clip:                     True
preserve_paper:           True

[color.manual_balance]
colorspace:               HSV
balance_1:                0.0
balance_2:                0.0
balance_3:                0.0
contrast:                 0.0
brightness:               0.0

[color.match_hist]
threshold:                99.0

[mask.box_blend]
type:                     gaussian
distance:                 11.0
radius:                   5.0
passes:                   1

[mask.mask_blend]
type:                     normalized
kernel_size:              3
passes:                   4
threshold:                4
erosion:                  0.0

[scaling.sharpen]
method:                   unsharp_mask
amount:                   150
radius:                   0.3
threshold:                5.0

[writer.ffmpeg]
container:                mp4
codec:                    libx264
crf:                      23
preset:                   medium
tune:                     none
profile:                  auto
level:                    auto

[writer.gif]
fps:                      25
loop:                     0
palettesize:              256
subrectangles:            False

[writer.opencv]
format:                   png
draw_transparent:         False
jpg_quality:              75
png_compress_level:       3

[writer.pillow]
format:                   png
draw_transparent:         False
optimize:                 False
gif_interlace:            True
jpg_quality:              75
png_compress_level:       3
tif_compression:          tiff_deflate

--------- extract.ini ---------

[global]
allow_growth:             True

[align.fan]
batch-size:               12

[detect.cv2_dnn]
confidence:               50

[detect.mtcnn]
minsize:                  20
threshold_1:              0.6
threshold_2:              0.7
threshold_3:              0.7
scalefactor:              0.709
batch-size:               8

[detect.s3fd]
confidence:               70
batch-size:               4

[mask.unet_dfl]
batch-size:               8

[mask.vgg_clear]
batch-size:               6

[mask.vgg_obstructed]
batch-size:               2

--------- gui.ini ---------

[global]
fullscreen:               True
tab:                      extract
options_panel_width:      30
console_panel_height:     20
icon_size:                14
font:                     default
font_size:                9
autosave_last_session:    prompt
timeout:                  120
auto_load_model_stats:    True

--------- train.ini ---------

[global]
coverage:                 68.75
mask_type:                none
mask_blur_kernel:         3
mask_threshold:           4
learn_mask:               False
icnr_init:                False
conv_aware_init:          False
reflect_padding:          False
penalized_mask_loss:      True
loss_function:            mae
learning_rate:            5e-05

[model.dfl_h128]
lowmem:                   False

[model.dfl_sae]
input_size:               128
clipnorm:                 True
architecture:             df
autoencoder_dims:         0
encoder_dims:             42
decoder_dims:             21
multiscale_decoder:       False

[model.dlight]
features:                 best
details:                  good
output_size:              256

[model.original]
lowmem:                   False

[model.realface]
input_size:               64
output_size:              128
dense_nodes:              1536
complexity_encoder:       128
complexity_decoder:       512

[model.unbalanced]
input_size:               128
lowmem:                   False
clipnorm:                 True
nodes:                    1024
complexity_encoder:       128
complexity_decoder_a:     384
complexity_decoder_b:     512

[model.villain]
lowmem:                   False

[trainer.original]
preview_images:           10
zoom_amount:              5
rotation_range:           10
shift_range:              5
flip_chance:              50
color_lightness:          30
color_ab:                 8
color_clahe_chance:       50
color_clahe_max_size:     4

I have no other programs running at all; this is from a restarted PC with all startup programs closed before launching Faceswap.

I've also tried various things this evening to get it to work, so I hope I have attached the right reports. :?

The beginning of the end.
