Models in different GPU/CPU

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Hizaki
Posts: 2
Joined: Sat Oct 28, 2023 1:45 pm

Models in different GPU/CPU

Post by Hizaki »

Hi everyone,

I have an AMD GPU which I used when I started with faceswap. One day, after an update, the AMD backend stopped working in faceswap, so I installed the CPU version, which is slower but does the job with the Original trainer.

My question is: if I change my GPU to an Nvidia one, will my previous models continue to work?

I wouldn't like to buy the GPU and then have to start all over.

Sorry for my bad English, it's not my native language.

Thanks in advance.

torzdf
Posts: 2687
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 135 times
Been thanked: 628 times

Re: Models in different GPU/CPU

Post by torzdf »

So, assuming you installed Faceswap using the old (deprecated) AMD PlaidML backend, the models created will not be compatible with either the Nvidia or the CPU version. Unfortunately, there is no way around that. The Nvidia/CPU versions use Tensorflow as the backend; the AMD version used PlaidML. They are not compatible with each other.

We replaced the PlaidML backend with DirectML in the last few months. This version works with AMD GPUs and is compatible with Nvidia/CPU, as the DirectML backend uses Tensorflow as well.

Unfortunately DirectML is also not compatible with the old PlaidML backend.

So, in short: if you were using PlaidML in the past, then any of your old models will not be compatible with newer Faceswap. If you were using DirectML or CPU, then they will be compatible.

If you need to use an older version of Faceswap that still supports PlaidML then you can download it here: https://github.com/deepfakes/faceswap/r ... tag/v2.2.0

If you wish to start any new models with your AMD card, then I highly recommend that you use the latest version of Faceswap, and select DirectML as the version to install.

My word is final

Hizaki
Posts: 2
Joined: Sat Oct 28, 2023 1:45 pm

Re: Models in different GPU/CPU

Post by Hizaki »

Hi, thanks a lot for the reply.

By the way, I ended up buying an Nvidia card (a GTX 1650); it was the best option that wouldn't bottleneck my processor.

I decided to try a new model, so I started training with the DFL-H128 model. It ran for 40k+ iterations but then crashed. Even so, the converted file was way better than a previous Original model with 1M iterations.

Ever since then I cannot resume training: status failed, train.py return code 1.

I've tried the suggestions, but none of them worked, even with other models.

11/08/2023 17:04:08 CRITICAL Error caught! Exiting...
11/08/2023 17:04:08 ERROR Caught exception in thread: '_training'
11/08/2023 17:04:08 ERROR You do not have enough GPU memory available to train the selected model at the selected settings. You can try a number of things:
11/08/2023 17:04:08 ERROR 1) Close any other application that is using your GPU (web browsers are particularly bad for this).
11/08/2023 17:04:08 ERROR 2) Lower the batchsize (the amount of images fed into the model each iteration).
11/08/2023 17:04:08 ERROR 3) Try enabling 'Mixed Precision' training.
11/08/2023 17:04:08 ERROR 4) Use a more lightweight model, or select the model's 'LowMem' option (in config) if it has one.

Got any idea why this is happening?

Thanks in advance.

torzdf
Posts: 2687
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 135 times
Been thanked: 628 times

Re: Models in different GPU/CPU

Post by torzdf »

This is an "Out of Memory" error. If it worked before and doesn't now, then most likely something is taking precious VRAM away from your GPU (Google Chrome can be bad for this, also games etc.)

You should try following the steps listed in the error message output.
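To see why lowering the batch size (tip 2 in the error message) helps, here is a rough back-of-the-envelope sketch. The per-image and overhead figures below are illustrative assumptions for the sake of the arithmetic, not measured faceswap numbers:

```python
# Activation memory scales roughly linearly with batch size, on top of a
# fixed cost for the model weights and optimizer state. The numbers here
# are made up purely to illustrate the trade-off.

def activation_memory_mb(batch_size, per_image_mb=55.0, fixed_overhead_mb=900.0):
    """Approximate VRAM needed: fixed model/optimizer state plus a
    per-image activation cost that grows with the batch size."""
    return fixed_overhead_mb + batch_size * per_image_mb

def largest_batch(budget_mb, per_image_mb=55.0, fixed_overhead_mb=900.0):
    """Largest batch size that fits within the VRAM budget under this
    rough linear model."""
    return int((budget_mb - fixed_overhead_mb) // per_image_mb)

# A GTX 1650 has 4 GB of VRAM; leave some headroom for the OS, the
# driver, and anything else (browsers!) that grabs VRAM.
budget_mb = 4096 - 600

print(largest_batch(budget_mb))  # -> 47 under these illustrative numbers
```

The point is that halving the batch size roughly halves the activation memory, which is why it is usually the first knob to turn when you hit an out-of-memory error mid-training.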

My word is final
