Mixed_float16 error and extremely slow training

If training is failing to start, and you are not receiving an error message telling you what to do, tell us about it here


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for reporting errors with the Training process. If you want to get tips, or better understand the Training process, then you should look in the Training Discussion forum.

Please mark any answers that fixed your problems so others can find the solutions.

Locked
unkempt
Posts: 81
Joined: Wed Dec 28, 2022 2:09 pm
Has thanked: 1 time
Been thanked: 9 times

Mixed_float16 error and extremely slow training

Post by unkempt »

I just downloaded and installed the newest version of Faceswap. When I tried training on a new file (which is 1080p; I usually go smaller than that, but I've been having blur issues and thought a larger source file would be better), I got this error:

08/18/2023 07:16:40 WARNING Mixed precision compatibility check (mixed_float16): WARNING
The dtype policy mixed_float16 may run slowly because this machine does not have a GPU. Only Nvidia GPUs with compute capability of at least 7.0 run quickly with mixed_float16.
If you will use compatible GPU(s) not attached to this host, e.g. by running a multi-worker model, you can ignore this warning. This message will only be logged once

I do have an Nvidia GPU though and specifically set Python to use it in Windows settings. However, Faceswap is tanking my CPU instead and my GPU is showing next to no use. Iterations are usually much faster, but are crawling now (less than 100 in the last 5 minutes). I lost all my settings when I reinstalled (since I can't seem to update properly), but I don't recall one related to this issue and a forum search hasn't revealed any answers. What do I do?
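The condition in the warning above can be sketched as a simple check. This is a minimal illustration, not Faceswap's actual code: the helper name is made up, and the threshold comes straight from the warning text (mixed_float16 is only fast on Nvidia GPUs with compute capability 7.0 or higher; if no GPU is visible at all, the model falls back to the CPU, which matches the slowdown described here).

```python
# Sketch of the mixed_float16 suitability check implied by the warning.
# compute_capability is a (major, minor) tuple as Nvidia reports it,
# or None when no GPU is visible to the framework.
def mixed_float16_is_fast(compute_capability):
    """Return True if the GPU's compute capability supports fast fp16."""
    return compute_capability is not None and compute_capability >= (7, 0)

# An RTX 3070 (desktop or laptop) is Ampere, compute capability 8.6:
print(mixed_float16_is_fast((8, 6)))  # True
# No GPU detected -- the situation that triggers the warning in this thread:
print(mixed_float16_is_fast(None))    # False
```

In other words, the warning fires not because the GPU is too old, but because the framework sees no GPU at all, which is why usage shows up on the CPU instead.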

torzdf
Posts: 2687
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 135 times
Been thanked: 628 times

Re: Mixed_float16 error and extremely slow training

Post by torzdf »

1) It's a warning, not an error.
2) You don't say which GPU you are using, which will affect whether you get optimal speeds with mixed precision or not.
3) The slow speed may not be related to 2), but until I know the answer to 2), I cannot tell you.

Last edited by torzdf on Fri Aug 18, 2023 10:15 pm, edited 1 time in total.

My word is final

unkempt
Posts: 81
Joined: Wed Dec 28, 2022 2:09 pm
Has thanked: 1 time
Been thanked: 9 times

Re: Mixed_float16 error and extremely slow training

Post by unkempt »

I'm using an Nvidia RTX 3070 M.

Regardless, I had a suspicion it had something to do with me forcing Python to run through the administrator account/permissions. I removed that, and it seems to have done the job. It's using my GPU again.

I had changed that setting to try to solve the "file in use" error, but I also had luck changing the preview interval to every 1000 iterations instead of 250. I'll keep trying.

unkempt
Posts: 81
Joined: Wed Dec 28, 2022 2:09 pm
Has thanked: 1 time
Been thanked: 9 times

Re: Mixed_float16 error and extremely slow training

Post by unkempt »

Yup, I had to change it back to admin to train, because otherwise it dies after a while with "file in use" errors. And when I do, it only uses the CPU; I have no idea why that makes a difference.

Eace1971
Posts: 1
Joined: Tue Sep 26, 2023 9:03 am

Re: Mixed_float16 error and extremely slow training

Post by Eace1971 »

unkempt wrote: Sat Aug 19, 2023 1:48 pm

Yup, I had to change it back to admin to train, because otherwise it dies after a while with "file in use" errors. And when I do, it only uses the CPU; I have no idea why that makes a difference.

And then, did you solve the problem?
