
Central storage

Posted: Sat Feb 04, 2023 11:34 pm
by VickySingh

I started using faceswap about a month ago. I tried every possible setting (according to forum discussions) in Phaze-A to get the best quality output with the maximum settings enabled on 8 GB of VRAM (3070 Ti).

Before enabling central storage I tried subpixel and upscale hybrid for the decoder and hidden layers - FAIL

EfficientNetV2, all @ 50% - FAIL

I had to use "dny256" (with a little modification) to get my training started at a batch size of 8.

So basically:

I think that with my current setup (8 GB VRAM, 16 GB RAM) and central storage enabled, being able to use subpixel, EfficientNetV2 B3 @ 100%, etc. at a batch size of 4 is a big deal.

After just 14 hours of training and only 23,000 iterations, the model looks phenomenal.

Details are awesome to say the least.

"I have attached my phaze - a, loss and global setting in this draft"

https://ibb.co/hgQt8n6

https://ibb.co/jLKTWVg

https://ibb.co/stqHX9p

https://ibb.co/30Cn8Qb

My only concern is that whenever I enable the G-Block my model gets corrupted, i.e. I start getting solid colors from the very start.

https://ibb.co/Dtzpb4d

Additionally, if anyone can suggest further improvements to my settings, suggestions are more than welcome.

CENTRAL STORAGE IS AWESOME.

Since DirectStorage is now a thing, could we see faceswap utilizing this feature?


Re: Central storage

Posted: Sun Feb 05, 2023 3:07 pm
by torzdf

I'm glad you have had success with the Central Storage strategy. I was concerned, when I implemented it, that it did not give the VRAM savings I had hoped for (I would have expected it to save more VRAM than the batch size increase of 1 or 2 that I achieved in testing).

But any little bit helps with VRAM savings, imho.
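For anyone curious what is happening under the hood: as far as I know, faceswap's Central Storage option is backed by TensorFlow's CentralStorageStrategy, which keeps model variables on one "parameter" device (system RAM/CPU) while compute still runs on the GPU, so the weights and optimizer state no longer compete with activations for VRAM. The snippet below is only a minimal sketch of that strategy in plain Keras, not faceswap's actual trainer code; the toy model and the explicit parameter_device argument are my own assumptions for illustration.

```python
import tensorflow as tf

# Minimal sketch: keep variables on the CPU, run compute on the GPU.
# (With a single GPU, TensorFlow would otherwise place variables on that
# GPU by default, so the parameter device is pinned to the CPU here.)
strategy = tf.distribute.experimental.CentralStorageStrategy(
    parameter_device="/cpu:0")

with strategy.scope():
    # Any model built inside the scope stores its weights on the parameter
    # device (system RAM), which is where the VRAM headroom for a larger
    # batch size would come from.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(64, 64, 3)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam", loss="mse")
```

The trade-off is that weights have to travel over the PCIe bus each step, so the VRAM saving comes at some cost in throughput, which may be part of why the savings in my testing were smaller than hoped.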