Search found 143 matches

by abigflea
Sun Jan 10, 2021 2:14 am
Forum: Installation
Topic: [Guide] Using Faceswap on Nvidia RTX 30xx cards
Replies: 35
Views: 2490

Re: [Guide] Using Faceswap on Nvidia RTX 30xx cards

Using Win10, Nvidia 457.51 (do not use 460.xx - bad juju). Only: 3060Ti + 2070. Followed the post exactly. Kept getting the "brotli" error, no matter how many drivers/software I uninstalled or environments I removed. A Google search gave me: conda install -c anaconda urllib3 and it worked. 3060Ti now ...
by abigflea
Fri Jan 08, 2021 4:27 pm
Forum: Installation
Topic: [Guide] Using Faceswap on Nvidia RTX 30xx cards
Replies: 35
Views: 2490

Re: [Guide] Using Faceswap on Nvidia RTX 30xx cards

Looks as though you skipped one of the first steps. Looks as though the conda environment wasn't created, or you named the environment something other than "faceswap". Install FaceSwap first, per the instructions. That will create the conda environment... Then continue with the instructi...
by abigflea
Thu Jan 07, 2021 3:36 pm
Forum: Hardware
Topic: Hardware best practices
Replies: 51
Views: 19941

Re: Hardware best practices

Thanks for the write up. How much RAM would you recommend? How much of a performance upgrade would it be from a 3700 to a 3900 if running two RTX 2070 Supers? Thanks! John

I think 16 GB would be fine, but I have 32 GB myself. FaceSwap doesn't require much, but peripheral software like video editors,...
by abigflea
Thu Jan 07, 2021 4:43 am
Forum: Installation
Topic: [Guide] Using Faceswap on Nvidia RTX 30xx cards
Replies: 35
Views: 2490

Re: [Guide] Using Faceswap on Nvidia RTX 30xx cards

No, not tried it in Admin.
When I do, Bryan beats me with an old video card in a sock.

by abigflea
Wed Jan 06, 2021 4:26 pm
Forum: Installation
Topic: [Guide] Using Faceswap on Nvidia RTX 30xx cards
Replies: 35
Views: 2490

Re: [Guide] Using Faceswap on Nvidia RTX 30xx cards

write permissions error on second attempt after reinstalling miniconda.

(faceswap) C:\Users\abigf>conda remove tensorflow
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: C:\ProgramData\Miniconda3\envs\faceswap

  removed specs:
    - tens...
by abigflea
Wed Dec 16, 2020 8:15 pm
Forum: Training
Topic: Distributed Training does not start
Replies: 2
Views: 123

Re: Distributed Training does not start

I suppose I earned "expert" the hard way. Torzdf is correct about the PCI lanes. The cards need to communicate with each other, and performance is hindered by the slowest card. On typical mainboards the first PCIe slot will run at 16x if the others are not populated. If the 'first' 2 are popu...
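The slot behavior described here can be sketched as a tiny helper. This is an assumption about typical consumer boards as described in these posts; the exact split varies by motherboard, and the function name is hypothetical:

```python
def lane_split(populated_slots: int) -> list[int]:
    """Typical consumer-board lane allocation (an assumption -- boards
    vary): a lone card gets x16, two cards share the lanes as x8/x8."""
    if populated_slots == 1:
        return [16]
    if populated_slots == 2:
        return [8, 8]
    # Beyond two cards, behavior differs widely; check the board manual.
    raise ValueError("consult the motherboard manual for 3+ cards")

print(lane_split(1))  # [16]
print(lane_split(2))  # [8, 8]
```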
by abigflea
Fri Dec 11, 2020 10:42 pm
Forum: Hardware
Topic: Hardware best practices
Replies: 51
Views: 19941

Re: Hardware best practices

stepping-raz0r wrote: Fri Dec 11, 2020 3:57 pm

Could you give an example...
(Based solely on faceswap considerations...)

I will say Nvidia, whatever you do.
The 30x0 series isn't working quite yet. Nvidia drivers and some other software have not been updated to support FS (among other things), but may be in a couple of months.

by abigflea
Wed Dec 02, 2020 4:18 am
Forum: Training
Topic: Need more details
Replies: 1
Views: 227

Re: Need more details

Yes, more time.
There are also more variables to making a reasonable/high-quality swap:

Number of training images on both sides.
Image quality of training data.
Which model.

by abigflea
Sat Nov 28, 2020 11:53 pm
Forum: Hardware
Topic: MULTI GPU - double speed at same batch size or not?
Replies: 10
Views: 513

Re: MULTI GPU - double speed at same batch size or not?

Sure, DFL is set up differently than FS. Things like tensor support, especially, make comparing them not very easy.

by abigflea
Sat Nov 28, 2020 11:49 pm
Forum: Hardware
Topic: MULTI GPU - double speed at same batch size or not?
Replies: 10
Views: 513

Re: MULTI GPU - double speed at same batch size or not?

With the condition of the same batch size, multi GPU versus single, sometimes it will train only somewhat faster, 120% or so. What you can do is increase your batch size to almost 2x that of a single card. That's when you really see the benefits. Your EGs/sec will go way up. Maybe not 200%, more like 160...
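The scaling described above can be sketched with the post's rough figures (~120% at the same batch size, ~160% at nearly double the batch). The base EGs/sec is a made-up number for illustration, not a benchmark:

```python
def throughput(egs_per_sec_single: float, mode: str) -> float:
    """Estimate examples/sec (EGs/sec) relative to one card, using the
    rough multipliers from this post (assumptions, not benchmarks)."""
    if mode == "single":
        return egs_per_sec_single
    if mode == "dual_same_batch":
        # Two cards, same batch size: only ~20% faster.
        return egs_per_sec_single * 1.2
    if mode == "dual_double_batch":
        # Two cards, batch size nearly doubled: ~60% faster.
        return egs_per_sec_single * 1.6
    raise ValueError(f"unknown mode: {mode}")

base = 50.0  # hypothetical single-card EGs/sec
print(round(throughput(base, "dual_same_batch"), 1))    # 60.0
print(round(throughput(base, "dual_double_batch"), 1))  # 80.0
```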
by abigflea
Mon Nov 16, 2020 2:06 am
Forum: Training
Topic: Understanding how variations in data impact results
Replies: 1
Views: 198

Re: Understanding how variations in data impact results

This is a curious question. I like it.

For the mole question, I would suspect it would behave like beards and glasses: depending on frame and lighting, it would appear and disappear, or maybe hang around faded.

For the entirety of your question... who knows. Could be an interesting test.

by abigflea
Wed Nov 11, 2020 2:18 am
Forum: Training
Topic: Double Eyebrow still
Replies: 3
Views: 342

Re: Double Eyebrow still

Yes, restart

by abigflea
Sat Nov 07, 2020 3:18 am
Forum: Hardware
Topic: What Do you think of this MB
Replies: 4
Views: 407

Re: What Do you think of this MB

That looks just fine. Most motherboards split the PCIe lanes to 8x/8x. You can find some that don't, I think, but those are the $500+ boards. It's also sometimes difficult to find boards with enough room to physically fit 2-4 cards simultaneously. On mine I have to remove one to remove the next, no clearan...
by abigflea
Fri Oct 30, 2020 3:33 pm
Forum: Training
Topic: Smaller batch size = Worse training?
Replies: 2
Views: 529

Re: Smaller batch size = Worse training?

Of your 2 subjects, which has the least amount of data, the fewest photos? What model are you using? Go with the biggest batch size you can, although I wouldn't go over a batch size of 80-100; that tends to actually make it train worse, depending on the model. Near the end of training (usually after 600K for me), I ...
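The batch-size rule of thumb here could be sketched as below. The cap comes from the 80-100 range mentioned in the post; the function name and the choice of 100 as the default are assumptions:

```python
def pick_batch_size(largest_that_fits: int, cap: int = 100) -> int:
    """Take the biggest batch size the card can handle, but stay
    within the 80-100 cap this post suggests (cap value assumed)."""
    return min(largest_that_fits, cap)

print(pick_batch_size(128))  # 100 (capped)
print(pick_batch_size(64))   # 64
```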
by abigflea
Thu Oct 29, 2020 8:48 pm
Forum: Hardware
Topic: Hardware best practices
Replies: 51
Views: 19941

Re: Hardware best practices

AMD doesn't focus on AI/compute. It is highly unlikely they will make many changes to even begin to compete. There's also the consideration that the software layer that allows AMD cards to do the compute will need to be updated. I did see a rumor that they were making some compute-only cards instead of foc...
by abigflea
Wed Oct 28, 2020 4:33 pm
Forum: Extract
Topic: Error during extract - CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
Replies: 5
Views: 626

Re: Error during extract - CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered

While I won't swear you're having the same issues I've had, one of my 2070s is factory overclocked a whopping (sarcasm) 50 MHz and gives me this error after a couple hours of training. I just underclock them both to 1700 MHz and it stops. Maybe it is a factory overclock, maybe not, but I agree with Torz...
by abigflea
Sat Oct 24, 2020 5:30 pm
Forum: General Chat
Topic: Question about 30xx support
Replies: 6
Views: 725

Re: Question about 30xx support

There are many. I feel it's faster due to the work on supporting the models with mixed precision.
The UI is way more intuitive.
The manual alignments tool is pretty sweet.
A bit more intuitive for sure.

by abigflea
Fri Oct 23, 2020 12:18 am
Forum: Convert
Topic: Conversion seems to swap Original > Original and not Original >Swap?
Replies: 5
Views: 455

Re: Conversion seems to swap Original > Original and not Original >Swap?

Huh. Dunno.
If it were me, my next thought would be to make sure I'm using the right mask and alignments.
Then check the convert settings in case I happened to do something odd there.

by abigflea
Thu Oct 22, 2020 8:58 pm
Forum: Convert
Topic: Conversion seems to swap Original > Original and not Original >Swap?
Replies: 5
Views: 455

Re: Conversion seems to swap Original > Original and not Original >Swap?

If "A" was trained as the original actor and "B" as the one you want to swap in, it should.

Try activating swap model

Screenshot from 2020-10-22 16-57-23.png
by abigflea
Thu Oct 22, 2020 2:40 am
Forum: Training
Topic: Optimal Convergence
Replies: 1
Views: 287

Re: Optimal Convergence

Short answer: when you don't personally see it, with your own eyes, getting any better. The graph is just measuring the loss so you have an idea of how quickly it's learning. But even when it looks near flat, it's still learning, trying different combinations to get to "the best"...