Search found 177 matches

by abigflea
Wed Dec 16, 2020 8:15 pm
Forum: Training Support
Topic: Distributed Training does not start
Replies: 2
Views: 1490

Re: Distributed Training does not start

I suppose I earned "expert" the hard way. Torzdf is correct about the PCIe lanes: the cards need to communicate with each other, and performance is hindered by the slowest card. On typical mainboards the first PCIe slot will run at 16x if the others are not populated. If the 'first' 2 are popu...
by abigflea
Fri Dec 11, 2020 10:42 pm
Forum: Hardware
Topic: Hardware best practices
Replies: 83
Views: 183179

Re: Hardware best practices

stepping-raz0r wrote: Fri Dec 11, 2020 3:57 pm

Could you give an example...
(Based solely on faceswap considerations...)

I will say: go Nvidia, whatever you do.
The 30x0 series isn't quite working yet. Nvidia drivers and some other software haven't been updated to support FS (among other things), but may be in a couple of months.

by abigflea
Wed Dec 02, 2020 4:18 am
Forum: Training Discussion
Topic: Need more details
Replies: 1
Views: 1548

Re: Need more details

Yes, more time.
There are also more variables in making a reasonable/high-quality swap:

Number of training images on both sides.
Image quality of training data.
Which model.

by abigflea
Sat Nov 28, 2020 11:53 pm
Forum: Hardware
Topic: MULTI GPU - double speed at same batc size or not?
Replies: 10
Views: 15525

Re: MULTI GPU - double speed at same batc size or not?

Sure, DFL is set up differently than FS. Things like tensor core support, especially, make comparing them not very easy.

by abigflea
Sat Nov 28, 2020 11:49 pm
Forum: Hardware
Topic: MULTI GPU - double speed at same batc size or not?
Replies: 10
Views: 15525

Re: MULTI GPU - double speed at same batc size or not?

At the same batch size, multi-GPU versus single will sometimes train only somewhat faster, 120% or so. What you can do is increase your batch size to almost 2X that of a single card. That's when you really see the benefits: your EGs/sec will go way up. Maybe not 200%, more like 160...
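The rough scaling in that post can be sketched as back-of-the-envelope arithmetic. All numbers below are illustrative estimates pulled from the post's percentages, not measured benchmarks:

```python
# Rough multi-GPU throughput estimate based on the figures in this post.
# The single-card EGs/sec value is hypothetical; only the ratios come
# from the post, and real results vary by model and hardware.

single_eg_per_sec = 10.0  # hypothetical single-card throughput (EGs/sec)

# Same batch size on 2 GPUs: often only ~120% of single-card speed.
same_batch_multi = single_eg_per_sec * 1.2

# Batch size raised to nearly 2x a single card's: closer to ~160%.
bigger_batch_multi = single_eg_per_sec * 1.6

print(f"single card:        {single_eg_per_sec:.1f} EGs/sec")
print(f"2 GPUs, same batch: {same_batch_multi:.1f} EGs/sec (~120%)")
print(f"2 GPUs, ~2x batch:  {bigger_batch_multi:.1f} EGs/sec (~160%)")
```

The point of the arithmetic: distributing the same batch mostly adds communication overhead, while a larger batch keeps both cards busy.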
by abigflea
Mon Nov 16, 2020 2:06 am
Forum: Training Discussion
Topic: Understanding how variations in data impact results
Replies: 1
Views: 1485

Re: Understanding how variations in data impact results

This is a curious question. I like it.

For the mole question, I suspect it would behave like beards and glasses: depending on the frame and lighting, it would appear and disappear, or maybe hang around faded.

As for the entirety of your question... who knows. It could be an interesting test.

by abigflea
Wed Nov 11, 2020 2:18 am
Forum: Training Discussion
Topic: Double Eyebrow still
Replies: 6
Views: 3308

Re: Double Eyebrow still

Yes, restart

by abigflea
Sat Nov 07, 2020 3:18 am
Forum: Hardware
Topic: What Do you think of this MB
Replies: 4
Views: 10260

Re: What Do you think of this MB

That looks just fine. Most motherboards split the PCIe lanes to 8x/8x. You can find some that don't, I think, but those are the $500+ boards. It's also sometimes difficult to find boards with enough room to physically fit 2-4 cards simultaneously. On mine I have to remove one to remove the next, no clearan...
by abigflea
Fri Oct 30, 2020 3:33 pm
Forum: Training Discussion
Topic: Smaller batch size = Worse training?
Replies: 2
Views: 4871

Re: Smaller batch size = Worse training?

Of your 2 subjects, which has the least amount of data, the fewest photos? What model are you using? Go with the biggest batch size you can, although I wouldn't go over a batch size of 80-100; that tends to actually make it train worse, depending on the model. Near the end of training (usually after 600K for me), I ...
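For reference, a hedged sketch of starting training with an explicit batch size. The paths are placeholders, and the flag names are from memory of the 2020-era faceswap CLI (`python faceswap.py train -h` lists the real ones), so they may differ in your version:

```shell
# Hypothetical paths; -bs sets the batch size discussed above.
# Start large, then restart with a smaller value near the end of training.
python faceswap.py train \
    -A /path/to/faces_a \
    -B /path/to/faces_b \
    -m /path/to/model_dir \
    -t villain \
    -bs 80   # e.g. drop to 16 for the final stretch
```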
by abigflea
Thu Oct 29, 2020 8:48 pm
Forum: Hardware
Topic: Hardware best practices
Replies: 83
Views: 183179

Re: Hardware best practices

AMD doesn't focus on AI/compute. It's highly unlikely they will make enough changes to even begin to compete. There's also the consideration that the software layer that allows AMD cards to do the compute will need to be updated. I did see a rumor that they were making some compute-only cards instead of foc...
by abigflea
Wed Oct 28, 2020 4:33 pm
Forum: Extract Support
Topic: Error during extract - CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
Replies: 5
Views: 4784

Re: Error during extract - CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered

While I won't swear you're having the same issues I've had, one of my 2070's is factory overclocked a whopping (sarcasm) 50 MHz and gives me this error after a couple of hours of training. I just underclock them both to 1700 MHz and it stops. Maybe it is a factory overclock, maybe not, but I agree with Torz...
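As a sketch of that underclock, assuming a reasonably recent Nvidia driver where `nvidia-smi --lock-gpu-clocks` is available. The 1700 MHz ceiling is just this post's value (pick one suited to your card, the 300 MHz floor is an arbitrary example), and the command needs admin/root privileges:

```shell
# Cap the core clock of each card at 1700 MHz (GPU indices 0 and 1).
nvidia-smi -i 0 --lock-gpu-clocks=300,1700
nvidia-smi -i 1 --lock-gpu-clocks=300,1700

# To remove the lock later and return to default clock behavior:
nvidia-smi --reset-gpu-clocks
```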
by abigflea
Sat Oct 24, 2020 5:30 pm
Forum: General Discussion
Topic: Question about 30xx support
Replies: 6
Views: 5309

Re: Question about 30xx support

There are many. I feel it's faster due to the work on supporting the models with mixed precision.
The UI is way more intuitive.
The manual alignments tool is pretty sweet.
A bit more intuitive for sure.

by abigflea
Fri Oct 23, 2020 12:18 am
Forum: Convert Discussion
Topic: Convert is not swapping faces
Replies: 29
Views: 34560

Re: Conversion seems to swap Original > Original and not Original >Swap?

Huh. Dunno.
If it were me, my next thought would be to make sure I'm using the right mask and alignments,
and to check the convert settings in case I happened to do something odd there.

by abigflea
Thu Oct 22, 2020 8:58 pm
Forum: Convert Discussion
Topic: Convert is not swapping faces
Replies: 29
Views: 34560

Re: Conversion seems to swap Original > Original and not Original >Swap?

If "A" was trained as the original actor and "B" as the one you want to swap in, it should.

Try activating the swap model option.

Screenshot from 2020-10-22 16-57-23.png
by abigflea
Thu Oct 22, 2020 2:40 am
Forum: Training Discussion
Topic: Optimal Convergence
Replies: 1
Views: 1544

Re: Optimal Convergence

Short answer: when you personally can't see it getting any better with your own eyes. The graph is just measuring the loss so you have an idea of how quickly the model is learning. But even when it looks nearly flat, it's still learning and trying different combinations to get to "the best"...
by abigflea
Tue Oct 20, 2020 6:59 pm
Forum: Hardware
Topic: Best software/where to start
Replies: 1
Views: 6763

Re: Best software/where to start

Read through the guides here: viewforum.php?f=3&sid=c8b5c4286b5887100865db36488ba759

Come up with some 'test' and follow through with the instructions.

I've read through the guides 10 times and still pick up stuff I missed.
95% of your questions are answered there.

by abigflea
Tue Oct 20, 2020 4:42 am
Forum: Training Support
Topic: Distributed with Dual 2060 supers
Replies: 46
Views: 16313

Re: Distributed with Dual 2060 supers

Oh! That's almost my old computer. I had an 8300 and a 990FX mainboard.
Yeah, FS really didn't care for any multi-GPU stuff.

I was able to do 2 separate training sessions just fine. I had 16 GB of RAM.

by abigflea
Mon Oct 19, 2020 2:08 pm
Forum: Hardware
Topic: [SOLVED] GPU usage is only 2%
Replies: 2
Views: 7854

Re: GPU usage is only 2%

Others have reported that Windows doesn't report CUDA usage well.
I believe you can change what's being monitored somewhere, but it's not super important.
Edit: where it says 3D, Copy, Encode, etc., you can change those.
Other monitoring tools like GPU-Z or Afterburner will show it's working :-)

by abigflea
Sun Oct 18, 2020 9:46 am
Forum: Training Support
Topic: Distributed with Dual 2060 supers
Replies: 46
Views: 16313

Re: Distributed with Dual 2060 supers

With your setup, a 4 min startup vs my 2.3 min startup sounds reasonable. I have an X470 chipset. It doesn't rx/tx non-stop at max rate every time NVtop displays a sample; sure, it jumps around, although most samples looked like this: Screenshot from 2020-10-18 05-20-33.png. I feel the need to take care of what I'...
by abigflea
Sun Oct 18, 2020 2:03 am
Forum: Training Support
Topic: Distributed with Dual 2060 supers
Replies: 46
Views: 16313

Re: Distributed with Dual 2060 supers

OK, this was for some reason a painful test. It doesn't help with the startup ideas so much, but... Question: are you increasing your batch size when using distributed? It should allow a roughly 85% higher batch size, and EGs/sec goes up. I'm grasping at straws with this one. Test results: Villain mode, 2X2070, ...