
How to promote the training effect and convert effect?

Posted: Mon Apr 13, 2020 3:09 am
by yueshitian

Hello, author. First, thank you for the great work. I have been studying and running the code these days, but the training and conversion results are not good,
and the training loss does not decrease, or decreases very slowly:
......
[#15601] Loss A: 0.03253, Loss B: 0.03333

How can I improve the training results and decrease the loss?


Re: How to promote the training effect and convert effect?

Posted: Mon Apr 13, 2020 6:50 am
by bryanlyon

15,000 iterations is not very much. You're probably going to need a lot more than that. See also the extract guide in viewtopic.php?f=5&t=27 to ensure that your data is clean and clear. Nothing is more important for your results than quality data.


Re: How to promote the training effect and convert effect?

Posted: Tue Apr 14, 2020 2:28 am
by yueshitian

My data is clean: I picked it by hand, and every face image is pure. Each training set (face A and face B) contains 2,300 images, but the test results were still bad.
PS: Those 15,000 iterations took me 2 weeks!! The training progress is far too slow. My GPU is an NVIDIA 1080 Ti, and its utilization is only 10% during training. The training batch size is 256. How can I speed up my training?


Re: How to promote the training effect and convert effect?

Posted: Tue Apr 14, 2020 9:26 am
by torzdf

15,000 in 2 weeks???? What model? That is insanely slow.

Also, post the output from Tools > Output System Info.


Re: How to promote the training effect and convert effect?

Posted: Tue Apr 14, 2020 8:52 pm
by djandg

Possibly running in CPU mode and not on the GPU.
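One quick way to check this. The snippet below is a hedged diagnostic sketch, not faceswap code: it assumes a TensorFlow 2.x environment (faceswap is TensorFlow-based; older installs may need `tf.test.is_gpu_available()` instead) and simply reports whether TensorFlow can see any GPU.

```python
# Diagnostic sketch: does TensorFlow see a GPU at all?
# Assumes TensorFlow 2.x; this is not part of faceswap itself.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        msg = f"GPUs visible to TensorFlow: {gpus}"
    else:
        msg = "No GPU visible to TensorFlow -- training will run on the CPU"
except ImportError:
    msg = "TensorFlow is not installed in this environment"

print(msg)
```

If no GPU is listed, the usual culprits are a CPU-only TensorFlow install or a CUDA/driver mismatch; `nvidia-smi` on the command line will show whether the driver itself sees the card.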


Re: How to promote the training effect and convert effect?

Posted: Fri Apr 17, 2020 11:15 am
by PLAY-911
yueshitian wrote: Mon Apr 13, 2020 3:09 am

Hello, author. First, thank you for the great work. I have been studying and running the code these days, but the training and conversion results are not good,
and the training loss does not decrease, or decreases very slowly:
......
[#15601] Loss A: 0.03253, Loss B: 0.03333

How can I improve the training results and decrease the loss?

Hi there, have you read this?

Batch Size - Batch Size is the number of faces fed through the Neural Network at the same time. A batch size of 64 would mean that 64 faces are fed through the Neural Network at once, then the loss and weight update is calculated for this batch of images. Higher batch sizes will train faster, but will lead to higher generalization. Lower batch sizes will train slower, but will distinguish differences between faces better. Adjusting the batch size at various stages of training can help.
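To make the "trains faster" part concrete: with a fixed dataset, a larger batch size means fewer weight updates per pass over the data, so each update sees more faces but the weights change less often. A small illustrative calculation (plain arithmetic, not faceswap code; the 2,300-image dataset size is taken from the original post):

```python
import math

# With a fixed dataset, larger batches mean fewer weight updates per epoch.
DATASET_SIZE = 2300  # images per side, as in the original post

steps = {b: math.ceil(DATASET_SIZE / b) for b in (15, 64, 256)}
for batch_size, updates in steps.items():
    print(f"batch {batch_size:>3}: {updates} weight updates per epoch")
```

So at batch 256 the model gets only 9 weight updates per pass over the data, versus 154 at batch 15, which is one reason loss can appear to move slowly at very large batch sizes.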

Also, I have a GTX1010: 6,307 iterations in 4h 4min at a batch size of 64.

I started with a batch size of 64 and am now at 15, which I guess will help the training.