yueshitian wrote: ↑Mon Apr 13, 2020 3:09 am
Hello, Author. First, thank you for the great work. I have been studying and running the code these past few days, but the training and conversion results are not good.
Also, the training loss does not decrease, or decreases very, very slowly:
......
[#15601] Loss A: 0.03253, Loss B: 0.03333
How can I improve the training results and decrease the loss?
Hi there, have you read this?
Batch Size - Batch Size is the size of the batch that is fed through the Neural Network at the same time. A batch size of 64 means that 64 faces are fed through the Neural Network at once; the loss and weight update are then calculated for that batch of images. Higher batch sizes will train faster, but will lead to higher generalization. Lower batch sizes will train slower, but will distinguish differences between faces better. Adjusting the batch size at various stages of training can help.
Also, for reference: on my GTX1010 I got 6307 iterations in 4h 4min at a batch size of 64.
I started with a batch size of 64 and am now at 15, which I guess will help it train.
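To see one side of the trade-off described above in concrete numbers: a smaller batch means more weight updates per pass over the data (each computed from fewer faces), while a larger batch means fewer, coarser updates. A minimal sketch, assuming a hypothetical dataset of 10,000 face images (the dataset size and the batch sizes 64/32/15 are just illustrative, not from the thread):

```python
import math

# Hypothetical dataset size, for illustration only.
dataset_size = 10_000

for batch_size in (64, 32, 15):
    # Number of weight updates needed to see every image once (one epoch).
    updates_per_epoch = math.ceil(dataset_size / batch_size)
    print(f"batch={batch_size:3d} -> {updates_per_epoch} weight updates per epoch")
```

So dropping from batch 64 to batch 15 roughly quadruples the number of updates per epoch, each based on fewer faces, which is why smaller batches can pick out finer per-face differences at the cost of slower overall throughput.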