So I'm about 70k iterations in, and my model stopped improving roughly 30k iterations ago. This is probably due to my lack of training data.
Either way, I had my batch size set to 16, with a loss of about 0.03–0.04 on both A and B.
So, in a sort of last-ditch effort to increase output quality, I decreased my batch size to 1. I assumed that having fewer pictures go through at once would yield marginally better output. And yet, my loss jumped to about 0.07–0.08 on both A and B.
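(For context, one thing I'm wondering about: the loss the trainer displays is presumably averaged over each batch, so at batch size 1 the number should at least get a lot noisier, even if the model itself is no worse. Here's a quick numpy sketch I put together with made-up per-image loss values, just to see how much the reported number can swing:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image losses: mean ~0.035 with some spread,
# standing in for a trained model's per-sample loss. Values are
# invented for illustration, not taken from my actual run.
per_sample = rng.normal(loc=0.035, scale=0.02, size=10_000).clip(min=0)

def batch_means(losses, batch_size):
    """Average losses over consecutive batches of the given size."""
    n = len(losses) // batch_size * batch_size
    return losses[:n].reshape(-1, batch_size).mean(axis=1)

for bs in (1, 16):
    means = batch_means(per_sample, bs)
    print(f"batch_size={bs:2d}  mean={means.mean():.4f}  std={means.std():.4f}")
```

The average is the same either way; only the spread of the displayed per-batch number changes. That alone wouldn't explain a sustained jump from 0.03 to 0.07, though, which is why I'm asking.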
Am I missing something?
I can't increase my batch size to 256 since, as I said, I severely lack training data, so I can't test whether that would give a smaller loss.
Are batch sizes actually the reverse of what I think they are?