Hi,
I just started this hobby yesterday and I'm working through the reading slowly, but I'd really appreciate some pointers in the meantime so I don't waste precious GPU time.
I started training using the Original model with default settings; the only thing I changed (I think?) is the batch size, from the default 16 to 64.
Is the Original model enough for decent results in 1080p video? I'm such a newbie I just want to make sure I'm not missing something.
After 80k iterations the resulting swap is pretty blurry and not really accurate identity-wise. I think I've seen better results in some tutorials, but on the other hand, maybe that's to be expected?
After 60k iterations I changed my batch size to 16 (should that improve quality/detail?), but I noticed the loss values spiking after doing this, so I restored my model from backup and kept training at 64 (poof, an hour gone). I assume the loss spike is to be expected? Should I try reducing the batch size again to get better quality?
Unfortunately, the first 30k iterations were done with 2 or 3 trash pictures (another person's partial face, no face at all) in the source faceset directory. Luckily I spotted one of them in the preview, deleted them, and continued training after that. Can you comment on this? Is continuing training in such a case a mistake? I would assume it gets ironed out over time, but how much damage does it cause?
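In the meantime, to avoid a repeat, I figure something like this quick stdlib sketch could at least flag obvious junk (corrupt or non-image files) in the faceset folder before training. It's just my own assumption of a sanity check, not anything the tool provides, and it won't catch wrong-identity faces; those still need eyeballing in the preview:

```python
import os

# File signatures ("magic bytes") for the formats a faceset normally contains.
# Anything that matches neither is worth a manual look before training.
MAGIC = (
    b"\xff\xd8\xff",        # JPEG
    b"\x89PNG\r\n\x1a\n",   # PNG
)

def find_suspect_files(faceset_dir):
    """Return names of files whose headers match no known image format."""
    suspects = []
    for name in sorted(os.listdir(faceset_dir)):
        path = os.path.join(faceset_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            head = f.read(8)  # longest signature above is 8 bytes
        if not any(head.startswith(m) for m in MAGIC):
            suspects.append(name)
    return suspects

if __name__ == "__main__":
    # Hypothetical path; point it at your actual source faceset directory.
    for name in find_suspect_files("workspace/data_src/aligned"):
        print("suspect:", name)
```

Running it before a long session costs seconds and would have caught the no-face files, though not the other person's partial face, since that's still a valid image.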
So, while I'm trying to learn, should I keep training this one?