I'm using footage from a head-mounted facial capture camera to train a swap right now, so I have a LOT of dead-on frontal frames to train from, around 12,000. That seems like overkill considering it's just them talking, so there are probably a lot of "redundant" frames, and I've got some more dynamic-angle footage I'd like to add.
Can I just extract the new footage to the B folder and then continue training? Would removing some of the 12,000 front-facing frames speed things up? It seems like the training is slowing down as the iteration count climbs.
Also, I'm currently using the Original trainer, but I've heard Dfaker is better if you have a good GPU (I've got a 3080). Can I swap trainers mid-training, or will that mess things up?