I've got thousands of frames of front-on, high-res head-mounted camera (HMC) footage, as well as lots of other pictures and slightly lower-res video from other angles, for both actors.
My goal was to put actor B onto the HMC footage of actor A, which I thought would be easy since I have so much HMC footage of both actors. I supplemented with some extra footage and pictures to flesh out the model just in case, but even after 800K+ iterations, the resulting convert is very blurry and shaky.
There are no obstructions in my training frames for either actor, and the bulk of the pictures are clear, up close, and fairly high-res. Is it possible to have TOO much training data? I've seen better results using a single video of B and a single video of A at only 75k iterations, so clearly something is wrong. What might be causing blurry output despite so much data and so many iterations? Do I need to train for longer if I have more training images?