I've got lots of facial capture footage I'm trying to put onto other facial capture footage; in both clips the actors are looking directly at the lens of a head-mounted camera (HMC). For actor B (the face I'd like to put onto the clip of actor A) I've got many frames of HMC footage, as well as a smaller sample of video and photos of them from other angles. But for actor A I only have the video I am putting the face onto. Does FaceSwap want an equally large sample for actor A as for actor B? I feel silly realizing it as I type this, but for some reason I was under the impression the A input only needed to be the video you want to put the new face on.
Output blurry even after 800,000 iterations?
- CProdDigital
- Posts: 14
- Joined: Fri Nov 04, 2022 4:22 pm
- Has thanked: 1 time
Re: Output blurry even after 800,000 iterations?
Yes, you want lots of variety (angles, expressions, lighting) on both sides.
Also, see here:
app.php/faqpage#f4r1
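As a quick sanity check before training, it can help to compare how many extracted faces you actually have on each side. A minimal sketch, assuming your extracted faces live in ordinary folders of image files (the folder names here are hypothetical):

```python
from pathlib import Path

def count_faces(folder: str) -> int:
    """Count extracted face images (png/jpg) in a faces folder."""
    exts = {".png", ".jpg", ".jpeg"}
    return sum(1 for p in Path(folder).iterdir()
               if p.is_file() and p.suffix.lower() in exts)

# Hypothetical paths -- point these at your own extracted-faces folders:
# print(f"A: {count_faces('faces_A')}  B: {count_faces('faces_B')}")
```

If one side is an order of magnitude smaller than the other, that side is usually where the blur comes from.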
My word is final
- CProdDigital
- Posts: 14
- Joined: Fri Nov 04, 2022 4:22 pm
- Has thanked: 1 time
Re: Output blurry even after 800,000 iterations?
Does having too much of the HMC footage clog up and confuse the training? I'm still getting a blurry result when I use an expanded data set.
Re: Output blurry even after 800,000 iterations?
There really isn't a high cadence of responses here, is there?
I just started playing with this stuff, and I noticed I was getting some pretty bad blur until I ticked the "Warp to Landmarks" checkbox in the Augmentation section of the trainer. That might not help you, but it did help me.
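For anyone running headless rather than through the GUI: the `-A`/`-B`/`-m` arguments below are faceswap's standard train arguments, but depending on your build, "Warp to Landmarks" may be a CLI flag or may live in the training settings instead, so check your version's `python faceswap.py train --help` first. The paths are placeholders:

```shell
# Sketch only -- substitute your own extracted-faces and model folders.
python faceswap.py train \
    -A /path/to/actorA_faces \
    -B /path/to/actorB_faces \
    -m /path/to/model_dir
```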