Apologies if this is a stupid set of questions or posted in the wrong place. I'm new and still learning.
So I extracted frames from various sources for both A and B, some from video and some from random photos. However, during the extraction phase I selected the "custom" masker for both. I thought the mask didn't matter for training and that all I needed was the aligned faces. I've run the resulting output images through 100k training iterations and the results are looking sharp! However, I noticed the masks in the preview thumbnails are often... bad. For instance, in many frames the source is wearing a headband. It doesn't necessarily obscure the face, but the masker left part of the headband in and it's leaked into the model. Other masks just aren't tightly constrained to the face. Still, the results look pretty good!
Is this an issue, or is it working as intended and the masking isn't a big deal?
Will the model learn OK regardless, or will the masking issues lead to downstream problems?
If the faulty masks are a problem, what's the remedy? For instance, how do I fix the headband issue across, say, 1,500 extracted images? Do I have to re-extract from the source and address each one manually? And what tool do I use to view the masks before training?
Thanks for any input or feedback.