I'm not sure whether the training algorithm already does something like what I'll describe, whether alternative algos would have the same effect, or whether this is simply a bad idea. Sharing for discussion.
Background: while training, I notice in the preview that although some face swaps look fantastic, mixed among them are some that are not just uncanny but downright Frankenstein-ish.
Basic idea: As training proceeds, track each input source face's average loss when it is compared with its processed swap. Every n iterations (e.g. 10,000), starting at iteration x (e.g. 50,000), the algorithm trims the y% of input images with the highest average loss, either deleting them from the input source A folder or moving them to a designated folder, possibly renaming them to indicate the iteration at which they were removed.
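To make the idea concrete, here's a rough sketch of what such a pruner might look like. This is not from any existing faceswap tool; the class name, parameters, and the top-y% cut are all my own illustration of the n/x/y scheme above.

```python
from collections import defaultdict

class LossBasedPruner:
    """Hypothetical sketch: track a running average loss per source face
    and periodically trim the worst performers."""

    def __init__(self, warmup_iters=50_000, prune_every=10_000, trim_fraction=0.05):
        self.warmup_iters = warmup_iters    # x: don't prune before this iteration
        self.prune_every = prune_every      # n: prune interval
        self.trim_fraction = trim_fraction  # y%: share of faces removed per prune
        self.loss_sums = defaultdict(float)
        self.loss_counts = defaultdict(int)

    def record(self, face_id, loss):
        """Accumulate the loss observed for one face on one iteration."""
        self.loss_sums[face_id] += loss
        self.loss_counts[face_id] += 1

    def should_prune(self, iteration):
        return iteration >= self.warmup_iters and iteration % self.prune_every == 0

    def prune(self):
        """Return the face ids with the highest average loss and drop
        their statistics; the caller would then delete or relocate the files."""
        averages = {f: self.loss_sums[f] / self.loss_counts[f] for f in self.loss_sums}
        k = max(1, int(len(averages) * self.trim_fraction))
        worst = sorted(averages, key=averages.get, reverse=True)[:k]
        for f in worst:
            del self.loss_sums[f]
            del self.loss_counts[f]
        return worst
```

In a training loop you'd call `record()` once per face per iteration and, whenever `should_prune()` fires, move the returned files into the designated "removed at iteration N" folder.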
The presumption is that, whether through inadequate input source preparation (by us) or even well thought out preparation and pruning (by us, with help from the various tools), some faces may simply not train well and may adversely affect the weights (and thus model effectiveness) as training proceeds.
Could the algorithm dynamically prune the input set when a face's loss simply isn't keeping up with the progress of the rest of the model, indicating that the model might benefit from removing that face from the sample?
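A "not keeping up with the rest" criterion could also be expressed relatively rather than as a fixed top-y% cut, e.g. by flagging faces whose average loss is a statistical outlier against the whole set. The function and the z-score threshold below are purely illustrative assumptions, not an existing feature.

```python
import statistics

def relative_outliers(avg_losses, z_threshold=1.5):
    """Hypothetical sketch: flag faces whose running average loss sits more
    than z_threshold standard deviations above the mean of all faces.
    avg_losses maps face id -> running average loss."""
    mean = statistics.fmean(avg_losses.values())
    stdev = statistics.pstdev(avg_losses.values())
    if stdev == 0:
        return []  # all faces training equally well; nothing to prune
    return [f for f, loss in avg_losses.items()
            if (loss - mean) / stdev > z_threshold]
```

Unlike a fixed percentage trim, this variant removes nothing when the whole set is converging evenly, which may be the safer behavior.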
Apologies in advance to the fs veterans if my lack of understanding of the algorithm's complexities makes this idea ill-advised, redundant, or useless.