Suppose I have three videos:
- A1: Good collection of expressions/angles/lighting, for face A
- B: Good collection of expressions/angles/lighting, for face B
- A2: Some other video of face A, similar quality to A1
If I train a model to swap B's face onto A1, can I then use that same model to swap B's face onto A2, with okay-ish results?
I figure that, assuming A1 and A2 are similar enough in quality and angles, it ought to work, since there's talk of extracting alignments "for convert" and "for training" (i.e. the alignment data used for training is a subset of the alignment data used for convert, so the model already has to render faces it's never seen before when converting).
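To make my mental model concrete: as I understand it, these swappers train a shared encoder plus one decoder per identity, and "convert" just runs the encoder on new frames and decodes with the other face's decoder. A toy numpy sketch of that idea (the names and linear "model" are purely illustrative, not any tool's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
latent, pixels = 8, 32

# Toy stand-ins for a trained shared encoder and face-B's decoder.
# In a real swapper these are neural nets; linear maps suffice here.
encoder = rng.normal(size=(latent, pixels))
decoder_B = rng.normal(size=(pixels, latent))

def swap_to_B(frame):
    """Convert step: encode any face-A frame, decode with B's decoder."""
    return decoder_B @ (encoder @ frame)

# A frame the model trained on (from A1) and one it never saw (from A2):
a1_frame = rng.normal(size=pixels)
a2_frame = rng.normal(size=pixels)

# Convert is identical for both inputs -- nothing in the pipeline cares
# which video the frame came from, only whether the encoder generalises
# across A's expressions/angles. That's why A1/A2 similarity matters.
out1 = swap_to_B(a1_frame)
out2 = swap_to_B(a2_frame)
assert out1.shape == out2.shape == (pixels,)
```

If that picture is right, the model should carry over to A2 as long as A2 stays inside the range of expressions/angles/lighting the encoder saw during training.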