This was a thought I had while training. There is probably a good reason it isn't done, and I may also mix up some terminology.
So my apologies if the idea is silly.
A few years ago I was reading about using image stacking to get higher-resolution images:
https://petapixel.com/2015/02/21/a-prac ... photoshop/
Could it be applied to face swapping, at the conversion step, to get higher resolutions out of lower-resolution models?
For example, say the model output is 256x256 but the faces in the files to be converted are closer to 512. Instead of inputting a face once to swap it, you input the face multiple times, but dither/offset the input differently each time.
Then process the multiple outputs into a single higher-resolution image that replaces the original, and apply any necessary colour corrections etc.
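To make the idea concrete, here's a rough sketch of the dither-swap-realign-average loop in numpy. Everything here is hypothetical: `swap_model` is a stand-in (identity) for a real 256x256 swap model, the shifts are whole pixels at 512 (which correspond to half-pixel dithers at model resolution), and the down/upsampling is deliberately crude. A real pipeline would need proper interpolation and alignment.

```python
import numpy as np

def swap_model(face_256):
    # Placeholder for the real 256x256 face-swap model (identity here).
    return face_256

def downsample(img):
    # 2x2 block mean: 512x512 -> 256x256 (model input size).
    return img.reshape(256, 2, 256, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour: 256x256 -> 512x512 (output frame size).
    return img.repeat(2, axis=0).repeat(2, axis=1)

def stacked_swap(face_512, offsets=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Run the swap once per offset, undo each offset, average the results.

    A 1-pixel shift of the 512x512 input is a half-pixel dither at the
    model's 256x256 resolution, so each pass samples the face slightly
    differently, like the offset exposures in image stacking.
    """
    acc = np.zeros_like(face_512, dtype=np.float64)
    for dy, dx in offsets:
        shifted = np.roll(face_512, (dy, dx), axis=(0, 1))  # dither input
        out = upsample(swap_model(downsample(shifted)))     # swap + upscale
        acc += np.roll(out, (-dy, -dx), axis=(0, 1))        # realign output
    return acc / len(offsets)
```

With a real model each pass would of course be a full inference, which is where the "far longer conversion" downside below comes from.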
The negatives I can speculate on:
Conversion would take far, far longer (one model pass per offset, per frame).
The outputs may not fit together as well as stacked photographs do.