What's the consensus on quality if each is trained on the same batch with the same output resolution? Which would yield the highest-quality output? I'm able to train all three if the optimizer option is checked, and would like to know which to pick.
Additionally, does faceswap still struggle with angled faces? I used it ages ago and, even with a well-trained model, could never get it to swap faces at extreme angles, say a person lying on a couch or a head tilted 90 degrees. I know DFL can do this with H128, but using DFL is not an option for me due to its community and development direction.