Feeding images to a DeepFake model
Hello everyone,
Firstly, thanks to everyone, and especially to the developer of Phaze-A, for the amazing work done here. I've just started becoming interested in deepfakes, and this forum has been an awesome resource.
Since I like coding, I tried to implement a very simple architecture (1 shared encoder - 2 split FC layers - 1 shared decoder), similar to what can be done in DFLab for instance.
I'm wondering how the batches of data are fed to the network. Is it simply a matter of iterating over the source images while "reseeing" randomly chosen batches of the destination images (since I have more source images than destination images)? As the Phaze-A guide mentions, the term "epoch" doesn't really make sense here.
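For what it's worth, here is a minimal sketch of the kind of feeding scheme I have in mind: both sides are sampled randomly and independently at each step, so the smaller destination set is simply revisited more often and no epoch boundary exists. All names here (`batch_iterator`, `load_fn`) are my own placeholders, not the actual trainer code.

```python
import numpy as np

def batch_iterator(src_paths, dst_paths, batch_size, load_fn):
    """Endlessly yield one random batch per side.

    Each side is sampled independently at every step, so the smaller
    dataset is "reseen" more often -- there is no epoch tied to either
    dataset's size, only an iteration count.
    """
    rng = np.random.default_rng()
    while True:
        # Pick batch_size distinct items from each side for this step.
        src_batch = rng.choice(src_paths, size=batch_size, replace=False)
        dst_batch = rng.choice(dst_paths, size=batch_size, replace=False)
        # load_fn would decode and augment an image; here it is abstract.
        yield ([load_fn(p) for p in src_batch],
               [load_fn(p) for p in dst_batch])
```

Is this roughly how the real data generator works, or is there per-epoch shuffling over the larger set?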
For now the reconstructed images look very good during training, but the swap is very blurry and not satisfying at all. I was thinking the way I feed the data could be the issue.
Many thanks to anyone who can help, and have a good day!