Hi everyone, I've finished my first deepfake test. As you can see, the result is poor. Can you help me understand where I went wrong? It makes me think the original video file (1280x780) was not of sufficient quality.
I used these settings:
- extract size: 256
- training: batch size 64, 140,000 iterations
- model: original
As you can see from the last timelapse, it was already evident that face B was blurred compared to face A, and my chin stays outside the red grid. Is that a problem? How can I fix it?
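Since the blur on face B might come from the source material rather than from training, one quick sanity check is to compare the sharpness of a few aligned face crops from each set using the variance of the Laplacian (higher means sharper). This is a generic sketch, not part of any deepfake tool: it uses pure NumPy, and loading/cropping the face images into grayscale arrays is assumed to happen elsewhere.

```python
import numpy as np

# 3x3 Laplacian kernel (standard edge-response filter)
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_variance(img):
    """Sharpness score: variance of the 3x3 Laplacian response.

    `img` is a 2-D grayscale float array (an aligned face crop).
    A blurry image has weak edges, so its score is low.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode convolution written as a shifted sum (kernel is symmetric)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

# Toy example: a high-contrast checkerboard vs. a flat (blur-like) patch
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
flat = np.full((64, 64), 128.0)
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

If the B-set crops score consistently lower than the A-set crops, no amount of extra iterations will recover detail that was never in the data.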
Maybe I need to run more iterations? I'm also attaching the graph after 140,000 iterations:
Original video:
Me:
Final result:
Update: after 250,000 iterations, no improvement.