Hi
I converted the video, but the face looks blurry. I don't know if the problem is in the training process or in the converting process.
this is the video on YouTube
and as for my PC performance, it's kinda not good for this kind of work lol
here's a picture from DirectX
Note: I used a batch size of 8 during training.
any help would be appreciated, thank you
Converted faces are blurry
Read the FAQs and search the forum before posting a new topic.
This forum is for discussing tips and understanding the process involved with Converting faces from your trained model.
If you are having issues with the Convert process not working as you would expect, then you should post in the Convert Support forum.
Please mark any answers that fixed your problems so others can find the solutions.
Converted faces are blurry
Yes. As per the FAQs. Training on CPU will take weeks to get good results.
Re: Blurry faces!
You are training on CPU because you don't have an AMD or Nvidia GPU:
https://faceswap.dev/forum/app.php/faqpage#f3r1
My word is final
Re: Blurry faces!
torzdf wrote: ↑ Sun Sep 08, 2019 2:45 am
You are training on CPU because you don't have an AMD or Nvidia GPU:
https://faceswap.dev/forum/app.php/faqpage#f3r1
Yes, I am. I have an Intel HD 520 GPU with 120 VRAM; that's why I'm training on the CPU.
Is that the reason why the faces look blurry, or what? :/
Seeing fuzzy A side after successful conversion
The B face was successfully swapped onto the A face, but the blurred A face can still be seen in the photo. Maybe B's face is smaller than A's face?
Low quality / Blurry face
Hey i'm new to this tool and i'm having trouble with the swapped face quality.
The faces I extracted come in 256x256 resolution and the quality looks good, but when I try to train/convert the model, it gives me blurry / low-quality results in the preview and in the end result. Does the blur go away with more training? Do I have to change some settings for better results? I'm currently using the default settings that come with the GUI.
Re: Low quality / Blurry face
Yes, training takes time....
You can get more info here:
viewtopic.php?f=6&t=146#monitor
- deephomage
- Posts: 33
- Joined: Fri Jul 12, 2019 6:09 pm
- Has thanked: 2 times
- Been thanked: 8 times
Re: The converted images are kind of bad
Have you followed the best practices? viewtopic.php?f=6&t=74. Check the alignments and make sure you're using a variety of expressions, poses and lighting settings in both face sets.
DLight model still blurry after 400k iterations
Hi,
The issue is pretty much as the title says. The picture is still completely blurry. I ran the model for a week and we're at 395k iterations, which seems like enough. The mask type I'm using is VGG-obstructed.
I have about 1200 pictures from videos A and B. Is it just a lack of data? (Though I thought the blurriness would go away after so many iterations...)
Any suggestions?
Re: DLight model still blurry after 400k iterations
Generally, "still blurry" comes down to data issues or a need to train for longer.
If the image is a close-up, then there is likely to always be some blurring due to upscaling.
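To put the upscaling point in rough numbers (the figures below are hypothetical, not taken from this thread or from Faceswap itself): a model decodes faces at a fixed resolution, and that output has to be scaled up to cover the face region in the frame. The larger that region, the more the swap is stretched and the softer it looks.

```python
# Rough illustration of why close-ups look soft after conversion.
# All numbers below are hypothetical examples.

def upscale_factor(model_output_px: int, face_region_px: int) -> float:
    """Linear upscale factor from the model's output size to the
    on-frame face size at paste-back time."""
    return face_region_px / model_output_px

# A 128px model output pasted onto a 600px close-up face must be
# stretched almost 5x, so some softening is unavoidable:
print(round(upscale_factor(128, 600), 2))  # 4.69

# The same model on a 150px face barely upscales at all:
print(round(upscale_factor(128, 150), 2))  # 1.17
```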
Re: DLight model still blurry after 400k iterations
Hi,
Like you, I trained still image sets with the Dhl128 model. At 250K I made the first conversion. It was... well, not good!
I then did the following:
I switched from training A -> B to training B -> A. Focusing on the two different sides has vastly improved the conversions, which were done at 350K and 450K iterations.
Matt (@torzdf) may agree here that if you pay attention to the loss numbers (for example, 0.11000 drops to 0.09000 and then goes back up to 0.15000), it tells you that the model is still very much engaged in the training process and still "learning".
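A quick way to tell whether loss numbers like those are just noise or a genuine downward trend is to smooth them over a window. A minimal sketch with made-up loss values (not a Faceswap feature, just a generic trick):

```python
# Minimal sketch: smooth a noisy loss log with a trailing moving
# average to see whether the model is still trending downward.
# The loss values below are made up for illustration.

def moving_average(values, window):
    """Trailing moving average over a list of loss values."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

losses = [0.15, 0.11, 0.14, 0.09, 0.12, 0.10, 0.11, 0.08, 0.10, 0.07]
smoothed = moving_average(losses, window=4)

# Raw values bounce around, but the smoothed tail sits lower than the
# head, so the model is still learning:
print(smoothed[0], smoothed[-1])  # 0.15 0.09
```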
So my suggestion is: when you have trained A -> B for 400K iterations, switch to B -> A. It also helps strengthen the foundation of the model, so the results become less corrupt in the final images.
It also helps to tweak the conversion settings (using gaussian, normalized, etc.) to find the optimal result. But using still images is truly a long process and takes A LOT more training than doing movie sequences.
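On the conversion-settings point: Gaussian blending works by feathering the mask edge, so the swapped face fades into the frame instead of stopping at a hard seam. A toy 1-D sketch of the idea (not Faceswap's actual implementation):

```python
# Toy sketch of Gaussian-style mask feathering (not Faceswap's code).
# A hard 0/1 mask leaves a visible seam; blurring it blends the edge.
import math

def gaussian_kernel(radius: int, sigma: float):
    """Normalized 1-D Gaussian weights from -radius to +radius."""
    ks = [math.exp(-(x * x) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
    total = sum(ks)
    return [k / total for k in ks]

def blur_1d(mask, radius=2, sigma=1.0):
    """Blur a 1-D mask, replicating values at the borders."""
    kernel = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(mask)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(mask) - 1)
            acc += w * mask[idx]
        out.append(acc)
    return out

hard = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]  # hard seam at the face boundary
soft = blur_1d(hard)                   # feathered edge ramps from 0 to 1
print([round(v, 2) for v in soft])
```

A wider blur hides the seam better but also washes out more of the swapped detail near the edge, which is why these settings are worth tuning per clip.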
In my (limited) experience, that is.
Keep at it. I will share my results when I reach the 1,000,000 mark.
/sergor
Re: DLight model still blurry after 400k iterations
sergor wrote: ↑ Tue Sep 29, 2020 11:25 am
I then did the following:
I switched from training A -> B to training B -> A. Focusing on the two different sides has vastly improved the conversions, which were done at 350K and 450K iterations.
So my suggestion is: when you have trained A -> B for 400K iterations, switch to B -> A. It also helps strengthen the foundation of the model, so the results become less corrupt in the final images.
/sergor
Is there a trick to doing the A>B to B>A swap? Is there a checkbox somewhere, or do you change/swap the directories and alignment files in the Training tab?
- bryanlyon
- Site Admin
- Posts: 793
- Joined: Fri Jul 12, 2019 12:49 am
- Location: San Francisco
- Has thanked: 4 times
- Been thanked: 218 times
- Contact:
Re: DLight model still blurry after 400k iterations
Do not do that. It is the exact opposite of what you should do for a quality swap. (Keep A and B separate)
Re: DLight model still blurry after 400k iterations
My experience with DLight gives good A quality, and totally washed out B quality. Dunno if that helps you. 192 SAE and Villain give me the best results.
- bryanlyon
Re: DLight model still blurry after 400k iterations
DLight is an "unbalanced" model. It's meant specifically for B's face swapped onto A. It will only work well in that direction.