The faces of the figures in the second and third columns of the preview image are white frames
Posted: Fri Jun 12, 2020 3:47 am
by yuanzhao
Hello, I have trained on a dataset extracted from 2 videos.
The faceA folder contains 251 images and the faceB folder contains 1095 images.
I used the command line for training, and the model is Original.
I've trained for 65,000 iterations and the second and third columns of the preview image still show white boxes over the faces of the figures. Loss A has stayed around 0.41 and loss B around 0.45.
Could it be that I have trained for too few iterations?
Could it be the number of images in my dataset?
Thank you very much.
Re: The faces of the figures in the second and third columns of the preview image are white frames
Posted: Fri Jun 12, 2020 8:26 am
by torzdf
251 is too few images.
However, I'm not sure what you mean by "white boxes". If you could post an example, it would be useful.
It sounds like model corruption.
Also, if you could post the output of system info, that would be useful:
GUI Users: Go to Help -> Output system information
CLI Users: From inside your virtual environment, inside your faceswap folder, run:
python -c "from lib.sysinfo import sysinfo ; print(sysinfo)"
Re: The faces of the figures in the second and third columns of the preview image are white frames
Posted: Fri Jun 12, 2020 8:51 am
by yuanzhao

Code: Select all
============ System Information ============
encoding: ANSI_X3.4-1968
git_branch: # master
git_commits: 127d3db Dependencies update (#1028)
gpu_cuda: 10.2
gpu_cudnn: 7.5.0
gpu_devices: GPU_0: Tesla M40 24GB
gpu_devices_active: GPU_0
gpu_driver: 410.48
gpu_vram: GPU_0: 22945MB
os_machine: x86_64
os_platform: Linux-3.10.0-327.el7.x86_64-x86_64-with-centos-7.5.1804-Core
os_release: 3.10.0-327.el7.x86_64
py_command: -c
py_conda_version: N/A
py_implementation: CPython
py_version: 3.6.8
py_virtual_env: True
sys_cores: 48
sys_processor: x86_64
sys_ram: Total: 257663MB, Available: 229317MB, Used: 19451MB, Free: 142310MB
=============== Pip Packages ===============
absl-py==0.8.0
astor==0.8.0
cycler==0.10.0
decorator==4.4.2
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.2.2
google-pasta==0.1.7
grpcio==1.23.0
h5py==2.10.0
imageio==2.8.0
imageio-ffmpeg==0.4.2
joblib==0.15.1
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib==3.2.1
mock==3.0.5
networkx==2.4
numpy==1.17.1
opencv-python==4.2.0.34
pathlib==1.0.1
Pillow==7.1.2
protobuf==3.9.1
psutil==5.7.0
pynvml==8.0.4
pyparsing==2.4.7
python-dateutil==2.8.1
PyWavelets==1.1.1
PyYAML==5.3.1
scikit-image==0.17.2
scikit-learn==0.23.1
scipy==1.4.1
six==1.12.0
tensorboard==1.13.1
tensorflow-estimator==1.13.0
tensorflow-gpu==1.13.1
termcolor==1.1.0
threadpoolctl==2.1.0
tifffile==2020.6.3
toposort==1.5
tqdm==4.46.1
Werkzeug==1.0.1
wrapt==1.11.2
================= Configs ==================
--------- train.ini ---------
[global]
coverage: 68.75
mask_type: none
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
icnr_init: False
conv_aware_init: False
reflect_padding: False
penalized_mask_loss: True
loss_function: mae
learning_rate: 5e-05
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
[model.dfl_h128]
lowmem: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.original]
lowmem: False
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.villain]
lowmem: False
[model.dlight]
features: best
details: good
output_size: 256
--------- .faceswap ---------
backend: nvidia
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.s3fd]
confidence: 70
batch-size: 4
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
[mask.unet_dfl]
batch-size: 8
--------- convert.ini ---------
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
Re: The faces of the figures in the second and third columns of the preview image are white frames
Posted: Fri Jun 12, 2020 9:13 am
by torzdf
Yeah, that's model corruption. If it has always been like that, you should start again.
Maybe do some experimentation with different models so you can isolate whether there is a problem with your setup or with the model you are using.
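For reference, switching trainers is just a matter of the -t flag on the train command. A minimal sketch, assuming the standard faceswap CLI; the folder names here (faces_a, faces_b, model_lightweight) are placeholders:

Code: Select all
# Same extracted faces, but a different trainer (e.g. lightweight) in a fresh model folder,
# to check whether the white boxes are specific to the Original model or to the setup.
python faceswap.py train -A faces_a -B faces_b -m model_lightweight -t lightweight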
Re: The faces of the figures in the second and third columns of the preview image are white frames
Posted: Fri Jun 12, 2020 9:40 am
by yuanzhao
Okay, thank you!
I'll try a different model first, but what should I do if this still happens?
Would replacing the hardware be the only option?
Re: The faces of the figures in the second and third columns of the preview image are white frames
Posted: Wed Jun 17, 2020 3:01 am
by Tekniklee
You should see changes even in the first preview (100 iterations). It won't lookd recognizable as a face at that point, but you should see rapid improvement over the first few thousand iterations. In addition, the error graph should fall very rapidly at first. In addition to possible corruption, I would also try resetting your models to their defaults. If you have messed with certain parameters (such as training rate), things will go bonkers outside of very tiny ranges.