Training Experiment

Want to understand the training process better? Got tips for which model to use and when? This is the place for you.


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for discussing tips and understanding the process involved with Training a Faceswap model.

If you have found a bug or are having issues with the Training process not working, then you should post in the Training Support forum.

Please mark any answers that fixed your problems so others can find the solutions.

MaxHunter
Posts: 194
Joined: Thu May 26, 2022 6:02 am
Has thanked: 177 times
Been thanked: 13 times

Training Experiment

Post by MaxHunter »

Hopefully no one will mind this post, as it is more a general place to put my notes on an experiment and some of the surprising findings.

The experiment is still in the early stages and my next model is going to explore more.

The idea is to build a general model using loss functions that are specific to stages: the first stage concentrates on building a solid structure of the face, and the second stage uses different loss functions to concentrate on the details.

My current model is a modified 512 StoJo model (I call it a Max 512). Typically I use MS-SSIM, LogCosh, FFL, and LPIPS-VGG at 20-25%. This time, however, from 0-550k iterations I used only MS-SSIM and LogCosh, then from 550k-725k I added GMSD and LapLoss in the 3rd and 4th slots.

What I've noticed since adding those slots is color detail coming through at the 100% preview that I normally don't see: skin blemishes in my B model, very distinct eye glare, redness in the whites of the eyes, distinct hair color in whiskers and eyebrows, and (something I've never seen in my models) the uvula, the dangling thing at the back of your mouth, came through on a screaming face. Normally these details don't come in so distinctly until after 1 million iterations (I often use the same faces, so I have a pretty good idea of when things change), but here they are coming in around 700k. The structure is still blurry and not crisp, but it's interesting that it's picking up these details. (It's important to note that these details are in the 100% preview pics, where things are typically very blurry.)
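For my own notes, the staged idea sketches out roughly like this (the function name and the 0.25 weights on the added slots are illustrative placeholders, not tested values; in practice Faceswap sets losses through the train settings, not code):

```python
def loss_schedule(iteration):
    """Return (loss_name, weight) pairs for the current training stage."""
    # Stage 1 (0-550k): structure-focused losses only
    stage_1 = [("ms_ssim", 1.0), ("logcosh", 1.0)]
    if iteration < 550_000:
        return stage_1
    # Stage 2 (550k+): add detail-oriented losses in slots 3 and 4
    # (the 0.25 weights here are placeholders, not values from the experiment)
    return stage_1 + [("gmsd", 0.25), ("laploss", 0.25)]
```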

I'm not sure whether it's GMSD, LapLoss, or both that's doing this. If anyone has any thoughts or suggestions, please feel free to chime in.

My next stage will be exchanging the loss functions for more detail-oriented functions until 1 million iterations, then No Warp / No Flip.

Last edited by MaxHunter on Sun Nov 26, 2023 6:07 am, edited 2 times in total.
MaxHunter
Posts: 194
Joined: Thu May 26, 2022 6:02 am
Has thanked: 177 times
Been thanked: 13 times

Re: Training Experiment

Post by MaxHunter »

Note to self, from the GMSD paper:

Table IV: Running time of the competing IQA models.

| Models       | Running time (s) |
| ------------ | ---------------- |
| MAD [12]     | 2.0715           |
| IFC [22]     | 1.1811           |
| VIF [23]     | 1.1745           |
| FSIM [7]     | 0.5269           |
| IW-SSIM [16] | 0.5196           |
| MS-SSIM [17] | 0.1379           |
| GS [15]      | 0.0899           |
| GSD [5]      | 0.0481           |
| SSIM [8]     | 0.0388           |
| G-SSIM [6]   | 0.0379           |
| GMSD         | 0.0110           |
| GMSM         | 0.0079           |
| PSNR         | 0.0016           |

"Table IV shows the running time of the 13 IQA models on an image of size 512×512. All algorithms were run on a ThinkPad T420S notebook with Intel Core i7-2600M CPU@2.7GHz and 4G RAM. The software platform used to run all algorithms was MATLAB R2010a (7.10). Apart from G-SSIM and GSD, the MATLAB source codes of all the other methods were obtained from the original authors. (It should be noted that whether the code is optimized may affect the running time of an algorithm.) Clearly, PSNR is the fastest, followed by GMSM and GMSD. Specifically, it costs only 0.0110 second for GMSD to process an image of size 512×512, which is 3.5 times faster than SSIM, 47.9 times faster than FSIM, and 106.7 times faster than VIF."

GMSD is supposedly faster and more competent than both MS-SSIM and SSIM. Perhaps using GMSD in slot one is a better way to go.
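As a reference for what GMSD actually computes, here is a minimal NumPy sketch of Gradient Magnitude Similarity Deviation following the paper's formulation (Prewitt gradients, a similarity map, then its standard deviation). This is a bare illustration, not Faceswap's implementation, and it skips the paper's initial downsampling-by-2 step:

```python
import numpy as np


def _conv2d(img, kernel):
    """Zero-padded 'same' 2-D convolution; fine for small kernels."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out


def gmsd(ref, dist, c=170.0):
    """Gradient Magnitude Similarity Deviation: 0 = identical, higher = worse.

    c=170 is the paper's constant for 8-bit (0-255) intensities.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    # Prewitt operators for horizontal / vertical gradients
    hx = np.array([[1.0, 0.0, -1.0]] * 3) / 3.0
    hy = hx.T
    gm_ref = np.hypot(_conv2d(ref, hx), _conv2d(ref, hy))
    gm_dist = np.hypot(_conv2d(dist, hx), _conv2d(dist, hy))
    # Gradient magnitude similarity map, pooled by standard deviation
    gms = (2.0 * gm_ref * gm_dist + c) / (gm_ref ** 2 + gm_dist ** 2 + c)
    return float(gms.std())
```

For identical images the similarity map is 1 everywhere, so the deviation is 0; the more the local gradient structure differs, the higher the score.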

Last edited by MaxHunter on Sun Nov 26, 2023 6:40 am, edited 2 times in total.
torzdf
Posts: 2687
Joined: Fri Jul 12, 2019 12:53 am
Answers: 159
Has thanked: 135 times
Been thanked: 628 times

Re: Training Experiment

Post by torzdf »

Thanks for this. Will be interested to see how it goes.

Incidentally, you can insert a table with this notation (not in a code block):

Code: Select all

| Syntax      | Description |
| ----------- | ----------- |
| Header      | Title       |
| Paragraph   | Text        |

will give you:

| Syntax    | Description |
| --------- | ----------- |
| Header    | Title       |
| Paragraph | Text        |

My word is final
