Random questions...

Grassone
Posts: 45
Joined: Sun Apr 19, 2020 7:32 pm
Has thanked: 6 times
Been thanked: 2 times

Random questions...

Post by Grassone »

... while I am building my new PC...

1) What are the consequences of using a small batch size (i.e. less than 6) compared to a large one (e.g. 64)? Does this affect the convergence process / learning time? I have understood that working with small batches usually leads to better quality... but I suspect that there is more to it than that.

2) Is it possible that using "more detailed" trainers (for example, Dlight with best features and good quality) can lead to a "standoff" in learning (i.e. the loss does not go below 0.030), while a simple DFL-H128 reaches 0.016 without issues on the same training set? And yes... I have tried swapping A and B to see if this was due to unbalanced decoders.

3) Is the "face loss" a "universal" score ? ( is a score of 0.010 for Villain as good as 0.010 for Realface ?)

THNX!


bryanlyon
Site Admin
Posts: 495
Joined: Fri Jul 12, 2019 12:49 am
Answers: 41
Location: San Francisco
Has thanked: 3 times
Been thanked: 120 times

Re: Random questions...

Post by bryanlyon »

Going to go backward through your questions.

3: No, loss is NOT the same for each model (or even for each loss function type). It's just a score used to help train the model; it has no absolute meaning across models.
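
To make that concrete, here is a minimal sketch (plain NumPy, not Faceswap code) where the exact same reconstruction is scored by two common loss functions. The numbers land on completely different scales, so a 0.010 under one loss tells you nothing about a 0.010 under another. The `target` and `pred` arrays are synthetic stand-ins for a training face and a model output.

```python
# Same reconstruction, two loss functions, very different numbers.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))                    # stand-in training face
pred = target + rng.normal(0, 0.05, target.shape)   # stand-in model output

mae = np.mean(np.abs(pred - target))    # L1-style loss
mse = np.mean((pred - target) ** 2)     # L2-style loss

print(f"MAE: {mae:.4f}")    # roughly 0.04
print(f"MSE: {mse:.4f}")    # roughly 0.0025, an order of magnitude smaller
```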

2: Don't look at the loss numbers while training; on their own they teach you nothing. Instead, focus on the previews.
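
If you want a rough idea of what "watch the previews" means mechanically, here is a hypothetical sketch in plain Keras (not Faceswap's actual preview code) of a callback that writes a side-by-side preview image every few hundred batches. The `model` and `sample` names are assumptions for illustration; `sample` is a small batch of faces in [0, 1] fed through an autoencoder-style model.

```python
# Hypothetical stand-in for a training preview window: save an
# (originals on top | reconstructions below) grid every N batches.
import numpy as np
import tensorflow as tf

class PreviewCallback(tf.keras.callbacks.Callback):
    def __init__(self, sample, every=500, out="preview.png"):
        super().__init__()
        self.sample, self.every, self.out = sample, every, out

    def on_train_batch_end(self, batch, logs=None):
        if batch % self.every != 0:
            return
        recon = self.model.predict(self.sample, verbose=0)
        # Tile each batch into a row, stack originals above reconstructions.
        grid = np.concatenate([np.concatenate(list(x), axis=1)
                               for x in (self.sample, recon)], axis=0)
        tf.keras.utils.save_img(self.out, grid)
```

Judging those saved images by eye over time tells you far more about convergence than the loss number ever will.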

1: Small batches MIGHT pick up rare details better, while large ones MIGHT generalize better. Do not believe anyone who says that "small batches usually lead to better quality"; they're missing about 99% of the point with that blanket statement.
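
Here is a toy NumPy sketch of why both MIGHTs are real. Per-batch gradient estimates from small batches are noisy, so they occasionally land almost entirely on rare examples and feel them strongly; large batches average that noise away. The values are synthetic stand-ins for per-example gradients, not real training data.

```python
# Toy illustration: small batches = noisy gradient estimates that can be
# dominated by rare examples; large batches = smooth, diluted estimates.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 10_000)    # stand-in per-example gradients
data[:100] += 8.0                      # a few "rare detail" examples

for bs in (4, 64):
    # Mean gradient over many random batches of this size.
    grads = [data[rng.integers(0, data.size, bs)].mean() for _ in range(1000)]
    print(f"batch {bs:>2}: mean={np.mean(grads):+.3f}  std={np.std(grads):.3f}")
# The small-batch estimates have a much larger std: some batches are pulled
# hard by the rare examples, which is exactly the trade-off described above.
```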

