
Where are the biases?

Posted: Wed Nov 16, 2022 3:31 pm
by Scrapemist

So I'm coming from DFL and now dipping my toes into FS, in particular the Phaze-A model.
To wrap my head around the meaning of all the settings, I am studying how NNs actually work.
This YouTube video is helping me a lot to get a basic understanding.
It talks about the importance of weights and biases.
I see weights being represented in the settings of FS and talked about here on the forum.
But how do the biases come into play?
Which settings address them?


Re: Where are the biases?

Posted: Thu Nov 17, 2022 9:49 am
by torzdf

Biases are not directly exposed in the settings. They are implemented at a layer-by-layer level, which is more granular than faceswap exposes. The actual biases (if used) are stored in the model .h5 file.

Faceswap does not generally use biases, as they are not necessary for our purposes (with the exception of the dlight and realface models... I would need to check with the plugin author why he chose to include them).
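If you're curious, you can look inside the saved file yourself. This is just a rough, untested sketch using h5py, and "faceswap_model.h5" is a placeholder for whatever your model file is actually called:

```python
import h5py

# Walk the saved model file and print every dataset whose name contains "bias".
# "faceswap_model.h5" is a placeholder - point it at your own model file.
with h5py.File("faceswap_model.h5", "r") as f:
    def show_biases(name, obj):
        if isinstance(obj, h5py.Dataset) and "bias" in name:
            print(name, obj.shape)
    f.visititems(show_biases)
```

If nothing prints, the layers in that model were built without bias terms.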


Re: Where are the biases?

Posted: Sat Nov 19, 2022 9:52 am
by Scrapemist

I see. Thanks for clearing that up.

Then, just to see if I understand the NN correctly: is the first layer of nodes in the encoder the number of pixels in the image times 3 (one for each RGB channel), so that each node contains the R, G or B value of a single pixel?


Re: Where are the biases?

Posted: Sat Nov 19, 2022 10:48 am
by bryanlyon

Biases exist intrinsically in dense (Fully connected) layers but are generally not included in convolutional layers because convolutions work in a different way and don't really need them. HOWEVER, "weights" are generally taken to mean all variables used in the model. For example, convolutions include a kernel arrangement. That is saved in the "weights" file, but it's not actually a weight. Same with biases.

Basically, saying "weights, biases, kernel initializers, attention values, and whatever else a given layer needs to operate" is just way too long (unless you're writing a full paper), so it can all (somewhat inaccurately) just be called "weights".

All our models use SOME biases, but some models use them in places where they would typically be considered "optional".
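To make that concrete, here's a small standalone sketch (plain Keras, not a faceswap model) showing that a dense layer carries both a kernel and a bias, while a convolution built with use_bias=False carries only its kernel, even though everything ends up in the same "weights" file:

```python
from tensorflow import keras

# A fully connected layer: building it creates a kernel AND a bias.
dense = keras.layers.Dense(8)
dense.build((None, 16))
print([w.name for w in dense.weights])   # kernel and bias

# A convolution with use_bias=False: only the kernel is created,
# yet it is still saved alongside everything else as "weights".
conv = keras.layers.Conv2D(8, kernel_size=3, use_bias=False)
conv.build((None, 64, 64, 3))
print([w.name for w in conv.weights])    # kernel only
```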