question: I tried to make a custom LowMem plugin / Model.py

Want to understand the training process better? Got tips for which model to use and when? This is the place for you


Forum rules

Read the FAQs and search the forum before posting a new topic.

This forum is for discussing tips and understanding the process involved with Training a Faceswap model.

If you have found a bug or are having issues with the Training process not working, then you should post in the Training Support forum.

Please mark any answers that fixed your problems so others can find the solutions.

Judasan
Posts: 2
Joined: Sun Mar 01, 2020 7:25 am

question: I tried to make a custom LowMem plugin / Model.py

Post by Judasan »

I tried to make a custom LowMem plugin by changing the contents of plugins/Model_Original/Model.py.
I did this because my video card, a GTX 1650 with only 4 GB of memory, crashed when training the model.
All I changed is the value 1024 to 768, in the two places marked below. I want to know: is that okay?

The original content is:

Model.py

# Based on the original https://www.reddit.com/r/deepfakes/ code sample + contribs

from keras.models import Model as KerasModel
from keras.layers import Input, Dense, Flatten, Reshape
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import Conv2D
from keras.optimizers import Adam

from .AutoEncoder import AutoEncoder
from lib.PixelShuffler import PixelShuffler

from keras.utils import multi_gpu_model

IMAGE_SHAPE = (64, 64, 3)
ENCODER_DIM = 1024  # <- changed 1024 to 768 (first marked place)
class Model(AutoEncoder):
    def initModel(self):
        optimizer = Adam(lr=5e-5, beta_1=0.5, beta_2=0.999)
        x = Input(shape=IMAGE_SHAPE)


        self.autoencoder_A = KerasModel(x, self.decoder_A(self.encoder(x)))
        self.autoencoder_B = KerasModel(x, self.decoder_B(self.encoder(x)))

        if self.gpus > 1:
            self.autoencoder_A = multi_gpu_model(self.autoencoder_A, self.gpus)
            self.autoencoder_B = multi_gpu_model(self.autoencoder_B, self.gpus)

        self.autoencoder_A.compile(optimizer=optimizer, loss='mean_absolute_error')
        self.autoencoder_B.compile(optimizer=optimizer, loss='mean_absolute_error')

    def converter(self, swap):
        autoencoder = self.autoencoder_B if not swap else self.autoencoder_A
        return lambda img: autoencoder.predict(img)

    def conv(self, filters):
        def block(x):
            x = Conv2D(filters, kernel_size=5, strides=2, padding='same')(x)
            x = LeakyReLU(0.1)(x)
            return x
        return block

    def upscale(self, filters):
        def block(x):
            x = Conv2D(filters * 4, kernel_size=3, padding='same')(x)
            x = LeakyReLU(0.1)(x)
            x = PixelShuffler()(x)
            return x
        return block

    def Encoder(self):
        input_ = Input(shape=IMAGE_SHAPE)
        x = input_
        x = self.conv(128)(x)
        x = self.conv(256)(x)
        x = self.conv(512)(x)
        x = self.conv(1024)(x)  # <- changed 1024 to 768 (second marked place)
        x = Dense(ENCODER_DIM)(Flatten()(x))
        x = Dense(4 * 4 * 1024)(x)
        x = Reshape((4, 4, 1024))(x)
        x = self.upscale(512)(x)
        return KerasModel(input_, x)

    def Decoder(self):
        input_ = Input(shape=(8, 8, 512))
        x = input_
        x = self.upscale(256)(x)
        x = self.upscale(128)(x)
        x = self.upscale(64)(x)
        x = Conv2D(3, kernel_size=5, padding='same', activation='sigmoid')(x)
        return KerasModel(input_, x)

I did it because I want to get as much quality as I can with my GPU.

Update: training with a batch size of 6 works fine. I will post below how the video quality turned out.
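As a rough sanity check of the change, here is a back-of-the-envelope calculation (my own sketch, not part of Faceswap) of how the two 1024 -> 768 edits shrink the encoder's Dense layers, which dominate this model's parameter count. The numbers follow directly from the Encoder code above: four stride-2 convs reduce the 64x64 input to a 4x4 feature map.

```python
def encoder_dense_params(last_conv_filters, encoder_dim, latent=4 * 4 * 1024):
    """Parameter count of the two Dense layers in the Encoder above.

    After four stride-2 convs, a 64x64 input becomes a 4x4 map with
    `last_conv_filters` channels; it is flattened into Dense(encoder_dim),
    then expanded back by Dense(4 * 4 * 1024). Counts include biases.
    """
    flat = 4 * 4 * last_conv_filters
    dense_1 = flat * encoder_dim + encoder_dim   # Dense(ENCODER_DIM)
    dense_2 = encoder_dim * latent + latent      # Dense(4 * 4 * 1024)
    return dense_1 + dense_2

original = encoder_dense_params(1024, 1024)  # stock values
modified = encoder_dense_params(768, 768)    # the two 1024 -> 768 edits

print(original)                 # 33571840
print(modified)                 # 22037248
print(1 - modified / original)  # ~0.34, i.e. about a third fewer weights
```

So the edit really does cut the model's heaviest weights by about a third (and the Adam optimizer state that goes with them), which is why it fits in less VRAM; whether 768 dims still give acceptable quality is a separate question the calculation cannot answer.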

deephomage
Posts: 33
Joined: Fri Jul 12, 2019 6:09 pm
Answers: 1
Has thanked: 2 times
Been thanked: 8 times

Re: question: I tried to make a custom LowMem plugin / Model.py

Post by deephomage »

Read the training guide: viewtopic.php?f=6&t=146. Editing the model files or reducing the autoencoder dims is not recommended. The Original model already has a lowmem option for VRAM-limited GPUs; use that instead. Also try reducing your batch size and closing all other programs while training.
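To see why lowering the batch size is usually the first lever to pull, here is a rough, illustrative estimate (my own sketch, not Faceswap code; the helper name `activation_mb` is made up) of the encoder's forward-pass activation memory. It ignores weights, gradients, and framework overhead, so real usage is several times higher, but it shows the linear scaling with batch size.

```python
# Forward-pass activation sizes for one 64x64x3 image through the
# encoder posted above (four stride-2 convs).
ACTIVATIONS = [
    32 * 32 * 128,  # after conv(128)
    16 * 16 * 256,  # after conv(256)
    8 * 8 * 512,    # after conv(512)
    4 * 4 * 1024,   # after conv(1024)
]

floats_per_image = sum(ACTIVATIONS)  # 245760 float32 values per image

def activation_mb(batch_size):
    # 4 bytes per float32, converted to MiB
    return batch_size * floats_per_image * 4 / 2**20

print(activation_mb(64))  # 60.0  -> batch 64: ~60 MB of conv activations
print(activation_mb(6))   # 5.625 -> batch 6:  ~5.6 MB
```

Every tensor in the graph scales the same way, so dropping the batch size shrinks the working set proportionally without touching the model definition at all, which is why it is safer than editing Model.py.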
