I have trained a model with OpenFaceSwap. Is it possible to use it with/convert it with Faceswap?
OpenFaceSwap model
Re: OpenFaceSwap model
The answer to that is "possibly".....
OpenFaceSwap has been dead for a long while, but IIRC it was based on an old version of Faceswap....
We have tried to maintain backwards compatibility, so running it in Faceswap should update the model to the latest code.
That said, if you can screengrab your model folder, I should be able to tell you for sure.
My word is final
Re: OpenFaceSwap model
Model files: (screenshot of the folder listing: Name, Last modified, Type, Size, Created)
Train command used on OFS:
python\scripts\python.bat faceswap_lowmem\faceswap.py train -A "C:\Users\*\A\faces" -B "C:\Users\*\B\faces" -m "C:\Users\***\model" -t LowMem -bs 16
I trained for 2-4 hours a day for almost 2 weeks, until the loss got stuck at 0.02xx
I have NVIDIA GeForce 920M with 2GB
I tried renaming the files to lightweight_* and running the training, but it gave me this error:
Code:
07/24/2020 19:45:40 MainProcess _training_0 multithreading run DEBUG Error in thread (_training_0): Tensor("Placeholder:0", shape=(1024, 512, 3, 3), dtype=float32) must be from the same graph as Tensor("upscale_8_0_conv2d/kernel:0", shape=(3, 3, 256, 2048), dtype=float32_ref).
07/24/2020 19:45:41 MainProcess MainThread train _monitor DEBUG Thread error detected
07/24/2020 19:45:41 MainProcess MainThread train _monitor DEBUG Closed Monitor
07/24/2020 19:45:41 MainProcess MainThread train _end_thread DEBUG Ending Training thread
07/24/2020 19:45:41 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
07/24/2020 19:45:41 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
07/24/2020 19:45:41 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training_0'
07/24/2020 19:45:41 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training_0'
Traceback (most recent call last):
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 834, in load
network = load_model(self.filename, custom_objects=get_custom_objects())
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\saving.py", line 221, in _deserialize_model
model_config = f['model_config']
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\io_utils.py", line 302, in __getitem__
raise ValueError('Cannot create group in read only mode.')
ValueError: Cannot create group in read only mode.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\***\faceswap\lib\cli\launcher.py", line 155, in execute_script
process.process()
File "C:\Users\***\faceswap\scripts\train.py", line 161, in process
self._end_thread(thread, err)
File "C:\Users\***\faceswap\scripts\train.py", line 201, in _end_thread
thread.join()
File "C:\Users\***\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\***\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\***\faceswap\scripts\train.py", line 226, in _training
raise err
File "C:\Users\***\faceswap\scripts\train.py", line 214, in _training
model = self._load_model()
File "C:\Users\***\faceswap\scripts\train.py", line 255, in _load_model
predict=False)
File "C:\Users\***\faceswap\plugins\train\model\lightweight.py", line 20, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\***\faceswap\plugins\train\model\original.py", line 25, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 125, in __init__
self.build()
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 244, in build
self.load_models(swapped=False)
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 456, in load_models
is_loaded = network.load(fullpath=model_mapping[network.side][network.type])
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 837, in load
self.convert_legacy_weights()
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 863, in convert_legacy_weights
self.network.load_weights(self.filename)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 1166, in load_weights
f, self.layers, reshape=reshape)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\saving.py", line 1058, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2465, in batch_set_value
assign_op = x.assign(assign_placeholder)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\variables.py", line 2067, in assign
self._variable, value, use_locking=use_locking, name=name)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\state_ops.py", line 227, in assign
validate_shape=validate_shape)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\ops\gen_state_ops.py", line 66, in assign
use_locking=use_locking, name=name)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 367, in _apply_op_helper
g = ops._get_graph_from_inputs(_Flatten(keywords.values()))
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\ops.py", line 5979, in _get_graph_from_inputs
_assert_same_graph(original_graph_element, graph_element)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\tensorflow_core\python\framework\ops.py", line 5914, in _assert_same_graph
(item, original_item))
ValueError: Tensor("Placeholder:0", shape=(1024, 512, 3, 3), dtype=float32) must be from the same graph as Tensor("upscale_8_0_conv2d/kernel:0", shape=(3, 3, 256, 2048), dtype=float32_ref).
EDIT: Forgot to share the settings
Code:
{
"name": "lightweight",
"sessions": {
"1": {
"timestamp": 1595608900.6572073,
"no_logs": false,
"pingpong": false,
"loss_names": {
"a": [
"face_loss"
],
"b": [
"face_loss"
]
},
"batchsize": 16,
"iterations": 9,
"config": {
"learning_rate": 5e-05
}
}
},
"lowest_avg_loss": {
"a": 0.20435872860252857,
"b": 0.15313835069537163
},
"iterations": 9,
"inputs": {
"face_in:0": [
64,
64,
3
]
},
"training_size": 256,
"config": {
"coverage": 68.75,
"mask_type": null,
"mask_blur_kernel": 3,
"mask_threshold": 4,
"learn_mask": false,
"icnr_init": false,
"conv_aware_init": false,
"reflect_padding": false,
"penalized_mask_loss": true,
"loss_function": "mae",
"learning_rate": 5e-05
}
}
Re: OpenFaceSwap model
Encoder view (all the conv2d and dense groups contained data), Decoder A view, Decoder B view: (HDF5 viewer screenshots)
I tried editing the h5 file, putting everything inside a model_weights group, and it gave me this error:
Code:
07/24/2020 20:15:54 MainProcess _training_0 multithreading run DEBUG Error in thread (_training_0): "Unable to open object (object 'input_2' doesn't exist)"
07/24/2020 20:15:55 MainProcess MainThread train _monitor DEBUG Thread error detected
07/24/2020 20:15:55 MainProcess MainThread train _monitor DEBUG Closed Monitor
07/24/2020 20:15:55 MainProcess MainThread train _end_thread DEBUG Ending Training thread
07/24/2020 20:15:55 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
07/24/2020 20:15:55 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
07/24/2020 20:15:55 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training_0'
07/24/2020 20:15:55 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training_0'
Traceback (most recent call last):
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 834, in load
network = load_model(self.filename, custom_objects=get_custom_objects())
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\saving.py", line 221, in _deserialize_model
model_config = f['model_config']
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\utils\io_utils.py", line 302, in __getitem__
raise ValueError('Cannot create group in read only mode.')
ValueError: Cannot create group in read only mode.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\***\faceswap\lib\cli\launcher.py", line 155, in execute_script
process.process()
File "C:\Users\***\faceswap\scripts\train.py", line 161, in process
self._end_thread(thread, err)
File "C:\Users\***\faceswap\scripts\train.py", line 201, in _end_thread
thread.join()
File "C:\Users\***\faceswap\lib\multithreading.py", line 121, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\***\faceswap\lib\multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\***\faceswap\scripts\train.py", line 226, in _training
raise err
File "C:\Users\***\faceswap\scripts\train.py", line 214, in _training
model = self._load_model()
File "C:\Users\***\faceswap\scripts\train.py", line 255, in _load_model
predict=False)
File "C:\Users\***\faceswap\plugins\train\model\lightweight.py", line 20, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\***\faceswap\plugins\train\model\original.py", line 25, in __init__
super().__init__(*args, **kwargs)
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 125, in __init__
self.build()
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 244, in build
self.load_models(swapped=False)
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 456, in load_models
is_loaded = network.load(fullpath=model_mapping[network.side][network.type])
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 837, in load
self.convert_legacy_weights()
File "C:\Users\***\faceswap\plugins\train\model\_base.py", line 863, in convert_legacy_weights
self.network.load_weights(self.filename)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\network.py", line 1166, in load_weights
f, self.layers, reshape=reshape)
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\keras\engine\saving.py", line 1021, in load_weights_from_hdf5_group
g = f[name]
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "C:\Users\***\MiniConda3\envs\faceswap\lib\site-packages\h5py\_hl\group.py", line 264, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'input_2' doesn't exist)"
On converting I got more or less the same error.
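For reference, the group layout I was comparing in the HDF5 viewer can also be listed programmatically. A minimal sketch, assuming the h5py package (which Keras itself depends on, so it should already be in the Faceswap environment); the file path is illustrative:

```python
import h5py

def list_h5_groups(path):
    """Recursively collect every group/dataset path inside an HDF5 file."""
    names = []
    with h5py.File(path, "r") as f:
        f.visit(names.append)  # visit() walks each object and passes its path
    return names

# Example (illustrative path):
# for name in list_h5_groups("lowmem_encoder.h5"):
#     print(name)
```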
Re: OpenFaceSwap model
Ok, assuming you're using the Faceswap GUI (and you should, as it's the easiest way to use Faceswap), then these models should auto-update when you put them into Faceswap either to train or convert....
It's been a long time since we changed our structure from the format you have, but they should update. The old LowMem model is now part of the Original model, so select the "Original" model in faceswap.
Your model files should be named:
Code:
lowmem_encoder.h5
lowmem_decoder_A.h5
lowmem_decoder_B.h5
NB: Lightweight model and LowMem are not the same thing.
Also do not manually edit the weights files! If you have already edited the files and don't have a backup, you may have ruined any chance you had for updating your model.
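To underline that point, here is a minimal sketch (standard library only; the paths are illustrative) of how to snapshot a model folder before letting any conversion touch it:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_model(model_dir):
    """Copy the whole model folder to a timestamped sibling directory."""
    src = Path(model_dir)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"{src.name}-backup-{stamp}")
    shutil.copytree(src, dest)  # fails if dest already exists, which is what we want
    return dest

# backup_model(r"C:\Users\me\model")  # illustrative path
```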
Re: OpenFaceSwap model
Firstly, I want to thank you for your help; I didn't know this forum was so active.
NB: Lightweight model and LowMem are not the same thing.
Also do not manually edit the weights files! If you have already edited the files and don't have a backup, you may have ruined any chance you had for updating your model.
Thanks for clarifying, and yes, there's no way I wouldn't make a backup of a model I trained for days.
But it still gives me ValueError: Cannot create group in read only mode. and ValueError: Tensor("Placeholder:0", shape=(3, 3, 512, 1024), dtype=float32) must be from the same graph as Tensor("upscale_8_0_conv2d/kernel:0", shape=(3, 3, 512, 1024), dtype=float32_ref).
I've ticked the lowmem box for Original in the train settings, but the model files keep getting named original_*.
In the train.ini file, lowmem is set to lowmem = True.
This is the current original_state.json file:
Code:
{
"name": "original",
"sessions": {
"1": {
"timestamp": 1595697018.5932138,
"no_logs": false,
"pingpong": false,
"loss_names": {},
"batchsize": 0,
"iterations": 0,
"config": {
"learning_rate": 5e-05
}
}
},
"lowest_avg_loss": {},
"iterations": 0,
"inputs": {
"face:0": [
64,
64,
3
]
},
"training_size": 256,
"config": {
"coverage": 62.5,
"reflect_padding": false,
"mask_type": null,
"mask_blur_kernel": 3,
"mask_threshold": 4,
"learn_mask": false,
"lowmem": false,
"icnr_init": false,
"conv_aware_init": false,
"penalized_mask_loss": true,
"loss_function": "mae",
"learning_rate": 5e-05
}
}
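(For reference, the state file is plain JSON, so you can sanity-check what the model actually recorded, e.g. whether lowmem stuck, without opening the GUI. A minimal sketch, standard library only:)

```python
import json

def read_state_config(path):
    """Return the model-level config dict from a Faceswap *_state.json file."""
    with open(path, "r", encoding="utf-8") as f:
        state = json.load(f)
    return state.get("config", {})

# cfg = read_state_config("original_state.json")
# print(cfg.get("lowmem"))  # shows what the model was actually built with
```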
EDIT: (screenshots of the training settings and the Original lowmem checkbox)
Re: OpenFaceSwap model
Ok, I've looked into this.
Legacy support appears to no longer work. This isn't a huge issue from my perspective... The fact that this has only just been discovered means that people generally don't use it anyway.
I rolled back to a known working version, and whilst I was able to convert the decoders, there appears to be an issue with your encoder. Ultimately, it is the wrong size (it should be about double the size of the file you provided), so when it tries to convert, it throws up errors about layer count mismatch.
At this point, all you can really do is make sure you provided me the right files. If you did, then I can only assume that OpenFaceSwap made some changes to this model which are not compatible with Faceswap.
Either way, there have been many developments in Faceswap since the version you are using, so you're probably best off starting again (not the answer you wanted to hear, I'm sure, but probably the best approach).
Re: OpenFaceSwap model
The reason I'm moving from OFS to Faceswap is that, on conversion, OFS gives me these warnings:
Code:
Failed to convert image: C:\Users\***\Desktop\Nuova cartella (2)\faceswap\m2\p2,1\images\imageA*.jpg.
Reason: __init__() got an unexpected keyword argument 'r'
and it outputs only the images in which no face was detected.
Does it matter that, in Faceswap, the lowmem value in the original_state.json is set to false, even though I turned it on?
Also, could I get this old working version to try myself?
Re: OpenFaceSwap model
You can check out the pre-refactor-snapshot branch of Faceswap, which should be compatible with your model (if it hasn't been changed by OpenFaceSwap), by going into your faceswap folder and running:
Code:
git checkout pre-refactor-snapshot
This is very old though, and nowhere near as user-friendly, so you may be best off creating a new environment.
You can get back to the master branch by entering:
Code:
git checkout master
The first compatible commit that auto-updates legacy files is:
Code:
git checkout 9114262234e1bf5ca52ed200e6a757f9e6456143
Same caveats as above apply.
Re: OpenFaceSwap model
Sorry to bother you, but I can't open Faceswap on pre-refactor-snapshot. It gives me:
ModuleNotFoundError: No module named 'dlib'
And when trying git checkout 9114262234e1bf5ca52ed200e6a757f9e6456143,
it gives me fatal: reference is not a tree: 9114262234e1bf5ca52ed200e6a757f9e6456143.
And sorry for asking too much, but do you know if it's possible to get a version of OFS that you're sure is compatible? Thanks again.
bryanlyon (Site Admin):
Re: OpenFaceSwap model
Pre-refactor is quite old and predates our uniform installation. You're not going to be able to simply "pip" install dlib; it requires you to compile it. When you checked out the pre-refactor branch, it included an install.md file. You'll need to open that (preferably in a tool that renders markdown) and follow the instructions there to get it running.
Re: OpenFaceSwap model
You can get dlib with
Code:
conda install -c conda-forge dlib
However, from this point, I'm afraid you're on your own. It's just not a great use of time to provide support for a seriously outdated version of our software.
As for OpenFaceSwap. I don't know, I never used it.
Re: OpenFaceSwap model
Auto update commit:
https://github.com/deepfakes/faceswap/t ... ef24604c81