Frequently Asked Questions


Where can I download the software?
If you are on Windows you can use the Windows Installer, which will set everything up for you. Similarly on Linux you can use the Linux Installer.

For other platforms check the installation instructions which have fairly detailed instructions for installing in most circumstances.
Is there a usage guide?
Yes! At least for the main components of Faceswap. Unfortunately the code changes often and documentation takes time. However, a knowledge base is building up in the forums, and the main guides are updated regularly. You can also check the stickies in the support forums for more specific guides and tips.

Also check the project's GitHub page. The search functions in this forum and on our Discord server are also useful.

We welcome contributions to the repo/forum to expand documentation.
But seriously. I've installed and I don't know what I'm doing!
First and foremost: Read the guides! Once you've done that, then read them again, then go over them once more for good measure.

Machine Learning techniques are complex, and whilst we are working hard to demystify the process as much as possible, there will still be some work required on your part to get the basics down.

For a very high level overview of the process, see our github repo.

Also read through the remainder of the FAQs on this page as it will help you familiarize yourself with some of the challenges you may face.

At a high level, the process is fairly simple:
  1. Run extract on source face (A)
  2. Run extract on target face (B)
  3. Train on generated faces from A + B
  4. Run convert on your source frames (A), specifying the model you used
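The four steps above map onto the command line roughly as follows. This is a sketch only: the folder names are placeholders, and you should confirm the exact flags for your install with `python faceswap.py extract -h` (and likewise for `train` and `convert`).

```python
# Placeholder folder names; confirm the exact flags with
# `python faceswap.py <command> -h` on your own install.
steps = [
    # 1. Extract faces from the A (source) frames
    ["python", "faceswap.py", "extract", "-i", "frames_a", "-o", "faces_a"],
    # 2. Extract faces from the B (target) frames
    ["python", "faceswap.py", "extract", "-i", "frames_b", "-o", "faces_b"],
    # 3. Train a model on both sets of extracted faces
    ["python", "faceswap.py", "train", "-A", "faces_a", "-B", "faces_b",
     "-m", "model_dir"],
    # 4. Convert the A frames using the trained model
    ["python", "faceswap.py", "convert", "-i", "frames_a", "-o", "converted",
     "-m", "model_dir"],
]

for cmd in steps:
    # Print the commands; import subprocess and use subprocess.run(cmd)
    # to actually execute them.
    print(" ".join(cmd))
```

Each step reads the output of the previous one, which is why the order matters: both extracts must finish before training, and training must produce a model before convert can use it.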
Why does Windows say my GPU is hardly being used?
By default, Windows' GPU usage reporting is highly inaccurate, as it is tailored towards gaming rather than Machine Learning. If you want accurate usage statistics, use nvidia-smi or change the graph in Task Manager to "CUDA" usage. In addition, your CPU is used to prepare the data for the GPU. This includes getting the images into a format that the GPU can work with easily and quickly, as well as any augmentation that you have selected. Depending on the options and the details of your system, it is quite possible that your CPU is busier than your GPU. This is normal.
How do I update Faceswap to the latest version?
To update faceswap to the latest version, look under Tools button in the GUI, and select "Check for updates." The latest commit and any new dependencies will be installed. You may be prompted to re-start faceswap after the update.
Can faceswap work on Mobile phones/Game consoles?
Unfortunately, faceswap uses the speed that only a GPU can sustain to train the AI used for swapping. Portable and low end devices cannot train or use faceswap. They're simply not powerful or fast enough.
Can I use Faceswap to swap a single photograph?
This is a loaded question. The answer is 'yes', but with severe caveats. Machine Learning requires many thousands of different images to train a model. This is true regardless of how many images you finally intend to convert (a video is just a series of still images after all). The process to convert a single photograph would be the same as to convert a video. You would still need to collect thousands of faces to train the model. It is not as simple as just feeding in a single image.
Why does faceswap make my computer shut down?
Faceswap uses your GPU and CPU quite heavily. When Faceswap starts up, it causes a spike in power usage that, on some power supplies, leads to a voltage drop and a shutdown. This is NOT limited to insufficient wattage; it can happen even with power supplies that should provide enough wattage, or that run fine under other loads.
What graphics card do I need?
Faceswap runs on both AMD and Nvidia graphics cards, but you will get a far better experience with Nvidia GPUs due to their proprietary CUDA library for machine learning. You may be able to run the smallest model on a 2GB GPU under Linux, but you really want a graphics card with 4GB or more. 8GB is the minimum to run the larger models.


I installed with the installer, but Faceswap won't launch, or I'm having GPU problems
Faceswap has been installed successfully thousands of times across a wide array of OSes, so the problem is almost certainly with your system. The following should fix most issues:

NB: It is imperative that you follow every step listed here. Skipping steps is unlikely to resolve your issues.
  • Uninstall Conda: Add/Remove programs > python (Check for both MiniConda and AnaConda)
  • Uninstall any and all versions of Cuda that are installed on your system: Add/Remove programs > anything with Nvidia Cuda in the name
  • Uninstall any other Python installs you have on your system.
  • Go to your C:\users\[your username] folder and delete any files/folders with "conda" in the name
  • Go to your C:\users\[your username]\AppData\Roaming folder and delete any folders with "Python" in the name
  • Delete your Faceswap folder
  • Make sure your Nvidia drivers are the most up-to-date version
  • Reboot
  • Re-run the faceswap installer
How do I uninstall Faceswap?
Faceswap is deliberately installed "standalone" so that it doesn't interfere with the rest of your computer. To uninstall, simply delete the folder that you installed Faceswap into.

Then if you want to you can also uninstall MiniConda (the only 3rd party app that we install) the usual way.


What are alignments? Why are they important?
Alignments are 68 points that identify features on a face. They are important as they tell the training process how to build a face mask, and they tell the convert process where the swap on the original image should occur and how the face is lined up.
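Conceptually, an alignment is just a set of 68 (x, y) coordinates per detected face. The sketch below uses invented landmark values (this is not Faceswap's actual alignments file format) to show how such points can yield the face region that training and convert work within:

```python
# 68 hypothetical (x, y) landmark points for one face - invented values,
# not real data and not Faceswap's on-disk alignments format.
landmarks = [(100 + (i % 10) * 8, 150 + (i // 10) * 9) for i in range(68)]

xs = [x for x, _ in landmarks]
ys = [y for _, y in landmarks]

# From points like these, the pipeline can derive where the face sits in
# the frame (a bounding box here, masks and alignment in the real tool).
box = (min(xs), min(ys), max(xs), max(ys))
print(len(landmarks), box)
```

If the landmarks are wrong, everything downstream (mask, placement, orientation) is wrong too, which is why a good alignments file matters so much.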


How many Images should I use?
Aim for between 500 and 10,000 images per side. These should be of a high quality and contain a wide variety of angles, expressions and lighting conditions. An example of highly varied data can be seen here.
How long does it take to train a model?
This depends on many factors: the model used, the number of images, your GPU, etc. However, a ballpark figure for lower resolution models is 12-48 hours on GPU. Higher resolution models can take weeks, or even over a month, to train from scratch.

If you do not have a GPU, then it is not recommended to train on CPU, but if you do go this route, it can take weeks to train even the simplest of models.
What do loss values mean and how should I use them?
Loss values represent the success of the model in recreating A or B from the original input photos. These numbers are used internally by the model, but are exposed for "at a glance" monitoring. It is important to note that loss does NOT measure the quality of the swap at all; it ONLY measures A>A or B>B reconstruction. If you are training with a mask you may also see a mask loss; this, again, just measures the success of the model at recreating the mask. The only reliable way to judge swap success is to watch the previews. Keep an eye out for good looking B faces in A images, as this is the preferred direction of swap.
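Raw loss values are also noisy from iteration to iteration, so no single number means much; it is the trend that matters. A small sketch, using made-up loss values, of a simple exponential moving average you could apply to judge that trend:

```python
def ema(values, alpha=0.1):
    """Exponentially smooth a noisy series to reveal its underlying trend."""
    smoothed, current = [], values[0]
    for v in values[1:]:
        current = alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

# Made-up loss values: trending downwards, with noise on top.
losses = [0.08, 0.10, 0.07, 0.09, 0.06, 0.08, 0.05, 0.07, 0.04, 0.06]
trend = ema(losses)
```

The smoothed tail sitting below the starting loss shows the model is still improving, even though individual readings bounce up and down.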
Can I resume training?
Yes. FaceSwap auto-saves your model, so you can stop at any time. When you recommence training, just point FaceSwap at your existing model folder and it will carry on from where it left off.
Can I add/remove data after I've started training?
Yes, this is fine. In fact it is often a good idea to refresh your data, to help prevent the model from overfitting. Make gradual changes rather than wholesale changes though. Make sure you have stopped training before adding or removing images.
Why do my previews suddenly go a solid color, with the loss values spiking?
This is model corruption. It can happen for numerous reasons, but is fairly rare. One reason may be due to overclocking your GPU. DO NOT OVERCLOCK YOUR GPU. We cannot be clearer on this. Overclocking is meant to speed up 3D rendering, where errors are not too important. In Machine Learning, errors can be absolutely catastrophic. Another reason may be exploding/vanishing gradients. This unfortunately happens sometimes. You can read more about why here. Fortunately Faceswap takes a backup every time the loss drops, so you can restore from this with the restore tool (Tools > Restore in the GUI, or `python tools.py restore -h` from the cli). Otherwise you can copy a snapshot to your main model folder and carry on training from there.
Why does my model keep crashing for no discernible reason?
See the overclocking warning in the previous question.
Can I use AWS/Google Colab for training?
Yes, although none of these are directly supported by the developers. An existing FaceSwap notebook for Google Colab can be found here.

Note: Google has banned the training of Deepfakes in Colab, so this is not recommended as you put your account at risk.
In the analysis tab, what does EG/s mean?
Examples per second. This is basically the speed that faces are being processed through the model. Technically the true value is double the figure displayed, as it only displays the rate for one side.
Why does it crash when I start training with a cryptic message that mentions OOM or CUDA_ERROR_OUT_OF_MEMORY?
OOM means Out Of Memory. It basically means that you do not have enough GPU memory to train the selected model at the selected settings. You can try a number of things:
  • Try enabling "Mixed Precision" training.
  • Lower the batchsize (the amount of images fed into the model each iteration).
  • Set the "Central Storage" Distribution Strategy option (in the main Faceswap training tab). This will place optimizer variables on the CPU rather than the GPU. This will slow down training due to an increased number of GPU to CPU copies, but can buy a little bit of extra VRAM.
  • Use a more lightweight model, or select the model's "LowMem" option (in config) if it has one.
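A common recovery loop for the batch size option is to keep halving it until training fits in memory. A sketch of that idea; `try_train` here is a stand-in that simulates an OOM condition with a simple threshold, not a real Faceswap API:

```python
def try_train(batch_size, fits_up_to=64):
    """Stand-in for launching training: 'fails' if the batch won't fit.

    In reality an out-of-memory condition surfaces as a CUDA OOM error;
    here it is simulated with a hypothetical VRAM threshold.
    """
    if batch_size > fits_up_to:
        raise MemoryError(f"OOM at batch size {batch_size}")
    return batch_size

batch_size = 256
while batch_size >= 1:
    try:
        fitted = try_train(batch_size)
        break
    except MemoryError:
        batch_size //= 2  # halve and retry until the model fits

print(fitted)  # 64 with this simulated limit
```

Smaller batch sizes slow each epoch down, but a model that trains slowly still beats one that crashes immediately.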
Can I change the model/trainer and keep training the same model?
No, they all have different data and work in entirely different ways. Changing the model mid-train is impossible.
Why do I have to train both A>A and A>B? Can't I just train A>B (i.e. 1 side)?
No you can't. This is simply not how the Neural Network works. In as simple terms as possible, you are training the Neural Network to recreate 2 faces (A and B). The final swap works by switching the outputs. In order to do this, a "shared" encoder is required to store the data for both the A and B faces. This encoder needs to be fully trained on both sides so that it builds a decent mechanism for encoding faces from each side and, ultimately, performing the swap.
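The shared-encoder idea can be sketched in a few lines: one encoder maps any face into a shared latent representation, and two decoders reconstruct side A and side B from it. The swap is simply routing A's encoding through B's decoder. The functions below are toy numeric stand-ins for what are really neural networks:

```python
# Toy stand-ins: the encoder is shared, the decoders are per-identity.
# Real Faceswap models are neural networks operating on images, not scalars.
def encoder(face):       # shared: compresses any face to a latent code
    return face * 0.5

def decoder_a(latent):   # trained to rebuild A faces from the latent
    return latent * 2.0

def decoder_b(latent):   # trained to rebuild B faces from the latent
    return latent * 2.0 + 1.0

face_a = 10.0
reconstructed_a = decoder_a(encoder(face_a))  # the A>A training path
swapped = decoder_b(encoder(face_a))          # the swap: A's code, B's decoder
print(reconstructed_a, swapped)               # 10.0 11.0
```

This is why both sides must be trained: if decoder_b never learns its side, routing A through it produces garbage, and if the encoder only ever sees A faces, its latent codes won't generalise to B.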


Why do my faces look like they haven't changed?
Most likely you have trained your model the wrong way round. A should be the person whose face you want to remove and B should be the person whose face you want to place on A. All is not lost. You can select the -s, --swap-model switch that will perform the swap B>A rather than A>B. This works well on most models, but may work less well on 'unbalanced' type models.
I've trained the model for ages, the previews look good, so why is the swapped face blurry?
There are many reasons that this can happen, the main ones are:
  • You are training a low resolution model and are trying to swap on to a close-up in a high resolution image. Some of the older models (for example Original and Lightweight) train to a 64 pixel output size. Stretching a 64 pixel image onto a higher resolution output is always going to be blurry, no matter what you do, as the data just won't exist in the swapped image to pad out sharp data.
  • The data you used for training lacks variety. Data variety is the single most important thing for achieving a good swap. Lots of very similar data with very similar lighting will not lead to a good swap, unless it has been very specifically matched for both your A and B side (although generally speaking, this is bad practice). You need to collect data from as many different sources as possible, with as many different angles, lighting conditions and expressions as you can, for both the A and B side.
  • You need to enable the "no-warp" option during training. "Warping" data is very important to build a robust model that can handle lots of conditions; however, to get a really good swap you need to turn this off towards the end of training. You can read more about it here.
I've run conversion, but there is a distinct box around the swap area. What gives?
The default convert settings work for some swaps, not for others. They can be tweaked by editing the configuration file. The best way to tweak the settings to your liking is to use the preview tool, save the configuration, then run convert. See the pin in #convert-support for more information.
Why is the swapped face in my converted video flickering/glitching?
This could be for a couple of reasons. The first is that you have used Seamless Clone. Don't use Seamless Clone. It's bad. The second reason is that you do not have a decent alignments file. You may not have generated an alignments file for your conversion at all. This is not good practice as the CPU based "on-the-fly" detector is poor, and should only be used for quick tests. Decent swaps require decent alignments. See this guide for more thorough details on creating a good alignments file.