Mixed precision will always save you VRAM, and it can be enabled on any Nvidia card. You will only get additional speed benefits on a 20xx card or higher (so your 30xx card will get you the speed boost as well as the VRAM saving).
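For illustration, this is roughly what enabling mixed precision looks like in stock Keras. The model here is just a stand-in, not a Faceswap model (Faceswap wires this up for you internally), but it shows the key property: calculations run in float16 while the weights themselves stay in float32.

```python
# Minimal sketch of enabling mixed precision in plain Keras (TF 2.x).
import tensorflow as tf

# Computations run in float16, variables are stored in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    # The final layer is kept in float32 so the outputs (and the loss
    # computed from them) don't overflow or underflow in fp16.
    tf.keras.layers.Dense(10, dtype="float32"),
])

print(model.layers[0].compute_dtype)     # float16
print(model.layers[0].weights[0].dtype)  # float32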
You can switch between full precision and mixed precision for any existing model. This is a relatively new addition to Faceswap: there is no reason why you can't switch precision, but Tensorflow does not offer that ability out of the box, so I have had to implement our own solution. In the past, once you chose a precision you were stuck with it.
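Faceswap's actual switching code is internal, but the trick that makes switching possible at all is the float32 weight storage mentioned above: because the weights are float32 either way, the same architecture can be rebuilt under a different dtype policy and the saved weights loaded straight back in. A minimal sketch of that idea (the build function and file name are just placeholders, not Faceswap's implementation):

```python
import tensorflow as tf

def build_model():
    # Stand-in architecture; any function that rebuilds the same
    # model identically will do.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(128,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, dtype="float32"),
    ])

# Train under full precision and save the (float32) weights...
tf.keras.mixed_precision.set_global_policy("float32")
model = build_model()
model.save_weights("model.weights.h5")

# ...then rebuild under mixed precision and load them straight back.
tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = build_model()
model.load_weights("model.weights.h5")
```

Note this sketch ignores optimizer state, which a real implementation also has to carry across the switch.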
Mixed Precision can increase the risk of numerical instability, that is, NaNs appearing in the model. This will not destroy your model, but it can be very frustrating having to roll back 50k iterations and lower the learning rate. The instability arises because the numerical range of fp16 (the format mixed precision does its calculations in) is much smaller than the numerical range of fp32 (the format used for full precision). Tensorflow implements something called "dynamic loss scaling" which should mitigate this issue, but from experience it is not perfect. In fact, Googling "mixed precision NaN" will turn up numerous posts, across multiple ML libraries, describing this issue.
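You can see the size of the gap directly, along with what Tensorflow's mitigation looks like (plain NumPy/Keras here, nothing Faceswap-specific):

```python
import numpy as np
import tensorflow as tf

# fp16 runs out of road at 65504; fp32 goes up to roughly 3.4e38.
print(np.finfo(np.float16).max)   # 65504.0
print(np.finfo(np.float32).max)   # 3.4028235e+38
print(np.float16(70000))          # inf -- and inf - inf or 0 * inf is NaN
print(np.float16(1e-8))           # 0.0 -- gradients this small just vanish

# Keras applies dynamic loss scaling automatically under mixed_float16;
# wrapping an optimizer by hand looks like this:
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.Adam())
```

Dynamic loss scaling multiplies the loss by a large factor so small fp16 gradients don't flush to zero, then divides the gradients back down before the weight update, shrinking the factor whenever an overflow is detected. As noted above, it helps, but it does not catch everything.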
The bottom line, though, is that for some models you will have no choice but to enable Mixed Precision just to fit the model into VRAM. You will get speed benefits from doing so, but you may have to learn to live with the drawbacks.