Is there a good method of training when you have a series of images taken from different perspectives in real-world 3D, not just 2D?
For example, if you have a fisheye or otherwise obscure set of images alongside a normal set, the model can't seem to map between them, and over more iterations it can even get worse, pushing a generalized model into 2D so the outliers get worse, not better. It looks best around 150k-300k iterations; once I pass that, the outliers, which are over 1/4 of the training material, get pushed to the curb and the model no longer follows them.
For example, say you put your GoPro camera right up to someone's face, 3 inches away. The lens may distort the face, giving a much bigger nose, a smaller lower chin, and a bigger forehead, while the entire face is still masked and in frame of the fisheye lens. You have many other shots to compensate for this outlying perspective, such as a shot from a distance or a side view, but you want the model to work in 3D space like reality.
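To make the lens mismatch concrete, here is a minimal sketch comparing an ideal pinhole (rectilinear) projection with an idealized equidistant fisheye model (r = f·θ). A real GoPro lens differs, so the numbers are illustrative only, but they show that the same feature lands at a very different image radius depending on the lens, which is why the close-up fisheye shots and the distant shots disagree geometrically:

```python
import math

def pinhole(theta, f=1.0):
    # r = f * tan(theta): radial image distance for a rectilinear lens
    return f * math.tan(theta)

def fisheye(theta, f=1.0):
    # r = f * theta: equidistant fisheye model, keeps very wide angles in frame
    return f * theta

# Angle off the optical axis (degrees) vs projected radius for each model
for deg in (10, 40, 70):
    t = math.radians(deg)
    print(deg, round(pinhole(t), 3), round(fisheye(t), 3))
```

Near the axis the two agree closely, but at 70° off-axis the fisheye radius is less than half the rectilinear one, so a face filling a fisheye frame is geometrically squeezed compared to the same face shot from a distance.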
I don't know what training to try. From what I have reviewed, face-model training tends to flip on the x and y axes (2D only); I don't know of anything that accounts for the Z axis in 3D the way the real world does.
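To be concrete about why a 2D flip is not a 3D view change, here is a tiny sketch with toy coordinates (nothing faceswap-specific): mirroring x leaves the nose tip's depth (z) still pointing at the camera, while an actual 180° yaw about the vertical axis also negates z.

```python
def mirror_x(p):
    # 2D-style horizontal flip: negate x, depth (z) is untouched
    x, y, z = p
    return (-x, y, z)

def yaw_180(p):
    # True 3D rotation of 180 degrees about the vertical (y) axis:
    # negates both x and z, i.e. the point now faces away from the camera
    x, y, z = p
    return (-x, y, -z)

nose_tip = (0.1, 0.0, 1.0)  # a point protruding toward the camera (+z)

print(mirror_x(nose_tip))  # (-0.1, 0.0, 1.0)  -- still toward the camera
print(yaw_180(nose_tip))   # (-0.1, 0.0, -1.0) -- now pointing away
```

So a mirrored image is a valid extra 2D sample, but it never teaches the model what the face looks like from a genuinely different viewpoint in depth.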
Are there settings to prevent this, or to stabilize the model toward a 3D perspective so it follows the face / eyes / mouth rather than mis-training to 2D only?
Nvidia GPU with 4 GB of VRAM, i.e. on the lower-memory side.
I can push this higher, but there's no point if it ends up with the same result.
nice python faceswap.py train -A /mnt/f/V/FaceSwap/FACEA -B /mnt/f/V/FaceSwap/FACEB -m /mnt/f/V/FaceSwap/MODEL -t original -bs 12 -it 520250 -s 250 -ss 5000 -nl -wl -L INFO
nice python faceswap.py train -B /mnt/f/V/FaceSwap/FACEA -A /mnt/f/V/FaceSwap/FACEB -m /mnt/f/V/FaceSwap/MODEL -t original -bs 12 -it 520250 -s 250 -ss 5000 -nl -L INFO
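One thing I have been looking at is disabling the random horizontal flips, since those are exactly the 2D flipping I described. A sketch of the first command with flips off; the `-nf` / `--no-flip` flag name is assumed from `python faceswap.py train -h`, so verify it exists in your faceswap version, and I have not confirmed it actually fixes the outlier problem:

```shell
# Same as the first command, but with random horizontal flip augmentation
# disabled via -nf (flag name assumed from `python faceswap.py train -h`)
nice python faceswap.py train -A /mnt/f/V/FaceSwap/FACEA -B /mnt/f/V/FaceSwap/FACEB \
    -m /mnt/f/V/FaceSwap/MODEL -t original -bs 12 -it 520250 -s 250 -ss 5000 \
    -nf -nl -wl -L INFO
```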