I've been at this for a couple of days. I am not using conda. I was able to get this running from source by running the nvidia-docker command, installing the prereqs manually in the Jupyter terminal, copying the files over myself, and launching with docker. It launches fine, but I get the
There was an error reading from the Nvidia Machine Learning Library. Either you do not have an Nvidia GPU (in which case this warning can be ignored) or the most likely cause is incorrectly installed drivers. If this is the case, Please remove and reinstall your Nvidia drivers before reporting. Original Error: NVML Shared Library Not Found
No GPU detected. Switching to CPU mode
warning. I am able to run other NVIDIA Docker samples fine, and they detect my GPU. I added the "--gpus all" flag to
Code:
nvidia-docker run --gpus all -p 8888:8888 \
--hostname faceswap-gpu --name faceswap-gpu \
-v /opt/faceswap:/srv \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
-e AUDIO_GID=`getent group audio | cut -d: -f3` \
-e VIDEO_GID=`getent group video | cut -d: -f3` \
-e GID=`id -g` \
-e UID=`id -u` \
deepfakes-gpu
(I also updated DISPLAY to match my X410 config.)
in case that was doing it, but it didn't seem to help. nvidia-ml-py3 is installed both in the Jupyter container that nvidia-docker launches and on my main WSL2 Ubuntu install. As I said, other NVIDIA Docker samples work, even their Jupyter one, but this one does not, and I'm not sure why it won't detect my GPU. lspci does not show it (expected, since I am on WSL). I have the latest NVIDIA WSL2 drivers installed, as I'm on the Insiders program, so that isn't it either.
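For what it's worth, the "NVML Shared Library Not Found" message just means the process could not load libnvidia-ml.so.1. Here is a quick stdlib-only probe I can run in the container's terminal to see whether that library is reachable at all (the /usr/lib/wsl/lib path is just my guess at the usual WSL2 driver mount, so treat that part as an assumption):

```python
# Probe whether the NVML shared library (libnvidia-ml.so.1) can be loaded.
# "NVML Shared Library Not Found" means exactly this load is failing.
import ctypes

def nvml_loadable():
    """Return the first path where NVML loads, or None if it never does."""
    candidates = (
        "libnvidia-ml.so.1",                   # normal ld.so search path
        "/usr/lib/wsl/lib/libnvidia-ml.so.1",  # WSL2 driver mount (assumed path)
    )
    for path in candidates:
        try:
            ctypes.CDLL(path)
            return path  # loaded successfully
        except OSError:
            continue
    return None

hit = nvml_loadable()
print("NVML found at", hit) if hit else print("NVML not loadable")
```

If this fails inside the container but succeeds on the WSL2 host, the library simply isn't visible to the container (e.g. the WSL2 driver directory isn't mounted in or isn't on LD_LIBRARY_PATH), rather than the drivers themselves being broken.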