Kalman filters are applied to the pose to make it smoother. As for the visualization, the white bounding box is the detected face, on top of which the 68 green face landmarks are plotted; the head pose is represented by the green frustum and the axes in front of the nose. The character's head pose is synchronized. For gaze tracking, GazeTracking is used: the eyes are first extracted using the landmarks enclosing the eyes. Then the eye images are converted to grayscale, and a pixel intensity threshold is applied to detect the iris (the black part of the eye). Finally, the center of the iris is computed as the center of the black area. The character's gaze is not synchronized.
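The threshold-and-centroid idea can be sketched in a few lines of NumPy (a minimal sketch, not the actual GazeTracking code; the `threshold=40` value and the assumption that the eye crop is already grayscale are mine):

```python
import numpy as np

def iris_center(eye_gray, threshold=40):
    """Estimate the iris center from a grayscale eye crop.

    eye_gray: 2-D uint8 array (the eye image, already grayscale).
    threshold: intensity below which a pixel counts as iris (black area).
    Returns the (x, y) centroid of the dark pixels, or None if none found.
    """
    mask = eye_gray < threshold          # dark pixels = candidate iris
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # no dark area detected
    # The iris center is the centroid of the black area.
    return int(xs.mean()), int(ys.mean())
```

A fixed threshold is fragile under lighting changes, which is why eye-tracking libraries typically calibrate it per video rather than hard-coding it.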
After the virtual character shows up, run `python demo.py --connect` to synchronize your face features with the virtual character (add `--debug` to see your face, and `--cpu` if you have CPU only, as in step 1). (Right: GPU model run on a GTX 1080 Ti.) Enjoy your VTuber life!

Functionalities details. In this section, I will describe the functionalities implemented and a little about the technology behind them. Using head-pose-estimation and face-alignment, deep learning methods are applied to do the following: face detection and facial landmark detection. A face bounding box and the 68-point facial landmarks are detected, then a PnP algorithm is used to obtain the head pose (the rotation of the face).
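Once PnP recovers a rotation (e.g. via OpenCV's `solvePnP`), the rotation matrix is commonly converted to pitch/yaw/roll angles to drive the character's head. Below is a minimal NumPy sketch of that conversion; the x-y-z Tait-Bryan convention used here is an assumption and may differ from the repository's axes:

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (pitch, yaw, roll) in degrees.

    Uses the common x-y-z (Tait-Bryan) convention; the head-pose axes
    used by the actual repository may differ.
    """
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        pitch = np.arctan2(R[2, 1], R[2, 2])   # rotation about x
        yaw = np.arctan2(-R[2, 0], sy)         # rotation about y
        roll = np.arctan2(R[1, 0], R[0, 0])    # rotation about z
    else:  # gimbal lock: yaw is +/-90 degrees, roll is unrecoverable
        pitch = np.arctan2(-R[1, 2], R[1, 1])
        yaw = np.arctan2(-R[2, 0], sy)
        roll = 0.0
    return np.degrees([pitch, yaw, roll])
```

These angles are what would then be smoothed (e.g. by the Kalman filters) before being sent to Unity.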
Optional software:
- OBS Studio if you want to embed the virtual character into your videos.
- Unity Editor if you want to customize the virtual character.

Here we assume that you have installed the requirements and activated the virtual environment you are using.
- Download the models here, extract them and put them into face_alignment/ckpts. If you don't use onnxruntime, you can omit this step, as the script will download them for you automatically.
- Download and launch the binaries here, depending on your OS, to open the Unity window featuring the virtual character (unity-chan here). Important: ensure that only one window is opened at a time!
Requirements:
- (Optional but recommended) An NVIDIA GPU (tested with CUDA 9.0, 10.0 and 10.1, but it may also work with other versions).
- Python 3.x (installation via Anaconda is recommended, and mandatory for Windows users).
- (Optional) It is recommended to use conda environments.

Installation:
- Install the requirements by `pip install -r requirements_(cpu or gpu).txt`. If dlib cannot be properly installed, follow here.
- If you have CUDA 10.1, `pip install onnxruntime-gpu` to get faster inference speed using the onnx model.
- For GPU: install PyTorch matching your CUDA version, for example `conda install pytorch=1.2.0 torchvision=0.4.0 cudatoolkit=10.0 -c pytorch`, then install the other dependencies by `pip install -r requirements_gpu.txt`. If you have CUDA 10, `pip install onnxruntime-gpu` to get faster inference speed using the onnx model.
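Put together, a GPU setup might look like the following (a sketch assuming conda and CUDA 10.0; the environment name `vtuber` and Python 3.7 are arbitrary choices, so adjust the versions to your machine):

```shell
# Sketch of a GPU install, assuming conda and CUDA 10.0 (adjust to your setup).
conda create -n vtuber python=3.7 -y
conda activate vtuber
conda install pytorch=1.2.0 torchvision=0.4.0 cudatoolkit=10.0 -c pytorch -y
pip install -r requirements_gpu.txt
pip install onnxruntime-gpu    # optional: faster inference with the onnx model
```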
Use a Unity 3D character and Python deep learning algorithms to stream as a VTuber! This is part of the OpenVTuberProject, which provides many toolkits for becoming a VTuber. Nowadays you can basically find many public tools on the internet, even for mobile platforms. This repository doesn't work as well as those tools, but it can still serve as a tool if you want to integrate your character in Unity and customize it. I'd like to give credits to the following projects that I borrow code from, and to the virtual character unity-chan © UTJ/UCL. Youtube Playlist (Chinese) (covers videos 1-4):
VTuber_Unity: Due to massive bugs and the fast-moving virtual character technology, I have decided to archive this repository (no further updates).