Too few Fps #7
@nagadit Me too. I tested the runner in the InferenceWrapper class and got about 380 ms per frame on a GTX 1060.
@egorzakharov, we need your intervention.
Hi, @nagadit! First of all, this pipeline evaluates the full model (initialization + inference) plus the external cropping function, not just inference. The cropping function consists of a face detector and a landmarks detector (the face-alignment library), which could be optimized further; we just did not do that in this repository. For a real-time application, you would need to train the model from scratch using a face and landmarks detector that works in real time (like Google's MediaPipe). Note that this issue is common to all methods that use keypoints as their pose representation. You can crop the data externally via the … Lastly, if you want to measure the speed of the inference generator only, you need to perform a forward pass of only this network, as mentioned in our article. We additionally speed it up by calling the … Hope this helps!
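To time only the inference generator, a small benchmarking helper like the following can be used. This is a sketch: `inference_generator` and `driver_pose` are hypothetical stand-ins for the actual network and its input, and the helper itself works with any callable.

```python
import time

def benchmark(fn, warmup=5, iters=50):
    """Return the average wall-clock time of fn() in milliseconds.

    Warm-up iterations are discarded because the first calls to a
    PyTorch model are dominated by allocator and kernel-selection
    overhead. For GPU models, remember that CUDA kernels launch
    asynchronously, so fn should call torch.cuda.synchronize()
    before returning, otherwise the measured time is meaningless.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) * 1000.0 / iters

# Hypothetical usage, timing only the generator's forward pass:
# ms = benchmark(lambda: inference_generator(driver_pose))
```

Measured this way, the number excludes initialization and cropping, which is what the per-frame figures in the paper refer to.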
This closely follows the pipeline we developed for our mobile application: the computationally heavy initialization part runs separately in the PyTorch Mobile framework, and then we optimize the personalized inference generator by converting it to ONNX, followed by SNPE, for real-time frame-by-frame inference. By the way, I have pushed the …
@egorzakharov Could you also share the ONNX weights?
Thank you very much for such an informative answer; I will try to do something with it.
@ak9250 I will ask my colleagues for approval, but I believe the conversion to ONNX was very simple; ONNX -> SNPE was much trickier.
@egorzakharov Could you please also push the …
I tested G_inf on an Nvidia 1060 and got about 15 ms per frame. Thanks for the advice.
The measured speed does not match the one claimed, or I am doing something wrong.
GPU: 2080 Ti
Please help with this.