Hi, I am new to Triton Inference Server. I want to deploy a DINO architecture model trained on the COCO dataset, and I already have a TensorRT .engine file for it. How should I create a proper configuration file in the model repository, and how can I check whether my model file is supported by the inference server and deploy it correctly? Step-by-step guidance would be a great help, so that I can deploy any type of model with Triton Inference Server in the future.
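For reference, here is a minimal sketch of what such a deployment usually looks like. The model name, input/output tensor names, and shapes below are placeholders, not the real ones from a DINO export; inspect your engine (e.g. `polygraphy inspect model model.plan`) to find the actual tensor names and dimensions. Triton's TensorRT backend expects the engine file to be named `model.plan` by default:

```
model_repository/
└── dino/
    ├── config.pbtxt
    └── 1/
        └── model.plan   # your .engine file, renamed to Triton's default
```

A corresponding `config.pbtxt` sketch (all tensor names and dims are assumptions to be replaced with your engine's actual bindings):

```
name: "dino"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "images"            # assumed input binding name
    data_type: TYPE_FP32
    dims: [ 3, 800, 1333 ]    # assumed CHW shape; batch dim is excluded
  }
]
output [
  {
    name: "pred_logits"       # assumed output binding names
    data_type: TYPE_FP32
    dims: [ 900, 91 ]
  },
  {
    name: "pred_boxes"
    data_type: TYPE_FP32
    dims: [ 900, 4 ]
  }
]
```

To verify that the server can actually load the engine, start Triton pointed at the repository and query the KServe v2 readiness endpoint. Note that a TensorRT engine only loads if it was built with a TensorRT version compatible with the Triton container and on the same GPU architecture:

```
tritonserver --model-repository=/path/to/model_repository
curl -v localhost:8000/v2/models/dino/ready   # HTTP 200 once the model is loaded
```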