Production Inference Deployment with PyTorch

After you've built and trained a PyTorch machine learning model, the next step is to deploy it somewhere it can perform inference on new inputs. This video covers the fundamentals of production deployment with PyTorch: setting your model to evaluation mode; TorchScript, PyTorch's optimized model representation format; the C++ front end, which lets you deploy without the overhead of an interpreted language; and TorchServe, PyTorch's solution for deploying ML inference services at scale. The sketches below illustrate these steps.
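As a rough illustration of the first two topics, here is a minimal Python sketch of preparing a trained model for inference and exporting it to TorchScript. The small Sequential model, the input shape, and the file name "model.pt" are placeholders for your own trained model, not taken from the video.

```python
import torch
import torch.nn as nn

# Stand-in for a trained model; in practice you would load your own weights.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

model.eval()                         # put dropout/batch-norm layers into inference behavior
example = torch.randn(1, 16)         # example input with the shape the model expects

with torch.inference_mode():         # skip autograd bookkeeping during inference
    prediction = model(example)

# TorchScript: an optimized, Python-independent representation of the model.
scripted = torch.jit.trace(model, example)  # torch.jit.script(model) handles data-dependent control flow
scripted.save("model.pt")                   # loadable from Python, TorchServe, or the C++ front end
```

The saved TorchScript file is what the C++ front end and TorchServe consume, so this export step is the bridge from training code to a production runtime.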
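For scaled serving, TorchServe packages a saved model (for example with the torch-model-archiver tool) and runs it behind an HTTP endpoint (torchserve --start). The snippet below is a minimal sketch of a custom request handler, assuming TorchServe's BaseHandler API; the class name and request-field handling are illustrative, not the video's exact code.

```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class ExampleHandler(BaseHandler):
    """Hypothetical TorchServe handler for the TorchScript model exported above."""

    def preprocess(self, data):
        # Convert each raw request into a tensor and batch them together.
        rows = [torch.tensor(row.get("data") or row.get("body")) for row in data]
        return torch.stack(rows).to(self.device)

    def inference(self, inputs, *args, **kwargs):
        # self.model is the model loaded by BaseHandler.initialize().
        with torch.inference_mode():
            return self.model(inputs)

    def postprocess(self, outputs):
        # TorchServe expects one JSON-serializable result per request in the batch.
        return outputs.tolist()
```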