You can host almost any LLM on watsonx! In this video I walk you through how to do this for DeepSeek-R1-Distill-Llama-8B. Follow the official documentation if you want all the details: https://eu-de.dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-custom-fm-cloud.html?context=wx&locale=en&audience=wdp
Timestamps:
00:00 - Intro
01:37 - The official documentation
02:24 - Prerequisites
03:13 - Downloading DeepSeek-R1-Distill-Llama-8B from Hugging Face
06:35 - Uploading the model to IBM Cloud using Aspera high-speed transfer
08:50 - Connecting the storage to watsonx
10:49 - Registering the custom foundation model in watsonx
12:35 - Deploying the model
13:22 - Using DeepSeek in the Prompt Lab
14:27 - Outro
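If you want to follow along with the download step, a minimal sketch using the Hugging Face CLI looks like this (the repo id and target directory are assumptions based on the model shown in the video; adjust to your setup):

```shell
# Install the Hugging Face Hub CLI (once)
pip install -U "huggingface_hub[cli]"

# Download the model weights to a local folder.
# Repo id assumed to be the public DeepSeek distill release;
# --local-dir controls where the files land.
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --local-dir ./deepseek-r1-distill-llama-8b
```

From there, the local folder is what you upload to IBM Cloud Object Storage (e.g. via Aspera high-speed transfer, as shown in the video) before registering the model in watsonx. Note the download is roughly 16 GB, so plan for disk space and bandwidth.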