Looking for information on the performance you can expect from your homelab Open WebUI and Ollama based AI home server? This is the video! We will be using Llama 3.1 70b and assessing tokens per second across many model variations. Have questions about mixed VRAM setups? Need to run your GPU in a PCIe slot smaller than a full x16? I dive into testing to answer the questions YOU have asked recently about mixing GPUs of different VRAM sizes and adding extra GPUs to your own AI home rig.
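If you want to reproduce these tokens-per-second numbers on your own rig, Ollama's CLI can report them directly. A minimal sketch (the model tag is an assumption; swap in whichever quant you are testing, e.g. a q8 or q4 variant):

```shell
# Run the 70b model interactively; --verbose prints timing stats
# (prompt eval rate, eval rate, durations) after each response.
# ollama run llama3.1:70b --verbose

# The verbose stats include a tokens-per-second line such as:
stats="eval rate:            15.20 tokens/s"

# Pull out just the number if you want to log it for comparison:
echo "$stats" | awk '{print $(NF-1)}'
```

The "eval rate" figure is the generation speed discussed in the video; "prompt eval rate" measures prompt ingestion instead.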
AI GPU Benchmarking 3090s and 3060s + VRAM Testing
https://youtu.be/-heFPHKy3jY
Bare Metal OpenwebUI Ai Server Setup Guide
https://youtu.be/q_cDvCq1pww
Proxmox LXC Docker Ai Setup + GPU Passthrough
https://youtu.be/TmNSDkjDTOs
Results Table https://digitalspaceport.com/llama3-1-70b-benchmarks-on-gpus/
GPUs reviewed
3090 https://geni.us/3090GPU_y3jY
3060 12GB https://geni.us/3060_GPU_12GB
LOCAL AI QUAD 3090 SERVER
https://youtu.be/JN4EhaM7vyw
GPU Rack Frame https://geni.us/GPU_Rack_Frame
Supermicro H12ssl-i MOBO (better option vs mz32-ar0) https://geni.us/MBD_H12SSL-I-O
Gigabyte MZ32-AR0 MOBO https://geni.us/mz32-ar0_motherboard
AMD 7V13 (newer, faster vs 7702) https://geni.us/EPYC_7V13_CPU
RTX 3090 24GB GPU (x4) https://geni.us/GPU3090
256GB (8x32GB) DDR4 2400 RAM https://geni.us/256GB_DDR4_RAM
PCIe4 Risers (x4) https://geni.us/PCIe4_Riser_Cable
AMD SP3 Air Cooler (easier vs water cooler) https://geni.us/EPYC_SP3_COOLER
iCUE H170i water cooler https://geni.us/iCUE_H170i_Capellix
(sTRX4 fits SP3 and retention kit comes with the CAPELLIX)
CORSAIR HX1500i PSU https://geni.us/Corsair_HX1500iPSU
4i SFF-8654 to 4i SFF-8654 (x4, not needed for H12SSL-i) https://geni.us/SFF8654_to_SFF8654
ARCTIC MX4 Thermal Paste https://geni.us/Arctic_ThermalPaste
Kritical Thermal GPU Pads https://geni.us/Kritical-Thermal-Pads
HDD Rack Screws for Fans https://geni.us/HDD_RackScrews
Be sure to 👍✅Subscribe✅👍 for more content like this!
Join this channel https://www.youtube.com/@DigitalSpaceport/join
Digital Spaceport Website https://digitalspaceport.com
Patreon https://www.patreon.com/digitalspaceport
Buy Me a Coffee https://buymeacoffee.com/digitalspaceport
Please share this video to help spread the word and drop a comment below with your thoughts or questions. Thanks for watching!
Chapters
0:00 Home AI Server GPU Benchmarking
2:06 70b q8 4x 3090 + 1x 3060 Ti Mixed Test
6:33 70b q4 4x 3090 + 1x 3060 Ti Mixed Test
7:31 MZ32-AR0 Bios PCIe Configuration
8:48 Llama 3.1 70b q8 4x 3090 Test
10:00 Llama 3.1 70b q8 4x 3090 Test
11:15 70b q8 3x 3090 VRAM Test
12:39 3x 3090 LLM Benchmark
16:33 2x 3090 Llama 3.1 70b q4 Benchmark
17:00 Conclusions
*****
As an Amazon Associate I earn from qualifying purchases.
When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.
*****