Are you looking to set up a server cluster in your home lab? Proxmox is a great option, along with Ceph storage. In this video we take a deep dive into Proxmox clustering and how to configure a 3-node Proxmox server cluster with Ceph shared storage, including Ceph OSDs, Monitors, and a Ceph storage pool. At the end I test a live migration of a Windows Server 2022 virtual machine between Proxmox nodes using the shared Ceph storage. Cool stuff. For reference, rough CLI equivalents of the main steps are sketched below the chapter timestamps.
★ Subscribe to the channel: https://www.youtube.com/channel/UCrxcWtpd1IGHG9RbD_9380A?sub_confirmation=1
★ My blog: https://www.virtualizationhowto.com
★ Twitter: https://twitter.com/vspinmaster
★ LinkedIn: https://www.linkedin.com/in/brandon-lee-vht/
★ Github: https://github.com/brandonleegit
★ Facebook: https://www.facebook.com/people/VirtualizationHowto/100092747277326/
★ Discord: https://discord.gg/Zb46NV6mB3
Introduction to running a Proxmox server cluster - 0:00
Talking about Proxmox, open-source hypervisors, etc. - 0:48
Thinking about high availability requires thinking about storage - 1:20
Overview of creating a Proxmox 8 cluster and Ceph - 2:10
Beginning the process to configure a Proxmox 8 cluster - 2:24
Looking at the create cluster operation - 3:03
Kicking off the cluster creation process - 3:25
Join information to use with the member nodes to join the cluster - 3:55
Joining the cluster on another node and entering the root password - 4:15
Joining the 3rd node to the Proxmox 8 cluster - 5:13
Refreshing the browser and checking that we can see all the Proxmox nodes - 5:40
Overview of Ceph - 6:11
Distributed file system and sharing storage between nodes as a logical storage volume - 6:30
Beginning the installation of Ceph on the Proxmox nodes - 6:52
Changing the repository to the no-subscription option - 7:30
Verifying the installation of Ceph - 7:51
Selecting the IP subnet available under Public network and Cluster network - 8:06
Looking at the replicas configuration - 8:35
Installation is successful and looking at the checklist to install Ceph on the other nodes - 8:50
The Ceph Object Storage Daemon (OSD) - 9:27
Creating the OSD and designating the disk in our Proxmox hosts for Ceph - 9:50
Selecting the disk for the OSD - 10:15
Creating the OSD on node 2 - 10:40
Creating the OSD on node 3 - 11:00
Looking at the Ceph dashboard and health status - 11:25
Creating the Ceph pool - 11:35
All Proxmox nodes display the Ceph pool - 12:00
Ceph Monitor overview - 12:22
Beginning the process to create additional monitors - 13:00
Setting up the test for live migration using Ceph storage - 13:30
Beginning a continuous ping - 14:00
The VM is on the Ceph storage pool - 14:25
Kicking off the migration - 14:35
Only the VM's memory is copied between the two Proxmox hosts - 14:45
Distributed shared storage is working between the nodes - 15:08
Nested configuration in my lab, but it still works great - 15:35
Concluding thoughts on Proxmox clustering in Proxmox 8 and Ceph for shared storage - 15:49
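
CLI equivalents of the main steps (the video uses the web UI; node names, IPs, and disks below are placeholders for your own lab):

Creating the cluster and joining the member nodes (2:24 - 5:40) - a minimal sketch, assuming the first node is reachable at 192.168.1.101:
# On the first node: create the cluster (run once)
pvecm create homelab-cluster
# On each additional node: join by pointing at the first node
# (prompts for that node's root password and SSH fingerprint)
pvecm add 192.168.1.101
# Verify quorum and membership from any node
pvecm status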
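
Installing Ceph and switching to the no-subscription repository (6:52 - 8:06) - a sketch, assuming a 10.1.149.0/24 subnet for both the Public and Cluster networks:
# On each node: install the Ceph packages from the no-subscription repository
pveceph install --repository no-subscription
# On the first node: initialize the Ceph config with the public and cluster networks
pveceph init --network 10.1.149.0/24 --cluster-network 10.1.149.0/24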
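
Creating the OSDs on the dedicated disk in each host (9:50 - 11:00) - a sketch, assuming the spare disk appears as /dev/sdb and is blank with no partitions:
# Run on each of the three nodes, pointing at the disk reserved for Ceph
pveceph osd create /dev/sdb
# Check OSD status and overall cluster health
ceph -s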
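
Creating the Ceph pool with the default 3/2 replica settings (11:35) - the pool name ceph-pool is just an example; --add_storages registers the pool as RBD storage so every Proxmox node displays it:
# On one node: create a replicated pool and add it as Proxmox storage
pveceph pool create ceph-pool --size 3 --min_size 2 --add_storages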
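
Creating the additional Ceph Monitors (13:00) - one monitor per node, so the three hosts keep monitor quorum if a node fails:
# Run on each node that should host a monitor
pveceph mon create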
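
The live migration test (13:30 - 15:08) - a sketch, assuming VM ID 100, a target node named pve2, and a guest IP of 192.168.1.50; with the disk already on the shared Ceph pool, only the memory state is transferred:
# From another machine, keep a continuous ping running against the guest
ping 192.168.1.50
# Live-migrate the running VM to the other node over the shared Ceph storage
qm migrate 100 pve2 --online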
Proxmox 8: New Features and Home Lab Upgrade Instructions:
https://www.virtualizationhowto.com/2023/06/proxmox-8-new-features-and-home-lab-upgrade-instructions/
Proxmox 8 and Ceph:
https://www.virtualizationhowto.com/2023/06/mastering-ceph-storage-configuration-in-proxmox-8-cluster/
Top VMware vSphere Configurations in 2023:
https://www.virtualizationhowto.com/2023/06/top-vmware-home-lab-configurations-in-2023/