Installing Proxmox: An Open-Source Hypervisor for SMBs

Proxmox is becoming a production-ready virtualization management solution for small and medium-sized businesses.

It offers scalability and high availability by utilizing features such as:

  • ZFS storage pools – ZFS (originally the Zettabyte File System) is a next-generation file system developed by Sun Microsystems for building NAS solutions with better security, reliability, and performance.
  • Clustered servers – Easily scale by adding new servers. A cluster should have a minimum of three nodes to maintain quorum and avoid split-brain. The third node does not need to run any VMs, nor does it need more than 2 GB of RAM for high availability and failover to function, so you may use whatever hardware you like as long as it supports virtualization. (See the quick quorum check after this list.)
  • PCI-e passthrough – Pass entire SATA/SAS controllers through so VMs can use drives directly.
  • Support for live migration between nodes.
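
Once your cluster is built (covered later in this guide), you can verify quorum from any node's shell. A minimal check, using the pvecm tool that ships with Proxmox VE:

  # On any node, check cluster membership and quorum state
  pvecm status    # look for "Quorate: Yes" in the output
  pvecm nodes     # lists all cluster members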

Recommended Hardware

  • Intel EMT64 or AMD64 with Intel VT/AMD-V CPU flag.
  • Memory: minimum 2 GB for the OS and Proxmox VE services, plus designated memory for guests. Ceph or ZFS require additional memory, approximately 1 GB per TB of used storage.
  • ZFS depends heavily on memory, so for this guide we will be using 8 GB minimum.
  • To prevent data corruption, it’s best to stick with ECC memory.
  • Fast and redundant storage, best results with SSD disks.
  • OS storage: hardware RAID with battery-protected write cache (“BBU”) or non-RAID with ZFS and SSD cache.
  • VM storage: for local storage, use hardware RAID with battery-backed write cache (BBU), or non-RAID for ZFS. Neither ZFS nor Ceph is compatible with a hardware RAID controller. Shared and distributed storage are also possible.
  • Redundant Gbit NICs, additional NICs depending on the preferred storage technology and cluster setup – 10 Gbit and higher is also supported.
  • For PCI(e) passthrough, a CPU with the VT-d/AMD-Vi flag is needed.

Getting Started

Download the latest version of Proxmox VE.

Create a bootable USB using balenaEtcher (Rufus will fail in its default ISO mode).

Now let's install Proxmox onto the machine that will be used as the virtualization server.

Installation

Select Install Proxmox VE

When choosing the install target disk, consider drive endurance. An NVMe SSD rated for a mean time between failures (MTBF) of 1,750,000 hours works out to:

(1,750,000 / 24) / 365 ≈ 199.77 years

Yes, you are reading that correctly. That number is correct.

Another example: a Samsung Evo 980 NVMe is rated for 600 terabytes written (TBW) before it is expected to fail. That is plenty of endurance for a drive that just loads Proxmox at every boot and runs continuously. (Stick to a 250 GB NVMe, around $50 CAD.)

For the raw storage space that will hold the virtual machines and ISOs, SATA3 7200 RPM drives are still the best option.

Did you know that if a failing hard drive is still within warranty, you can RMA it and be sent a replacement? (I recommend buying Western Digital. Manufacturers may also offer special handling for drives containing sensitive data, if you work in a sector with those requirements.)

As an example, we will be using 3x WD Blue 1TB desktop hard disk drives at $48.99 CAD each.

ZFS can handle RAID without requiring any extra software or hardware by using its own implementation, RAID-Z.

In our setup we will be using RAID-Z1 across the three disks: the usable capacity of two disks for storage, with one disk's worth of parity (the parity is actually distributed across all three drives). That gives us roughly 2 TB of usable space.

Now let's continue.

Select your country, time zone, and keyboard layout.

Set the root password, as well as an administrator email to be used for alerts.

Proxmox should now be running after the installation has completed and the system has restarted. (Note: in this guide the IP address of pve01 is 192.168.0.251/24.)

At this point the 3x 1TB HDDs will already have been attached, prior to installation.

Connect to https://192.168.0.XXX:8006 using a web browser of your choice.

Do not worry about the certificate warning; it means the server is using a self-signed certificate for LAN access. The traffic between you and the server is still encrypted.

You will now be presented with a login screen. The username is root, and the password is what we set during installation.

Expand pve01

We can begin configuring our first ZFS Pool on the primary node.

Navigate to ZFS located under Disks

ZFS Configuration

Click Create: ZFS

Select the following options:

Name your pool and select the RAID level to be used, in our case RAIDZ.
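
If you prefer the shell, the equivalent commands look like this (a sketch – it assumes the three HDDs appear as /dev/sdb, /dev/sdc, and /dev/sdd, and uses the pool name ZFS_POOL from later in this guide):

  # Create a RAID-Z1 pool across the three 1TB drives
  zpool create -f ZFS_POOL raidz /dev/sdb /dev/sdc /dev/sdd
  # Register the pool as Proxmox storage (the GUI's "Add Storage" checkbox does this)
  pvesm add zfspool ZFS_POOL --pool ZFS_POOL
  # Verify pool health
  zpool status ZFS_POOL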

Congratulations! You have now created your first ZFS pool, but we are not done yet; we have a few more steps to complete.

Creating other nodes

First, bring up two more Proxmox installs on the other servers being used:

pve02.ifixtech.ca – 192.168.0.252/24, hard drives attached: 3x 1TB 7200RPM + 250GB NVMe, 8GB minimum RAM

pve03.ifixtech.ca – 192.168.0.253/24, 2GB RAM, 250GB NVMe. This node is only used to monitor the other nodes and maintain quorum for failover / high availability; it will not run VMs.

Creating a Cluster

On pve01, navigate to Datacenter > Cluster

Click Create Cluster

Name your cluster according to your organization or site (or both, e.g. IFixTech-Ajax).

Click Create.

After creating the cluster, click Join Information and copy it; you will need it when joining the other nodes.
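
From the shell, the equivalent is a single command on pve01 (a sketch; the cluster name is the example from above):

  # Run once, on pve01 only
  pvecm create IFixTech-Ajax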

Adding New Nodes to an existing Cluster

First we will have to join pve02 and pve03 to the cluster.

On pve02, navigate to Datacenter > Cluster, click Join Cluster, paste the Join Information copied from pve01, and enter pve01's root password.

Repeat the same process for pve03.

Once you see “Joined cluster successfully”, close the dialog and refresh the page.
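
The shell equivalent, run on each joining node (a sketch; 192.168.0.251 is pve01's address from earlier):

  # On pve02, then pve03 – you will be prompted for pve01's root password
  pvecm add 192.168.0.251
  # Confirm all three nodes are listed and the cluster is quorate
  pvecm status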

Creating your first Virtual Machine

We now need to create a test VM that will be configured for HA and replication between ZFS pools. (Note: replication only works with ZFS pools.)

If you haven’t already, go ahead and upload an ISO you would like to install to pve01's local storage. (You can also migrate existing VMDK, qcow2, or VHD images with a little help from the qemu tools.)
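
For example, converting and importing an existing VMware disk might look like this (a sketch – the filenames and VM ID 100 are placeholders):

  # Convert a VMDK to qcow2 with qemu-img
  qemu-img convert -f vmdk -O qcow2 old-vm.vmdk old-vm.qcow2
  # Import the converted disk into an existing Proxmox VM on the ZFS pool
  qm importdisk 100 old-vm.qcow2 ZFS_POOL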

Next, go ahead and create a VM on the ZFS_POOL located on pve01.
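
From the shell, creating the test VM could look like this (a sketch; the VM ID, sizes, and ISO name are example values):

  # Create VM 100 with 2 cores, 2GB RAM, and a 32GB disk on the ZFS pool
  qm create 100 --name testvm --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-pci --scsi0 ZFS_POOL:32 \
    --ide2 local:iso/your-installer.iso,media=cdrom \
    --boot order=scsi0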

Replicating your ZFS Pool

On pve02, navigate to Disks > ZFS

Create a ZFS pool with the same name as earlier, with RAIDZ as well, but uncheck Add Storage (the storage entry already exists at the datacenter level).

Create Rules for HA (High Availability) Between pve01 & pve02

Navigate to Datacenter > HA

Click Add

Select the VM you created earlier and save the HA setting. (HA will always bring the VM back up if an issue is ever detected with it anywhere in the cluster.)
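
The shell equivalent (a sketch; assumes the test VM has ID 100):

  # Add the VM as an HA-managed resource
  ha-manager add vm:100
  # Check HA resource and node state
  ha-manager status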

Assigning multiple nodes to a ZFS Pool

Navigate to Datacenter > Storage

Double-click on ZFS_POOL

Enable pve01 & pve02 under Nodes

Click OK
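
Or from the shell (a sketch; storage ID ZFS_POOL assumed):

  # Restrict the storage entry to the nodes that actually have the pool
  pvesm set ZFS_POOL --nodes pve01,pve02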

Configure a VM to automatically replicate between ZFS Pools

Navigate to VM 100 > Replication

Click Add

Configure how often you want the VM to replicate between nodes in the cluster.
You may also throttle the bandwidth the replication will use.
VM replication is now configured successfully.
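
The shell equivalent (a sketch; the job ID, 15-minute schedule, and 50 MB/s rate limit are example values):

  # Replicate VM 100 to pve02 every 15 minutes, capped at 50 MB/s
  pvesr create-local-job 100-0 pve02 --schedule "*/15" --rate 50
  # List replication jobs and their last run status
  pvesr status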

Congratulations!

At this point your Proxmox cluster should be complete and ready for production use at your small business or home office.

Feel free to shut off the primary node to verify failover is working; within a minute or two of your VM going down, you should see it brought up on the secondary node (pve02).
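
You can watch the takeover from pve02's shell (a sketch):

  # Watch HA state while pve01 is down; the VM should move to pve02
  watch ha-manager status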

I hope anyone who finds this article gets some use out of it. If you have any questions or comments, just leave them below.

For extra redundancy you could also configure two NICs on each node, as sketched below.
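
A minimal active-backup bond in /etc/network/interfaces might look like this (a sketch – the interface names eno1/eno2, the address, and the gateway are assumptions for pve01):

  auto bond0
  iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-mode active-backup
      bond-miimon 100

  auto vmbr0
  iface vmbr0 inet static
      address 192.168.0.251/24
      gateway 192.168.0.1
      bridge-ports bond0
      bridge-stp off
      bridge-fd 0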

Thanks for taking the time to read all this if you made it this far!

Alexander Donofrio,

IFixTech
