Hardware

Lenovo ThinkCentre M720Q Tiny

[Image: Lenovo ThinkCentre M720Q Tiny PC]

I run a Tiny form-factor mini PC in my homelab. I bought it refurbished for just over $100 CAD. I chose this older, underpowered computer because it's cheap, quiet, and doesn't use much electricity. At idle it consumes less than 20 W; for context, that's around the same as my iPhone draws while fast charging.

My biggest issue with the hardware is the lack of GPU compute. Transcoding video is brutal on the integrated Intel UHD graphics.
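
When I do have to transcode, offloading to the iGPU's video engine via VAAPI helps take the edge off. Here's a minimal sketch, assuming an ffmpeg build with VAAPI support, the UHD 630 exposed at /dev/dri/renderD128, and placeholder file names:

```python
import subprocess

# Hypothetical input/output names; assumes ffmpeg with VAAPI support
# and the UHD 630's render node at /dev/dri/renderD128.
cmd = [
    "ffmpeg",
    "-hwaccel", "vaapi",                       # decode on the iGPU
    "-hwaccel_device", "/dev/dri/renderD128",  # render node for the UHD 630
    "-hwaccel_output_format", "vaapi",         # keep frames in GPU memory
    "-i", "input.mkv",
    "-vf", "scale_vaapi=w=1280:h=720",         # GPU-side downscale
    "-c:v", "h264_vaapi",                      # encode on the iGPU
    "-c:a", "copy",                            # pass audio through untouched
    "output.mp4",
]
subprocess.run(cmd, check=True)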

| Hardware          | Description             |
| ----------------- | ----------------------- |
| CPU               | Intel i5-7500T          |
| iGPU              | Intel UHD 630           |
| CPU Clock Rate    | 2.7 GHz                 |
| Memory            | 16 GB DDR4              |
| Boot Disk         | 512 GB PCIe 3.0 SSD     |
| Hot Storage Disk  | 1 TB 2.5" WD Blue HDD   |
| Bulk Storage Disk | 2 x 5 TB HDD            |

Virtualization

The hardware in the server isn't that strong, so I decided to go with a Type 1 hypervisor to minimize bloat. Proxmox is my hypervisor of choice because of its ease of use and community support. It also automatically detected my integrated Intel GPU (typically a pain on Linux).
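
A quick way to confirm the iGPU actually made it through is to check for DRI render nodes on the host. A minimal sketch:

```python
from pathlib import Path

# Proxmox (Debian underneath) exposes detected GPUs as DRI devices.
# A renderD* node means the Intel iGPU is available for VAAPI and
# passthrough to guests.
dri = Path("/dev/dri")
nodes = sorted(p.name for p in dri.iterdir()) if dri.exists() else []
print(nodes)  # e.g. ['card0', 'renderD128'] when the UHD 630 is detected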

When running my applications I had to choose between full virtual machines (KVM) and containers (LXC). There are significant tradeoffs in the number of applications you can run concurrently, as well as in speed, reliability, and hardware access.

With VMs you get maximum speed and reliability, plus the ability to directly access hardware like the GPU and hard drives. I set up my VMs with memory ballooning so the resources I specify stay reserved and the applications remain stable. The only application I run as a VM is OpenMediaVault (I don't think it can be run as a container); the VM lets me install the OMV operating system and gives it native access to the hard drives.
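
Proxmox exposes this memory configuration through its qm CLI; here's a minimal sketch of how I'd pin it for a VM (the VM ID and sizes are placeholders):

```python
import subprocess

VMID = "100"  # hypothetical VM ID for the OpenMediaVault guest

# Give the guest an 8 GiB ceiling with a 4 GiB ballooning floor, so at
# least 4 GiB stays reserved for it even under host memory pressure.
# `qm set` is Proxmox's VM configuration command.
subprocess.run(
    ["qm", "set", VMID, "--memory", "8192", "--balloon", "4096"],
    check=True,
)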

With containers you can run the maximum number of applications concurrently, at some cost in speed and reliability. Containers require a maximum resource specification for virtual CPUs (vCPUs): an application can never use more than its allotted vCPUs, so at full utilization its performance gets throttled or it may even crash. The major upside is that you can run many more applications at once, as long as the resources actually in use stay below the server's totals. Typically you would want the total vCPU allocation to match the server's core count, but you can get away with allocating more vCPUs than physically exist, because applications rarely use 100% of their resources at all times.
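
To make the overcommit tradeoff concrete, here's a back-of-the-envelope check with hypothetical per-container vCPU limits (the i5-7500T has 4 cores and no hyper-threading):

```python
PHYSICAL_CORES = 4  # i5-7500T: 4 cores, 4 threads

# Hypothetical vCPU limits for a handful of containers.
vcpu_limits = {
    "reverse-proxy": 1,
    "media-server": 2,
    "dns": 1,
    "monitoring": 1,
    "misc": 1,
}

total_vcpus = sum(vcpu_limits.values())
ratio = total_vcpus / PHYSICAL_CORES
print(f"{total_vcpus} vCPUs allocated on {PHYSICAL_CORES} cores "
      f"({ratio:.2f}x overcommit)")
# 6 vCPUs on 4 cores (1.50x) works in practice, because the
# containers rarely peak at the same time.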

Networking

Networking in my homelab can be split into "public" and "internal". My public networking consists of port-forwarding applications through a Traefik reverse proxy to my (dynamic) public IP address. I use a script to automate the management of my dynamic DNS, and I also use Cloudflare Tunnels to allow connections to HTTP services.
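
The DDNS script boils down to "look up my public IP, update the DNS record". Here's a minimal sketch of that idea against Cloudflare's DNS API, assuming the requests library; the token, zone ID, record ID, and hostname are all placeholders:

```python
import requests

# Placeholders: fill these in from the Cloudflare dashboard.
API_TOKEN = "cf-api-token"
ZONE_ID = "zone-id"
RECORD_ID = "dns-record-id"
HOSTNAME = "home.example.com"

# 1. Discover the current public IP.
ip = requests.get("https://api.ipify.org", timeout=10).text.strip()

# 2. Point the A record at it.
resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"type": "A", "name": HOSTNAME, "content": ip, "ttl": 300},
    timeout=10,
)
resp.raise_for_status()
print(f"{HOSTNAME} -> {ip}")
```

Run on a cron schedule, this keeps the record current whenever the ISP rotates the address.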

My internal networking consists of the static IP addresses my applications use to connect with each other. To avoid conflicts with DHCP, I blocked off a range of internal IP addresses so the DHCP server can't assign them, then manually assigned addresses from that range to my applications. This design choice came about because my ISP-provided router is absolute trash and randomizes hostnames and IP addresses with every reboot.

Storage

I use a 512 GB PCIe 3.0 SSD as the boot device that contains the Proxmox OS.

The containers that run on Proxmox store their app data on the 1 TB WD Blue HDD. The container configuration and metadata are stored on the SSD.

Digital media and content that I own or want to back up are stored on the 5 TB hard drives. I currently pool the two disks with mergerfs, which combines their separate filesystems into a single mount point that the containers use. When I upload data, mergerfs automatically allocates new files among the drives to keep usage even.
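
Under the hood that balancing comes down to a create policy. Here's a toy illustration of the "most free space" idea; mergerfs implements this natively, so this is only to show the logic, and the mount paths are placeholders:

```python
import shutil

# Hypothetical branch mount points for the two 5 TB drives.
branches = ["/mnt/disk1", "/mnt/disk2"]

def pick_branch(paths):
    """Choose the branch with the most free space, in the spirit of
    mergerfs's 'mfs' create policy: new files land on the emptier
    drive, which keeps usage across the pool roughly even."""
    return max(paths, key=lambda p: shutil.disk_usage(p).free)

print(pick_branch(branches))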