Refactoring My Home Network
Completely redesigning my network and reconfiguring my core switch.
I spent last night completely redesigning my home network. My previous setup was... functional, but it was a hodgepodge of VLANs added over time with no real strategy. It was becoming a nightmare to manage, especially as I expanded my homelab and CloudStack environment. One thing I've learned about my lab is that if I'm even remotely concerned I could break something, I just won't do it. That runs counter to the experimentation the lab is meant to enable, so it was important that I get this sorted out.
The Hardware
At the core of my network is a stack of three Brocade/Ruckus ICX 7450-48p switches. These are enterprise switches that I picked up used (pro tip: ex-enterprise gear on eBay is the homelab enthusiast's best friend). Each switch has:
- 48x 1Gb PoE+ ports
- 4x 10Gb SFP+ uplink ports
- 2x 40Gb QSFP+ ports (which I'm using for stacking)
The setup is pretty straightforward: the three switches are stacked over the 40Gb QSFP+ ports in a ring, and the stack is managed as a single switch with 144 ports. I obviously don't need that many ports, but these switches were affordable on eBay and I needed at least eight 10Gb ports; three switches give me twelve, which is perfect.
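The stacking configuration itself is minimal. As a rough sketch (the unit number and priority here are illustrative, not my exact values), it comes down to marking one unit as the preferred active controller and turning stacking on:
stack unit 1
priority 255
stack enable
From there, running stack secure-setup on the active unit discovers the other members over the QSFP+ ring and assigns them unit IDs.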
The VLAN Strategy
My previous network was a mess of overlapping VLANs with inconsistent numbering. The first thing I did was establish a proper VLAN numbering scheme:
# Infrastructure VLANs
VLAN 4 - WAN (connection to ISP)
VLAN 5 - TRANSPORT (router interconnects)
VLAN 100 - MGMT (infrastructure management)
VLAN 200 - SAN (storage area network)
VLAN 1010 - HARDWARE-MANAGEMENT (IPMI, iDRAC, etc.)
# Home Network VLANs
VLAN 2010 - HOME-WIRED (wired home devices)
VLAN 2020 - HOME-WIRELESS (Wi-Fi devices)
VLAN 2030 - VPN (for VPN clients)
# CloudStack VLANs
VLAN 3000 - CLOUDSTACK-MGMT (CloudStack management)
VLAN 3010 - CLOUD-PUBLIC (public interfaces for VMs)
VLAN 3020 - CLOUD-SHARED (shared services)
VLAN 3500-3520 - Guest networks (for tenant isolation)
What I love about this scheme is that it's immediately obvious what each VLAN is for based on its number: infrastructure VLANs are below 1000, home networks are in the 2000s, and cloud stuff is in the 3000s. VLANs 100 and 200 are artifacts from my previous configuration that I just didn't want to change yet. I'll eventually have to re-IP all the hardware to get it onto VLAN 1010 the way I want it to be.
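To make the scheme concrete, here's roughly what one of the home VLANs ends up looking like on the switch (the port numbers are placeholders for wherever the access points actually plug in):
vlan 2020 name HOME-WIRELESS by port
tagged ethe 1/1/24 ethe 2/1/24
router-interface ve 2020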
Layer 3 Switching
One of the best features of these switches is that they can do proper Layer 3 routing between VLANs. Instead of tromboning all inter-VLAN traffic through an external router, the switch handles it directly:
interface ve 100
ip address 10.1.0.1 255.255.255.0
!
interface ve 1010
ip address 10.232.10.1 255.255.255.0
ip helper-address 1 172.16.0.2
!
Each VLAN has a corresponding virtual interface (ve) with its own IP address that serves as the gateway for that network segment. This significantly reduces latency for cross-VLAN communication, which matters when you're pushing lots of data between your servers and storage.
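The stack still isn't my internet edge, though, so anything it can't route locally follows a default route toward the edge router over the TRANSPORT VLAN. A minimal sketch, assuming a small point-to-point subnet on VLAN 5 (the addresses are placeholders, not my real transport range):
interface ve 5
ip address 10.0.5.2 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 10.0.5.1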
Security Through Isolation
Security was a major focus of this refactor. I've completely isolated my lab environment from my home network. The CloudStack environment has its own management VLAN (3000) that's separate from both my home network and the actual cloud networks.
The HARDWARE-MANAGEMENT VLAN (1010) is particularly important - it contains all the IPMI, iDRAC, and management interfaces for physical servers. This keeps those sensitive interfaces away from normal traffic and makes them accessible only via specific routes.
vlan 1010 name HARDWARE-MANAGEMENT by port
tagged ethe 3/2/4
untagged ethe 1/1/1 to 1/1/4 ethe 1/1/8 ethe 2/2/3 ethe 3/2/1
router-interface ve 1010
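To back up the "accessible only via specific routes" part, an ACL on the ve can pin down what the BMCs are allowed to talk to. This is a sketch rather than my exact policy: it reuses the MGMT (10.1.0.0/24) and HARDWARE-MANAGEMENT (10.232.10.0/24) subnets from above and drops everything else, which also kills sessions initiated from any other VLAN since their return traffic never gets out:
ip access-list extended HW-MGMT
permit ip 10.232.10.0 0.0.0.255 10.1.0.0 0.0.0.255
deny ip any any log
!
interface ve 1010
ip access-group HW-MGMT in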
The Storage Network
Another critical component was setting up a dedicated SAN network (VLAN 200). This carries all the iSCSI traffic between my servers and storage:
vlan 200 name SAN by port
untagged ethe 1/2/2 ethe 1/2/4 ethe 2/2/2
Notice these ports are untagged - I'm not doing any VLAN tagging on the storage network to reduce overhead. Each server has a dedicated 10Gb link just for storage traffic, completely separated from management and VM traffic.
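Each of those storage ports is otherwise just a plain access port with a descriptive name; a hypothetical example for the first link (the port name is a placeholder for whichever host it feeds):
interface ethernet 1/2/2
port-name PVE-01 SAN
no optical-monitor
!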
CloudStack Integration
The most complex part of this setup is the CloudStack integration. CloudStack needs multiple networks:
- A management network for communication between CloudStack components
- A public network for VMs that need internet access
- A shared services network for internal CloudStack services
- Guest networks for tenant VMs
I've configured these with their own VLANs, and CloudStack can create additional VLANs as needed for tenant isolation. The ports connecting to my CloudStack hosts are configured to allow tagged traffic for all these VLANs:
interface ethernet 1/2/1
port-name PVE-01 DATA
no optical-monitor
!
vlan 3000 name CLOUDSTACK-MGMT by port
untagged ethe 1/2/1 ethe 1/2/3 ethe 2/2/1 ethe 2/2/4
!
vlan 3010 name CLOUD-PUBLIC by port
tagged ethe 1/2/1 ethe 1/2/3 ethe 2/2/1 ethe 2/2/3 to 2/2/4
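The guest range from the numbering scheme gets the same trunk treatment. FastIron can configure a block of VLANs in one go, so pre-tagging 3500-3520 onto the CloudStack host ports looks something like this (a sketch; CloudStack only ever uses the VLANs it actually allocates out of that range):
vlan 3500 to 3520
tagged ethe 1/2/1 ethe 1/2/3 ethe 2/2/1 ethe 2/2/3 to 2/2/4
!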