Saturday, February 23, 2013

My Home Setup - FreeNAS and VMware ESXi

Throughout the years my home setup has seen many changes, but the overall requirements have pretty much stayed the same. I need some test servers and a file server, as well as a firewall to provide internet access and do some PAT/NAT luvin. Here is sort of an explanation of the evolution of my home setups through the years.

The early days

In the early dial-up/cable-modem days I would have a standalone multi-homed box acting as my firewall and then multiple machines living behind it. This worked really well for a long time, but it had a rather large physical footprint: multiple physical machines for different things. Back in the day this wasn't a big deal because energy was cheap and I thought it was cool to have a Sun box to learn on, etc.

Enter virtualization

Once virtualization became solid and memory prices began to drop, I was able to consolidate a lot of hardware onto fewer physical machines. This eventually pushed me down to a single server with lots of disk running Linux, plus a commodity interwebs firewall/router. That worked fine for a while, but there were a few things I had problems with:

1. Flexibility

With Linux as my host OS running the VMs, I was constantly trying to manage updates and reboots. This becomes a serious pain in the ass when you have to recompile kernel modules every kernel update just so VMware would continue to work. The other issue was that if I wanted to play around with different base OS installs, I would have to do something with my data, like copy it to other machines, which is a royal pain. Also, your standard home firewall can't do a lot of the cool things a pfSense box can. (OK, the WRT stuff is cool.)

2. Security

Typically there are a few services I like to expose to the internets, like SSH in case I need to log into the crib, or shell access for things like irssi. The problem with using PAT is that if your internet-facing VM gets WTFPWNTSAUCED, the adversary now has a foothold inside your internal network. That means they can try to access the systems that actually have something you care about on them.

Enter my new home setup:

Normally my home setup was something just pieced together from old stuff without too much thought. So this time I decided on new hardware instead of hand-me-downs, and I put together some basic requirements for the new setup. (Disclaimer - my hardware is not new any more, as this setup has been running for several years now, but when I built it, it was mostly new.)

- Must be able to keep network storage available while I am blowing up my OS/kernel.
- Need to isolate VMs that have no exposure to my internal network.
- Easy to spin up instances for testing stuff.
- ZFS RAID-Z.. Cause it's pretty much awesome.
- NICs that can do jumbo frames. (We'll get into that later)
- ESX compatible hardware.

When I took these requirements into consideration, I ended up going with 2 physical boxes: a FreeNAS box and an ESXi box. The FreeNAS machine would meet my ZFS needs and let me mess with stuff on my main box without having to move data around. I would simply use that same disk for the datastores inside ESXi.

First off, I won't go deep into the steps to get ESX working on a whitebox-type motherboard; there are tons of articles and sites out there with info on that. My ESX box has 8 gigs of RAM and a quad-core CPU. I ended up buying 2 Intel Gigabit NICs and pulled an old 3Com 3C905B from storage. Those were the bomb back in the day and I still have a few.. I could never bring myself to pitch them, so my hoarding ways for old hardware paid off. (I still have my 2 Canopus Pure3D II cards.. Those will always be with me.) The only local storage is an 8 gig USB thumb drive plugged into the back.

So why 3 NICs? The first Intel NIC is my uplink to the internal network. The second Intel NIC is a dedicated connection to my FreeNAS box utilizing jumbo frames. (You must configure the virtual switch to use jumbo frames.) The 3rd NIC, the trusty old 3C905B, is for connectivity to the Internet. It's only a 100-meg NIC, but my interwebs is only 30 Mbit, so it will do.
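For reference, here is roughly what the jumbo frame config looks like from the ESXi shell on a 5.x-era host. The switch and VMkernel interface names (vSwitch1, vmk1) are assumptions for illustration; yours will vary:

```shell
# Raise the MTU on the storage vSwitch itself (hypothetical name vSwitch1):
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on the VMkernel interface riding on it (hypothetical vmk1):
esxcli network ip interface set -i vmk1 -m 9000

# Sanity check that the MTU actually took:
esxcli network ip interface list
```

Both the vSwitch and the VMkernel port have to be at 9000, or frames get fragmented and you lose the benefit.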

On the FreeNAS box I dropped in an Intel NIC alongside the onboard Realtek; the Intel NIC is for the jumbo frame network. I used an 8GB USB drive for the OS here as well. For storage I have 6 x 1TB hard drives in a RAID-Z config, plus 2 x 300GB drives mirrored for the iSCSI extents that hold my VMs' OS disks.
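FreeNAS drives all of this from the web GUI, but under the hood it boils down to standard FreeBSD commands. A sketch of the equivalent, with interface, pool, and disk device names all made up for the example:

```shell
# Jumbo frames on the Intel NIC facing the storage network (hypothetical em0):
ifconfig em0 mtu 9000

# A 6-disk RAID-Z pool like mine (hypothetical pool/disk names):
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5

# The mirrored pair backing the iSCSI extents:
zpool create vmdisk mirror ada6 ada7

# Check pool health:
zpool status
```

Do this through the GUI on a real FreeNAS box so the config survives reboots; the commands are just to show what's happening.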

So give me some details:

In ESXi I created 4 virtual switches, 3 of which have one of the NICs assigned. You can see ISONET has no physical adapters; this will be the network for VMs that expose inbound services to the Internet.
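An internal-only switch like ISONET is just a vSwitch with no uplink attached. From the ESXi shell it would look something like this (switch, portgroup, and uplink names are assumptions):

```shell
# ISONET: no physical uplink, so traffic can only reach the pfSense VM:
esxcli network vswitch standard add -v ISONET
esxcli network vswitch standard portgroup add -v ISONET -p "ISONET Net"

# The other switches each get a physical NIC bound to them, e.g.:
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0
```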

The VMkernel port on the Storage network is important because that is how I mount the iSCSI target from the FreeNAS box for my VMs.
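The iSCSI side of that can be done in the vSphere client, but the CLI version is a useful sketch. The adapter name (vmhba33 is a typical software-iSCSI name) and the FreeNAS IP are assumptions:

```shell
# Turn on the software iSCSI initiator:
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the FreeNAS portal (hypothetical address):
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.2.10:3260

# Rescan so the new LUN shows up as usable storage:
esxcli storage core adapter rescan --adapter vmhba33
</imports></imports>
```

After the rescan the FreeNAS extent shows up as a device you can format as a datastore.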

FreeNAS is pretty straightforward. I set up iSCSI, but you can also use NFS for your ESX setup. Just be sure to change the networking for jumbo frame support:

pfSense.. The key to this setup:

Commodity firewall/routers are neat and all, but nothing beats pfSense. It gives me everything I need out of the box, with some really cool remote access features. When you install the pfSense VM, assign it 3 network cards: one on the 3C905B vSwitch (Internet), one on the internal-network Intel NIC's vSwitch (LAN), and the last one on ISONET.

When installing pfSense, make sure you assign the correct network cards to the "zone" they are supposed to be in. Typically you can keep the defaults for the WAN and LAN zones, but make your ISONET ruleset look like this:

The first rule denies all traffic to the LAN network. The second rule allows it to go everywhere else. You can further lock this down if you like, or open certain services to your internal network if you really want to, like SSH; just make sure you put those above the first rule.
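If you're curious what those GUI rules amount to, pfSense compiles them down to pf rules. A rough hand-written equivalent, where the ISONET interface (em2) and LAN subnet (192.168.1.0/24) are both made-up examples:

```shell
# Write a pf sketch of the ISONET ruleset and parse-check it (no loading):
cat > /tmp/isonet.rules <<'EOF'
block in quick on em2 from any to 192.168.1.0/24
pass in on em2 from any to any
EOF
pfctl -nf /tmp/isonet.rules   # -n: syntax check only, does not load rules
```

"quick" makes the block rule final for LAN-bound traffic, which is why rule order matters when you add exceptions above it.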

Where to go from here?

Personally I am getting ready for a hardware refresh, as recent motherboards have way lower power usage. I know Panaman is using an E-350 for his FreeNAS box, which is extremely low power. Power usage is really the only shortcoming of my current implementation; if I were building this today I would take power consumption into account. Although I am running 6 of the WD Green drives, I think I would get 4 x 3TB WD Red drives instead; they are better suited for home NAS use according to Western Digital. Some other things I would like to do to improve this: add a second NIC on each box to the jumbo frame network and bond the interfaces to get more bandwidth. Most of the time this is not a problem, but if I am pushing a lot of data across that link I do notice a performance hit on other VMs.

That's really it. Comment below if you have questions.

Smooth out