3 minute read

We've been looking at different storage solutions to act as storage servers for forensic images and some extracted data. Essentially we have a server with eight 3TB drives and one 40GB flash drive. The RAID cards are a RAIDCore BC4000 series and a HighPoint RocketRAID 272x series. We have been unable to get the RAIDCore working in either FreeNAS 8 or Ubuntu 12.04; it does, however, give the OS direct access to the connected drives if they are not RAIDed. The HighPoint works with Ubuntu 10.04 (kernel 2.6) and with the newest version of FreeNAS (BSD driver), though with FreeNAS I have had drives somehow fall out of a ZFS pool and then refuse to be re-added. The HighPoint also gives the OS direct access to the connected drives if they are not RAIDed.

So, since we were having trouble with the hardware RAID, we decided to look for alternatives. The ideal solution would aggregate all the storage across four servers into one volume. Initially we considered sharing each of the physical disks via iSCSI, then having a single-head initiator access them all. The initiator would then be responsible for some type of software RAID, LVM and sharing the NFS mount point. Connectivity and data redundancy are big issues, and I was worried that with this method, if one of the servers drops out, a lot of data may be lost or corrupted, plus the head may be a bottleneck.
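For reference, a minimal sketch of what that rejected iSCSI design would have looked like, assuming the Linux tgt target on each server and made-up names (the target name, "storage1" and /dev/sdb are purely illustrative):

    # on each storage server: export a raw disk as an iSCSI target
    # /etc/tgt/targets.conf
    <target iqn.2012-06.local.storage1:disk1>
        backing-store /dev/sdb
    </target>

    # on the single head/initiator: discover and log in to each server's targets,
    # then build software RAID / LVM across the resulting block devices
    iscsiadm -m discovery -t sendtargets -p storage1
    iscsiadm -m node --login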

Instead, we decided that each server would host its own NFS share, so we just needed a way to combine all the 3TB physical disks into one large volume. I first thought about software RAID+ext4, but decided to try ZFS instead. It seemed to have the redundancy we were looking for, plus some other interesting features, such as handling the NFS share directly from ZFS.

For ZFS we decided to try both Ubuntu and FreeNAS (BSD) to figure out 1) which is faster and 2) which is easier to set up and administer.

After installing Ubuntu 12.04 and following these instructions (with a bit of help from here), we had terabytes of aggregate storage accessible over NFS in about 15 minutes. Pretty easy.
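For anyone curious, the whole setup boiled down to something like the following. The pool name "tank" and the device names are just examples, and the PPA is the ZFS-on-Linux packaging as I remember it for 12.04:

    # install ZFS on Linux (Ubuntu 12.04 era packaging)
    sudo add-apt-repository ppa:zfs-native/stable
    sudo apt-get update && sudo apt-get install ubuntu-zfs

    # aggregate the eight 3TB disks into one raidz pool
    sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde \
                                 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

    # let ZFS manage the NFS export itself
    sudo zfs set sharenfs=on tank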

Then we started trying to image to the share. From a USB device to the server using Guymager, we could image at about 11-13MB/sec. After 30 to 40 seconds of imaging, all the drives in the zpool would start flashing like crazy, and imaging would drop to around 3MB/sec. We saw faster initial speeds, but similar degradation, with eSATA/IDE drives on the acquisition machine.
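If you want to reproduce a rough version of this without an imaging tool, a plain sequential write with dd shows the same pattern (the mount points are examples matching the pool above):

    # sequential-write test against the NFS mount from the acquisition machine
    dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=4096 conv=fdatasync

    # the same test run locally on the server, straight into the zpool
    dd if=/dev/zero of=/tank/testfile bs=1M count=4096 conv=fdatasync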

[edit] I believe part of the problem was NFS having the 'sync' flag set on the server side. I had less degradation and slightly faster performance with 'async'.
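To check and change this, something like the following should work; how the 'sharenfs' options get passed through to the NFS server depends on the ZFS implementation, so treat it as a sketch:

    # see which options the current export is using (look for sync/async)
    sudo exportfs -v

    # pass async (and friends) through ZFS's own NFS sharing
    sudo zfs set sharenfs="rw,async,no_subtree_check" tank

    # or, for a hand-written export, add a line like this to /etc/exports and reload:
    # /tank  *(rw,async,no_subtree_check)
    sudo exportfs -ra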

So, for large files ZFS was quite slow. This did NOT seem to be the case when writing from the server to the local zpool, which leads me to think it might be partially a problem with NFS (which ZFS handles itself with the 'zfs set sharenfs=on ... ' command). I think the initial high speed is due to caching. So now I am wondering if ZFS is meant for bursts of smaller reads and writes rather than writing large files.
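One way to see what is going on is to watch the pool while an image is being written; the burst-then-stall pattern shows up clearly in the per-device numbers (again, "tank" is just my example pool name):

    # per-device throughput every 5 seconds during imaging
    zpool iostat -v tank 5

    # ARC (cache) statistics on ZFS-on-Linux
    cat /proc/spl/kstat/zfs/arcstats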

Looking around for why ZFS seems to degrade so much, I found this article, which claims to improve ZFS performance. NOTE: if you make any of the changes from the article, make sure you have a backup of your data. I found that after making some of the tweaks, my pool had been corrupted and had to be re-created. The tweaks did seem to make the server more stable over time (less freezing), but write speeds still degraded.
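At minimum, if you do try any tweaks, scrub the pool afterwards and check its status before trusting it with evidence:

    # verify every block after making tuning changes
    sudo zpool scrub tank
    sudo zpool status -v tank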

FreeNAS took much less time than Ubuntu to install, and as long as it can detect your disks, creating a new pool and sharing it takes about 10 clicks of your mouse (from the web interface). ZFS on FreeNAS appears to be less stable than on Ubuntu (we had drives drop out, and more freezing), but we are getting write speeds of approximately 15MB/sec with less - but still some - degradation over time.

To see if the slow write speed is ZFS-related, we are currently building a software RAID array, which we will simply format with ext4 and expose as a standard NFS share. Syncing the eight disks in the array will take hours, but afterwards we will hopefully have much, much better transfer speeds. Ubuntu allows the creation of a software array during installation, which makes it very easy to set up.
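If you are building it by hand rather than from the installer, the rough steps look like this (the RAID level, device names and export path are just examples):

    # build a software RAID6 array from the eight disks
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

    # the initial sync takes hours; progress is visible here
    cat /proc/mdstat

    # format, mount and export over plain kernel NFS
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /srv/images && sudo mount /dev/md0 /srv/images
    echo "/srv/images  *(rw,async,no_subtree_check)" | sudo tee -a /etc/exports
    sudo exportfs -ra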

If you have any experience with ZFS, and know what some of the issues might be, or even just a better set-up for aggregating large disks and getting good performance writing large files, please leave a comment.

Image: FreeDigitalPhotos.net