FreeNAS and ZFS on old hardware

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Anyone have experience with FreeNAS with ZFS on older hardware? I love the idea of ZFS, and the web-based interface seems super easy, but I'm a little scared of the RAM recommendations. 8 GB? Really? The old PC I want to reuse has a Core 2 Duo E7400 and 2 GB of RAM. I just want to run a single hard drive, maybe two with data spread between them (my backup takes about 2 TB now, and I have some extra 2 TB drives). I don't really want to invest money in outdated RAM.


http://doc.freenas.org/index.php/Hardware_Recommendations
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
Something that supports an E7400 should support 8 GB of RAM. You do definitely want the RAM. My FreeNAS machine was god-awful until I upgraded it past 4 GB (I wound up with 24 GB of registered ECC DDR2; thank you, eBay). As I recall, my disk transfer rate for writes approximately doubled when I upgraded from 4 GB to 8 GB. It's not a joke.

My FreeNAS is on a Core 2-era Xeon, and going by the CPU graphs for pretty much any timeslice I care to look at, it has never gone above 5% CPU utilization. CPU resources are just not an issue.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
I don't really want to use my more powerful PCs for "just" running a NAS. Any way to run FreeNAS together with (or as a service of) PC-BSD? I can only find online comments about doing this as a VM, and it seems FreeNAS does not work well as a VM.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
You don't need a more powerful PC. You need a few extra 2GB DIMMs. It will run on 2GB RAM. You'll just see that write speed will be slow. Which might be fine, depending on your personal needs.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Some testing:

First, 3 machines:

1) FreeNAS machine is a Core 2 Duo E7200 @ 2.53 GHz (stock), 2x1 GB 1066 MHz RAM. Gigabit Ethernet via a Netgear switch to my router. FreeNAS 8.3.1 installed to an 8 GB flash drive. I have a single disk, a Seagate 7200.10 500 GB drive, that I let ZFS have. Connected via CIFS.

2) My "workstation" is a Windows 7 machine, i7 2600K, 16 GB RAM, and the disk used is a Samsung 840 Pro 256 GB SSD.

3) My "Linux" machine is running Mint, i5 3570K, 16 GB RAM. It is running the CrashPlan client and putting the backup on a ZFS pool using ZFS-FUSE, spread across two Hitachi 7K2000s. It also has a shared drive, a WD20EARS 2 TB Green drive, formatted Ext4.

Note the difference between MBps (megabytes) and Mbps (megabits) below.

Using the CrashPlan client on my workstation, with a virtual folder so CrashPlan thinks it's a local backup, it runs to the FreeNAS machine at a reported ~70 Mbps (according to the CrashPlan client). FreeNAS's Reporting section shows that incoming network traffic was in the range of 350 Mbps, free RAM went down to 52 MB (with no activity it was typically around 1.2 GB), and CPU usage was around 25%.

A similar task going to my Linux machine runs around 240 Mbps (30 MBps).

Copying a single large file from my workstation's SSD to the FreeNAS using TeraCopy is pretty erratic and runs anywhere from 12-90 MBps, typically in the 40-60 MBps range. FreeNAS's Reporting section shows incoming network traffic in the range of 400-450 Mbps, free RAM around 1 GB, and CPU usage around 25%. Copying a single large file from FreeNAS to my workstation's SSD using TeraCopy runs 30-40 MBps. FreeNAS's Reporting section shows outgoing network traffic in the range of 300 Mbps, free RAM around 1 GB, and CPU usage around 25%.

Copying a single large file from my workstation to my Linux machine's Ext4 disk runs at 107 MBps. Copying from the Ext4 disk to my workstation's SSD runs around 52 MBps.

I'm hesitant to read too much into this since there are so many differences between the machines. But for my use, it seems like CrashPlan backing up to FreeNAS runs about 1/3 as fast as it does going to my Linux machine running ZFS-FUSE. I wasn't expecting that. Regular network transfers to FreeNAS (ZFS on a 7200.10) run about 1/2 as fast as they do going to my Linux machine's Ext4 WD Green drive. Not surprising, I guess.

Thoughts?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
Crashplan might be doing more integrity checking than is typical for SMB file transfers, which could account for some of the difference, but the bursty nature of your transfers is really the result of buffers emptying and filling again. More RAM would help that, but it's still probably not the end of the world if you stick with what you have.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Some more numbers:

On my Linux machine I installed the ZFS kernel module. I took my WD Green drive and made it into a ZFS volume with both dedup and compression on. Transferring some videos to it from my Windows machine's Seagate 3 TB drive ran anywhere from 30-90 MB/s, used 10-20% of the Linux machine's CPU cycles, and up to 65% of its 16 GB of RAM (just sitting idle it typically takes up just a few percent). Turning dedup and compression off results in much more consistent performance: transfer rates in the 88-90 MB/s range and just 6-8% CPU utilization, but still 67% of my RAM.
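For reference, toggling those properties is just a couple of commands. The pool and device names here are made-up examples, and note that dedup needs lots of RAM for its table, while turning either property off only affects newly written data:

```shell
# Create a single-disk pool on the Green drive (device name is an example)
sudo zpool create greenpool /dev/disk/by-id/ata-WDC_WD20EARX-EXAMPLE

# Enable compression and dedup (lz4 needs a reasonably recent ZFS on Linux)
sudo zfs set compression=lz4 greenpool
sudo zfs set dedup=on greenpool

# Turn them back off; blocks already on disk keep their old format
sudo zfs set compression=off greenpool
sudo zfs set dedup=off greenpool
```

The high RAM usage either way is expected: ZFS's ARC cache will happily grow to take most of free memory and give it back under pressure.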
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Shoot, I think I forgot to do the ashift=12 trick with the WD20EARX I used. Oh well. I'm not planning on using this drive long-term in the array; I was just testing out the ZFS kernel module. I know "green" drives don't do well with ZFS, but does this hold true for a single-disk vdev (just using the drive as a single device, no redundancy, etc.)?
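For anyone else reading: the trick has to happen at pool creation time, since ashift can't be changed on an existing vdev. Something like this, with an example pool and device name:

```shell
# ashift=12 means 2^12 = 4096-byte alignment, matching the 4K physical
# sectors of Advanced Format drives like the WD20EARX
sudo zpool create -o ashift=12 tank /dev/disk/by-id/ata-WDC_WD20EARX-EXAMPLE

# Verify what the pool actually got
sudo zdb -C tank | grep ashift
```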
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Anybody know a way I can get some or most of the features of FreeNAS (easy web-based administration with ZFS support) on a Linux machine? I'm trying to see if I can make my Linux box pull double duty as a workstation of sorts and also a destination for backups stored in ZFS pools.
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
My understanding is that ZFS needs gobs of memory for caching, block lists, checksums, and such. This is why it is robust and why you can do things like dedup. My question is: why do you want to use ZFS on something that looks like a single volume? Why not just share it over CIFS/SMB? On a single drive you can get similar performance without the overhead. Is this a learning thing?
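That said, on ZFS on Linux you can at least cap how much memory the ARC grabs via a module parameter. The 2 GiB value below is just an example:

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (value in bytes)
options zfs zfs_arc_max=2147483648

# After a reboot (or module reload), confirm the limit took effect:
grep c_max /proc/spl/kstat/zfs/arcstats
```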
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Liam, so far it's been a learning thing, yes. But I want to move to using a ZFS system for my backups if I can make it work. So far the easiest solution I've found is to spend a few hundred bucks making a new server to run FreeNAS, but I'd prefer not.

I use CrashPlan, which can't split a backup between volumes. My current backup takes up about 2 TB, but I anticipate it growing substantially over the next year (>3 TB). Right now I've got an "A" backup set (~1 TB) and a "B" backup set (~1 TB), but I'd prefer to use just a single set for simplicity.

I like the idea of being able to add additional disks when I need more space, something that ZFS's RaidZ allows me to do if I understand things correctly.

Right now my backups go to a number of drives in a number of locations, but it's getting to be a pain to manage all of this. I'd prefer a smaller number of backup copies, but to be comfortable doing so I'd want some degree of redundancy. And I like the idea of all the checksumming in ZFS, even for reads off a single disk.

So, do I need ZFS? No, not really. But I'm hoping I can use this cool technology with minimal fuss now to simplify my life in the long run.
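From what I've read, the checksum protection does apply even on one disk: a scrub walks every block and verifies it against its checksum, and copies=2 can even give some self-healing on a single drive at the cost of half the space. A sketch, with an example pool name:

```shell
# Read every block in the pool and verify its checksum
sudo zpool scrub backup
zpool status backup   # shows scrub progress and any CKSUM error counts

# Optional: keep two copies of each block on the same disk so a bad
# sector can often be repaired automatically (halves usable capacity)
sudo zfs set copies=2 backup
```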
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
Storage abstraction is going to become a common feature in other OSes and filesystems. zpools are nifty, but there are also UnRAID, SnapRAID, disParity, Storage Spaces, and Btrfs that you could look into.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Wow. Some more testing:

Linux machine, running ZFS on Linux. I installed three 3 TB Seagate 7200.14 ST3000DM001 drives in a RAID-Z (RAID-Z1) pool with ashift=12, giving me 5.31 TB of space. Copying from my Windows machine's SSD goes at about 108 MB/s, with CPU usage around 11% on the Linux machine. RAM usage goes from <10% before the transfer to 23-63% during it. I turned the machine off and pulled the power from one of the 3 TB drives; zpool status tells me that the pool is degraded, but I can still transfer data off of it (to an SSD on the Linux machine) at 104 MB/s, using 19% of the CPU and 23% of the RAM.
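For reference, a pool like the one above can be built with one command. Device names here are examples; the by-id paths are safer than /dev/sdX names, which can shuffle between boots:

```shell
# Three-disk RAID-Z1: capacity of two disks, survives one disk failure
sudo zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/ata-ST3000DM001-EXAMPLE1 \
    /dev/disk/by-id/ata-ST3000DM001-EXAMPLE2 \
    /dev/disk/by-id/ata-ST3000DM001-EXAMPLE3

# With one drive missing, the pool reports DEGRADED but stays readable
zpool status tank
```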
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
Er, why? Gaming isn't going to be improved with a bandwidth increase and latency is ultimately going to be controlled by infrastructure upgrades from your ISP that they probably have zero interest in doing.

Granted that I want it too, but I'm more worried about getting the switch ports that go with it than anything else; I think the least expensive switch ports are on the order of $300 apiece.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,511
Location
Horsens, Denmark
Er, why? Gaming isn't going to be improved with a bandwidth increase and latency is ultimately going to be controlled by infrastructure upgrades from your ISP that they probably have zero interest in doing.

Granted that I want it too, but I'm more worried about getting the switch ports that go with it than anything else; I think the least expensive switch ports are on the order of $300 apiece.

Switches are expensive, but I don't mind fooling with multiple NICs. Getting better speed to my NAS would be nice, particularly once 4k video starts getting more common.
 