Sun Fire x4500 Thumper: Recommended ZFS Zpool Layout

By Alasdair Lumsden on 16 Nov 2008

The x4500 comes with 48 disks, two of which you typically use as a mirrored ZFS pair for the host OS, leaving 46 drives for data. One of the questions you’re faced with is how to lay out your zpool configuration efficiently, balancing performance, reliability and capacity.

For the particular workload we’ll be using the x4500 for, we want a balance across all three – no single factor wins out over the others. To further complicate matters, the box has six 8-port SATA controllers, so you want to spread your workload across the controllers in an intelligent fashion.
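If you want to check how the drives map onto the controllers before settling on a layout, the standard Solaris tools will show you. Something along these lines works (output will obviously vary from box to box):

format < /dev/null
cfgadm -al | grep sata

format lists every disk as cXtYdZ, where the cX portion identifies the controller, and cfgadm shows the SATA attachment points and whether each port is occupied.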

There are many differing opinions on this. I sparked a debate on #solaris on Freenode by posing the question; some suggested a single zpool built from mirrors, one drive per controller in each pair, if databases are involved, while others suggested lots of small raidz2 sets in a single zpool.
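For comparison, the mirrored approach would look something like this sketch (hypothetical pool name, pairing each drive with one on a different controller; it gives better random I/O for databases but only half the raw capacity):

zpool create dbpool01 mirror c0t1d0 c1t1d0 mirror c3t1d0 c4t1d0 mirror c5t1d0 c6t1d0

…and so on across the remaining drives.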

After experimenting, musing, and researching on the web, we finally settled on the following configuration, which provides a fair balance:

  pool: zpool01
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool01     ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
            c3t7d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c3t6d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
        spares
          c4t2d0    AVAIL
          c4t6d0    AVAIL

This gives 2 spares and 4 raidz2 groups of 11 drives each. The chances of 3 drives failing in an 11-disk raidz2 group before a spare finishes resilvering are (hopefully!) fairly low, and in the unlikely event that 3 drives did fail, they’d more than likely be spread across the 4 raidz2 groups. It’s all about managing risk.
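To put rough numbers on the capacity side of the trade-off, assuming 500 GB drives purely for illustration (the x4500 shipped with various drive sizes), each raidz2 group gives 9 drives’ worth of usable space:

echo $((4 * (11 - 2) * 500))    # 4 groups x 9 data drives x 500 GB = 18000 GB, before ZFS overhead

That, plus the 2 spares and the 2 OS drives, accounts for all 48 disks.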

The commands to create this would be:

zpool create zpool01 raidz2 c{4,3,6,5,1,0}t1d0 c{3,6,5,1,0}t0d0
zpool add zpool01 raidz2 c{4,3,6,5,1,0}t3d0 c{3,6,5,1,0}t2d0
zpool add zpool01 raidz2 c{4,3,6,5,1,0}t5d0 c{3,6,5,1,0}t4d0
zpool add zpool01 raidz2 c{4,3,6,5,1,0}t7d0 c{3,6,5,1,0}t6d0
zpool add zpool01 spare c4t2d0 c4t6d0
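A couple of notes: the c{…} brace expansion relies on a shell such as bash; under plain /bin/sh you would need to list each device in full. Once the pool is built, it’s worth sanity-checking the layout and kicking off an initial scrub:

zpool status zpool01
zpool list zpool01
zpool scrub zpool01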

Finally, the inspiration for this configuration came from the Joyent blog. Those guys know their stuff and have been using ZFS in production for longer than most.