ZFS for Ubuntu
=>
https://launchpad.net/~zfs-native/+archive/stable (ZFS for Ubuntu)
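For context, pulling ZFS from that PPA onto Ubuntu 10.04 typically amounts to the following; this is a hedged sketch, and the ubuntu-zfs meta-package name is an assumption from memory rather than something stated on the page:
Code :
# add the zfs-native stable PPA and install the DKMS-built modules and tools
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs    # assumed meta-package name for this PPA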
Tests & Perf
=>
http://hardforum.com/showthread.php?t=1651800
The machine specs are now:
SUPERMICRO X8DTN+
2 x Xeon 2.26GHz
2 x 80 GB WD
2 x Corsair Force 3 120GB SSD
8 x 2 TB Western Digital WD2002FYPS SATA 5400RPM
8 x 2 TB Western Digital WD2003FYYS SATA 7200RPM
1 x 3ware 9650SE-16ML 16 Port SATA Raid
Ubuntu 10.04.3 LTS 64-bit
12GB RAM
I took the advice of the forums and stayed away from zfs-fuse, deciding to use the Ubuntu packages by Darik Horn.
I wanted to try out sub.mesa's ZFS build, but this backup server also runs a number of jobs, so I couldn't move away from Ubuntu just yet.
The next build will split the roles up, giving us a backup target (sub.mesa) and a server to push things around.
That said, here's our layout. I have two pools (a creation sketch follows the list):
- 1 RAIDZ on a RAID5 array from the controller, for the 8 x 5400RPM drives (/backup/base0)
- 1 RAIDZ for the 8 x 7200RPM drives (/backup/base1)
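For illustration, creating that layout would look roughly like this. It is a minimal sketch: the -m mountpoints and the /dev/sdd device for the hardware RAID5 unit are assumptions based on the description, and the by-id names are the ones that appear in the zpool status output further down.
Code :
# base0: single-device pool on the controller's RAID5 unit (assumed to show up as /dev/sdd)
zpool create -m /backup/base0 base0 /dev/sdd
# base1: one 8-disk raidz vdev over the 7200RPM drives, addressed by their by-id names
zpool create -m /backup/base1 base1 raidz \
    /dev/disk/by-id/scsi-3600050e0bd269d00722d0000fe750000 \
    /dev/disk/by-id/scsi-3600050e0bd26b60060ff00008aa70000 \
    /dev/disk/by-id/scsi-3600050e0bd26ca00673f0000c84b0000 \
    /dev/disk/by-id/scsi-3600050e0bd281900d841000032ef0000 \
    /dev/disk/by-id/scsi-3600050e0bd282800bcc1000079570000 \
    /dev/disk/by-id/scsi-3600050e0bd283700c197000017b90000 \
    /dev/disk/by-id/scsi-3600050e0bd284b00a47300006cf10000 \
    /dev/disk/by-id/scsi-3600050e0bd285a0021cd0000af4f0000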
I put base0 together as quickly as possible, as we needed somewhere to store our backups before sending them to tape, and the boss said to build a hardware RAID5 array with ZFS on top. We didn't know enough back then, but with base1 I had more time to test out different settings for the best performance. On the ZFS side, the variables were (toggling them is sketched after the list):
Dedup on/off
Compression on/off
ZIL on an SSD or not
L2ARC on an SSD or not
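Each of these can be flipped at run time. Here is a minimal sketch of the commands involved; the SSD partition paths are assumptions, since the post doesn't say how the two Force 3 drives were split between ZIL and L2ARC:
Code :
zfs set dedup=on base1             # or dedup=off
zfs set compression=on base1       # lzjb by default; off to disable
# ZIL (separate log device) on one SSD -- the partition path is a placeholder
zpool add base1 log /dev/disk/by-id/ata-Corsair_Force_3_SSD_XXXXXXXX-part1
# L2ARC (cache device) on the other SSD; this matches the cache vdev in zpool status below
zpool add base1 cache /dev/disk/by-id/ata-Corsair_Force_3_SSD_11356504000006820497
# log and cache devices can be backed out again with zpool remove
zpool remove base1 /dev/disk/by-id/ata-Corsair_Force_3_SSD_XXXXXXXX-part1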
On the controller, I set all the drives to single-disk mode, and enabled and disabled the following (see the sketch after the list):
Read Cache
Write Cache
Write Journaling
Queuing
Link Speed (1.5 vs 3.0 Gb/s)
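On a 3ware 9650SE these unit-level policies are normally driven through tw_cli. The controller and unit numbers below are assumptions and option availability varies by firmware, so treat this as a sketch of the kind of commands involved rather than the exact ones used:
Code :
tw_cli /c0/u0 show                    # current settings for unit 0 on controller 0 (assumed IDs)
tw_cli /c0/u0 set cache=on            # unit write cache on/off
tw_cli /c0/u0 set qpolicy=on          # command queuing (NCQ) on/off
tw_cli /c0/u0 set storsave=balance    # storsave policy governs write journaling behaviour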
And finally, RAID layouts (compared in the sketch below):
RAIDZ with 8 disks
RAIDZ with 4 RAID0 arrays on the controller
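For the comparison, the two candidate layouts for the 7200RPM drives would have been built roughly like this. The pool name is hypothetical, sde..sdl are the eight single-disk units as they appear in the iostat output later in the post, and sdm..sdp are placeholder names for four presumed two-disk RAID0 units exported by the controller:
Code :
# layout A: one raidz vdev over the 8 disks exported as single drives
zpool create testpool raidz sde sdf sdg sdh sdi sdj sdk sdl
# layout B: one raidz vdev over four 2-disk RAID0 units built on the controller
# (sdm..sdp are hypothetical device names for those units)
zpool create testpool raidz sdm sdn sdo sdp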
Now, that was all theoretical benchmarking. Things are not so simple in the real world, as I'm slowly learning.
My system has /backup as the parent mount point for both base0 and base1:
Code :
# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
base0            11.6T   838G  11.6T  /backup/base0
base1             454G  11.6T   354G  /backup/base1
base1/archive     229K  11.6T   229K  /backup/base1/archive
base1/filelevel   100G  11.6T   100G  /backup/base1/filelevel
base1/vmlevel     229K  11.6T   229K  /backup/base1/vmlevel
Code :
# zpool status
  pool: base0
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    base0       ONLINE       0     0     0
      sdd       ONLINE       0     0     0

  pool: base1
 state: ONLINE
  scan: none requested
config:

    NAME                                              STATE     READ WRITE CKSUM
    base1                                             ONLINE       0     0     0
      raidz1-0                                        ONLINE       0     0     0
        scsi-3600050e0bd269d00722d0000fe750000        ONLINE       0     0     0
        scsi-3600050e0bd26b60060ff00008aa70000        ONLINE       0     0     0
        scsi-3600050e0bd26ca00673f0000c84b0000        ONLINE       0     0     0
        scsi-3600050e0bd281900d841000032ef0000        ONLINE       0     0     0
        scsi-3600050e0bd282800bcc1000079570000        ONLINE       0     0     0
        scsi-3600050e0bd283700c197000017b90000        ONLINE       0     0     0
        scsi-3600050e0bd284b00a47300006cf10000        ONLINE       0     0     0
        scsi-3600050e0bd285a0021cd0000af4f0000        ONLINE       0     0     0
    cache
      ata-Corsair_Force_3_SSD_11356504000006820497    ONLINE       0     0     0

errors: No known data errors
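As an aside, the "scan: none requested" lines mean no scrub has ever been run on either pool. If periodic verification of the on-disk data is wanted, one can be kicked off manually; this is a suggestion, not something from the original post:
Code :
zpool scrub base0
zpool scrub base1
zpool status        # the scan: line then reports scrub progress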
Code :
# zfs get dedup,compression
NAME             PROPERTY     VALUE  SOURCE
base0            dedup        off    default
base0            compression  off    default
base1            dedup        off    default
base1            compression  off    default
base1/archive    dedup        off    default
base1/archive    compression  off    default
base1/filelevel  dedup        off    default
base1/filelevel  compression  on     local
base1/vmlevel    dedup        off    default
base1/vmlevel    compression  off    default
I have a couple of questions. For one, does anyone want my bonnie++ results as CSV?
Also, I'm at a loss as to why all my 'per char' results are so low... I can't seem to figure that out.
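As a hedged aside: bonnie++'s per-character tests push one byte at a time through putc()/getc(), so they tend to be CPU-bound rather than disk-bound, and low figures there are common even on fast arrays. The exact invocation isn't given in the post, but a typical run that produces the CSV line looks something like this; the target directory and file size are assumptions:
Code :
# -d test directory, -s file size (should exceed RAM), -u run as this user, -q CSV on stdout
bonnie++ -d /backup/base1/filelevel/bench -s 24g -u root -q > bonnie.csv
# turn the CSV into a readable table or an HTML report
bon_csv2txt  < bonnie.csv
bon_csv2html < bonnie.csv > bonnie.html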
Lastly, I set the wrong dataset for a backup (put it into /backup/base1/ instead of /backup/base1/filelevel/) and now I'm trying to:
Code :
mv /backup/base1/server001/daily.0/ ../filelevel/server001/
and it's taking forever, with almost nothing happening. The data is an rsync from a number of Linux servers, totaling ~355GB. It only took 5 hours to get that data over the network, yet it's been moving to the correct dataset for 6 hours and is only at 100GB transferred:
Code :
# iostat -x 1
extended device statistics
device  mgr/s  mgw/s   r/s   w/s    kr/s   kw/s  size  queue  wait  svc_t  %b
sda         0      0   0.0   0.0     0.0    0.0   0.0    0.0   0.0    0.0   0
sdc         0      0   0.0   0.0     0.0    0.0   0.0    0.0   0.0    0.0   0
sdb         0      3   2.0   2.0     2.9   96.2  25.4    0.1  17.5   17.5   7
md2         0      0   0.0   0.0     0.0    0.0   0.0    0.0   0.0    0.0   0
md1         0      0   0.0   0.0     0.0    0.0   0.0    0.0   0.0    0.0   0
md0         0      0   0.0   0.0     0.0    0.0   0.0    0.0   0.0    0.0   0
sdd         0      0   0.0   0.0     0.0    0.0   0.0    0.0   0.0    0.0   0
sde         8      0  50.8   0.0  1555.1    0.0  30.6    0.8  14.8   11.0  56
sdf         9      0  44.0   0.0  1543.3    0.0  35.1    0.6  14.0    9.6  42
sdg         9      0  47.9   0.0  1562.9    0.0  32.7    0.7  14.5   10.8  52
sdh         9      0  51.8   0.0  1476.9    0.0  28.5    0.7  13.2    9.4  49
sdi        11      0  46.9   0.0  1582.4    0.0  33.8    0.7  15.2   10.6  50
sdk         9      0  46.9   0.0  1562.9    0.0  33.3    0.7  15.4   11.0  52
sdj         5      0  50.8   0.0  1476.9    0.0  29.1    0.7  14.4    8.5  43
sdl         8      0  49.8   0.0  1527.7    0.0  30.7    0.8  15.7   11.2  56

# zpool iostat -v
                                                   capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
base0                                           11.6T  1.02T     64     61  4.42M  5.46M
  sdd                                           11.6T  1.02T     64     61  4.42M  5.46M
----------------------------------------------  -----  -----  -----  -----  -----  -----
base1                                            549G  14.0T     60    149  6.13M  8.26M
  raidz1                                         549G  14.0T     60    149  6.13M  8.26M
    scsi-3600050e0bd269d00722d0000fe750000          -      -     14     16   893K  1.30M
    scsi-3600050e0bd26b60060ff00008aa70000          -      -     14     16   870K  1.28M
    scsi-3600050e0bd26ca00673f0000c84b0000          -      -     14     16   891K  1.30M
    scsi-3600050e0bd281900d841000032ef0000          -      -     14     16   871K  1.29M
    scsi-3600050e0bd282800bcc1000079570000          -      -     14     16   893K  1.30M
    scsi-3600050e0bd283700c197000017b90000          -      -     13     16   869K  1.28M
    scsi-3600050e0bd284b00a47300006cf10000          -      -     14     16   890K  1.30M
    scsi-3600050e0bd285a0021cd0000af4f0000          -      -     14     16   871K  1.29M
cache                                               -      -      -      -      -      -
  ata-Corsair_Force_3_SSD_11356504000006820497   106G  5.89G      6     11  67.1K  1.36M
----------------------------------------------  -----  -----  -----  -----  -----  -----
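One likely explanation, offered here as an aside: base1 and base1/filelevel are separate ZFS datasets, i.e. separate filesystems, so mv cannot simply rename the directory. It falls back to copying every file and then deleting the source, which is why zpool iostat shows base1 both reading (~6M) and writing (~8M) through the same raidz vdev, and why 355GB takes so long. A hedged alternative that at least shows progress and keeps hard links between files inside the copied tree (the daily.0 naming suggests rsnapshot-style hard-linked backups); paths are taken from the original mv command:
Code :
# copy within the pool, keeping hard links (-H) and permissions (-a), then remove the sources
# (empty source directories are left behind and can be cleaned up afterwards)
rsync -aH --progress --remove-source-files \
    /backup/base1/server001/daily.0/ \
    /backup/base1/filelevel/server001/daily.0/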