Introduction
While we typically advise against the use of RAID controllers in the average system, there are certainly some situations in which a RAID controller is necessary for either data security or high performance. Today we will be looking at a pair of RAID cards from LSI that feature PCI-E 3.0 support and the new Mini-SAS HD SFF8643 connector with support for 12 Gb/s SAS.
When qualifying RAID cards, there are two major points we always want to examine. From a performance standpoint, one of the most important things to know about a RAID card is simply where its throughput tops out. For example: if you are using SSDs that individually deliver 500 MB/s of sequential read performance and the RAID card maxes out at 1000 MB/s, then you may not need more than 2-3 drives (depending on the type of RAID array) before the card itself becomes a bottleneck.
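To make that math concrete, here is a minimal sketch (in Python, using the hypothetical numbers from the example above) of how you can estimate the drive count at which a card becomes the bottleneck:

```python
import math

def drives_to_saturate(card_limit_mbps: float, drive_mbps: float) -> int:
    """Smallest number of drives whose combined sequential throughput
    meets or exceeds the RAID card's ceiling. Assumes RAID 0, where
    throughput grows roughly linearly with drive count."""
    return math.ceil(card_limit_mbps / drive_mbps)

# The example from above: 500 MB/s SSDs on a card that tops out at 1000 MB/s.
print(drives_to_saturate(1000, 500))  # -> 2 drives already hit the ceiling
```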
The second point is how the RAID controller handles failed drives and arrays. In the event of a drive failure, a RAID 0 (stripe) is always going to fail catastrophically regardless of the quality of the controller due to the lack of redundancy. However, any RAID type that does have redundancy (RAID 1, 5, 10, etc.) should fail in a way that prevents data loss. What we like to see is notification to the user in the event of a drive failure and an easy, effective way to repair the degraded RAID array once the failed drive has been replaced.
RAID Card Specifications
| Specifications | 9341-8i | 9361-8i |
| --- | --- | --- |
| Internal Ports | 2x Mini-SAS HD SFF8643 | 2x Mini-SAS HD SFF8643 |
| Native Supported Disks | 8 | 8 |
| Maximum Supported Disks* | 128 | 128 |
| Controller Chip | LSISAS3008 PowerPC 476 controller @ 1.2 GHz | LSISAS3108 dual core ROC |
| Cache Memory | – | 1GB 1866MHz DDR3 SDRAM |
| RAID Levels | 0, 1, 5, 10, and 50 | 0, 1, 5, 6, 10, 50, and 60 |
| Battery Backup Support | – | CacheVault LSICVM02 |
| MSRP | $375 | $735 |
*Achieved by using port expanders like the Intel RES2SV240
Both of these cards use the new Mini-SAS HD SFF8643 port, which does much the same thing as the older Mini-SAS SFF-8087 port except that each lane runs at 12 Gb/s instead of the older 6 Gb/s. Just like the older port, you can connect to a backplane or a port expander, or use a breakout cable to directly connect up to 4 drives per port. Because of this, these cards support 8 disks natively and up to 128 disks with the use of port expanders. The -4i versions of these cards have exactly the same specifications except that they only have a single Mini-SAS HD SFF8643 port, which limits the number of natively supported disks to just 4.
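For some perspective on those numbers, here is a quick back-of-the-envelope calculation of the usable bandwidth of one of these 4-lane ports (a sketch that assumes SAS's standard 8b/10b encoding, where every 10 bits on the wire carry 8 bits of data):

```python
def port_bandwidth_mbps(line_rate_gbps: float, lanes: int = 4) -> float:
    """Usable bandwidth of one Mini-SAS port in MB/s. With 8b/10b
    encoding, a 12 Gb/s line rate moves 1.2 GB of data per second."""
    per_lane_mbps = line_rate_gbps * 1000 * (8 / 10) / 8  # Gb/s -> MB/s
    return per_lane_mbps * lanes

print(port_bandwidth_mbps(12))  # SAS3 (SFF8643): 4800.0 MB/s per port
print(port_bandwidth_mbps(6))   # SAS2 (SFF-8087): 2400.0 MB/s per port
```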
In terms of specifications, the LSI 9361-8i has a more powerful dual core LSISAS3108 controller chip than the 9341-8i, as well as 1GB of integrated DDR3 memory. The 9361-8i also supports RAID 6 and RAID 60 in addition to RAID 0, 1, 5, 10, and 50, which may be useful if data security is your primary concern. For further data security, the 9361-8i supports CacheVault, which is LSI's flash memory and battery backup accessory that reduces the risk of data corruption when power is unexpectedly lost.
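To make the RAID level trade-off concrete, here is a short sketch of the standard usable-capacity formulas for the levels these cards support (generic RAID math, not anything specific to LSI):

```python
def usable_capacity(level: int, drives: int, drive_tb: float) -> float:
    """Usable capacity in TB for common RAID levels: RAID 5 gives up one
    drive to parity, RAID 6 gives up two, and RAID 10 mirrors half."""
    redundancy_drives = {0: 0, 1: drives - 1, 5: 1, 6: 2, 10: drives // 2}
    return (drives - redundancy_drives[level]) * drive_tb

for level in (0, 5, 6, 10):
    print(f"RAID {level}: {usable_capacity(level, 8, 1.0):.0f} TB from 8x 1 TB drives")
# RAID 0: 8 TB, RAID 5: 7 TB, RAID 6: 6 TB, RAID 10: 4 TB
```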
One thing that is not shown in the specifications is how much more flexibility the 9361-8i has in terms of RAID customization options:
| Available RAID Options | 9341-8i | 9361-8i |
| --- | --- | --- |
| Stripe Size | 64KB | 64KB / 128KB / 256KB / 512KB / 1MB |
| Read Policy | No Read Ahead | Always Read Ahead / No Read Ahead |
| Write Policy | Write Through | Write Through / Write Back / Write Back with BBU |
| I/O Policy | Direct IO | Direct IO / Cached IO |
Overall, the 9341-8i really doesn't have any customization options beyond setting the RAID type and the number of disks in the array. The lack of customization will likely result in a small loss of performance in some situations, but if absolute top performance is not needed then the cost savings may be enough to compensate.
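For a sense of what the stripe size option actually controls, here is a toy model of how a striped array maps a logical offset to a physical disk (a simplified RAID 0 sketch; real controllers also reserve space for metadata):

```python
def locate(offset_bytes: int, stripe_kb: int, disks: int) -> tuple[int, int]:
    """Map a logical byte offset to (disk index, row on that disk) in a
    RAID 0 stripe. Small stripes spread an I/O across more disks sooner;
    large stripes keep small I/Os on a single disk."""
    chunk = offset_bytes // (stripe_kb * 1024)
    return chunk % disks, chunk // disks

# Disks touched by a 256KB read starting at offset 0, on an 8-disk array:
offsets = range(0, 256 * 1024, 64 * 1024)
print({locate(o, 64, 8)[0] for o in offsets})    # 64KB stripes -> {0, 1, 2, 3}
print({locate(o, 1024, 8)[0] for o in offsets})  # 1MB stripes  -> {0}
```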
Test Setup
To benchmark the performance of these cards we used CrystalDiskMark, which is our standard hard drive benchmark. Each benchmark was run with the default setting of 5 passes and a test size of 1000MB. Running multiple passes allows the integrated RAM on the 9361-8i to come into play, which we feel is more realistic than trying to take the integrated RAM out of the equation. Because of this, however, the results for the 9361-8i are essentially a best-case scenario, so you will likely see slightly lower performance in real-world situations.
Our test platform is based around our recently updated Genesis I certified system along with 1-8 Samsung 840 Pro 128GB storage drives in a RAID 0 configuration. This system uses Xeon E5 processors and supports up to 12 CPU cores, 256GB of Reg. ECC memory, and plenty of PCI-E 3.0 slots, which ensures that the test platform itself will not be a bottleneck.
| Testing Hardware | |
| --- | --- |
| Motherboard | Supermicro X9SRA |
| CPU | Intel Xeon E5-2680 v2 Ten Core 2.8 GHz |
| CPU Cooler | Corsair Hydro Series H60 CPU Cooler (Rev. 2) |
| Video Card | NVIDIA GeForce GTX Titan 6GB |
| PSU | Seasonic X-1050 1050 Watt |
| RAM | 8x Kingston DDR3-1600 4GB ECC Reg. |
| OS Drive | Samsung 840 Pro 256GB |
| Storage Drives | 1-8x Samsung 840 Pro 128GB |
In addition to the hardware, the 9361-8i also has an adjustable Stripe Size as well as Read Ahead and Write Back options that will affect performance. While a 64KB stripe size is generally accepted as the best stripe size for SSDs, both LSI and the internet in general have conflicting recommendations on whether you should use Read Ahead and Write Back for best performance. To determine which settings we would use in our benchmarks for the 9361-8i, we first ran some preliminary benchmarks with a 4 disk RAID 0 array to see how different combinations of Read Ahead and Write Back performed:
If you click on the charts above, you will see three sets of benchmark results: one with Write Back and Read Ahead, one with Write Through and Read Ahead, and one with Write Through and No Read Ahead. Based on these results, we determined that Write Back combined with Read Ahead gives the best overall performance, so that is what we will be using for the 9361-8i benchmarks. Remember that the 9341-8i only allows Write Through and No Read Ahead, so we will be using different settings for the two RAID cards.
| RAID Benchmark Settings | 9341-8i | 9361-8i |
| --- | --- | --- |
| Stripe Size | 64KB | 64KB |
| Read Policy | No Read Ahead | Always Read Ahead |
| Write Policy | Write Through | Write Back |
| I/O Policy | Direct IO | Direct IO |
The one caveat we need to make for these settings is that when you use Write Back it is highly recommended that you also use the CacheVault accessory. CacheVault is a small battery and flash memory module that plugs into the RAID card and allows it to move any data currently in the integrated memory cache to the CacheVault flash memory in the event of a power loss. Then, once power is restored, the card can write that data back to the RAID array. Without CacheVault, a loss of power also means the loss of any data being stored in the cache, which will likely result in data corruption.
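As a purely conceptual illustration of why Write Back without CacheVault is risky (a toy model of the behavior we just described, not how the actual firmware works):

```python
class WriteBackCache:
    """Toy write-back cache: writes are acknowledged as soon as they are
    cached, and only reach the disks when flushed."""
    def __init__(self, has_cachevault: bool):
        self.cache, self.disk = [], []
        self.has_cachevault = has_cachevault

    def write(self, block):  # acknowledged immediately, hence the speed boost
        self.cache.append(block)

    def flush(self):         # background write out to the array
        self.disk += self.cache
        self.cache.clear()

    def power_loss(self):
        if self.has_cachevault:
            self.flush()        # cache is dumped to flash, written back on boot
        else:
            self.cache.clear()  # un-flushed writes are simply gone

for cv in (False, True):
    c = WriteBackCache(cv)
    c.write("A"); c.flush(); c.write("B"); c.power_loss()
    print(f"CacheVault={cv}: data on disk after power loss = {c.disk}")
# CacheVault=False: ['A'] (write 'B' was lost)   CacheVault=True: ['A', 'B']
```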
1-8 Disk RAID 0 Performance
Sequential read performance is one of the most commonly used metrics for hard drive and RAID performance, so that is where we will start. The Samsung 840 Pro 128GB drive has a sequential read of about 500 MB/s on most motherboard controllers, which is right in line with what we saw with the 9341-8i. Interestingly, the 9361-8i delivered double that base performance with only a single disk. This is the result of the 1GB of integrated memory and really goes to show the immediate impact of integrated memory on RAID cards.
Both cards show a steady increase in performance as more drives are added, although the 9361-8i starts to top out after 6 disks at approximately 2500 MB/s. The 9341-8i only just hits 2500 MB/s (actually 2439.8 MB/s) with 8 disks so we suspect that 2500 MB/s is the maximum sequential read performance possible on both of these cards.
For the sequential write performance, these two cards actually start and end with pretty much identical performance numbers. However, between a 2 disk and 5 disk RAID 0, the 9361-8i is about 20% faster than the 9341-8i. Both cards appear to peak at just over 2000 MB/s, but the increased performance of the 9361-8i allows it to hit that peak with just 5 of the Samsung 840 Pro 128GB drives instead of the 6 needed for the 9341-8i.
Moving on to random read performance, we get our first look at an instance where the 9341-8i is not able to catch up to the 9361-8i regardless of the number of drives. The 9361-8i peaks after about 5 drives at ~1700 MB/s. The 9341-8i also peaks after about 5 drives, but does so at only ~1200 MB/s.
Random write performance is interesting since the 9361-8i largely peaks after only 2 drives and has a maximum performance of ~1700 MB/s. The 9341-8i appears to peak after 4 drives, but sees a late performance improvement when 8 drives are used. This allows the 9341-8i to hit the same peak random write performance as the 9361-8i, but it requires 8 of our test drives instead of just 2.
Looking at the random read IOPS (Input/Output Operations Per Second) shows that both cards hit their peak immediately. For the 9361-8i this is at approximately 9000 IOPS, while the 9341-8i peaks at about 5300 IOPS.
The random write IOPS results are interesting because both cards float around the 10500 IOPS mark regardless of the number of drives, so the two cards have essentially identical performance.
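To relate the IOPS numbers to the throughput numbers, throughput is simply IOPS multiplied by the transfer size. A quick sketch (assuming the IOPS figures come from CrystalDiskMark's 4K tests, i.e. 4KB transfers):

```python
def iops_to_mbps(iops: float, block_kb: float = 4) -> float:
    """Throughput implied by an IOPS figure at a given transfer size."""
    return iops * block_kb / 1024

print(f"{iops_to_mbps(9000):.1f} MB/s")   # 9361-8i random read peak at 4KB: ~35.2 MB/s
print(f"{iops_to_mbps(10500):.1f} MB/s")  # random write for both cards:     ~41.0 MB/s
```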
Overall, we found the following to be the maximum performance for both of the cards:

| Maximum Performance | 9341-8i | 9361-8i |
| --- | --- | --- |
| Sequential Read | ~2500 MB/s (8 drives) | ~2500 MB/s (6 drives) |
| Sequential Write | ~2000 MB/s (6 drives) | ~2000 MB/s (5 drives) |
| Random Read | ~1200 MB/s (5 drives) | ~1700 MB/s (5 drives) |
| Random Write | ~1700 MB/s (8 drives) | ~1700 MB/s (2 drives) |
| Random Read IOPS | ~5300 | ~9000 |
| Random Write IOPS | ~10500 | ~10500 |
While the 9341-8i is able to match the maximum performance of the 9361-8i in many cases, it is worth specifically pointing out how many more drives it took to achieve that performance. With this information you can approximate how many drives (or at least how many of the Samsung 840 Pro drives) it would take before the RAID card itself becomes a bottleneck. However, if you are using a different drive than what we used in our testing, you will have to do some math (and some educated guesswork) to determine how many of those drives it would take to saturate these RAID cards.
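As a concrete version of that math, using the approximate sequential read ceiling from our testing and a hypothetical 550 MB/s drive (substitute the spec sheet number for your own drive):

```python
import math

CARD_CEILING_MBPS = 2500  # approximate sequential read ceiling we measured
drive_mbps = 550          # hypothetical drive; use your drive's rated speed

drives = math.ceil(CARD_CEILING_MBPS / drive_mbps)
print(f"~{drives} drives before sequential reads hit the card's ceiling")
# -> ~5 drives at 550 MB/s each
```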
RAID 0 Disk Failure Management
Since a RAID 0 array doesn't have any redundancy and the data is spread across all of the drives, a single drive failure results in the total loss of the data on that array, and there is nothing you can do to recover it. If this is your OS drive, you will likely get a bluescreen (or the Linux equivalent) and then be unable to boot into the OS since the data is completely corrupt.
If the RAID 0 array is a storage drive, the LSI MegaRAID Storage Manager will give a popup notification that there is a fatal error and that the RAID array is offline. If you go into the MegaRAID Storage Manager software, you will find that the virtual drive is "offline".
Whether the RAID 0 array is your OS drive or a storage drive, since there is no redundancy there is no way to rebuild a RAID 0 array. You simply have to delete the virtual drive and recreate the RAID 0 array from scratch after you have replaced the failed drive.
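To quantify just how much riskier RAID 0 gets with each drive you add, here is a quick sketch using a hypothetical 3% annual failure rate per drive and assuming drives fail independently:

```python
def raid0_loss_probability(drives: int, drive_afr: float = 0.03) -> float:
    """Chance a RAID 0 array loses data in a year: the array only
    survives if every single drive survives."""
    return 1 - (1 - drive_afr) ** drives

for n in (1, 4, 8):
    print(f"{n} drive(s): {raid0_loss_probability(n):.1%} chance of total data loss per year")
# 1 drive: 3.0%, 4 drives: 11.5%, 8 drives: 21.6%
```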
RAID 1/5/10/etc. Disk Failure Management
For RAIDs that have some form of redundancy (RAID 1/5/10/etc.), these LSI controllers handle a drive failure pretty gracefully considering the drastic nature of a drive failure.
Regardless of whether the RAID array is a storage drive or your OS drive, the first thing that happens in the event of a disk failure is that you will get a popup alerting you that the RAID virtual drive is degraded. If you already have an extra drive assigned as a hotspare, the controller will automatically add that drive to the array and begin the rebuilding process without any input. You still need to replace the failed drive and mark the new drive as a hotspare, but once the rebuilding process has completed your data is at least no longer at risk in the meantime.
If there is no hotspare, then you should replace the failed drive ASAP since your data is at risk of being lost if a second drive fails. After you have replaced the failed drive, the controller should automatically add it to the virtual drive (it considers any unassigned drives as "Emergency Spares" and uses them to repair a failed RAID) and start rebuilding the array. If it doesn't automatically add the drive, you just have to right-click the new unconfigured drive in the MegaRAID Storage Manager, select "Replace Missing Drive" and follow the prompts to get it assigned to the correct RAID.
Overall, repairing a degraded array after a drive failure is about as painless as it could be with the LSI MegaRAID Storage Manager software. You get an immediate alert about the issue and we did not have any problems getting a degraded array back up and running after replacing the failed drive.
Conclusion
We've been using LSI RAID cards at Puget Systems for a number of years now and have always been very happy with the quality of both the controllers and the MegaRAID software. The 9361-8i and 9341-8i are no exception, and either they or their single-port -4i equivalents are a great choice when you require a RAID controller in your system. The performance of the 9361-8i is excellent thanks to the integrated 1GB of DDR3 memory, and while the 9341-8i is slower it is still very good for its price point. One thing both of these cards have in common is the excellent way they handle a degraded array in the event of a drive failure. They both give immediate notice of the failure and offer a clear, easy way to repair the degraded array.
The one thing we did not cover in this qualification that we have seen repeatedly reported in user reviews as a major negative is the fact that these cards run very hot. In fact, in a standard desktop chassis without airflow over the card we found that they would run at up to 105 °C. This sounds like a terrible design flaw at first, but it is really par for the course for high quality RAID controllers. These types of cards are primarily designed for use in rackmounts where there is plenty of airflow so in a desktop environment they require a bit of planning to ensure they do not overheat. Luckily, you only need a very moderate amount of airflow (a low RPM side fan works great) to drop that temperature down to 45-50 °C. And from our perspective, we would much rather have to provide moderate airflow over the card rather than having LSI integrate a tiny 40-50mm cooling fan directly onto the card. Those tiny fans are always one of the first components to fail in a computer and are notorious for making rattling and whining noises after any amount of dust gets into them.
Overall, both of these cards are very high quality and will be an excellent addition to our product line. Which card is right for you is going to depend on your specific application and budget, but whichever card you end up using you can rest assured that you will be using some of the highest quality RAID controllers currently on the market.