Sandforce has been less than clear about its controller and firmware combinations: a small company with a great product, but poor communication so far, including late-in-the-game production firmware changes that led to a sleep issue (now resolved).
My extensive discussion with OWC on this matter satisfies me that OWC has done the right thing at every step of the game. This section now reflects the state of things as I understand them as of April 17, 2010.
One online source (anandtech) has created a firestorm by claiming lower performance for the 1200 firmware vs the 1500 firmware, based on specifications alone, without any real-world tests, certainly no tests relevant to any Mac user. This claim is so confusing and irrelevant to any real-world use on a Mac or PC (even very demanding use) that it must be understood for what it is: a disservice to anyone looking for a solution.
This review focuses on real-world performance, not specifications or speculation.
A Windows firmware updater is available as of this writing; Mac users running Apple’s Boot Camp to boot Windows can run it as well. A Mac/Linux version is likely to appear in time.
Firmware vs controller
There is a lot of confusion out there about the controller chip used versus the firmware it runs. There are several flavors of firmware out there: the 1200 firmware, the 1500 firmware, and at least one hybrid/custom version.
The same controller hardware is used in all cases; which firmware it runs is a separate issue. More details on the firmware follow after the table below.
| | 1200 firmware | 1500 firmware |
|---|---|---|
| RAID-0 striping supported | YES | YES |
| Over-provisioning (most brands have 0%; 256GB flash yields 200GB capacity) | 28% | 28% |
| SuperCap | NO, not needed | YES, needed to prevent data loss |
| Read operations per second* | 30,000 | 30,000 |
| Write operations per second** | 10,000 | 30,000 |
| Power consumption in use | 0.55 watt | 0.95 watt |
| Cost premium | none | $100 - $150 |
| Real-world difference | none for nearly every usage scenario | |
* Read operations per second are more important for most uses.
** Hardly anything ever issues more than 10,000 write operations per second, certainly not on a Mac. For that matter, Mac OS X caching can accept writes into its cache and write them out more efficiently anyway!
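To put those operations-per-second figures in perspective, here is a rough back-of-the-envelope conversion to throughput. It assumes 4KiB per operation, a common benchmark transfer size; that size is my assumption, not something stated in the table:

```python
KIB = 1024

def throughput_mb_per_s(iops, op_size_bytes=4 * KIB):
    """Convert operations/second into MB/s for a given transfer size."""
    return iops * op_size_bytes / 1_000_000

# 1200 firmware write ceiling: 10,000 ops/sec at 4KiB each -> ~41 MB/s
print(throughput_mb_per_s(10_000))
# 1500 firmware write ceiling: 30,000 ops/sec at 4KiB each -> ~123 MB/s
print(throughput_mb_per_s(30_000))
```

Even the lower figure dwarfs the small, bursty writes typical of desktop use, which is the point being made here.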
The 28% over-provisioning assures high data reliability and top performance over the long term.
Since almost all other brands use 0% over-provisioning in favor of more capacity, they cannot offer the reliability or long-term performance that the 28% over-provisioning offers. Even a 7% over-provisioning variant (planned) will be vastly superior to 0%. See the DiskTester recondition command for examples of just how bad it can get with some brands.
The 28% over-provisioning of the Mercury Extreme is used for robust error correction, bad blocks, and long term performance. That explains the oddball 200GB capacity: it’s really a 256GB drive with 56GB set aside. (Or 100/128GB or 50/64GB).
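The capacity arithmetic can be checked directly. A minimal sketch, assuming the 28% figure is the ratio of spare flash to user-visible capacity (56GB / 200GB = 28%); the helper name is mine, not OWC's:

```python
def usable_gb(raw_gb, op_ratio=0.28):
    """Usable capacity when over-provisioning is expressed as the ratio
    of spare flash to user-visible capacity (hypothetical helper)."""
    return raw_gb / (1 + op_ratio)

# 256GB raw -> 200GB usable, 128 -> 100, 64 -> 50, matching the models above
for raw in (256, 128, 64):
    print(raw, "->", round(usable_gb(raw)))
```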
Firmware — 1500 vs 1200
When first released, OWC used the 1500 firmware without SuperCap, because at that time Sandforce had not clarified the issues involved with 1500 firmware with a power failure scenario.
As Sandforce clarified the issues, it became clear that firmware behavior made the 1500 without SuperCap a Bad Idea: the 1500 firmware keeps rearranging data after writes are “done”, so a power failure at that point could cause data loss, or possibly even “brick” the drive, though that last is my supposition, based on 28 years as a software engineer.
So there are only two safe combinations:
• 1200 firmware without Supercap
• 1500 firmware with Supercap.
So OWC did the right thing: the Mercury Extreme now utilizes the 1200 firmware without SuperCap, and this avoids the data loss issue, at least according to Sandforce. OWC might soon offer a 1500 firmware model with SuperCap but this will be pointless for most everyone, and come with a sizeable premium.
Only severely loaded database servers or a few other extremely demanding applications will see any difference between the two firmware versions; it should be irrelevant for any Mac user.
But why wouldn’t you want the 1500 firmware? Because, as Sandforce has now clarified, the 1500 firmware achieves its performance in part by rearranging blocks behind the scenes after the writes are “done”. Preventing data loss then requires the SuperCap feature (see below), and that adds a substantial cost premium.
SuperCap is essentially a power buffer (capacitor) which maintains power in the event of a system power loss. With the 1500 firmware (not the 1200 firmware) and massive write ops in progress, this power backup helps prevent data loss, because the drive itself might still be housekeeping when system power fails. The 1200 firmware sidesteps this issue by restricting itself to 10,000 write operations/sec.
Realistically, anyone who thinks they need the 1500 firmware and SuperCap might also need dual redundant power supplies and other high-grade extras, which means you’re not using a Mac (XServe might be an exception).
The SuperCap feature maintains power long enough for I/O activity already in the SSD to finish, including the “housekeeping” needed by the 1500 firmware.
Drive caching and power failure
The following are worth noting/understanding:
- Use an uninterruptible power supply with or without an SSD!
- The Sandforce controller does not cache writes, so the risk of data loss from power failure is lower than with today’s hard drives, which cache up to 64MB of data and have no power buffering at all. In other words, the OWC Mercury Extreme is inherently less risky than a hard drive.
- Note well that some brands of SSDs do use volatile memory caching and those other brands are thus inherently more sensitive to data loss during power failure.
- Most programs write in chunks, and even if they do not, the operating system frequently breaks the data into chunks and can reorder writes, for a variety of technical reasons. That means worrying about data already on the drive is beside the point: chances are fair that a file will be only fractionally written in any case, because not all of it will make it to the SSD. In a power loss situation, the machine is likely to die while data is still being sent to the SSD.
- The Mercury Extreme is inherently fast, so the window of exposure for loss is very short, given the rapid failure of the machine itself due to power loss. Also, continuous writes are just not the usual situation with desktop use, even working steadily in most programs; writes are “blips” with long time gaps between writes (speaking in computer terms).
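The point about operating-system buffering can be made concrete. Here is a minimal Python sketch (file path and data are hypothetical) of how an application forces its data past the OS write cache; whether the drive itself has committed the data to flash is a separate question:

```python
import os

def durable_write(path, data):
    """Write data and ask the kernel to push it through the OS write cache.

    Without the fsync, the bytes may sit in the OS cache and be lost on
    power failure even though write() already "succeeded".
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # drain Python's user-space buffer
        os.fsync(f.fileno())  # ask the kernel to flush its cache to the drive

durable_write("/tmp/example.bin", b"important bytes")
```

Note that on Mac OS X specifically, fsync() alone does not force a drive’s own volatile cache to flush; fcntl(fd, F_FULLFSYNC) is needed for that. That matters for hard drives with their 64MB caches, and far less for the Mercury Extreme, which does not cache writes.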
In other words, if you think you need the SuperCap feature, you ought to be looking at $10K - $100K servers with specialized hardware, redundant power supplies, UPS backup, special database support, etc.