Transfer speed from 32K to 512MB
OWC Mercury Aura SSD for Mac Pro: 1TB about $899, 2TB about $1479.
disktester run-sequential-suite -s 32K -e 512M --iterations 5 --test-size 4G
Transfer speed for 32K chunks is already at 200MB/sec, and that helps real-world performance tremendously, since many apps write in relatively small chunks. Speed ramps up quickly, with peak speed reached at 2MB transfer sizes.
Real-world speed through the OS X file system, i.e. speeds a normal app can attain (not unrealistic low-level driver-level performance).
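The idea behind a sequential-suite test can be sketched in a few lines of Python. This is an illustrative stand-in, not disktester itself, and the chunk and test sizes are scaled down for brevity:

```python
import os
import tempfile
import time

def write_throughput(chunk_size, total_size):
    """Write total_size bytes in chunk_size pieces to a temp file; return MB/sec."""
    buf = b"\0" * chunk_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_size // chunk_size):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force data to the device, not just the OS cache
        elapsed = time.perf_counter() - start
        return (total_size / elapsed) / 2**20
    finally:
        os.remove(path)

if __name__ == "__main__":
    total = 64 * 2**20  # 64 MiB per pass; a real test uses far larger sizes
    for size in (32 * 2**10, 128 * 2**10, 2**20, 2 * 2**20, 8 * 2**20):
        print(f"{size // 2**10:>5} KiB chunks: {write_throughput(size, total):8.1f} MB/sec")
```

A real test suite also varies iteration count and randomizes data to defeat compression in the drive controller; this sketch shows only the chunk-size sweep.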
Chris C writes:
When you show your testing of 32K … 512MB, your presentation of the data is almost misleading at first glance. I think it would look even more impressive if you showed these with a constant arithmetic scale, rather than the geometric or exponential scale on the bottom.
Of course, I know that would jam things together at the left end, but really, who works with any files of any meaning that have a minimum of a few MB anyway? This is not the 1980s. Even a small color TIFF scan I did the other day was ~250MB.
By the time you hit 2MB or more, you are basically in the zone of optimum performance, which really means you are almost always working in that zone.
MPG: Faulty premise, unfortunately. Would that this were the case, but it doesn’t work like that.
Even Photoshop writes in 1MB chunks *at most*. If a smaller “tile size” is used, the transfer size might be as small as 128K. This is why a Photoshop scratch volume faster than ~600 MB/sec helps, but only a little: the 1MB transfer size kills peak speed. MPG has made certain recommendations to Adobe on this front. The whole tile-size scheme is an anachronism that ought to be removed, with Photoshop dealing in 8MB or 16MB chunks.
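Why a fixed transfer size caps throughput can be seen with simple arithmetic: each I/O request pays a roughly fixed overhead (syscall, file system, driver) on top of the raw transfer time. The peak bandwidth and per-request overhead below are illustrative assumptions, not measured values:

```python
def effective_mb_per_sec(transfer_bytes, peak_mb_per_sec=2000.0, overhead_s=300e-6):
    """Toy model: throughput = bytes / (raw transfer time + fixed per-request overhead).
    peak_mb_per_sec and overhead_s are assumed figures for illustration only."""
    transfer_s = transfer_bytes / (peak_mb_per_sec * 2**20)
    return transfer_bytes / (transfer_s + overhead_s) / 2**20

for kib in (32, 128, 1024, 8192, 16384):
    print(f"{kib:>6} KiB transfers: {effective_mb_per_sec(kib * 2**10):7.1f} MB/sec")
```

Under these assumptions, 1MB transfers reach only a fraction of the peak, while 8MB or 16MB transfers come close to it; the overhead term, not the drive, becomes the bottleneck at small sizes.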
The unfortunate reality is that small transfer sizes dominate most I/O operations. Many programs write in tiny sizes, like 4K or 8K or 32K. The operating system can sometimes aggregate a steady burst of very small writes, and this helps. But it cannot get around the fact that the I/O most programs do is almost always under 1MB per transfer, and generally far smaller.
Sometimes it is the nature of the data itself; many things are small: email messages, word processing documents, spreadsheets, etc. Even reading a JPEG or TIF means making many small I/O requests for the structural and metadata portions. Or consider any copy operation (Finder copy or cloning) from a very fast SSD to another fast SSD: why is the throughput 1/2 or 1/4 or 1/20 of the peak speed seen with large transfers? Because the data itself imposes the overhead of millions of small transfers; it is not one huge file (which reads/writes at top speed, done right).
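A rough way to see the small-transfer penalty for yourself: copy the same number of bytes once as a single large file and once as thousands of tiny files (the file counts and sizes below are arbitrary illustrations). On most systems the many-small-files copy is markedly slower:

```python
import os
import shutil
import tempfile
import time

def build_dataset(root, file_count, file_size):
    """Create file_count files of file_size bytes each under root."""
    os.makedirs(root)
    for i in range(file_count):
        with open(os.path.join(root, f"f{i:05d}.bin"), "wb") as f:
            f.write(b"\0" * file_size)

def timed_copy(src_dir, dst_dir):
    """Copy a directory tree and return elapsed seconds."""
    start = time.perf_counter()
    shutil.copytree(src_dir, dst_dir)
    return time.perf_counter() - start

if __name__ == "__main__":
    base = tempfile.mkdtemp()
    total = 8 * 2**20  # 8 MiB of data either way

    # Same bytes, very different request counts: 1 file vs. 2048 files of 4 KiB.
    build_dataset(os.path.join(base, "big"), 1, total)
    build_dataset(os.path.join(base, "small"), 2048, 4 * 2**10)

    t_big = timed_copy(os.path.join(base, "big"), os.path.join(base, "big_copy"))
    t_small = timed_copy(os.path.join(base, "small"), os.path.join(base, "small_copy"))
    print(f"one large file: {t_big:.3f}s   many small files: {t_small:.3f}s")

    shutil.rmtree(base)
```

The gap comes from per-file overhead (open, close, metadata updates), which is the same effect per-request overhead has on small transfers within a single file.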