Photoshop CS4 oddities
This is a programmer/nerd discussion; feel free to skip it.
Using 'iostat', it can be seen that only about 1/3 of the disk speed is exploited (based on extensive testing of my RAID setup). This appears to be an internal Photoshop bottleneck: the computation is neither CPU-bound nor I/O-bound, with both disk bandwidth and CPU cores utilized only fractionally. Nor are the operations in question ones resistant to parallel techniques.
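A quick back-of-the-envelope check of the "1/3 of disk speed" observation; the throughput numbers below are illustrative assumptions, not measurements from this system:

```python
# Rough utilization check, mirroring what 'iostat' output suggests.
# Both numbers are hypothetical placeholders, not actual measurements.
aggregate_capacity_mb_s = 492.0   # assumed peak throughput of the striped RAID
observed_mb_s = 164.0             # assumed observed Photoshop scratch throughput

utilization = observed_mb_s / aggregate_capacity_mb_s
print(f"disk utilization: {utilization:.0%}")  # roughly 1/3
```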
This oddity led me to investigate:
Photoshop PBReadSync size: 1051232 = 1MB + 2656 bytes
Photoshop PBWriteSync size: 1051648 = 1MB + 3072 bytes
The concern here is that the writes are issued with a byte count that is not an integral multiple of the stripe size: there are 3072 bytes “left over” that a single drive must handle, unless the OS can zero-pad the block into which those bytes fall. Whether this issue is responsible for the failure to exploit available disk bandwidth with 3/4/5/6-drive striped scratch volumes is unclear, given varying usage scenarios and Mac OS X caching. Reads are not at issue, since whole blocks are always read.
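The misalignment is plain from modular arithmetic. The stripe widths below are examples for illustration (e.g. a 4-drive stripe with 32K chunks has a 128K full-stripe width), not Photoshop internals:

```python
# Photoshop CS4 scratch-write size, as observed above.
write_size = 1_051_648            # 1MB + 3072 bytes

# Leftover bytes for a few example full-stripe widths.
for stripe_width_kb in (128, 256, 384, 512):
    stripe_width = stripe_width_kb * 1024
    leftover = write_size % stripe_width
    print(f"{stripe_width_kb:4}K stripe: {leftover} bytes left over")
```

For most power-of-two stripe widths the leftover is exactly the 3072 bytes noted above, which one drive must absorb on every write.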
DiskTester results (which bypass system caching) show that on a 6-drive striped RAID, write speed drops from a median of 492MB/sec to 118MB/sec when 1025K chunks are written instead of 1024K chunks on the 2nd and subsequent passes. A 2-drive stripe sees a 57% performance loss. Whether the stripe contains 2, 3, 5, or 10 drives, misaligned write performance is limited to (roughly) the speed of a single drive. Whether Photoshop uses the scratch drive in this manner during general operation is unclear, but if so, it would be a major performance hit.
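As a sanity check on the figures quoted above, the 6-drive loss works out to about 76%, even worse than the 2-drive case:

```python
# Per the DiskTester results above: 6-drive stripe write speed drops from
# 492 MB/sec (aligned 1024K chunks) to 118 MB/sec (misaligned 1025K chunks).
aligned_mb_s = 492.0
misaligned_mb_s = 118.0

loss = 1 - misaligned_mb_s / aligned_mb_s
print(f"6-drive stripe performance loss: {loss:.0%}")  # about 76%
```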
Adequate memory for caching seems to mask the performance loss. See the test results.
Adobe could eliminate this as a potential issue by always writing scratch data in integral multiples of the stripe size (the size could be exposed as a preference setting). Alternatively, a fixed multiple of 128K would solve the issue for virtually all striped RAID configurations.
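A minimal sketch of the idea, assuming a 128K alignment target; `round_up` is an illustrative helper, not anything from Photoshop's actual I/O layer:

```python
def round_up(size: int, multiple: int) -> int:
    """Round size up to the next integral multiple (the tail is zero-padded)."""
    return ((size + multiple - 1) // multiple) * multiple

STRIPE = 128 * 1024               # 128K covers virtually all stripe setups
write_size = 1_051_648            # observed PBWriteSync size: 1MB + 3072 bytes

padded = round_up(write_size, STRIPE)
print(padded, padded % STRIPE)    # 1179648 0 -- now stripe-aligned
```

The cost is at most one extra stripe unit of padding per write, a trivial price next to serializing every write onto a single drive.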