Original Link: https://www.anandtech.com/show/7555/mushkin-atlas-msata-240gb-480gb-review



The retail mSATA SSD market doesn't have too many players. Most OEMs, such as Samsung (although that is about to change), Toshiba and SanDisk, only sell their mSATA SSDs straight to PC OEMs. Out of the big guys, only Intel and Crucial/Micron are in the retail game but fortunately there are a few smaller OEMs that sell retail mSATA SSDs as well. One of them is Mushkin and today we'll be looking at their Atlas lineup.

Mushkin sent us two capacities: 240GB and 480GB. Typically 240GB has been the maximum capacity for mSATA SSDs because there's room for only four NAND packages, and with 64Gbit per NAND die the maximum capacity of each package comes in at 64GB (8x8GB), which yields a total NAND capacity of 256GB. Crucial and Samsung have mSATA SSDs of up to 512GB (Samsung offers up to 1TB now) thanks to their 128Gbit NAND, but currently neither Samsung nor Micron is selling their 128Gbit NAND to other OEMs (at least not in the volumes required for an SSD). I'm hearing that Micron's 128Gbit NAND will be available to OEMs early next year and many are already planning products based on it.

Since Mushkin is limited to 64Gbit NAND like other fabless OEMs, they had to do something different to break the 256GB barrier. Because you can't get more than 64GB into a single NAND package, the only solution is to increase the number of NAND packages in the SSD. Mushkin's approach is to use a separate daughterboard with four NAND packages that's stacked on top of the standard mSATA SSD. There are already four NAND packages in a regular mSATA SSD, so with four more the maximum NAND capacity doubles to 512GB. However, the actual usable capacity in the Atlas is 480GB thanks to SandForce's RAISE and added over-provisioning.
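To make the arithmetic concrete, here's a quick sanity check in Python (the constants simply mirror the figures quoted above):

```python
# Capacity arithmetic for the 480GB Atlas, using the figures quoted above.
GBIT_PER_DIE = 64        # 64Gbit per NAND die
DIES_PER_PACKAGE = 8     # 8 dies stacked in one package (8x8GB)
PACKAGES_STANDARD = 4    # packages that fit on a regular mSATA PCB

gb_per_package = GBIT_PER_DIE * DIES_PER_PACKAGE // 8  # Gbit -> GB = 64GB
raw_standard = gb_per_package * PACKAGES_STANDARD      # 256GB ceiling
raw_with_daughterboard = raw_standard * 2              # 512GB with 8 packages

user_capacity = 480
spare = (raw_with_daughterboard - user_capacity) / raw_with_daughterboard
print(f"Raw NAND: {raw_with_daughterboard}GB, user capacity: {user_capacity}GB")
print(f"Reserved for RAISE/over-provisioning: {spare:.2%}")  # 6.25%
```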

The result is a slightly taller design than a regular mSATA SSD, but the drive should still be compatible with all mSATA-equipped devices. Mushkin had to use specially packaged NAND in the 480GB model (LGA60 vs LBGA100 in the 240GB) to lower the height and guarantee compatibility. The NAND daughterboard appears to be glued to the main PCB and dislodging it would require a substantial amount of force. I tried to pry it off gently with my hands and couldn't, so I find it unlikely that the daughterboard would come loose on its own while in use.

| Mushkin Atlas Specifications | |
|---|---|
| Capacities (GB) | 30, 40, 60, 120, 240, 480 |
| Controller | SandForce SF-2281 |
| Sequential Read | Up to 560MB/s |
| Sequential Write | Up to 530MB/s |
| 4KB Random Write | Up to 80K IOPS |
| Warranty | 3 years |

The Atlas is available in pretty much every capacity you can think of, starting from 30GB and going all the way up to 480GB. Mushkin gives the Atlas a three-year warranty, which is the standard for mainstream drives. The retail packaging includes nothing but the drive itself, though you don't really need any accessories with an mSATA drive anyway.

Here you can see the difference in NAND packages. The one on the left is the 480GB model and its NAND packages cover slightly more area on the PCB but are also a hair thinner. Like many other OEMs, Mushkin buys its NAND in wafers and does packaging/validation on its own. Due to supplier agreements, Mushkin couldn't reveal the manufacturer, but I'm guessing we're dealing with 20nm Micron NAND. So far I've only seen Micron and Toshiba selling NAND in wafers, and as Mushkin has used Micron in the past (the 240GB sample is a bit older and uses Micron NAND), it would make sense.

Test System

| Component | Configuration |
|---|---|
| CPU | Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled) |
| Motherboard | ASRock Z68 Pro3 |
| Chipset | Intel Z68 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24) |
| Video Card | XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective) |
| Video Drivers | AMD Catalyst 10.1 |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 7 x64 |

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit



Performance Consistency

In our Intel SSD DC S3700 review, Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result it needed some additional testing to show that. The reason we don't get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, nowhere near what we run our steady state tests for but enough to give a good look at drive behavior once all spare area fills up.

We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time and generate the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
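For readers who want to replicate the charts, a minimal sketch of the plotting step is below. It assumes you've already captured a per-second IOPS log as CSV; the file name and column layout are our assumptions for illustration, not a specific tool's output format:

```python
# Sketch: turn a per-second IOPS log into a scatter plot like the ones below.
# Assumed CSV format: one "second,iops" pair per line.
import csv
import matplotlib.pyplot as plt

seconds, iops = [], []
with open("iops_log.csv") as f:
    for second, value in csv.reader(f):
        seconds.append(int(second))
        iops.append(float(value))

plt.scatter(seconds, iops, s=2)
plt.yscale("log")          # the first two sets of graphs use a log scale
plt.xlim(0, 2000)          # the full test period is about 2000 seconds
plt.xlabel("Time (s)")
plt.ylabel("IOPS (4KB random write, QD32)")
plt.show()
```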

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers are guaranteed to behave the same way.
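As a worked example of the partitioning method, the small helper below computes the partition size for a target amount of extra spare area. This is our illustrative interpretation; the exact percentages behind the "25% OP" buttons may be computed differently:

```python
def partition_size_gb(user_capacity_gb: float, extra_spare_fraction: float) -> float:
    """Size of the partition to create, leaving the rest of the drive
    unpartitioned so the controller can treat it as spare area."""
    return user_capacity_gb * (1 - extra_spare_fraction)

# e.g. reserving an extra 25% of a 240GB drive as spare area:
print(partition_size_gb(240, 0.25))  # -> 180.0 (GB); 60GB stays unpartitioned
```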

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graphs: IOPS vs. time over the full run - Mushkin Atlas 240GB, Mushkin Atlas 480GB, Intel SSD 525, Plextor M5M, Samsung SSD 840 EVO 250GB; Default and 25% OP configurations (25% OP data not available for all drives)]

Quite surprisingly, the 240GB model has great IO consistency, but performance is significantly lower at 480GB. We haven't tested any 480GB SandForce SSDs in years, so I'm not sure if this is typical behavior or unique to the 480GB Atlas. Performance can slow down when more NAND dies are added because there are more pages/blocks to track, which requires more processing power and cache to deal with. The SF-2281 silicon is over two years old, so I suspect it was never really optimized for capacities over 256GB even though the controller is capable of supporting up to 512GB with 64Gbit-per-die NAND. The 480GB model is still okay, though: even at steady state its IOPS hovers around 5,000, whereas the Plextor M5M, for example, has occasions where IOPS drops to zero.

[Interactive graphs: IOPS vs. time from t=1400s, log scale - same drives and configurations as above]

[Interactive graphs: IOPS vs. time from t=1400s, linear scale - same drives and configurations as above]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 60 minutes. After torturing the drive, I measured sequential write performance with Iometer (128KB IO size, fully random data, 100% LBA, QD=1, 60 seconds). Next I TRIM'ed the drive (quick format in Windows 7/8) and reran Iometer.

Mushkin Atlas Resiliency - Iometer Incompressible Sequential Write

| | Clean | Dirty | After TRIM |
|---|---|---|---|
| Mushkin Atlas 240GB | 189.2MB/s | 35.2MB/s | 106.6MB/s |

As expected, performance doesn't fully recover. I've heard SandForce actually has a fix for this, but it's still in validation and will be implemented once it's given the green light.
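To put the recovery in numbers, a quick calculation from the table above (the figures are the Iometer results we measured):

```python
# Sequential write speed relative to the clean state, from the table above.
clean, dirty, after_trim = 189.2, 35.2, 106.6  # MB/s

print(f"Dirty:      {dirty / clean:.0%} of clean performance")       # ~19%
print(f"After TRIM: {after_trim / clean:.0%} of clean performance")  # ~56%
```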



AnandTech Storage Bench 2013

When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011 he did so because we didn't have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.

There were a couple of issues with our 2011 tests that we had been wanting to rectify, however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't have existed had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst-case performance after prolonged random IO.

For years we'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans–not exactly a real world client usage model. The aspects of SSD architecture that those tests stress however are very important, and none of our existing tests were doing a good job of quantifying that.

We needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a real name; we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test–we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.

Like most modern benchmarks, the Destroyer is crafted out of a series of scenarios. For this benchmark we focused heavily on photo editing, gaming, virtualization, general productivity, video playback and application development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer

| Workload | Description | Applications Used |
|---|---|---|
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |

While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what we've been calling this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs

| | The Destroyer (2013) | Heavy 2011 |
|---|---|---|
| Reads | 38.83 million | 2.17 million |
| Writes | 10.98 million | 1.78 million |
| Total IO Operations | 49.8 million | 3.99 million |
| Total GB Read | 1583.02 GB | 48.63 GB |
| Total GB Written | 875.62 GB | 106.32 GB |
| Average Queue Depth | ~5.5 | ~4.6 |
| Focus | Worst-case multitasking, IO consistency | Peak IO, basic GC routines |

SSDs have grown in their performance abilities over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives would take multiple hours to complete it; today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest we've seen it go is 10 hours. Most high performance SSDs we've tested seem to need around 12–13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something that had a ton of writes so we could start separating the good from the bad. Now that the drives have matured, we felt a test that was a bit more balanced would be a better idea.

Despite the balance recalibration, there's just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. As the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us a bit more of what we're interested in these days. As Anand mentioned in the S3700 review, having good worst-case IO performance and consistency matters just as much to client users as it does to enterprise users.

We're reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
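As an illustration of how the two metrics relate, here's a sketch that derives both from a list of per-IO records. The record format and the decision to compute data rate over total service time are our assumptions for illustration, not a description of our actual analysis scripts:

```python
def destroyer_metrics(records):
    """records: iterable of (bytes_transferred, service_time_us) per IO."""
    total_bytes = total_us = count = 0
    for nbytes, service_us in records:
        total_bytes += nbytes
        total_us += service_us
        count += 1
    avg_data_rate_mbs = total_bytes / (total_us / 1e6) / 1e6  # MB/s
    avg_service_time_us = total_us / count                    # microseconds
    return avg_data_rate_mbs, avg_service_time_us

# e.g. one 128KB IO plus two 4KB IOs, one of them badly delayed:
rate, svc = destroyer_metrics([(131072, 900), (4096, 80), (4096, 2000)])
print(f"{rate:.1f} MB/s, {svc:.0f} us average service time")
```

Note how the one slow 4KB IO barely moves the data rate but drags the average service time up sharply; that's exactly the burstiness the second metric is meant to capture.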

AT Storage Bench 2013 - The Destroyer (Data Rate)

Even though the Atlas has good IO consistency, it doesn't perform that well in our new Storage Bench 2013. As the service time graph below suggests, the Atlas has the biggest trouble when the queue depth is high (high queue depths are more stressful and each IO takes longer to complete). I'm guessing this might be due to thermal throttling, because mSATA SSDs don't have a metal casing acting as a heatsink, but it could just as well be due to firmware differences.

AT Storage Bench 2013 - The Destroyer (Service Time)



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
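To see why the data pattern matters for SandForce, here's a small demonstration using zlib as a stand-in for the controller's (undisclosed) compression engine; the exact byte counts will vary, and the buffers are illustrative:

```python
# Compressible vs. incompressible 4KB buffers, as a proxy for why SandForce
# drives post different results with pseudo-random and fully random data.
import os
import zlib

compressible = b"A" * 4096         # highly repetitive buffer
incompressible = os.urandom(4096)  # fully random buffer

print(len(zlib.compress(compressible)))    # a handful of bytes: nearly free to store
print(len(zlib.compress(incompressible)))  # ~4KB or more: no savings, full write
```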

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

At 240GB, random write speeds are normal for SF-2281, but the 480GB model is noticeably slower. This isn't unique to the Atlas, though: the 480GB Vertex 3 (the only other 480GB SF-2281 SSD we've tested) exhibited similar behavior, although its performance was slightly better. I think the drop in performance comes down to raw processing power, because when you double the capacity, the number of pages/blocks that need to be tracked doubles as well. Newer controllers (like the Marvell 88SS9187 and Samsung MDX) have no trouble tracking more pages/blocks, but the SF-2281 design is over two years old and its age is definitely showing.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Desktop Iometer - 128KB Sequential Write

Sequential performance is average. When moving to incompressible data, performance drops (as always with SandForce), and the 240GB model in particular experiences quite a big drop.

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance

 



Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. The 480GB model is again a bit slower than the 240GB one, and write performance especially is noticeably lower. For example, at an IO size of 4KB the 240GB model is about 150MB/s faster, which is significant. Even at larger IO sizes the 480GB Atlas isn't able to make up the difference.




AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what was available at the time in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally the benchmarks were kept short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). We've included a large amount of email downloading, document creation and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 28% |
| 16KB | 10% |
| 32KB | 10% |
| 64KB | 4% |
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

AnandTech Storage Bench 2011 - Heavy Workload

Heavy Workload 2011 - Average Data Rate

Neither of the Atlases is able to compete with the fastest mSATA SSDs. It's surprising how big the difference is, given that the Intel SSD 525 and the Atlas are both SF-2281 based, yet the SSD 525 is about 40% faster. Once again, the full data set (including read/write differentiation and disk busy times) can be found in our Bench.

AnandTech Storage Bench 2011 - Light Workload

Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric). There's lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming.

The I/O breakdown is similar to the heavy workload at small IOs; however, you'll notice that there are far fewer large IO transfers. Interestingly, the 480GB drive actually comes out ahead in this case, suggesting it's more capable in light workloads.

AnandTech Storage Bench 2011 - Light Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 27% |
| 16KB | 8% |
| 32KB | 6% |
| 64KB | 5% |

Light Workload 2011 - Average Data Rate



Power Consumption

Before I get into the numbers, I'd like to remind you that we use a 2.5" to mSATA adapter with a voltage regulator. Hence the numbers for mSATA drives are not accurate, because the voltage regulator consumes some of the power (one of our readers calculated that the voltage regulator would consume 40-50% of the total). Furthermore, I don't have a system with HIPM/DIPM support (it's only supported by mobile chipsets), so the idle power consumption figures would be significantly lower if the SSD were used in a laptop.

The numbers here definitely seem high, but when you subtract the power consumed by the voltage regulator, the power draw should be close to the Intel SSD 525's (its power consumption was measured straight from the 3.3V rail, which bypasses the voltage regulator). The Atlas' label carries "3.3VDC" and "1.1A" ratings, so based on those the maximum wattage should be 3.63W, which sounds reasonable once the voltage regulator is taken out of the equation.
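A rough back-of-the-envelope check of that reasoning, with a hypothetical measured figure (the 40-50% regulator share comes from the reader estimate mentioned above):

```python
# Label rating vs. estimated drive-only draw behind the adapter's regulator.
label_max_w = 3.3 * 1.1      # 3.3VDC x 1.1A from the Atlas' label = 3.63W
measured_w = 4.5             # hypothetical reading through our 2.5" adapter

# If the regulator eats 40-50% of the total, the drive itself draws 50-60%:
low, high = measured_w * 0.5, measured_w * 0.6
print(f"Label max: {label_max_w:.2f}W, estimated drive draw: {low:.2f}-{high:.2f}W")
```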

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

It's always good to see innovation from smaller OEMs instead of just the fab and controller IP owners. Mushkin, like most OEMs, is at the mercy of LSI/SandForce and the fab owners when it comes to product availability. Crucial/Micron has been using 128Gbit NAND in its products for eight months now, but none of the standalone SSD OEMs have gotten volume access to it yet, although this should change next year. This further reinforces the status and control of the NAND fab owners: when you get access to new technology a year or more before the others, you will always have an advantage. The best example is Samsung with its TLC NAND based 840 and 840 EVO: over a year later, Samsung is still the only OEM shipping a TLC SSD, which gives it a substantial price advantage.

The Atlas is what you would expect from an mSATA SSD with the SF-2281. It doesn't set any records but it's a decent performer. It's a bit unfortunate that the 480GB model is slower than the 240GB one, but I'm pretty confident that this is mostly due to the age and design of the SF-2281, not Mushkin's implementation.

NewEgg Price Comparison (12/16/2013)

| | 30GB | 60/64GB | 120/128GB | 240/256GB | 480GB |
|---|---|---|---|---|---|
| Mushkin Atlas | $65 | $135 | $107 | $195 | $468 |
| Crucial M500 | - | - | $113 | $176 | - |
| Plextor M5M | - | $75 | $112 | $200 | - |
| Intel SSD 525 | - | $80 | $146 | $290 | - |
| ADATA XPG SX300 | - | $75 | $110 | $200 | - |

The Atlas' pricing is fairly competitive but not earth-shattering. Given the NAND shortage, it's hard for fabless OEMs to continue the price war we had a couple of years ago. The days when OEMs could buy a huge stock of NAND at a significant discount are behind us; now getting any stock at all can be an effort.

The 480GB Atlas was the only high-capacity mSATA SSD when it was released earlier this year. However, now both Crucial and Samsung have retail mSATA SSDs in 480/500GB flavors as well (Samsung's upcoming 840 EVO mSATA even goes up to 1TB). Crucial's 480GB M500 mSATA isn't listed in the table since NewEgg doesn't stock it, but you can find it in Crucial's online store for $330. Given that the price difference is nearly $140, it's hard to recommend the Atlas 480GB at the moment. We haven't reviewed the M500 mSATA yet but we'll get to it in January once the CES rumble is over. Our SSD 840 EVO mSATA samples are currently on their way to our test lab and the review should be out before the holidays, so stay tuned for more mSATA related content!
