A numbers-based breakdown of why the idea that "SSDs are unreliable" is a silly statement.
I’ve been hearing some silly assumptions that magnetic drives are more "reliable" than solid state drives (SSDs). I’ve heard ideas such as "can I mirror my SSDs to regular magnetic disks?" Not only does this configuration completely defeat the purpose of having SSDs (a mirror can only acknowledge a write once every disk in it has flushed that write, so the array runs at the speed of its slowest member), but I’ll show you why, in this configuration, the traditional magnetic drives will fail first.
For the sake of being picky about numbers, I’ll point out that a few of these are "back of a napkin" calculations. Getting all the numbers I need from a single benchmark is difficult (most benchmarks report total bytes read/written, not operations served), and I don’t have months to throw a couple of SSDs at this right now.
A Very Liberal Lifetime Of A Traditional Magnetic Disk Drive
So we’re going to assume the most extreme possibilities for a magnetic disk drive: a high-performance, enterprise-grade drive (15k RPM) running at 100% load, 24/7/365, for 10 years. This is borderline insane (the drive would likely be toast long before then under that workload), but it helps illustrate my point. The high end of the load these drives can put out is about 210 IOPS. So what we see on a daily basis is this:
210 IOPS * 60 * 60 * 24 = 18,144,000 operations/day
18,144,000 * 365 = 6,622,560,000 operations/year
6,622,560,000 * 10 = 66,225,600,000 operations in 10 years
So we expect, at the most insane levels of load, performance, and reliability, that the disk can perform about 66 billion operations in its lifetime.
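The arithmetic above can be sketched in a few lines of Python (the 210 IOPS figure and 10-year window are the assumptions stated in the text):

```python
# Back-of-napkin lifetime workload for a 15k RPM enterprise drive,
# assuming a sustained 210 IOPS, 24/7/365, for 10 years.
HDD_IOPS = 210
SECONDS_PER_DAY = 60 * 60 * 24

ops_per_day = HDD_IOPS * SECONDS_PER_DAY   # 18,144,000
ops_per_year = ops_per_day * 365           # 6,622,560,000
ops_lifetime = ops_per_year * 10           # 66,225,600,000

print(f"{ops_lifetime:,} operations over 10 years")
```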
The Expected Lifetime Of A Solid State Drive
Now I’m going to do the opposite (for the most part): I’m going to go with a consumer-grade, triple-level cell (TLC) SSD. These drives have some of the shortest lifespans you can expect from an SSD you can purchase off the shelf. Specifically, we’re going to look at a Samsung 250GB TLC drive, which wrote 707TB of data before its first failed sector, at over 2,900 writes per sector.
250GB drive: 250,000,000,000 bytes / 4096 bytes per sector = ~61,000,000 sectors
~61,000,000 sectors * 2,900 writes/sector = ~176,900,000,000 write operations
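The same endurance estimate in Python (the exact sector count comes out to ~61 million; the text rounds before multiplying, so the final figure lands near 177 billion either way):

```python
# Rough write-endurance estimate for a 250GB TLC drive,
# assuming 4096-byte sectors and ~2,900 writes per sector before failure.
CAPACITY_BYTES = 250_000_000_000
SECTOR_BYTES = 4096
WRITES_PER_SECTOR = 2900

sectors = CAPACITY_BYTES // SECTOR_BYTES    # ~61,000,000 sectors
write_ops = sectors * WRITES_PER_SECTOR     # ~177 billion write operations

print(f"{sectors:,} sectors, {write_ops:,} write operations")
```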
Keep in mind: the newer Corsair Force 240GB MLC-E drives claim a whopping 30,000 cycles before failure, but I’m going to stick with "I blindly chose a random consumer-grade drive to compete with an enterprise-level drive", and not even look at the SSDs aimed at longer lifespans, which include enterprise-level SLC flash memory that can handle over 100,000 cycles per cell!
So What Do You Mean More Robust?
The modern TLC drive from Samsung performed nearly three times the total work of the enterprise-level 15k SAS drive before dying. So if that’s the case, why do people say SSDs are "unreliable"? The answer is simple: the Samsung drive will perform up to 61,000 write IOPS, whereas the magnetic disk will perform at best 210. It would take an array of roughly 290 magnetic disks, in a theoretically optimal performance configuration (no redundancy), to match the performance of this single SSD.
Because of this additional throughput, the SSD burns through its write endurance much faster.
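Putting the two lifetime figures side by side makes the point concrete (using the 61,000 and 210 IOPS figures and the lifetime totals from the sections above):

```python
# Comparing the two drives from the calculations above.
SSD_WRITE_IOPS = 61_000
HDD_IOPS = 210
SSD_LIFETIME_OPS = 176_900_000_000   # ~177 billion writes before first failed sector
HDD_LIFETIME_OPS = 66_225_600_000    # 66 billion ops over an extreme 10-year run

drives_to_match = SSD_WRITE_IOPS / HDD_IOPS            # ~290 magnetic disks
endurance_ratio = SSD_LIFETIME_OPS / HDD_LIFETIME_OPS  # ~2.7x, "nearly three times"

print(f"~{drives_to_match:.0f} HDDs to match one SSD; "
      f"{endurance_ratio:.1f}x the total work before failure")
```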
So Should I Just Replace My HDDs With SSDs?
Whoa, slow down there, not quite. Magnetic storage still has a solid place everywhere from your home to your data center. The $/GB ratio of magnetic storage is still far better than that of SSD storage. For home users, this means the new hybrid drives (SSD/HDD) that have been showing up are an excellent choice; for enterprise systems, you may want to look at storage platforms that let you use flash storage for read/write caching and data tiering.