All posts by Mooky Desai

Jeep Story – The Intro

My wife and I had two little baby girls within 18 months of each other. At the time, we had a Ford F-150 and a Mazda CX-7. Cool cars for us hipster parents. Add the requirement of hauling 2 strollers everywhere you go, and the bed of the F-150 or the trunk of a compact (but fun!) crossover seemed like bad options. We eventually bought a 2-munchkin stroller, but try putting that in the back of a CX-7 along with diapers and groceries for the week! So we bought a shiny new minivan. Not just any minivan for us, though. We bought the “Swagger Wagon”: the Toyota Sienna SE. One cool minivan. Altezza taillights, 19″ wheels, factory aero kit, LED projector headlights, dual-DVD rear entertainment for the kids…pretty sweet. It served us well, but it was a minivan and the wife was over it. At a recent (annual) visit to the LA Auto Show, she fell in love with Jeeps. We got some estimates on trading the van in but, at the last minute, found a buyer on Craigslist who just happened to live next to a used Jeep I wanted to check out at a dealer. I met the guy at a bank close by and, just like that, the van was sold. That left my wife without a car, though, so I headed over to the Jeep dealer and inspected the Jeep. It had a ding in the door, but it seemed like a decent deal and I needed a replacement for the van, so I bought it. This is the Jeep when I took delivery. It’s a 2014 Granite Crystal Rubicon Unlimited.

We wanted a slightly more aggressive look, so we looked into what it would take to make it a little more “fun”. Our journey is documented on the next few pages. We hope you enjoy!

“NVMe”

What’s in a name? A lot.

The term NVMe is getting used a lot, and I’m not sure everyone understands the difference between SCSI, ATA, and NVMe. Or M.2, U.2, PCIe, or whatever else as it relates to NVMe. Honestly, I’m not sure I’m completely savvy with all of it either, so please leave your comments below. My understanding of all this…NAND, flash, SSD, PCIe, M.2, NVMe-oF…“stuff”…is documented here. I won’t get into the details of the SSDs themselves, such as MLC, TLC, eTLC, etc., in this post.

I remember the day I learned about Fusion-io. Chips on a board that plugs into a PCI slot and gives you disk space was crazy enough. The speeds you could achieve were even crazier. This helped resolve an internal performance issue for me overnight. Stuck these puppies in our HP BL460-based build servers and BOOM! Code was getting compiled all the time, simultaneously, and quickly. Days down to hours. It was performance-issue Nirvana. I wanted one of these for my PC at home! They were extremely expensive, though. Far too much for any home user to consider.

A few years later, I took the plunge and bought a fancy SSD drive to replace my old 7200 RPM SATA drive. I bought this fancy new drive and plugged it right back into the same connector that I pulled my slow SATA drive off of. This seemed crazy! This fancy non-spinning super fast drive and I plug it into the same old connector? Same bus? 6 Gb/sec? This couldn’t be!

A couple of years after that, I replaced my motherboard, video card, etc., and noticed this little connector called M.2. “What goes in this M.2 slot?”, I thought.

M.2 (and U.2) slots are just connectors; they define the physical form factor. Depending on the way they are “keyed” (how far the little notch sits from one edge, or how many “pins” over), they can connect drives or other devices such as Wi-Fi, Bluetooth, or cellular cards. SATA-based M.2 SSDs are keyed differently than NVMe drives: NVMe drives are typically M-keyed and use up to four PCIe lanes, while SATA M.2 drives are usually B+M-keyed and talk over the SATA bus. Thus the different notch locations on the drives themselves. An NVMe drive won’t necessarily work in every M.2 slot, and vice versa. Although a drive may *fit*, the motherboard and BIOS need to specifically support PCIe (NVMe) M.2 SSDs and/or SATA M.2 SSDs.
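
If you are on Linux and want to see which bus a given drive actually landed on, sysfs will tell you. Here is a minimal Python sketch (assuming a Linux box with sysfs mounted at /sys); it just infers the interface from the kernel’s device naming, so treat it as illustrative rather than authoritative.

#!/usr/bin/env python3
# Minimal sketch: list Linux block devices and guess NVMe vs. SATA/SAS from
# the kernel's device naming (nvmeXnY vs. sdX). Assumes sysfs at /sys.
import os
SYS_BLOCK = "/sys/block"
for dev in sorted(os.listdir(SYS_BLOCK)):
    if dev.startswith("nvme"):
        interface = "NVMe (PCIe)"
    elif dev.startswith("sd"):
        interface = "SATA/SAS"
    else:
        continue  # skip loop devices, card readers, etc.
    # queue/rotational is 1 for spinning disks, 0 for solid state
    with open(os.path.join(SYS_BLOCK, dev, "queue", "rotational")) as f:
        kind = "HDD" if f.read().strip() == "1" else "SSD"
    print(f"{dev}: {interface}, {kind}")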

NVMe is a command set. This is similar to the SCSI commands we are all used to, and similar to ATA. A serial-attached SCSI (SAS) device generally has a single queue capable of 256 commands. By comparison, NVMe supports 64,000 queues, each capable of 65,535 commands.
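
To put those queue numbers in perspective, here is a quick back-of-the-envelope calculation in Python using the figures above (theoretical maximums, not what any single drive will actually sustain):

# Theoretical outstanding commands, using the queue figures quoted above
sas_queues, sas_depth = 1, 256
nvme_queues, nvme_depth = 64_000, 65_535
print(f"SAS : {sas_queues * sas_depth:,} outstanding commands")
print(f"NVMe: {nvme_queues * nvme_depth:,} outstanding commands")
# SAS: 256 vs. NVMe: 4,194,240,000 -- roughly sixteen million times more, on paper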

NVMe takes advantage of the high-speed access that SSDs provide in a way that SAS and SATA can’t.

There is also the new world of Intel Optane technology, which uses Intel’s 3D XPoint memory media along with Intel Rapid Storage Technology (RST) to provide an acceleration solution for your HDD, SSHD, or SATA SSD. Think of it as a high-speed cache in front of your spinning disk for frequently/recently used data. Install the drive and boot Windows. It will take a bit. Boot it again, and then again, and you will see it boot faster. That speedup is the Optane cache warming up.

These drives use the B and M keys of the M.2 form factor. The slot must be PCIe M.2, not SATA M.2. You must also have a 7th Gen Intel Core processor-based system, and Windows 10 is required; Linux is not supported as of this write-up. One major drawback of using this solution in your desktop is the inability to decouple the Optane drive from your “capacity” hard drive. Remove either one and you’re hosed. No boot for you. While the Optane drive could be used as a standalone NVMe SSD, Intel does not support that as of this writing.

In my next post, I will cover the implications of these technologies as they pertain to the enterprise. At the consumer level, this is all “local” access, and getting NVMe commands down to the disk is a little less complicated, for reasons I will also explain in that post.

Hopefully this clears up your questions about all the new tech. If you have any questions or comments, feel free to leave them below. Thanks for reading and happy computing!


Semantics

Why details in dialog are important:


At the drive-thru window:


Me: …Large Fries annnnd a Large Pepsi.

Speaker: OK, so large fries and…you said Diet Pepsi?

Me: No just regular Pepsi.

Speaker: OK, that’ll be blah blah blah please pull forward…


I ended up with large fries and a medium (regular) Pepsi.

Cohesity

Cohesity is a web-scale, appliance-based secondary storage platform founded by Mohit Aron, formerly of Nutanix and Google. Each 2U appliance is Intel x86-based and consists of four 16-core nodes (dual 8-core CPUs). Each node contains three 8 TB spinning disks, one 1.6 TB PCIe MLC flash drive, and two 10 GbE ports.
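
For a sense of scale, a little arithmetic on those per-node specs gives the raw capacity of a single chassis (raw, before any replication or erasure-coding overhead):

# Raw capacity per 2U Cohesity appliance, from the node specs above
nodes = 4
hdd_tb = nodes * 3 * 8    # three 8 TB spinning disks per node
flash_tb = nodes * 1.6    # one 1.6 TB PCIe MLC flash drive per node
ports = nodes * 2         # two 10 GbE ports per node
print(f"{hdd_tb} TB HDD, {flash_tb} TB flash, {ports} x 10GbE per chassis")
# -> 96 TB HDD, 6.4 TB flash, 8 x 10GbE per chassis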

Much like the Rubrik solution, Cohesity runs the backup software natively on the platform, eliminating the need for costly licenses for Veeam or Commvault. The need for separate backup server hardware is also eliminated.

Also similar to Rubrik, this solution uses policies to take backups instead of the traditional method of specifying a data set, specifying a target, setting up a schedule, and saying “go!”. Using policies based on your RTO/RPO, you can use the built-in cloud gateway functionality to ship the coldest (idle) data off to the cloud.

Different from most backup platforms is the ability to use the backed-up data immediately. Data isn’t locked up in a backup job anymore. Leveraging the journaled OASIS file system, Cohesity can present NFS, SMB, and S3 protocols natively. Part of the file system is something called SnapTree, which allows for an unlimited number of snapshots without the performance penalties traditional arrays incur.
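
Because the S3 protocol is exposed natively, data on the cluster can be read with any ordinary S3 client. The sketch below uses boto3; the endpoint URL, credentials, and view (bucket) name are placeholders, so substitute your own cluster’s values.

# Minimal sketch: reading objects from an S3-enabled Cohesity view with boto3.
# Endpoint, keys, and view name below are placeholders, not real values.
import boto3
s3 = boto3.client(
    "s3",
    endpoint_url="https://cohesity-cluster.example.com",  # your cluster VIP
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
# Each S3-enabled view shows up as a bucket; list what is inside one.
for obj in s3.list_objects_v2(Bucket="backup-view").get("Contents", []):
    print(obj["Key"], obj["Size"])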

The solution supports variable-length global deduplication (even to the cloud!), encryption, native replication to AWS (CloudReplicate), compression, and even runs MapReduce natively. Another benefit is that all incoming data is indexed. Similar to what object storage solutions provide, you get fully searchable data. Pretty neat stuff and a real game changer.

In the enterprise, most people have primary storage for their databases and applications, something like Pure Storage. These enterprises also have “other” data which needs a home. Cohesity (and Gartner) refer to this as dark data, and Cohesity is looking to be the platform for it.

When I was a manager at my last gig, we had a basic flash solution for our databases and then an entire FlexPod environment for “everything else”. We also had an entire environment for backups, more for compliance/contractual reasons than anything. Those environments demanded a lot of hardware and consumed a lot of rack space, power, and cooling, not to mention the various management interfaces I needed to “run” things. Looking back, this would have solved the problem of our backup environment and eliminated the need for a “utility” filer. Any storage admin knows the deal: “We can’t delete that 4 TB of data.” There are usually 4-5 instances (volumes) like that, at least. I could have ripped out about 30 RU worth of gear and replaced it with 4-8 RU of Cohesity. Strategically, I would have immediately been able to leverage the cloud for cold data instead of buying yet another appliance for that use case (think AltaVault). Beyond that, my RTO would be reduced to minutes versus hours. That’s peace of mind and a HUGE win in the efficiency column.

If you would like to know more about this solution or have any questions on the data provided, please email me or leave a comment below.