All posts by Mooky Desai

Cisco HyperFlex – #700-905 – Notes

Intro to Hyper-Convergence

Started out with local storage

Couldn’t expand

Moved to centralized storage

Server availability issues

Moved to virt and converged for clusters

FlexPods, VersaStacks…chassis had limitations

Back to local storage

Scales similarly to the cloud

No limits

Intro to HX platform

Based on C-series

Wizard-based installer

Converged Nodes – Disk, network, cpu, memory

Data Platform

StorFS – Distributed File System

Springpath is underlying backbone

Ensures data copies are always available

High-performance networking is used by the StorFS log-structured file system

Communication channel for VMs, management traffic, etc.

Fabric Interconnects (FIs) are the hardware management platform

HX software is not supported on standard (non-HX) C-Series servers

HX Installer is a VM (OVA) – requires existing vCenter

Expansion is easy and non-disruptive

Cloud Center

Intersight

Cisco Container Platform

HX Flavors and use cases

HX Edge

2-4 Nodes (ROBO) – No FI needed

HX

Converged

Compute-Only

Up to 32 converged nodes with up to an additional 32 compute only nodes

SFF Drives

Either all-flash or 2.5″ spinning drives

LFF Spinning – 6 or 8TB

HX240 M5

6-12 LFF drives in one server

Caching and housekeeping drives are still SFF Flash in back

Housekeeping – 240 GB flash

Caching – 3.2 TB SSD

HX 3.5 introduced stretch cluster for LFF drives

HX Edge – 3-node, HX220-based, connected directly to ToR switches with no FI

Managed through a centralized vCenter

Central backup site (LFF Cluster) with Veeam

3.0 introduced Hyper-V support

3.5 introduced LFF Drives support but no stretch or edge for Hyper-V

Scaling and Expanding a Deployment

Scales from 3 – 64 nodes

Node – Additional memory and compute

Converged Node – includes storage

3.5 added Hyper-V support and stretch clusters

Node must be compatible

Storage type

Size

Generation

Chassis size

Similar in RAM and compute

You can introduce M5 nodes into an M4 cluster, but not vice versa

Use HX Installer to expand

Use config file

Provide creds

UCSM/Hypervisor/vCenter

Select cluster to expand

Select unassociated server from list

Provide IP info

Provide Name

Add VLANs

Start install

Go to HX Connect

View added node

Data is automatically distributed to new disks on new nodes

You can leverage numerous platforms for compute only node

Must be connected to same domain

Must have HX Data Platform installed via Installer

Mounts HX storage via NFS

Disks are not added to shared pool (for datastore creation)

The CVM on compute-only nodes requires

1 vCPU

512 MB RAM

Software Components Overview

StorFS – Distributed Log-structured file system

Writes sequentially, uses pointers – increases performance

Index has references to blocks for files across the distributed log

New blocks are written, old are deleted and housekeeping occurs.

The file system index is stored in memory on a Controller VM (CVM).

Runs on every node in HX cluster

Handles logging, caching, compression, and deduplication

Disks are passed through to CVM

Space is presented as NFS

HX220

CVM needs 48 GB RAM and 10.8 GHz

HX240

CVM needs 72 GB RAM and 10.8 GHz

HX240 LFF – CVM needs 78 GB RAM and 10.8 GHz

CVM is doing cluster management

HX management

HX Connect management interface

HTML5 GUI

Monitoring capability/Utilization monitoring

Replication

Clone

Upgrades

CVM CLI – not all commands are supported through GUI

The CVM is an Ubuntu VM – connect via SSH

stcli command
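As a quick illustration of driving stcli from a script – a minimal sketch only, assuming SSH access to a CVM and the third-party paramiko Python library; the IP address and credentials below are placeholders, not real defaults:

import paramiko  # pip install paramiko

CVM_IP = "192.168.10.20"   # placeholder CVM management IP
USERNAME = "admin"         # placeholder credentials
PASSWORD = "changeme"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CVM_IP, username=USERNAME, password=PASSWORD)

# 'stcli cluster info' prints cluster health and node membership
stdin, stdout, stderr = client.exec_command("stcli cluster info")
print(stdout.read().decode())

client.close()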

IOVisor (installed as a VIB) – responsible for data distribution

Captures data and sends it to any available node for caching

Not dependent on the CVM, so if a CVM fails, file system operations are redirected to an appropriate node

VAAI is used in the CVM and the hypervisor

Allows direct file system operations on a datastore (StorFS ops)

Allows for much faster snapshots and clones using the file system

Distributed File System

StorFS

When deploying

Select RF2 or RF3

Caching Tier

In all-flash systems, the caching drive is not used for read cache

The write cache works the same way in all systems (all-flash and hybrid)

Caches writes as they are distributed

De-stages writes to the capacity tier

Split between active and passive segments

Active – caches data

Passive – moves data to capacity drives

Number of cache segments depends on the replication factor (RF)

2 for RF 2

3 for RF 3

Hybrid systems

Write cache still works the same

Caching drive is ALSO used for read caching

Frequently used

Recently used

VDI mode

Only caches most frequently accessed

Not most recently accessed

Hardware component Overview

3 tier storage

Memory Cache – volatile

Controller VM, File system metadata

Cache Drive

SAS SSD, NVMe SSD or Optane

Capacity tier

All spinning or all flash

Hybrid (blue bezel) or all flash (orange bezel)

2 chassis types

HX220 (1U) M5 (dual Intel Skylake Platinum CPUs)

10 Drives

Min 6, max 8 capacity drives

Cache drive is front mounted

Internal M.2 drive is where ESXi is installed

Housekeeping (logs, storage)

HX240 (2U) M5 (dual Intel Skylake Platinum CPUs)

Cache is on back

Capacity and housekeeping on front

Min 6 up to 23 capacity drives

Up to 2 graphics cards

Memory channels

6 per proc

2 sticks per channel

Max 128 GB per stick

16, 32, 64, 128

6 or 12 sticks per CPU for optimal performance

With M-type processors, 1.5 TB per CPU (3 TB total) – quick check below
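A quick arithmetic check of that figure using the channel and DIMM numbers above (a small Python sketch):

channels_per_cpu = 6
dimms_per_channel = 2
max_gb_per_dimm = 128

per_cpu_gb = channels_per_cpu * dimms_per_channel * max_gb_per_dimm  # 1536 GB, ~1.5 TB
total_gb = per_cpu_gb * 2  # dual-socket server, ~3 TB
print(per_cpu_gb, total_gb)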

Capacity drives:

1.2 TB or 1.8 TB for hybrid (cost-effective)

960 GB or 3.8 TB for all-flash (performance/density)

Network:

VIC1227

Dual 10G

VIC 1387

Dual 40G

FIs

6248s, 6296, 6332, 6332-16UP

UCSM

Other Notes:

Can you install HX without network?

No

Can you use the installer as an NTP server?

No. Versions 2.0 and 2.1 disable it after 15 minutes.

Can I install vCenter on HX?

Yes – with 4 nodes on earlier versions.

Should storage VLAN be layer 3?

No

Can you setup multiple VLANs in UCSM during install?

Yes but you have to rename them

Are jumbo frames required?

No, but you should enable them

HX Tech Brief – An App, Any Cloud, Any Scale

ROBO – HX Edge/Nexus

Private – HX/ACI (private cloud)

Intersight federates management

Edge leverages Intersight as a cloud witness

Public – Cisco Container Platform on top of HX for Kubernetes-as-a-Service with HX on-prem

CloudCenter – model apps and deploy them consistently

AppDynamics – performance and application transaction visibility

CWOM (Cisco Workload Optimization Manager) – optimizes resource utilization for the underlying infrastructure

Tetration – workload protection – enforce versions at the host level

HX Edge 2-4 nodes (up to 2000 sites) – Intersight can deploy in parallel to multiple sites

HX – 32 converged nodes, plus up to 32 more compute-only nodes

Installation Notes:

Deploy/launch the Data Platform installer (OVA) – can run on an SE laptop

Root:Cisco123

Create new:

Customize your workflow

Run UCSM config (unless you have Edge, which has no FIs)

Check everything else

Create cluster

Cluster expansion:

In UCSM, validate the PID and make sure the server is unassociated (no service profile)

In installer:

Supply UCSM name/creds

vCenter creds

Hypervisor creds

Select cluster to expand

Select server

Provide VLAN configs

Use ; for multiple VLANs

Enable iSCSI/FC if needed

For the management VLAN and data VLAN

Provide IP for esxi host

Provide IP for storage controller

Recovery Gear

After a few outings in the Jeep, it's become quite apparent how critical it is to have the right recovery equipment. One friend slid sideways in the snow into the mountain, another popped a tire off the bead on a tree stump in the mountains, and even out in the middle of the sand dunes, I had to winch my friend's RZR up a good 80′ dune after it hit a witch's eye.

I have started building up my gear collection and I am listing the components here for easy reference.

  • Warn Zeon 10-S Winch – 10,000 Pounds
    • Factor 55 ProLink – 16,000 Pounds
    • Factor 55 Hawse Fairlead
  • WARN Snatch Block – 12,000 Pounds
  • Gator-Jaw PRO Soft Shackles – 52,300 Pounds
  • ARB Tree Saver – 26,500 Pounds
  • Bubba Rope Recovery Rope – 7/8 x 30ft – 28,600 Pounds
  • Factor 55 Hitchlink 2.0 (Jeep) – 9500 Pounds
  • Factor 55 Hitchlink 2.5 (F-250) – 18,000 Pounds
  • 7/8″ Galvanized Steel Shackle (F-250) – WLL 6.5T

Rattlesnake Canyon/Mottino Wash

I was invited by a longtime friend on a Jeep run this past weekend. I knew it was with a group of people; I didn't know it was with an official Jeep club. We took the 210 East to the 15 and exited at Bear Valley. It turned into the 18 until we got to El Coyote Loco. We had some good breakfast there with the group and headed out to the start of the trail off the 247 (aka Old Woman Springs Road). Make a right onto Rattlesnake Canyon Road and off you go (34.355566, -116.664733). You will eventually get to the wash. Make a left and get ready for some difficult trails with lots of big rocks in the way. You will scratch your rims, you will scratch your sliders, and you will definitely scratch your underbelly skids. But you will have fun doing it. This isn't for the mall crawlers trying to keep their Jeep immaculate. It's a pretty difficult trail, so make sure you go with another vehicle and have your recovery gear in working order.

I made a short video of my adventure. You can view it here…

Cheers!

Sandstone Canyon

Down in Ocotillo Wells, CA, there is a neat little offroad excursion to explore called Sandstone Canyon. From the San Fernando Valley, you start by heading down the I-5 South. Take the 210 East and then the 10 East. Around Indio, keep right and take the 86 South. Follow the signs for Brawley/El Centro/86S Expressway. Turn right onto CA-78 and follow the signs for Ocotillo Wells. When you get to Split Mountain Road, make a left.

The trailhead for Sandstone Canyon is at:

Lat: 33.038973
Lon: -116.096849
Be careful if you are going after a rainfall as flash floods are common in the area.

Perseid Meteor Showers – Gorman,CA

The annual Perseid meteor shower was this past weekend, and it was the first time in about 25 years that the skies were as dark as they were this year (no Moon). The peak, which is generally around the 12th of August, also happened to fall on a weekend this year. We packed our toy hauler and the kids (and our new puppy, Benny!) and rolled out to the desert for some good viewing where the light pollution was minimal. We set up the patio on the trailer, threw out some blankets and pillows, and watched shooting stars fill the night sky. We had a blast. I didn't catch any pictures of our viewing party, but here are some videos of my daughters on their quads while we were out there.

https://www.youtube.com/watch?v=sUpHk9kIOtU

https://www.youtube.com/watch?v=KE4yi5r5FdU

The Jeep Story – Bumpers

The Jeep Wrangler Unlimited Rubicon in standard trim comes with plastic bumpers. We don't have a “Special Edition” Rubicon. There are a number of special edition Wranglers that came with steel bumpers, from the Anniversary Edition to the 2017 Winter Edition. One of the most common aftermarket upgrades on most Jeeps is the bumpers. While the stock bumpers are lightweight and offer legal coverage, they often limit approach/departure angles and, as a result, suffer casualties on the trails. It's difficult to attach lights, winches, and any other gear you may need. The standard tire carrier on most Jeeps is also mounted on the rear tailgate, so offloading the extra weight of the bigger tires to the bumper is a better option. As far as I can remember, there haven't been any stock Wranglers offered with a bumper-mounted tire carrier. There are a number of quality aftermarket bumper manufacturers on the market today. Some of the more popular quality ones include Poison Spyder, Fab Fours, and GenRight Offroad. Most offer a bumper-mounted tire carrier. Many of these are built for serious rock crawling duty and offer a great look, great protection, and even more clearance for your adventures.

For my build, after looking at a lot of pictures online and making the decision that we were building a Jeep that was mostly for overlanding adventures, we decided to stick with the same company that has been providing Jeep with bumpers for their special edition vehicles for years and the same manufacturer that we bought our suspension lift from, American Expedition Vehicles (AEV). One of the reasons we chose AEV was because of its history. They have been in business for quite some time and the fact that Jeep turned to THEM to provide parts was quite telling. More recently, Chevrolet gave the official nod to AEV with the release of the Colorado ZR2 Bison.

The rear bumper really caught our eye. Instead of tacking on 3 or 4 “jerry cans” or RotopaX for extra gas and water, AEV allows for 5 gallons of water in the bumper itself and another 10 gallons of gas in a nice tank tucked behind the spare tire (see pics below). The bumper-mounted tire carrier also has an integrated Hi-Lift Jack mount, an integrated Pull-Pal mount, a nice mount for a shovel, and mounts for rear lights as well as the CHMSL. All without looking like I'm Mad Max running through my concrete jungle. The bumpers look like they were designed and installed at the factory, and the front bumper even has a nice winch mount that neatly hides my Warn Zeon 10-S. The bumpers are full width, and the front bumper has a nice hoop and comes with mounts for 2 driving lights. While you can get away with 6″ IPF lights from ARB or someone similar, I will probably opt for a Rigid Adapt light bar or two 7″ ARB Intensitys. That is still TBD and may show up in a future post. Until then, thanks for reading and enjoy the pics!

Stock Front Bumper (Before):

AEV Front Bumper (After):

Stock Rear Bumper (Before):

AEV Rear Bumper with Tire Carrier, Fuel Cell, and Hi-Lift Mount:

Closer look at the Hi-Lift Mount (and FireStik CB Antenna):


NetApp A800 Specs

NetApp introduced its flagship model, the A800 and ONTAP version 9.4 which included a number of enhancements. Some details are below.

48 internal NVMe drives – 4U

1.1 Million IOPS – 200-500 microsecond latency

25 GB/s sequential reads

Scales to 12 nodes (SAN) or 24 nodes (NAS)

12PB or 6.6M IOPS for SAN (effective)

24PB or 11.4M IOPS for NAS (effective)

Nearly 5:1 efficiency using dedupe, compression, compaction, and cloning

Support for NVMe-oF – connect your existing FC-attached applications to faster drives. Increases performance vs. FC-connected devices by 2x.

FabricPool 2.0 allows for tiering cold data to Amazon, Azure or any object store such as StorageGRID. You can now move active filesystem data to the cloud and bring it back when needed.


The Jeep Story – Steps (cont’d)

After some careful research on form and function, we opted to purchase some rock sliders/steps from GenRight Offroad. I chose these since they offered the most coverage for my Jeep, double as a step, came in a nice textured black powder-coated finish, and they are made close by in Simi Valley, CA.

These were available in a lightweight aluminum but strength is always an issue vs. steel. GenRight offers something called a Rash Guard. These $250 pieces bolt over the Rocker Guard and take most of the beating and add some strength to the setup. Overall, it is still lighter than the steel solution and its riveted design looks great too.

My friends recently took my family and me up to Big Bear, California and signed us up for a guided Jeep tour. We took a trail called Little John Bull and, despite my best efforts, I came crashing down on my brand new guards along the way. First war wound. I am happy to say that it did not even dent the rash guard. There are scuffs on the powder coating as expected…but no denting. If I ever want to replace the guards where the rock hit, it's just the $250 rash guard that needs replacing. The guards are highly recommended and are not available on all online sites (nor is the textured powdercoat option!). Call GenRight, they are great folks (I drove there).

While I do miss the convenience of a step lowering down so I don't have to climb in/out of the Jeep, I have to say that having the peace of mind that nothing is going to break (aka AMP PowerStep motors) while I am off-road is a decent trade-off. RockSlide Engineering makes a set of power steps that double as rock guards. With all the pieces to make it work properly, it would have been more money out of my pocket, and I'd still be worrying about motor life. I ended up saving a few hundred bucks on the GenRight solution vs. my AMP steps, but none of this was really about cost. I wanted a solution that protected my Jeep well, looked clean, and matched my AEV bumpers and wheels. The rivets also happened to complement the beadlock bolts. GenRight fit the bill. We are 100% happy.

Enjoy the pics and let me know what you think!


SolidFire Install Notes and Guide

I had the pleasure of installing a 4-node SolidFire 19210 recently. While the system was being used strictly for Fibre Channel block devices to the hosts, there was a little bit of networking involved. There are diagrams below that detail most of what I have explained, if you want a TL;DR version.

The system came with 6 Dell 1U nodes. 4 of the nodes were full of disks. The other two had no disks but extra IO cards in the back (FC and 10Gb).

First step was to image the nodes to Return them To Factory Image (RTFI).  I used version 10.1 and downloaded the “rtfi” ISO image at the link below.

https://mysupport.netapp.com/NOW/download/software/sf_elementos/10.1/download.shtml

With the ISO you can create a bootable USB key:

  1. Use a Windows-based computer and format a USB key as FAT32 with the default block size (a full format works better than a quick format)
  2. Download UNetbootin from http://unetbootin.sourceforge.net/
  3. Install the program on your windows system
  4. Insert USB key before starting UNetBootin
  5. Start UNetBootin
  6. After the program opens select the following options:
  7. Select your downloaded image […]
  8. Select Type : USB drive
  9. Drive: Select your USB key
  10. Press OK and wait for key creation to complete (can take 4 to 5 minutes)
  11. Ensure the process completes, then close UNetbootin

Simply place the USB key in the back of the server and reboot. On boot, it will ask a couple of questions, it will start installing Element OS, and then it will shut itself down. It will then boot up into what they call the Terminal User Interface (TUI). It will come up in DHCP mode; if you have DHCP, it will pick up an IP address, and you can see which IP it obtained in the TUI. You can use your web browser to connect to that IP. Alternatively, using the TUI, you can set a static IP address for the 1GbBond interface. I connected to the management interface on the nodes once the IPs were set to continue my configs, although you can continue using the TUI. To connect to the management interface, go to https://IP_Address:442. As of now, this is called the Node UI.

Using the Node UI, set the 10Gb IP information and hostname. The 10Gb network should be a private, non-routable network with jumbo frames enabled through and through. After setting the IPs, I rebooted the servers. I then logged back in, set the cluster name on each node, and rebooted again. They came back up in a pending state. To create the cluster, I went to one of the node IP addresses and it brought up the cluster creation wizard (this is NOT on port 442; it is on port 80). Using the wizard, I created the cluster. You will assign an “mVIP” and an “sVIP”. These are clustered IP addresses for the node IPs: the mVIP for the 1GbBonds, and the sVIP for the 10GbBond. The management interface is at the mVIP and storage traffic runs over the sVIP.
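With the cluster formed, the mVIP also serves the Element JSON-RPC API, which is handy for checking state from a script. This is only a minimal sketch: I am assuming the Element OS 10.1 endpoint path (/json-rpc/10.1) and the Python requests library, and the IP and credentials are placeholders. Certificate verification is disabled because the cluster ships with a self-signed cert.

import requests

MVIP = "10.0.0.50"             # placeholder cluster management VIP (mVIP)
AUTH = ("admin", "changeme")   # placeholder cluster admin credentials

payload = {"method": "GetClusterInfo", "params": {}, "id": 1}
resp = requests.post("https://" + MVIP + "/json-rpc/10.1",
                     json=payload, auth=AUTH, verify=False)
print(resp.json()["result"]["clusterInfo"])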

Once the cluster was created, we downloaded the mNode OVA. This is a VM that sends telemetry data to Active IQ, runs SNMP, handles logging, and performs a couple of other functions. We were using VMware, so we used the image with “VCP” in the filename since it has the vCenter plugin.

https://mysupport.netapp.com/NOW/download/software/sf_elementos/10.1/download.shtml

Using this link and the link below, we were able to import the mNode into vCenter quickly. Once it had an IP, we used that IP to connect to port 9443 in a web browser and register the plugin to vCenter with credentials.

https://kb.netapp.com/app/answers/answer_view/a_id/1030112

I then connected to the mVIP and under Clusters–>FC Ports, I retrieved the WWPNs for zoning purposes.

Your SAN should be ready for the most part. You will need to create some logins at the mVIP along with volumes and Volume Access Groups. Once you zone your hosts in using single-initiator zoning, you should be able to scan the disk bus on your hosts and see your LUNs!
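If you would rather script that part, the same Element API can create the volumes and the Volume Access Group. This is a hedged sketch: the method and parameter names reflect my reading of the Element API, and the mVIP, credentials, account ID, volume size, and initiator WWPNs are all placeholders you would swap for your own.

import requests

MVIP = "10.0.0.50"             # placeholder mVIP
AUTH = ("admin", "changeme")   # placeholder credentials
URL = "https://" + MVIP + "/json-rpc/10.1"

def call(method, params):
    payload = {"method": method, "params": params, "id": 1}
    return requests.post(URL, json=payload, auth=AUTH, verify=False).json()

# 1 TiB thin-provisioned volume owned by account 1 (placeholder account ID)
vol = call("CreateVolume", {"name": "esx-datastore-01", "accountID": 1,
                            "totalSize": 1024 ** 4, "enable512e": True})
vol_id = vol["result"]["volumeID"]

# Access group containing the host WWPNs from your zoning (placeholders)
call("CreateVolumeAccessGroup", {"name": "esx-cluster-01",
                                 "initiators": ["21:00:00:24:ff:00:00:01"],
                                 "volumes": [vol_id]})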

As mentioned, there were some network connections to deal with. My notes are below.

I am not going to cover the 1GbE connections. Those are really straightforward: one to each switch. Make sure the ports are on the right VLAN. The host does NIC teaming. Done.

iDRAC is even easier. Nothing special. Just run a cable from the iDRAC port to your OOB switch and you are done with that too. IP address is set through the BIOS.

FC Gateways (F-nodes)

These nodes have 2x 10Gb ports onboard and 2x 10Gb ports in Slot 1 along with 2x Dual Port FC cards.

For the FC gateways (F-nodes), there are 4x 10Gb ports on each node: 2 onboard (like the S-nodes) and 2 in a card in Slot 1.

From F-node1, we sent port0 (onboard) and port0 (card in slot 1) to switch0/port1 and port2. From F-node2, we sent port0 (onboard) and port0 (card in slot 1) to switch1/port1 and port2. We then created an LACP bundle with switch0/port1 and port2 and switch1/port1 and port2. One big 4 port LACP bundle.

Then, back to F-node1, we sent port1 (onboard) and port1 (card in slot 1) to switch0/port3 and port4. From F-node2 we sent port1 (onboard) and port1 (card in slot 1) to switch1/port3 and port4. We then created another LACP bundle with switch0/port3 and port4 and switch1/port3 and port4. Another big 4 port LACP bundle.

We set private network IPs on the 10Gb bond interfaces on all nodes (S and F alike). Ensure jumbo frames are enabled throughout the network, or you may receive errors when trying to create the cluster (xDBOperation Timeouts). Alternatively, you can set the MTU for the 10GBond interfaces down to 1500 and test cluster creation to verify whether jumbo frames are causing the issue, but this is not recommended for a production config; it should simply be used to rule out jumbo frame configuration issues.
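One simple way to confirm jumbo frames end to end before blaming the cluster is a don't-fragment ping at a near-9000-byte size. A sketch using the standard Linux ping flags, run from any Linux host on the storage VLAN (the target IP is a placeholder):

import subprocess

TARGET = "192.168.100.11"  # placeholder 10Gb storage IP of another node

# 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte frame;
# '-M do' sets the don't-fragment bit so oversized frames fail loudly.
result = subprocess.run(["ping", "-M", "do", "-s", "8972", "-c", "3", TARGET],
                        capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("Jumbo frames are likely not passing end to end:")
    print(result.stderr or result.stdout)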

As mentioned, each F-node has 2x Dual Port FC cards. From Node 1, Card0/Port0 goes to FC switch0/Port0. Card1/Port0 goes to FC switch0/Port1. Card0/Port1 goes to FC switch1/Port0 and Card1/Port1 goes to FC switch1/Port1.

Repeat this pattern for Node 2 but use ports 2 and 3 on the FC switches.

Storage Nodes (S-Nodes)

On all S-nodes, the config is pretty simple. Onboard 10Gb port0 goes to switch0 and onboard 10Gb port1 goes to switch1. Create an LACP port channel across the two ports for each node.

If the environment has two trunked switches, where the switches appear as one, you must use LACP (802.3ad) bonding. The two switches must appear as one switch, either by being different switch blades that share a backplane or by having software installed to make them appear as a “stacked” switch. The two ports, one on each switch, must be in an LACP trunk to allow failover from one port to the next to happen successfully. If you want to use LACP bonding, you must ensure that the switch ports on both switches allow for trunking at the specific port level.
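If you have shell access to a Linux host using bonding, you can confirm the bond actually negotiated 802.3ad rather than falling back to another mode. A generic sketch only: the bond name bond0 is a placeholder, SolidFire's bond interfaces have their own names, and shell access to Element OS nodes may not be something you have in practice.

from pathlib import Path

text = Path("/proc/net/bonding/bond0").read_text()  # placeholder bond name
mode = next(line for line in text.splitlines() if line.startswith("Bonding Mode"))
print(mode)  # expect: Bonding Mode: IEEE 802.3ad Dynamic link aggregation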


There may be other ways to do this. If you have comments or suggestions, please leave them below. If I left something out, let me know.

Some of this was covered nicely in the Setup Guide and the FC Setup Guide, but not in enough detail. I had more questions and had to leverage good network connections to get answers.

Thanks for reading. Cheers.