Category Archives: Technology

Terraform on Windows 101

Create a folder called “bin” in %USERPROFILE%

Start–>Run–>%USERPROFILE%–>create a folder called “bin”

Download Terraform

https://www.terraform.io/downloads.html
Unzip the download and save the .exe in the “bin” folder you created

Set Windows “PATH” Variable

System Properties–>Environment Variables
Highlight PATH
Click “Edit”
Click “New”
Add %USERPROFILE%\bin
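
To confirm the PATH change took effect, open a new Command Prompt (or Git Bash, once you install it a few steps down) and check that the binary resolves. Just a quick sanity check:

terraform -version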

Create a user in AWS for Terraform

In AWS, go to IAM
Create a user called “terraform”
programmatic access only
Attach existing policies directly
Administrator access (proceed with caution!)
Copy the Access Key ID (save it to a credentials store like KeePass, or an Excel spreadsheet for now)
Copy the Secret Access Key (save to credentials store)
Or download the .CSV and grab the values

Create a folder called .aws in %USERPROFILE%

When naming the folder in Windows Explorer, type “.aws.” with a trailing dot; Explorer strips the trailing dot and keeps “.aws”. Without it, Explorer will throw an error about the name

Create a credentials file

Create a new file called “credentials” in the .aws directory (no file extension; do not save it as credentials.txt)
Using the ID and Key from above, make it look like this:
Line 1: [default]
Line 2: aws_access_key_id=your_key_id_here
Line 3: aws_secret_access_key=your_access_key_here
Save the file (again, make sure there is no .txt extension or it won’t work)
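
If you prefer the command line, here is a rough equivalent from Git Bash (installed in the next step); the key values are placeholders, so substitute your own:

mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id=your_key_id_here
aws_secret_access_key=your_access_key_here
EOF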

Download and install Git for Windows

https://gitforwindows.org

Create a folder called TF_Code for your working files

I created mine on my desktop

Open Git Bash, navigate to your working directory

cd desktop
cd TF_Code

Make the directory a Git repository

git init

Create a new file with VI

vi first_code.tf
Line 1: provider "aws" {
Line 2:   profile = "default"
Line 3:   region  = "us-west-2"
Line 4: }
Line 5: (blank line)
Line 6: resource "aws_s3_bucket" "tf_course" {
Line 7:   bucket = "tf-course-uniqueID"
Line 8:   acl    = "private"
Line 9: }

Commit the code

git add first_code.tf
git commit -m "some commit message"

Try Terraform! (in Git Bash)

terraform init
Downloads and Initializes plugins

Apply the code

terraform apply
yes (to perform the actions)

Check your AWS account (S3), you should see a new S3 bucket!

Delete the bucket

terraform plan -destroy -out=example.plan
terraform apply example.plan

Your bucket will now be deleted!
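
If you don’t want to bother with a saved plan file, the interactive destroy command is a simpler way to tear everything down; it prompts for a yes just like apply does:

terraform destroy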

To recreate the bucket, just run the ‘terraform apply’ command again, say yes, and…BOOM, your bucket is created again!

Hope that helps. Good luck and happy computing!

SSH from PuTTY to GCP Compute Engine

First off, if you are trying to securely connect to your enterprise production network and instances, there are better (safer) methods and architectures to do this. OS Login or federating your Azure AD, for instance, might be more secure and scalable. I run a pointless website (this one) with nothing to really lose across a handful of instances. This is a hobby.

Second, I recently got a dose of humble pie when trying to use PuTTY on Windows to connect to an Ubuntu instance in GCP. I was generally using the gcloud command line for getting my app running, but I got a wild hair up my ass this morning to just use PuTTY and skip the step of logging into Google Cloud (via Chrome) for administration. I am fairly used to AWS, where I just create an instance, download the .pem file, convert it to a .ppk with PuTTYgen, and then use that along with the default login (ec2-user or ubuntu) to connect to my Minecraft and web servers. GCP was a little different.

Once I read a few write-ups turned up by Google searches, the process became much clearer than it had been from the GCP docs alone. Here is how I did it.

Download PuTTYgen if you don’t have it already.

Launch PuTTYgen.

Click on “Generate”. I used a 2048-bit RSA key.

Move your mouse around the box to generate a key.

In the “Key comment” field, replace the data there with a username you want to use to connect to your Compute Engine instance (highlighted)

Copy the ENTIRE contents of the public key (the data in the “public key for pasting…”) box. It should end with the username you want to connect with if you scroll down.

Click on “Save Private Key” and select a location/path that is secure (and one that you will remember!).

Create a new Compute Engine instance or go to an existing instance. From the VM instances page, click on the instance name. In my case it was “minecraft001”.

At the top of the page, click on “Edit“.

Scroll almost all the way to the bottom and you will see an “SSH Keys” section.

Click on “Show and edit”

Click on “+ Add Item”

Paste in the key data you copied from PuTTYgen from the step above.

  • You will notice that it extracts the username from your key on the left. This is the username you will use from PuTTY.
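
For reference, the pasted entry ends up as a single line of roughly this shape (the key body is shortened here, and “jonny” is just the example username from this post):

ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...rest_of_key_data... jonny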

On the same page, click on “Save” at the bottom of the page.

On the VM instance details page, find the “External IP” section and copy the IP address (the cascaded window icon will add it to your buffer).

Now open or go back to your PuTTY client (not PuTTYgen).

Paste the IP address into your PuTTY client.

On the left side of the PuTTY client, scroll down to the “Connection” section and click the “+” to expand it

Click the “+” next to the “SSH” section

HIGHLIGHT the “Auth” section. Don’t expand it.


Click on “Browse…”

Find the Private Key file you saved from earlier (should have a .ppk file extension). Double click to select and use it.

Scroll back up and highlight the Session category.

From here you can either name your connection and Save it under “saved sessions“…or just click the “Open” button.

It should make a connection to your Compute Instance and ask for a username. Supply the username you specified in the step above and voila! I used “jonny” in my example.

That’s it! Happy computing!

Pushing Docker Containers to GitHub

I recently went through the process of building a Dockerfile from scratch. I won’t get into the details of that process, but I did come across an error when trying to publish my package to GitHub Packages.

I tried to do a sudo docker push docker.pkg.github.com/mookyd/mymooky/mymooky:latest (my repo) and was thrown the error:

unauthorized: Your request could not be authenticated by the GitHub Packages service. Please ensure your access token is valid and has the appropriate scopes configured.

It’s pretty clear what needed to happen, but I thought my credentials would be enough since I wasn’t using a script per se. I used docker login, provided my username and password, and tried the command again. Same error.

After doing some reading, I discovered that you need to pass a “Personal Access Token” (PAT) as the password. I generated a PAT under Settings –> Developer Settings –> Personal Access Tokens. I gave the token access to the repo and to read and write packages. I then used docker login and passed the token string as the password. After that, I was able to use docker push to upload my image.
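
For what it’s worth, here is a hedged sketch of the sequence that ended up working, with a placeholder username and a token variable name (GITHUB_PAT) I chose myself; adjust the image path to your own repo:

# pass the token on stdin rather than typing it at the interactive password prompt
export GITHUB_PAT=your_token_here
echo $GITHUB_PAT | sudo docker login docker.pkg.github.com -u your_github_username --password-stdin
sudo docker push docker.pkg.github.com/mookyd/mymooky/mymooky:latest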

Minikube on VirtualBox on Ubuntu on VirtualBox

I recently needed a small lab environment to sharpen my Kubernetes skills, so I set up Minikube on an Ubuntu VM running 18.04.4 LTS (bionic). This VM was created on my Windows desktop in VirtualBox. Confused yet? Some of these commands can leave your environment insecure, so do not do this in a production, Internet-facing environment.

To get started, I downloaded and installed VirtualBox onto my Windows PC and then created an Ubuntu 18.04 VM. Make sure the number of vCPUs on your VM is at least 2.

First step is to update your VM.

  • sudo apt-get update
  • sudo apt-get install apt-transport-https (only needed with apt 1.4 or earlier)
  • sudo apt-get upgrade

Install VirtualBox on your Ubuntu VM

  • sudo apt install virtualbox virtualbox-ext-pack

Download Minikube

  • wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

Make it executable

  • sudo chmod +x minikube-linux-amd64

Move it so it’s in your PATH

  • sudo mv minikube-linux-amd64 /usr/local/bin/minikube

Download kubectl

  • curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Make it executable

  • chmod +x ./kubectl
  • sudo mv ./kubectl /usr/local/bin/kubectl

Check that it’s working properly

  • kubectl version -o json

I received an error saying docker wasn’t in $PATH. You may or may not see this error.

Install docker

  • curl -fsSL https://get.docker.com/ | sh
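
One optional follow-up (a convenience/security trade-off, not something these steps require): add your user to the docker group so you can run docker without sudo, then log out and back in:

sudo usermod -aG docker $USER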

Start Minikube

  • sudo minikube start --vm-driver=virtualbox
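
Once it comes up, a couple of quick checks will confirm the cluster is healthy (if you started minikube with sudo as above, you may need sudo on these as well):

  • minikube status
  • kubectl get nodes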

Start the Kubernetes Dashboard

  • minikube dashboard
  • minikube dashboard --url

If you want to view the dashboard remotely, you will need to run the following command:

  • sudo kubectl proxy --address='0.0.0.0' --disable-filter=true

You will get a message saying “Starting to serve on [::]:8001”
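
From another machine on your network, the dashboard then sits behind the proxy at a URL of roughly this shape; the exact namespace and service name vary with your Kubernetes and dashboard versions (older setups use kube-system instead of kubernetes-dashboard), so treat it as a template:

http://<VM_IP>:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/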

Hopefully this helps. If you get stuck or have a way to optimize this, please comment below.

Kudos to https://computingforgeeks.com/how-to-install-minikube-on-ubuntu-18-04/ for helping me get started.

Cisco Hyperflex – #700-905 – Notes

Intro to Hyper-Convergence

Started out with local storage

Couldn’t expand

Moved to centralized storage

Server availability issues

Moved to virt and converged for clusters

FlexPods, VersaStacks…chassis had limitations

Back to local storage

Scales similar to cloud

No limits

Intro to HX platform

Based on C-series

Wizard-based installer

Converged Nodes – Disk, network, cpu, memory

Data Platform

StorFS – Distributed File System

Springpath is underlying backbone

Ensures copy data is always there

High performance networking is used by StorFS log structured FS

Communication channel for VMs and management etc

FIs are hardware management platform

HX Software is not supported on C-Series

HX Installer is a VM (OVA) – requires existing vCenter

Expansion is easy and non-disruptive

Cloud Center

Intersight

Cisco Container Platform

HX Flavors and use cases

HX Edge

2-4 Nodes (ROBO) – No FI needed

HX

Converged

Compute-Only

Up to 32 converged nodes with up to an additional 32 compute only nodes

SFF Drives

Either all flash or 2.5-inch spinning drives

LFF Spinning – 6 or 8TB

HX240 M5

6-12 LFF drives in one server

Caching and housekeeping drives are still SFF Flash in back

HK – 240GB Flash

Caching – 3.2 TB SSD

HX 3.5 introduced stretch cluster for LFF drives

HX Edge: 3-node, HX220-based, connected directly to ToR switches with no FI

Managed through a centralized vCenter

Central backup site (LFF Cluster) with Veeam

3.0 introduced Hyper-V support

3.5 introduced LFF Drives support but no stretch or edge for Hyper-V

Scaling and Expanding a Deployment

Scales from 3 – 64 nodes

Node – Additional memory and compute

Converged Node – includes storage

3.5 added Hyper-V support and stretch clusters

Node must be compatible

Storage type

Size

Generation

Chassis size

Similar in RAM and compute

You can introduce M5 to M4 but not vice versa

Use HX Installer to expand

Use config file

Provide creds

UCSM/Hypervisor/vCenter

Select cluster to expand

Select unassociated server from list

Provide IP info

Provide Name

Add VLANs

Start install

Go to HX Connect

View added node

Data is automatically distributed to new disks on new nodes

You can leverage numerous platforms for compute only node

Must be connected to same domain

Must have HX Data Platform installed via Installer

Mounts HX storage via NFS

Disks are not added to shared pool (for datastore creation)

CVM on compute-only nodes requires

1 vCPU

512 MB RAM

Software Components Overview

StorFS – Distributed Log-structured file system

Writes sequentially, uses pointers – increases performance

Index has references to blocks for files across the distributed log

New blocks are written, old are deleted and housekeeping occurs.

File system index is stored in memory of  a controller VM.

Runs on every node in HX cluster

Handles logging, caching, compression, deduplication

Disks are passed through to CVM

Space is presented as NFS

HX220

CVM needs 48 GB RAM and 10.8 GHz

HX240

CVM needs 72 GB RAM and 10.8 GHz

HX 240 LFF 78 GB RAM and 10.8 GHz

CVM is doing cluster management

HX management

HX Connect management interface

HTML 5 GUI

Monitoring capability/Utilization monitoring

Replication

Clone

Upgrades

CVM CLI – not all commands are supported through GUI

CVM is an Ubuntu VM – connect via SSH

stcli command

IOvisor (vib) – responsible for data distribution

captures data and sends to any available node for caching

Not dependent on the CVM, so if a CVM fails, FS ops are directed to the appropriate node

VAAI is used in CVM and Hypervisor

Allows direct FS ops on a Datastore (StorFS ops)

Allows for snaps and clones (much faster) using FS

Distributed File System

StorFS

When deploying

Select RF2 or RF3

Caching Tier

In all-flash systems

Not used for read cache

In all systems

Write cache works the same way

All flash

Hybrid

Caches writes as it gets distributed

De-stages writes

Split between active and passive

Active – caches data

Passive – moves data to capacity drives

Number of cache level segments depends on RF factor

2 for RF 2

3 for RF 3

Hybrid systems

Write cache still works the same

Caching drive is ALSO used for read caching

Frequently used

Recently used

VDI mode

Only caches most frequently accessed

Not most recently accessed

Hardware component Overview

3 tier storage

Memory Cache – volatile

Controller VM, File system metadata

Cache Drive

SAS SSD, NVMe SSD or Optane

Capacity tier

All spinning or all flash

Hybrid (blue bezel) or all flash (orange bezel)

2 chassis types

HX220 (1U) M5 (Dual Intel Skylake Platinum’s)

10 Drives

Min 6, max 8 capacity drives

Cache drive is front mounted

M.2 drive holds the ESXi install (internal)

Housekeeping (logs, storage)

HX240 (2U) M5 (Dual Intel Skylake Platinum’s)

Cache is on back

Capacity and housekeeping on front

Min 6 up to 23 capacity drives

Up to 2 graphics cards

Memory channels

6 per proc

2 sticks per channel

Max 128 GB per stick

16, 32, 64, or 128 GB

6 or 12 sticks per CPU for optimal performance

If M type procs, 1.5TB per CPU (total 3TB)

Capacity drive options:

1.2 TB or 1.8 TB for hybrid (cost effective)

960 GB or 3.8 TB for all flash (performance/density)

Network:

VIC1227

Dual 10G

VIC 1387

Dual 40G

FIs

6248s, 6296, 6332, 6332-16UP

UCSM

Other Notes:

Can you install HX without network?

No

Can you use the installer as an NTP server?

No. 2.0 and 2.1 disable it after 15 mins

Can I install vCenter on HX?

Yes. With 4 nodes with earlier versions.

Should storage VLAN be layer 3?

No

Can you setup multiple VLANs in UCSM during install?

Yes but you have to rename them

Are jumbo frames required?

No but enable them

HX Tech Brief – An App, Any Cloud, Any Scale

ROBO – HX Edge/Nexus

Private – HX/ACI (private cloud)

Intersight federates management

Edge leverages Intersight as a cloud witness

Public – Cisco Container Platform on top of HX for Kubernetes-as-a-Service with HX on-prem

Cloud center – model apps/deploy consistently

App dynamics – performance/application transaction visibility

CWOM – optimizes resource utilization for underlying infra

Tetration – workload protection – enforce versions at the host level

HX Edge 2-4 nodes (up to 2000 sites) – Intersight can deploy in parallel to multiple sites

HX – 32 converged nodes –> up to 32 more compute-only nodes

Installation Notes:

Deploy/Launch Data platform installer – OVA – can be on a SE laptop

Root:Cisco123

Create new:

Customize your workflow

Run UCSM config (unless you have Edge, which has no FIs)

Check everything else

Create cluster

Cluster expansion:

In UCSM, validate the PID and make sure it’s unassociated (no profile)

In installer:

Supply UCSM name/creds

vCenter creds

Hypervisor creds

Select cluster to expand

Select server

Provide VLAN configs

Use ; for multiple VLANs

Enable iSCSI/FC if needed

For mgt VLAN and Data VLAN

Provide IP for esxi host

Provide IP for storage controller

NetApp A800 Specs

NetApp introduced its flagship model, the A800, along with ONTAP 9.4, which includes a number of enhancements. Some details are below.

48 internal NVMe drives – 4U

1.1 Million IOPS – 200-500 microsecond latency

25 GB/s sequential reads

Scales to 12 nodes (SAN) or 24 nodes (NAS)

12PB or 6.6M IOPS for SAN (effective)

24PB or 11.4M IOPS for NAS (effective)

Nearly 5:1 efficiency using dedupe, compression, compaction, and cloning

Support for NVMe-oF: connect your existing FC-attached applications to faster drives, with up to 2x the performance of traditional FC-connected devices.

FabricPool 2.0 allows for tiering cold data to Amazon, Azure or any object store such as StorageGRID. You can now move active filesystem data to the cloud and bring it back when needed.

 

SolidFire Install Notes and Guide

I had the pleasure of installing a 4-node SolidFire 19210 recently. While the system was being used strictly for Fibre Channel block devices to the hosts, there was a little bit of networking involved. There are diagrams below that detail most of what I have explained, if you want a TL;DR version.

The system came with 6 Dell 1U nodes. 4 of the nodes were full of disks. The other two had no disks but extra IO cards in the back (FC and 10Gb).

First step was to image the nodes to Return them To Factory Image (RTFI).  I used version 10.1 and downloaded the “rtfi” ISO image at the link below.

https://mysupport.netapp.com/NOW/download/software/sf_elementos/10.1/download.shtml

With the ISO you can create a bootable USB key:

  1. Use a Windows based computer and format a USB key in FAT 32 default block size. (a full format works better than a quick format)
  2. Download UNetbootin from http://unetbootin.sourceforge.net/
  3. Install the program on your windows system
  4. Insert USB key before starting UNetBootin
  5. Start UNetBootin
  6. After the program opens select the following options:
  7. Select your downloaded image […]
  8. Select Type : USB drive
  9. Drive: Select your USB key
  10. Press okay and wait for key creation to complete (Can take 4 to 5 minutes)
  11. Ensure process completes then close UNetBootin once done

Simply place the USB key in the back of the server and reboot. On boot, it will ask a couple of questions, start installing Element OS, and then shut itself down. It will then boot up into what they call the Terminal User Interface (TUI). It comes up in DHCP mode, and if you have DHCP it will pick up an IP address. You will be able to see which IP it obtained in the TUI and can use your web browser to connect to that IP. Alternatively, using the TUI, you can set a static IP address for the 1GbBond interface. I connected to the management interface on the nodes once the IPs were set to continue my configs, although you can keep using the TUI. To connect to the management interface, go to https://IP_Address:442. For now, this is called the Node UI.

Using the Node UI, set the 10Gb IP information and hostname. The 10Gb network should be a private, non-routable network with jumbo frames enabled end to end. After setting the IPs, I rebooted the servers. I then logged back in, set the cluster name on each node, and rebooted again. They came back up in a pending state. To create the cluster, I went to one of the nodes’ IP addresses and it brought up the cluster creation wizard (this is NOT on port 442; it is on port 80). Using the wizard, I created the cluster. You will assign an “mVIP” and an “sVIP”. These are clustered IP addresses for the node IPs: mVIP for the 1GbBonds, and sVIP for the 10GbBond. The management interface is at the mVIP and storage traffic runs over the sVIP.

Once the cluster was created, we downloaded the mNode OVA. This is a VM that sends telemetry data to Active IQ, runs SNMP, handles logging, and performs a couple of other functions. We were using VMware, so we used the image with “VCP” in the filename since it includes the vCenter plugin.

https://mysupport.netapp.com/NOW/download/software/sf_elementos/10.1/download.shtml

Using this link and the link below, we were able to import the mNode into vCenter quickly. Once it had an IP, we used that IP to connect to port 9443 in a web browser and register the plugin to vCenter with credentials.

https://kb.netapp.com/app/answers/answer_view/a_id/1030112

I then connected to the mVIP and under Clusters–>FC Ports, I retrieved the WWPNs for zoning purposes.

Your SAN should be ready for the most part. You will need to create some logins at the mVIP along with volumes and Volume Access Groups. Once you zone in your hosts using single-initiator zoning, you should be able to scan the disk bus on your hosts and see your LUNs!

As mentioned there were some network connections to deal with. My notes are below.

I am not going to cover the 1GbE connections. Those are really straightforward: one to each switch, make sure the ports are on the right VLAN, and the host does NIC teaming. Done.

iDRAC is even easier. Nothing special. Just run a cable from the iDRAC port to your OOB switch and you are done with that too. IP address is set through the BIOS.

FC Gateways (F-nodes)

These nodes have 2x 10Gb ports onboard and 2x 10Gb ports in Slot 1 along with 2x Dual Port FC cards.

For the FC gateways (F-nodes), there are 4x 10Gb ports on each node: 2 onboard like the S-nodes and 2 in a card in Slot 1.

From F-node1, we sent port0 (onboard) and port0 (card in slot 1) to switch0/port1 and port2. From F-node2, we sent port0 (onboard) and port0 (card in slot 1) to switch1/port1 and port2. We then created an LACP bundle with switch0/port1 and port2 and switch1/port1 and port2. One big 4 port LACP bundle.

Then, back to F-node1, we sent port1 (onboard) and port1 (card in slot 1) to switch0/port3 and port4. From F-node2 we sent port1 (onboard) and port1 (card in slot 1) to switch1/port3 and port4. We then created another LACP bundle with switch0/port3 and port4 and switch1/port3 and port4. Another big 4 port LACP bundle.

We set private network IPs on the 10Gb Bond interfaces on all nodes (S and F alike). Ensure jumbo frames are enabled throughout the network or you may receive errors when trying to create the cluster (xDBOperation timeouts). Alternatively, you can set the MTU on the 10GBond interfaces down to 1500 and test cluster creation to verify that jumbo frames are causing the issue, but this is not recommended for a production config; it should only be used to rule out jumbo frames config issues.
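
A quick way to rule jumbo frames in or out from any Linux host on the same 10Gb segment (nothing SolidFire-specific here) is a do-not-fragment ping sized just under 9000 bytes; if this fails while a default-size ping works, something in the path is not passing jumbo frames:

ping -M do -s 8972 <storage_node_10g_ip>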

As mentioned, each F-node has 2x Dual Port FC cards. From Node 1, Card0/Port0 goes to FC switch0/Port0. Card1/Port0 goes to FC switch0/Port1. Card0/Port1 goes to FC switch1/Port0 and Card1/Port1 goes to FC switch1/Port1.

Repeat this pattern for Node 2 but use ports 2 and 3 on the FC switches.

Storage Nodes (S-Nodes)

On all S-nodes, the config is pretty simple. Onboard 10Gb port0 goes to switch0 and onboard 10Gb port1 goes to switch1. Create an LACP port across the two ports for each node.

If the environment has two trunked switches, where the switches appear as one, you must use LACP (802.3ad) bonding. The two switches must appear as one switch, either by being different switch blades that share a back-plane, or have software installed to make it appear as a “stacked” switch. The two ports on either switch must be in a LACP trunk to allow the failover from one port to the next to happen successfully. If you want to use LACP bonding, you must ensure that the switch ports between both switches allow for trunking at the specific port level.

 

There may be other ways to do this. If you have comments or suggestions, please leave them below. If I left something out, let me know.

Some of this was covered nicely in the Setup Guide and the FC Setup Guide but not at length or detail enough. I had more questions and had to leverage good network connections to get answers.

Thanks for reading. Cheers.

Mining Ethereum (locally)

I set up my own Ethereum-based blockchain this morning and mined some ether. I created two accounts (wallets), passed Wei (ether “pennies”) back and forth between them, and performed mining in between to verify the transactions (aka move the money).

  1. Download geth
  2. Unzip it
  3. Initialize geth using this genesis.json file (strip the .zip extension off the filename)

geth --datadir=./datadir init genesis.json
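
The genesis.json attached to the original post isn’t reproduced here, but for a private dev chain on a geth of that era, a minimal file looks roughly like the one below (illustrative only; the chainId, difficulty, and gasLimit are arbitrary picks, and the post’s own file may differ):

# write an illustrative genesis file; substitute the one from the post if you have it
cat > genesis.json <<'EOF'
{
  "config": { "chainId": 15, "homesteadBlock": 0, "eip155Block": 0, "eip158Block": 0 },
  "difficulty": "0x400",
  "gasLimit": "0x8000000",
  "alloc": {}
}
EOF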

4. Create an account:

geth --datadir=./datadir account new

5. Start the JavaScript console:

geth --datadir=./datadir console

Mine some ether!

miner.start(1)

INFO [01-23|11:45:55] Updated mining threads                   threads=1

INFO [01-23|11:45:55] Transaction pool price threshold updated price=18000000000

null

> INFO [01-23|11:45:55] Starting mining operation

INFO [01-23|11:45:55] Commit new mining work                   number=1 txs=0 uncles=0 elapsed=88.981µs

INFO [01-23|11:46:04] Generating DAG in progress               epoch=0 percentage=0 elapsed=7.167s

INFO [01-23|11:46:11] Generating DAG in progress               epoch=0 percentage=1 elapsed=14.543s

INFO [01-23|11:46:18] Generating DAG in progress               epoch=0 percentage=2 elapsed=21.008s

INFO [01-23|11:46:24] Generating DAG in progress               epoch=0 percentage=3 elapsed=27.073s

INFO [01-23|11:55:54] Generated ethash verification cache      epoch=0 elapsed=9m57.721s

INFO [01-23|11:56:00] Successfully sealed new block            number=1 hash=9c9452…2b3045

INFO [01-23|11:58:51] 🔨 mined potential block                  number=18 hash=c0b3d4…823d41

INFO [01-23|11:58:51] Commit new mining work                   number=19 txs=0 uncles=0 elapsed=205.898µs

INFO [01-23|11:58:52] Successfully sealed new block            number=19 hash=8940d2…e026a6

INFO [01-23|11:58:52] 🔗 block reached canonical chain          number=14 hash=e08f17…b69966

INFO [01-23|11:58:52] 🔨 mined potential block                  number=19 hash=8940d2…e026a6

INFO [01-23|11:58:52] Commit new mining work                   number=20 txs=0 uncles=0 elapsed=116.913µs

miner.stop()

You should have some ether in your wallet now!

Check your balance:

eth.accounts

copy the account address string it returns

eth.getBalance("paste_address_here")

or to see balance in ether:

web3.fromWei(eth.getBalance(eth.accounts[0]), "ether")

240

Create a second wallet:

personal.newAccount("abc123")

Make sure the new wallet exists:

> eth.accounts
["0x28a3a7967d16e51b3a38c7ae12c9e036472e07ad", "0x2765108503bbda744203d8c9d3d744c355f2453d"]

Move 100 from wallet “0” to wallet “1”:

Unlock the sending wallet first:

personal.unlockAccount(eth.accounts[0], "password_for_wallet")

Send 100 ether:

eth.sendTransaction({from: eth.accounts[0], to: eth.accounts[1], value: web3.toWei(100, "ether")})

Perform some more mining to move the money (i.e., verify the transaction):

> miner.start(1)

INFO [01-23|12:17:30] Updated mining threads                   threads=1

INFO [01-23|12:17:30] Transaction pool price threshold updated price=18000000000

null

INFO [01-23|12:17:30] Starting mining operation

INFO [01-23|12:17:30] Commit new mining work                   number=49 txs=1 uncles=0 elapsed=269.401µs

INFO [01-23|12:17:37] Successfully sealed new block            number=49 hash=cdd60d…09413f

INFO [01-23|12:17:37] 🔗 block reached canonical chain          number=44 hash=4f6fd6…898d8a

INFO [01-23|12:17:37] 🔨 mined potential block                  number=49 hash=cdd60d…09413f

INFO [01-23|12:17:37] Commit new mining work                   number=50 txs=0 uncles=0 elapsed=135.258µs

> miner.stop()

Check balances:

> web3.fromWei(eth.getBalance(eth.accounts[0]), "ether")

155 (note the 15 additional ether earned from the mining work needed to verify the transaction!)

> web3.fromWei(eth.getBalance(eth.accounts[1]), "ether")

100

Voila!

Transaction details:

> eth.getTransaction("0xef3f9391e569ff205768d3e27bf7cf73308c5e8de0859b4e8fe096c26ba53")

{

blockHash: “0xcdd60d0b3722adc0fba3c4956a02f8e6120b717371d335f6ae6849a09413f”,

blockNumber: 49,

from: “0x28a3a7967d16e51b3a38c7ae12c9e036472e07ad”,

gas: 90000,

gasPrice: 18000000000,

hash: “0xef3f9391e569ff205768c68d3e27bf7cf73308c5e8de0859b4e8fe096c26ba53”,

input: “0x”,

nonce: 0,

r: “0xf5981cbee6da86dc162787a8814d8dc30a874f3777228cdc1d03a20de10776b2”,

s: “0x7e3606f0e791493e166212dbec20161aa0aa6cec594bfb82214ed91cb000ff”,

to: “0x2765108503bbda744203d9d3d744c355f2453d”,

transactionIndex: 0,

v: “0xed”,

value: 100000000000000000000

}

 

Block details:

> eth.getBlock("0xcdd60d0b22adc0fba3c4956a02f8e64ed120b717371d335f6ae6849a09413f")

{

difficulty: 131072,

extraData: “0xd7830107038467657487676f312e392e32856c696e7578”,

gasLimit: 329424,

gasUsed: 21000,

hash: “0xcdd60d0b3722adc0fba3c4956a02f8e64ed120b717371d335f6ae6849a09413f”,

logsBloom: “0x0000000000000000000000000000000000000000000000000000000000000000                                                                                                                                                             00000000000000000000000000000000000000000000000000000000000000000000000000000000                                                                                                                                                             00000000000000000000000000000000000000000000000000000000000000000000000000000000                                                                                                                                                             00000000000000000000000000000000000000000000000000000000000000000000000000000000                                                                                                                                                             00000000000000000000000000000000000000000000000000000000000000000000000000000000                                                                                                                                                             00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000”,

miner: “0x28a3a79d16e51b3a38c7ae12c9e036472e07ad”,

mixHash: “0x52f1d43a0d0f20de558ae892bd0be8d4f39231f73569a03a70d9274be9f475”,

nonce: “0x736267a32012d9”,

number: 49,

parentHash: “0x823fe9ab5c194169a5321193bbe61357cf56f958ed77f5d23ad9018b0b602b”,

receiptsRoot: “0xb91d6e796beda886fc0ea068b01d824a2a57f9ab48457046c3410feeaa0198”,

sha3Uncles: “0x1dcc4de8dec75d7aab85b567ccd41ad312451b948a7413f0a142fd40d49347”,

size: 651,

stateRoot: “0xfdc68f56dfd304be1405ccecc0f25eb70b5db2bf78f173e12022dbbd39bf20f9”,

timestamp: 151672750,

totalDifficulty: 644896,

transactions: [“0xef3f9391e569ff205768c68d3e27bf7cf7335e8de0859b4e8fe096c26ba53”],

transactionsRoot: “0x356c5cacb2a696f43a194473badaa8fa5ec1c2fef6e0e1b9cc341e569ec1”,

uncles: []

}

This is step one for creating my own DAPP.  Hopefully this helps you.

If you have any questions, comments, or suggestions, let me know.

Thanks!

“NVMe”

What’s in a name? A lot.

The term NVMe is getting used a lot, and I’m not sure everyone understands the difference between SCSI, ATA, and NVMe, or M.2, U.2, PCIe, or whatever else as it relates to NVMe. I am not sure I am completely savvy with all of it either, honestly, so please leave your comments below. My understanding of all this…NAND, Flash, SSD, PCIe, M.2, NVMe-oF…“stuff”…is documented here. I won’t get into the details of the SSDs themselves, such as MLC, TLC, eTLC, etc., in this post.

I remember the day I learned about Fusion-IO. Chips on a board that plugs into a PCI slot and gives you disk space was crazy enough; the speeds you could achieve were even crazier. This helped me resolve an internal performance issue overnight. We stuck these puppies in our HP BL460-based build servers and BOOM! Code was getting compiled all the time, simultaneously, and quickly. Days down to hours. It was performance-issue Nirvana. I wanted one of these for my PC at home! But they were extremely expensive, far too much for any home user to consider.

A few years later, I took the plunge and bought a fancy SSD drive to replace my old 7200 RPM SATA drive. I bought this fancy new drive and plugged it right back into the same connector that I pulled my slow SATA drive off of. This seemed crazy! This fancy non-spinning super fast drive and I plug it into the same old connector? Same bus? 6 Gb/sec? This couldn’t be!

A couple of years after that, I replaced my motherboard, video card, etc., and noticed this little connector called M.2. “What goes in this M.2 slot?”, I thought.

M.2 (and U.2) slots are just connectors; they define the physical form factor. Depending on the way they are “keyed” (how far the little notch sits from one side, or how many “pins” away), they can connect drives or other devices such as Wi-Fi, Bluetooth, or cellular cards. SATA-based M.2 SSDs are keyed differently than NVMe drives: NVMe requires PCIe x4, while SATA only requires PCIe x2, hence the different notch locations on the drives themselves. An NVMe drive won’t necessarily work in every M.2 connector, and vice-versa. Although a drive may *fit*, the motherboard and BIOS need to support PCIe M.2 SSDs and/or SATA M.2 SSDs specifically.

NVMe is a command set, similar in concept to the SCSI commands we are all used to, or to ATA. A serial-attached SCSI (SAS) device generally has a single queue capable of holding 256 commands. By comparison, NVMe supports up to roughly 64,000 queues, each capable of 65,535 commands.

NVMe takes advantage of the high-speed access that SSDs provide in a way that SAS and SATA can’t.
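
If you want to poke at this yourself on a Linux box, the nvme-cli package will show what the controller reports; a small sketch, assuming the tool is installed and your device shows up as /dev/nvme0:

# list the NVMe controllers and namespaces the kernel sees
sudo nvme list
# dump the controller's identify data (model, firmware, and reported capabilities)
sudo nvme id-ctrl /dev/nvme0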

There is also the new world of Optane technology, which uses Intel’s 3D XPoint memory media along with RST to provide an acceleration solution for your HDD, SSHD, or SATA SSD. Think of it as a high-speed cache in front of your spinning disk for frequently and recently used data. Install the drive and boot Windows; it will take a bit. Boot it again, and then again, and you will see it boot faster each time. That speed-up is the Optane drive’s cache warming up.

These drives use the B and M keys of the M.2 form factor, and the slot must be PCIe M.2, not SATA M.2. You must also have a 7th Gen Intel Core processor-based system, and Windows 10 is required; Linux is not supported as of this write-up. One major drawback of using this solution in your desktop is the inability to decouple your Optane drive from your “capacity” hard drive: remove either one and you’re hosed. No boot for you. While the Optane drive can be used as a standalone NVMe SSD, Intel does not support that as of this writing.

In my next post, I will cover the implications of these technologies for the enterprise. At the consumer level, this is all “local” access, and getting NVMe commands down to the disk is a little less complicated, for reasons I will also explain in that post.

Hopefully this clears up your questions about all the new tech. If you have any questions or comments, feel free to leave them below. Thanks for reading and happy computing!

 

 

Cohesity

Cohesity is a web-scale, appliance-based, secondary storage platform started by Mohit Aron, formerly of Nutanix and Google. Each 2U appliance is Intel x86-based and consists of four 16-core nodes (dual 8-core). Each node contains three 8TB spinning disks, one 1.6TB PCIe MLC flash drive, and 2x 10Gb ports.

Much like the Rubrik solution, this also runs backup software natively on the platform thus eliminating the need for costly licenses for Veeam or Commvault. The need for server hardware is also eliminated.

Also similar to Rubrik, this solution leverages policies to take backups instead of the traditional method of specifying the data set, specifying a target, setting up a schedule, and saying, “go!”. Using these policies based on your RTO/RPO, one can use the built-in cloud gateway functionality to ship the coldest (idle) data off to the cloud.

Unlike most backup platforms, the data that is backed up is immediately usable; it isn’t locked up in a backup job anymore. Leveraging the journaled OASIS file system, Cohesity can present NFS, SMB, and S3 protocols natively. Part of the file system is something called SnapTree, which allows for an effectively unlimited number of snapshots with none of the performance penalties traditional arrays incur.

The solution supports variable length global deduplication (even to the cloud!), encryption, replication to AWS cloud natively (CloudReplicate), compression, and even runs MapReduce natively. One benefit of this solution is the ability to index all the data coming in. Similar to what Object Storage solutions provide, you get fully searchable data. Pretty neat stuff and a real game changer.

In the enterprise, most people have primary storage for the databases behind their applications, something like Pure Storage. These enterprises also have “other” data which needs a home; Cohesity (and Gartner) refer to this as dark data, and Cohesity is looking to be the platform for it. When I was a manager at my last gig, we had a basic flash solution for our databases and then an entire FlexPod environment for “everything else”. We also had an entire environment for backups, more for compliance/contractual reasons than anything. Those environments demanded a lot of hardware and consumed a lot of rack space, power, and cooling, not to mention the various management interfaces I needed to “run” things. Looking back, this would have solved the problem of our backup environment and eliminated the need for the “utility” filer. Any storage admin knows the deal: “We can’t delete that 4 TB of data”. There are usually 4-5 instances (volumes) like that, at least. I could have ripped out about 30 RU worth of gear and replaced it with 4-8 RU using Cohesity. Strategically, I would have immediately been able to leverage the cloud for cold data instead of buying yet another appliance for that use case (think AltaVault). Beyond that, my RTO would be reduced to minutes vs hours. That’s peace of mind and a HUGE win in the efficiency column.

If you would like to know more about this solution or have any questions on the data provided, please email me or leave a comment below.