Wednesday, September 9, 2015

How to get your Cisco UCS server serial number via esxcli

As much as I love Cisco UCSM, running the Java client on a Mac can be a hassle. So it's great to be able to just ssh into the system and get the info you need.

In this case, the info I need is the serial number of the server, a C220 M4. It's running in managed mode, so I can ssh into the UCSM and run a command to get the serial number:

ucsm-A /server # show detail 

    ID: 1
    User Label:
    Overall Status: Ok
    Oper Qualifier: N/A
    Service Profile: Cseries_vSAN_1
    Association: Associated
    Availability: Unavailable
    Discovery: Complete
    Conn Path: A,B
    Conn Status: A,B
    Managing Instance: A
    Admin Power: Policy
    Oper Power: On
    Admin State: In Service
    Product Name: Cisco UCS C220 M4S
    PID: UCSC-C220-M4S
    VID: 0
    Vendor: Cisco Systems Inc
    Serial (SN): FCH1838V0FW

    HW Revision: 0

But you can also get it with VMware's famous esxcli command.

[root@esx1:~] esxcli hardware platform get 
Platform Information
   UUID: 0x48 0xfa 0x8a 0x6a 0x87 0xc3 0xe4 0x11 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8f 
   Product Name: UCSC-C220-M4S
   Vendor Name: Cisco Systems Inc
   Serial Number: FCH1838V0FW
   IPMI Supported: true
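If you need the serial number in a script, the key/value layout of that output is easy to parse. Here's a minimal Python sketch; the sample text is the output shown above, pasted in as a stand-in for capturing a live `esxcli hardware platform get` run:

```python
# Parse the "   Key: Value" lines that `esxcli hardware platform get` prints.
# The sample below is pasted in as a stand-in for running the command on a host.
sample = """Platform Information
   UUID: 0x48 0xfa 0x8a 0x6a 0x87 0xc3 0xe4 0x11 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8f
   Product Name: UCSC-C220-M4S
   Vendor Name: Cisco Systems Inc
   Serial Number: FCH1838V0FW
   IPMI Supported: true
"""

def platform_info(text):
    """Return the indented 'Key: Value' pairs as a dict."""
    info = {}
    for line in text.splitlines():
        # Only the indented detail lines hold key/value pairs.
        if line.startswith(" ") and ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

info = platform_info(sample)
print(info["Serial Number"])  # -> FCH1838V0FW
```

In a real script you'd feed in the output of the command itself (e.g. via `subprocess.run` over ssh) rather than a pasted sample.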

Wednesday, August 19, 2015

Networking: the backbone of VMware Virtual SAN STO4474

Just to let everyone know, my friend Bhumik Patel (@bhumikp) from VMware and I are doing a session Thursday afternoon on networking with vSAN. Details below:

The popularity of Virtual SAN is growing daily. Server admins are finally free to aggregate storage in their servers to create a shared storage system that scales with their compute needs. The underlying key to making it all work is networking. All Virtual SAN data flows through it, and correct selection and configuration of networking components will mean the difference between disruptive success or dramatic failure.
This session will give deep insight into the dos and don'ts of Virtual SAN networking. Best practices for physical and virtual switch configuration and performance testing will be discussed. Virtual SAN 5.5 and 6.0 will be covered, along with the networking differences between them. Methods of troubleshooting network issues will also be covered. For those configuring a Virtual SAN network for the first time, whether for labs or at enterprise scale, this session is a must-see.
Additional Information
Breakout Session
1 hour
Software-Defined Data Center
Software-Defined Storage and Business Continuity
Advanced Technical
Virtual SAN
IT - Network, IT – Operations, IT – Server Storage

Wednesday, August 12, 2015

Two primary strengths of Datrium DVX

   Having something of a background in sales as a Systems Engineer for VMware, and before that at Digital Equipment Corp (DEC, as it was known to its friends), when I evaluate a new product like Datrium DVX, I try to find the primary strengths that make it better than its competition. Sometimes there is nothing to find: many products are just also-rans that don't really compete at all. Those companies want to compete on a "me, too" basis.
   Datrium is _not_ one of those also-rans. They have devised a system that provides real value over the competition, in the following ways:

Feature #1: Processing where processing is needed. 

   Datrium DVX uses the local server to do its own storage processing, distributing the work away from centralized storage. Unlike a SAN device or a centralized NFS server, the deduplication, compression, etc. all happen at the server level. That way, VMs on one server don't have to wait while VMs on another server have their storage processing done.

   It only makes sense, right? I mean, imagine if every burger at McDonald's had to have its sales transaction completed at a centralized server at McHQ. Processing "at the edge" lets you conduct your transaction at the local McDonald's and get on with your lunch.

  This is exactly what people like Cisco's new CEO are talking about: processing at the edge, where it's needed.

Feature #2: No need for specialists

   Since the storage doesn't present LUNs or other storage abstractions that need to be (micro)managed, you don't need a storage specialist. As one of my friends, a storage specialist, recently said of the DVX system, "'Proprietary protocol' scares me as an admin." He can't manage it, so he can't do anything with it. He may or may not realize it, but he doesn't need to do anything with it; no need for specialists means no time spent managing storage. Storage without the storage management time.

It just works. Like all the best solutions do.

Tuesday, July 28, 2015

@DatriumStorage is out of stealth! Removing the need for those pesky storage guys.

Yet another storage system met the world today. Datrium just came out of stealth mode to help simplify storage management with a cool new design.

and no more calling the storage guys for stuff...

Why it's cool...

Datrium DVX is cool in two important ways. 

1) It provides centralized storage for your ESXi servers with features like dedupe, compression, local caching via server-side flash, and vVol readiness...all the stuff you expect in today's modern storage.
2) It provides this in an easy-to-install form factor that is entirely manageable by the VM admin!

You don't need to know the meaning of the words "LUN" (not a word really, an acronym), "target", or "iSCSI". In other words, no more calling the storage guys for stuff! Manage the storage just by creating VMs!

What it has...

The solution has a 2U box (called a NetShelf) with 10Gb networking and lots of disk (29TB usable storage). Connect it to your ESXi servers via the network. The protocol is proprietary, so you don't need to know how to connect to it.

The solution also includes a VIB that goes on your ESXi server. The VIB provides an NFS server on the ESXi host itself, so your VMs are stored on that local NFS server, which then connects over the 10Gb network to the NetShelf.

The Datrium software (called DiESL) runs directly on the ESXi server, not in a VM like many other storage systems. DiESL creates the NFS server that the VMs are placed onto, takes care of all the dedupe, RAID, server caching, etc., and transmits the data back to the NetShelf for permanent storage.

A great pairing with UCS Mini...

The Cisco UCS Mini is an 8-blade chassis of compute managed by two Fabric Interconnects inside the backplane. With one of these connected to a NetShelf, we would have one of the most robust modular pods around, with unparalleled ease of use. In the hyper-converged space, this solution would be dominant, if done correctly. I sure do hope these guys start shipping soon!

Tuesday, June 23, 2015

How to check scsi controller queue depth for Virtual SAN (vSAN)

I frequently get asked how to determine the queue depth of a SCSI controller, mainly in relation to VMware Virtual SAN questions. Virtual SAN relies on a queue depth of at least 256, and a number of controllers don't supply that.

If you want to know what your controller's queue depth is, here's the info from the VMware KB article 1027901:

To identify the storage adapter queue depth:
  1. Run the esxtop command in the service console of the ESX host or the ESXi shell (Tech Support mode). For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910) or Tech Support Mode for Emergency Support (1003677).
  2. Press d.
  3. Press f and select Queue Stats.
  4. The value listed under AQLEN is the queue depth of the storage adapter. This is the maximum number of ESX VMKernel active commands that the adapter driver is configured to support.
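To put that AQLEN reading to use, here's a minimal Python sketch that applies the 256 minimum mentioned above. The adapter names and AQLEN values below are made up for illustration; read the real values off esxtop's Queue Stats view on your own host:

```python
# Check whether an adapter's AQLEN (from esxtop's Queue Stats view) meets
# the vSAN guideline of a queue depth of at least 256.
VSAN_MIN_QUEUE_DEPTH = 256

def vsan_queue_ok(aqlen):
    """True if the adapter queue depth is adequate for Virtual SAN."""
    return aqlen >= VSAN_MIN_QUEUE_DEPTH

# Illustrative values only -- substitute the AQLEN figures from your host.
adapters = {"vmhba0": 1024, "vmhba1": 128}
for name, aqlen in adapters.items():
    verdict = "OK" if vsan_queue_ok(aqlen) else "too shallow for vSAN"
    print(f"{name}: AQLEN={aqlen} -> {verdict}")
```

An adapter like the 128-deep one above is exactly the kind of controller that won't cut it for Virtual SAN.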

Tuesday, April 21, 2015

vmware-csd.exe stopped working - vCenter 6 Server Appliance install fails

I discovered while attempting to follow VMware's easy instructions to install the VCSA for vCenter 6 that problems occur if you use Windows (what a shock...).

If you are trying to install using vcsa-setup.html, and you keep getting errors that vmware-csd.exe has stopped working, it's not you, it's Windows.

When the Client Integration binary gets installed, one of its critical folders doesn't get write permission. The fix for that is here:

Troubleshooting the VMware vSphere 6.0 Client Integration Plugin when it fails to function (2112086)