Sunday, June 12, 2016

Using AWS EC2 for free

    Some friends of mine in our Operating Systems class at San Jose State find themselves in need of a Linux system. Some of them have Macs, some PCs, both of which support and allow virtualization. But the easiest way is probably to use Amazon Web Services' Elastic Compute Cloud (EC2) service. This allows creation of a Linux system in the cloud with all the software they need, and if they create the right instance type, it can be absolutely free. Check the AWS Free Tier page for the details of the offer as of this writing.
Here are the steps I used to create one, and you can do this, too.

1) Sign up for AWS.

     Go to aws.amazon.com and click the "Create a free account" button.
    You will need a credit card of some kind, but it won't be charged unless you exceed limits, and you should have no problem staying below them. 
  Be sure to select "Basic" as your support tier. More support costs more money. 

2) Take the Linux Virtual Machine tutorial, if desired.

3) Create a new Linux VM

NOTE: Be sure to stop or terminate your instance when done, or the billing may accrue (you can wind up paying for something that should be free!).

      Start by signing into your new account on the AWS Management Console. 

      Then click Launch Instance on the AWS Management Console. 

      Choose a Free tier compatible instance. I chose Amazon Linux. 

      Set your security group so that you can only log in from your workstation. If you don't know your public-facing IP address, type "my IP address" into a Google search box, and it will tell you.
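
If you'd rather skip Google, a quick command-line alternative works too. checkip.amazonaws.com is Amazon's own what's-my-IP service (this assumes curl is installed on your workstation):

curl https://checkip.amazonaws.com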

4) Connect to your new instance. 

NOTE: Be sure to stop or terminate your instance when done, or the billing may accrue (you can wind up paying for something that should be free!).

Once your new instance is running, create some SSH keys and connect! 
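
On a Mac, the connection ends up looking roughly like this. The key file name and host name below are placeholders; use the .pem you downloaded and the public DNS name or IP shown in the EC2 console. ec2-user is the default login for Amazon Linux:

chmod 400 ~/Downloads/my-ec2-key.pem
ssh -i ~/Downloads/my-ec2-key.pem ec2-user@ec2-54-0-0-1.compute-1.amazonaws.com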

5) Install compilation tools

   For the Operating Systems class, you will need compilation tools. You can install them by entering the following command:
 sudo yum groupinstall "Development Tools"
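
A quick sanity check that the toolchain is actually there (hello.c is just a throwaway test file):

gcc --version
echo 'int main(void){return 0;}' > hello.c
gcc hello.c -o hello && ./hello && echo "compiler works"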

That should get you going. Post comments if you need more help, or if I have gotten something wrong.

NOTE: Be sure to stop or terminate your instance when done, or the billing may accrue (you can wind up paying for something that should be free!).

Wednesday, September 9, 2015

How to get your Cisco UCS server serial number via esxcli

As much as I love Cisco UCSM, running the Java client on a Mac can be a hassle. So it's great to just be able to ssh into the system and get the info you need.

In this case, the info I need is the serial number of the server, a C220 M4. It is running in UCSM-managed mode, so I can ssh into the UCSM and run a command to get the serial number thusly:

ucsm-A /server # show detail 

    ID: 1
    User Label:
    Overall Status: Ok
    Oper Qualifier: N/A
    Service Profile: Cseries_vSAN_1
    Association: Associated
    Availability: Unavailable
    Discovery: Complete
    Conn Path: A,B
    Conn Status: A,B
    Managing Instance: A
    Admin Power: Policy
    Oper Power: On
    Admin State: In Service
    Product Name: Cisco UCS C220 M4S
    PID: UCSC-C220-M4S
    VID: 0
    Vendor: Cisco Systems Inc
    Serial (SN): FCH1838V0FW

    HW Revision: 0

BUT, you can also do it with VMware's famous esxcli command.

[root@esx1:~] esxcli hardware platform get 
Platform Information
   UUID: 0x48 0xfa 0x8a 0x6a 0x87 0xc3 0xe4 0x11 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8f 
   Product Name: UCSC-C220-M4S
   Vendor Name: Cisco Systems Inc
   Serial Number: FCH1838V0FW
   IPMI Supported: true
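
If all you want is the serial number (for a script, say), just filter the same output:

[root@esx1:~] esxcli hardware platform get | grep "Serial Number"
   Serial Number: FCH1838V0FW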

Wednesday, August 19, 2015

Networking: the backbone of VMware Virtual SAN STO4474

Just to let everyone know, my friend Bhumik Patel (@bhumikp) from VMware and I are doing a session Thursday afternoon on networking with vSAN. Details below:

The popularity of Virtual SAN is growing daily. Server admins are finally free to aggregate storage in their servers to create a shared storage system that scales with their compute needs. The underlying key to making it all work is networking. All Virtual SAN data flows through it, and correct selection and configuration of networking components will mean the difference between disruptive success and dramatic failure.
This session will give deep insight into the do's and don'ts of Virtual SAN networking. Best practices for physical and virtual switch configuration and performance testing will be discussed. Virtual SAN 5.5 and 6.0 will be covered, and the networking differences between them discussed. Methods of troubleshooting network issues will also be covered. For those configuring a Virtual SAN network for the first time, for labs or enterprise scale, this session is a must-see.
Additional Information
Breakout Session
1 hour
Software-Defined Data Center
Software-Defined Storage and Business Continuity
Advanced Technical
Virtual SAN
IT - Network, IT – Operations, IT – Server Storage

Wednesday, August 12, 2015

Two primary strengths of Datrium DVX

   Having something of a background in sales as a Systems Engineer for VMware, and prior to that at Digital Equipment Corp (DEC, as it was known to its friends), when I evaluate a new product like Datrium DVX, I try to find the primary strengths that make it better than its competition. Sometimes, with some products, there is nothing to find: many products are just also-rans that don't compete at all. The companies want to compete on a "me, too" basis.
   Datrium is _not_ one of those also-rans. They have devised a system that provides real value over the competition, in the following ways:

Feature #1: Processing where processing is needed. 

   Datrium DVX uses the local server to do its own storage processing, distributing the processing away from the centralized storage. Unlike a SAN device or a centralized NFS server, the de-duplication, compression, etc. all go on at the server level. That way, VMs on one server don't need to wait while VMs on another server have their storage processing done.

   It only makes sense, right? I mean, imagine if every burger at McDonald's had to have its sales transaction completed at a centralized server at McHQ. Processing "at the edge" lets you conduct your transaction at the local McDonald's and go on with your lunch.

  This is exactly what people like Cisco's new CEO are talking about: processing at the edge, where it's needed.

Feature #2: No need for specialists

   Since the storage doesn't present LUNs or other storage abstractions that need to be (micro)managed, you don't need a storage specialist. As one of my friends, a storage specialist, recently said of the DVX system, "'Proprietary protocol' scares me as an admin." He can't manage it, so he can't do anything with it. He may or may not realize it, but he doesn't need to do anything with it; no need for specialists means no time spent managing storage. Storage without the storage management time.

It just works. Like all the best solutions do.

Tuesday, July 28, 2015

@DatriumStorage is out of stealth! Removing the need for those pesky storage guys.

Yet another storage system met the world today. Datrium just came out of stealth mode to help simplify storage management with a cool new design.

and no more calling the storage guys for stuff...

Why it's cool...

Datrium DVX is cool in two important ways. 

1) It provides centralized storage for your ESXi server with features like DeDupe, compression, local caching via server side flash, vVol ready...all the stuff you expect in today's modern storage. 
2) It provides this in an easy to install form factor, that is entirely manageable by the VM admin! 
You don't need to know the meaning of the words "LUN" (not a word really, an acronym) or "target" or "iSCSI". In other words, no more calling the storage guys for stuff! Manage the storage just by creating VMs! 

What it has...

The solution has a 2U box (called a NetShelf) with 10Gb networking and lots of disk (29 TB usable storage). Connect it to your ESXi servers via the network. The protocol is proprietary, so you don't need to know how to connect to it.

The solution also has a VIB that goes on your ESXi server. The VIB provides an NFS server on your ESXi server. This means your VMs are stored on the NFS server, which then connects over the 10Gb network to the NetShelf.

On the ESXi servers, Datrium software (called DiESL) runs on the ESXi server itself, not in a VM like many other storage systems. DiESL creates an NFS server that the VMs are then placed onto. The DiESL software takes care of all the DeDupe, RAID, server caching, etc., and transmits the data back to the NetShelf for permanent storage.

A great pairing with UCS Mini...

The Cisco UCS Mini is an 8-blade chassis of compute managed by two Fabric Interconnects inside the backplane. With one of these connected to a NetShelf, we would have one of the most robust modular pods around, with unparalleled ease of use. In the hyper-converged space, this solution would be dominant, if done correctly. I sure do hope these guys start shipping soon!

Tuesday, June 23, 2015

How to check SCSI controller queue depth for Virtual SAN (vSAN)

I frequently get asked how to determine the queue depth of a SCSI controller, mainly in relation to VMware Virtual SAN questions. Virtual SAN relies on a controller queue depth of at least 256, and a number of controllers don't supply that.

If you want to know what your controller's queue depth is, here's the info from the VMware KB article 1027901:

To identify the storage adapter queue depth:
  1. Run the esxtop command in the service console of the ESX host or the ESXi shell (Tech Support mode). For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910) or Tech Support Mode for Emergency Support (1003677).
  2. Press d.
  3. Press f and select Queue Stats.
  4. The value listed under AQLEN is the queue depth of the storage adapter. This is the maximum number of ESX VMKernel active commands that the adapter driver is configured to support.
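
If you'd rather not drive esxtop interactively, the per-device queue depths (not quite the same thing as the adapter's AQLEN, but a useful sanity check) can be pulled straight from the ESXi shell. Something like this, though the grep pattern may need tweaking for your build's output:

esxcli storage core device list | grep -i "queue depth"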

Tuesday, April 21, 2015

vmware-csd.exe stopped working - vCenter 6 Server Appliance install fails

I discovered while attempting to follow VMware's easy instructions to install the VCSA for vCenter 6 that problems occur if you use Windows (what a shock...).

If you are trying to install using vcsa-setup.html and you keep getting errors about vmware-csd.exe stopping, it's not you, it's Windows.

When the Client Integration binary gets installed, one of its critical folders doesn't get write permission. The fix for that is here:

Troubleshooting the VMware vSphere 6.0 Client Integration Plugin when it fails to function (2112086)

Tuesday, September 30, 2014

VMware Virtual SAN (vSAN) - replacing a failed disk connected to the LSI 9271 controller.

    VMware vSAN has its own HCL of disks and controllers. This is a subset of the vSphere HCL, and the only Cisco controller card on the list is the LSI MegaRaid 9271. This is a high-performance controller, but LSI does not support running the 9271 in JBOD mode. As a result, virtual disks need to be created, SSD disks need to be marked as such, etc. I've discussed this briefly before.

    RAID 0 makes troubleshooting failed disks problematic. The disks are virtual, not physical. As a result, simply replacing the disk may not be useful; the virtual disk needs to know you replaced the physical disk. This means direct interaction with the controller.

   Many customers know that the 9271 can be controlled via the WebCLI, but that is only available at boot time. Once the server is running, one must reboot to access this tool. Fortunately Cisco and LSI have planned for this challenge.
   LSI makes a utility called StorCLI. It is available at the LSI website and also comes on the Utilities ISO for UCS, found at Cisco Support.

  Once you get this ISO, you need to find the StorCLI .vib file. You could try mounting the ISO to the ESXi server, but I wouldn't recommend it. Too much trouble getting ESXi to see the attached CD drive. If you can mount it anywhere else, I recommend that.

Once you get the iso mounted, go to the directory ucs-cxxx-utils-vmware.2.0.3 (1).iso\Storage\LSI\9xxx\StorCLI. There you will find the StorCLI vib file. 

   Copy this VIB file to /var/log/vmware. I don't know why, but every time I try to install that VIB from anywhere else, it fails.
   Execute the esxcli install command from within the ESXi shell. (NOTE: this may well work using the esxcli install tools in the vSphere PowerCLI. I haven't tried it.)

~ # esxcli software vib install -v /var/log/vmware/vmware-esx-storcli-1.12.13.vib --no-sig-check
You need the --no-sig-check part, or else you will get an error about signing.
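
To confirm the VIB actually installed, it should show up in the VIB list (the name comes from the file installed above; adjust if your version differs):

~ # esxcli software vib list | grep -i storcli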

   In order to run any StorCLI commands, you must cd to the StorCLI directory. Installation of the StorCLI binaries does not modify your path to include them or their linked library.

~ # cd /opt/lsi/storcli/
/opt/lsi/storcli #

Now we can issue commands. Here are some of my favorites: 

To create a RAID 0 virtual disk for every physical disk in one shot: 

./storcli /c0 add vd each type=raid0 pdcache=off 

/c0 represents controller 0, the only one you probably have. The pdcache=off parameter turns off caching, which VMware vSAN requests.
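
To confirm all the virtual disks were created, the same show command used later in this post works here as well:

./storcli /c0/vall show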

To delete all the RAID 0 virtual disks at once:

 ./storcli /c0/vall del

The /vall means all virtual disks. 

To delete one virtual disk for a particular slot: 

   This requires knowing which virtual disk is assigned to which physical disk and slot. Most likely we'll know the drive to be replaced by its slot number. The 9271 uses the concept of "enclosures," which are contained on the controller and in turn contain the slots (drive bays). Issue the command:

./storcli /c0/eall/sall show

which yields a chart that tells us which drive group is attached to which drive. 
 Let's say we need to replace the disk in slot 7. This slot and disk are assigned to drive group 5. Now let's find the virtual disk for drive group 5.

./storcli /c0/vall show 

gives us a mapping of virtual drives to drive groups. Drive group 5 happens to hold virtual disk 5. Don't assume these numbers will always be the same.

 Now we can delete virtual disk 5:

/opt/lsi/storcli # ./storcli /c0/v5 del
Controller = 0
Status = Success
Description = None

We can now replace the physical disk. Once that's done, we can create a new virtual disk for the new drive. 

/opt/lsi/storcli # ./storcli /c0 add vd type=RAID0 name=vd5 drives=22:7
Controller = 0
Status = Success
Description = Add VD Succeeded

Notice that the drives= parameter uses the slot number (7), not the virtual disk number (5).

That should be all there is to it. This procedure was tested using known good disks, and some steps may be missing due to not having an actual bad drive. StorCLI has commands for that, too, like marking a slot good. The docs for StorCLI can be found here:

StorCLI Reference Manual

Monday, September 29, 2014

VMware RVC client - installing on a Mac without the extra baggage

     With the invention of VMware vSAN, my attention was drawn to a new tool for operating on vCenter and performing configuration tasks: RVC, the Ruby vSphere Console. It can be found by logging in to your vCenter Server (v5.5 and up) and executing the command `rvc`.

VMware recommends having a separate vCenter appliance just to use rvc. Naturally, I don't need yet another VM taking up valuable space on my MacBook Air. I just want rvc, running in a terminal window. This should not be a problem; rvc started out life as a 'Fling' at the VMware Labs website. It's open source. The instructions even say that all you have to do is run the command gem install rvc.

Oh how I wish that were true...

As it turns out, my Mac, running Ruby version 2.0.0p247 (I don't know what that means, I'm not into Ruby), doesn't respond well to that command.

Johns-Mac:~ johnkennedy$ gem install rvc
Fetching: rvc-1.8.0.gem (100%)
ERROR:  While executing gem ... (Gem::FilePermissionError)

    You don't have write permissions for the /Library/Ruby/Gems/2.0.0 directory.

OK, easy fix: sudo it!

But first, I needed the Xcode command line tools.

Johns-Mac:~ johnkennedy$ xcode-select --install
Or else nokogiri won't compile properly. 

Johns-Mac:~ johnkennedy$ sudo gem install rvc
Successfully installed rvc-1.8.0
Parsing documentation for rvc-1.8.0
1 gem installed

Awesome! Now I can manage my vCenter, ESXi servers, everything from the Mac without cumbersome clients. 

So I try to get to work doing just that, and this happens: 

Johns-Mac:~ johnkennedy$ rvc 
Install the "ffi" gem for better tab completion.
Host to connect to (user@host):
0 /
> cd
/> ls
RuntimeError: unknown VMODL type AnyType

I've dealt with enough cryptic error messages in my time not to try to understand them right away, if at all. Just google em, and find a solution. But nothing worked, until...

I noticed that there was a brand new beta version of rbvmomi, the guts of rvc. If you are planning on doing anything serious with Ruby and the vSphere APIs, rbvmomi is the tool you need.
It looked to be version 1.8.2.pre, so I installed it.

JOHNKEN-M-N085:~ johnken$ sudo gem install rbvmomi -v 1.8.2.pre
Fetching: rbvmomi-1.8.2.pre.gem (100%)
Successfully installed rbvmomi-1.8.2.pre
Parsing documentation for rbvmomi-1.8.2.pre
Installing ri documentation for rbvmomi-1.8.2.pre
1 gem installed

Then I uninstalled version 1.8.1:

JOHNKEN-M-N085:~ johnken$ sudo gem uninstall rbvmomi -v 1.8.1
Successfully uninstalled rbvmomi-1.8.1

Now rvc works! 

JOHNKEN-M-N085:~ johnken$ rvc
Install the "ffi" gem for better tab completion.
VMRC is not installed. You will be unable to view virtual machine consoles. Use the vmrc.install command to install it.
0 /
> cd
/> cd Datacenter/
/> ls
0 storage/
1 computers [host]/
2 networks [network]/
3 datastores [datastore]/
4 vms [vm]/

P.S. The prompt to install VMRC is a red herring: there doesn't seem to be one for the Mac.

Wednesday, August 6, 2014

Configuring the LSI MegaRaid 9271 for VMware vSAN

Those configuring the LSI 9271 can take heart – you don't have to configure each disk by hand for VMware vSAN. 

The problem – The only controller that Cisco sells that is also on the VMware HCL for vSAN is the LSI MegaRaid 9271. VMware vSAN recommends pass-through mode (JBOD). LSI does not support this. In this case, VMware supports creating virtual disks for each physical disk, of type RAID0. 
The solution I originally tried was to use the StorCLI software, made by LSI and found on the UCS Utilities disk for C series. It installs directly in the ESXi shell as a VIB.
BUT, creating individual disks required an individual command for each disk, e.g.:

/opt/lsi/storcli/storcli /c0 add vd type=RAID0 name=vd1 drives=32:1
/opt/lsi/storcli/storcli /c0 add vd type=RAID0 name=vd2 drives=32:2
Even worse, the enclosure number was required for each command (shown in the above example as '32'). Since enclosures are potentially different for each server, this made scripting difficult, as finding the enclosure number added complication.
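
If you do need the enclosure number for a particular server, StorCLI will list every enclosure and slot it knows about; the enclosure ID shows up alongside each slot in the output:

./storcli /c0/eall/sall show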

The solution – As it turns out, someone at LSI was thinking about this problem. They must have faced this particular use case before (where customers want each pd to have a RAID 0 vd). So they put in a one-line command to handle this situation:

cd /opt/lsi/storcli
./storcli /c0 add vd each type=raid0 pdcache=off  

UPDATE: some commands removed as they did not work in latest testing. 

This command quickly builds all the virtual disks on the physical disks. No muss. No fuss. The pdcache=off parameter turns off the disk caching done by the controller; vSAN likes to take care of this itself. I haven't tested it for performance yet, so this may change in the future.

Wednesday, September 18, 2013

Clone a Host Profile with a PowerCLI script

Wow! I just noticed it's been way too long since I posted anything. The stuff I've learned about AutoDeploy, Cisco FlexFlash, and UCS has been piling up...

Here's a note for some folks who asked me about cloning a Host Profile with a PowerCLI script. Thanks to the inimitable LucD for this script, reproduced from the VMware Communities.

$hostProfileName = "MyProfile" $prof = Get-VMHostProfile -Name $hostProfileName 

$profMgr = Get-View HostProfileManager

$spec = New-Object VMware.Vim.HostProfileCompleteConfigSpec $spec.Annotation = $prof.ExtensionData.Config.Annotation
$spec.ApplyProfile = $prof.ExtensionData.Config.ApplyProfile
$spec.CustomComplyProfile = $prof.ExtensionData.Config.CustomComplyProfile
$spec.DisabledExpressionList = $prof.ExtensionData.Config.DisabledExpressionList$spec.Enabled = $prof.ExtensionData.Config.Enabled
$spec.Name = $prof.ExtensionData.Config.Name + " - COPY"
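
One note: as reproduced above, the snippet stops after building the spec. To actually create the cloned profile, the spec gets handed to the HostProfileManager — something along these lines (a sketch based on the vSphere API's CreateProfile method; check LucD's original post for the exact ending):

$newProfile = $profMgr.CreateProfile($spec)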


Hope folks find it as useful as I have. 

Wednesday, September 12, 2012

Adding Active Directory to GoDaddy!

I managed to add Active Directory records to my GoDaddy DNS, following the instructions in this article: 

Here's what my entries look like: 

My domain is (it's cut off by the ellipses). 
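
If you just want the gist without squinting at the screenshot: the bulk of the entries are SRV records pointing at your domain controller, roughly like the following (a sketch using the hypothetical domain example.com and a DC named dc1 — the article linked above has the full list):

_ldap._tcp.example.com.           600 IN SRV 0 100 389  dc1.example.com.
_kerberos._tcp.example.com.       600 IN SRV 0 100 88   dc1.example.com.
_gc._tcp.example.com.             600 IN SRV 0 100 3268 dc1.example.com.
_ldap._tcp.dc._msdcs.example.com. 600 IN SRV 0 100 389  dc1.example.com.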

Friday, August 17, 2012

VMware and Cisco UCS integration: Oh blog post, where art thou?

     It was starting to look like I was having a "senior moment". I knew I had seen a blog entry on the VMware PowerCli blog site that described the integration between Cisco UCS and VMware AutoDeploy. But when I looked for it, it had somehow disappeared. Maybe I didn't see it after all? Maybe I was dreaming the whole thing (I typically don't dream about technology, but hey, anything is possible)?

     Alan Renouf, VMware PowerCLI wizard, posted a blog entry on 31Jul12 detailing and praising the integration between VMware AutoDeploy and Cisco UCSM, which you can read more about here.

     My confidence in my sanity was buoyed by Google Cache, which has the original blog post. It's a good read for those who wish to automate AutoDeploy with respect to the type of hardware they are using.

     The reason for the disappearance remains a mystery. Anyone care to comment?

Moving Service Profiles can disconnect your Nexus 1000v!

I was just working on the vSphere 5.0 on FlexPod CVD, and found an issue where moving a Service Profile between B series and C series servers can cause your server to lose connection to your Nexus 1000v. Here's what happened:

I created a Service Profile, applied it to a C200-M2, and installed VMware ESXi 5.0 to a storage LUN. Nothing was stored on the local disk. I put all the networking through the Nexus 1000v, so neither of the network cards (P81E Cisco CNA NICs) was connected to VMware vSwitches. Networking worked at this point.

Then one of my other C200-M2 servers died. I'm not sure why, but it wouldn't power up anymore. So I decided to use one of the C200-M2 servers that were in my FlexPod. But all my other servers were B series blades, and I didn't want a lone C200-M2 running with a blade in HA mode. So that prompted me to move my Service Profiles from the C200-M2 servers to some B200-M3 servers I had.

I suspended all the VMs on the FlexPod, shut down the ESXi servers, and disassociated the Service Profiles from the C200-M2's and associated them to the B200-M3's. When the B200-M3 servers came back up, they were disconnected from the network. I had to restore the networking from ESXi local console in order to recover them. 

I wondered why the Service Profiles didn't come back up properly, so I performed an experiment. I put one of the servers in Maintenance Mode. Then I disassociated the Service Profile from the B200-M3 and associated it with a C220-M3 I had. When I rebooted it, it didn't connect to the network. 
I then moved the Service Profile back to the B200-M3, migrated the management network to a local vSwitch (off of the Nexus 1000v dVS), and moved the Service Profile back to the C220-M3. This time, the management networking came up, but all the rest of the networking remained disconnected! 

I conclude from this that when migrating between the P81E and the VIC 1240, the Nexus 1000v can tell the difference and won't allow the physical NICs to connect.

The next experiment is to determine whether the VMware vDS has the same behavior.