I managed to corral three of our senior consulting services engineers, Dave Harrison, Mike Jennings, and Kent Noyes, and ask them about the Nexus 1000V. (All three are CCIEs, and Dave and Kent have VCPs to boot.)
Bob: Dave, to start, what is the Nexus 1000V?
Dave: Here’s the product marketing answer from Cisco, which is pretty good.
“The Nexus 1000V switch is a pure software implementation of a Cisco Nexus switch. It resides on a server and integrates with the hypervisor to deliver VN-Link (Cisco VN-Link: Virtualization-Aware Networking) virtual machine-aware network services.”
To boil it all down, the Nexus 1000V gives the network admin visibility and control of the traffic traveling inside the virtual world of VMware.
Bob: Does that mean the network admin didn’t have “visibility and control” before?
Dave: Not necessarily. Network admins had some visibility and control, but the Nexus 1000V makes it simpler. Before the Nexus 1000V, traffic going from VM to VM couldn’t easily be inspected or controlled. The 1000V makes inspection and control much simpler by letting the network admin apply access lists and QoS, analyze traffic coming from VMs with a network analyzer, and collect NetFlow information on virtual machines within the virtual environment. In a sense, the Nexus 1000V puts the network back in the hands of the network administrator.
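To make that concrete, here is a sketch of what collecting NetFlow from VM-facing ports might look like on the Nexus 1000V CLI. This is illustrative only: the collector address, monitor names, and port-profile name are hypothetical, and exact commands can vary by software release.

```
! Sketch only: names and addresses below are hypothetical examples
flow exporter COLLECTOR
  destination 192.0.2.50              ! hypothetical NetFlow collector
  transport udp 2055
flow monitor VM-MONITOR
  exporter COLLECTOR
  record netflow-original
port-profile type vethernet WebServers
  ip flow monitor VM-MONITOR input    ! watch traffic entering from the VMs
```

Because the monitor is applied in a port profile rather than on a single interface, every VM attached to that profile is covered automatically.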
Bob: What’s new about the Nexus 1000V for a Network Administrator?
Mike: To Dave’s point, prior to the Nexus 1000V, network admins were apprehensive about supporting network activity within an ESX cluster of servers. They didn’t have the same level of visibility and control that they had with physical switches. With the Nexus 1000V, network admins now have the familiar command-line access to port and virtual network switch information, just as they have on physical switches. Basically, the software implementation of the Nexus 1000V provides operational and management consistency with existing physical Cisco Nexus and Catalyst switches.
Bob: Lots of functionality for the network admin, but what’s in it for the server admins?
Kent: There are a couple of key features related to visibility and manageability. First, server and VMware admins now have GUI access to detailed network port information. Second, when a VMware admin ties a VMware guest (virtual server) into the Nexus 1000V, a virtual Ethernet port is created and associated with that virtual server. That virtual Ethernet port is configurable just like a physical port, and it stays with the virtual server even after the server is vMotioned to another physical server.
Here are the 10 things to know:
- At the command line, accessible by Telnet or SSH, the Nexus 1000V feels just like any other Catalyst or Nexus chassis switch you’ve ever configured. Similarly, it can be managed and monitored via SNMP; Cisco provides SNMP MIBs to supplement these services.
- To the Nexus 1000V, participating vSphere servers (or hosts) appear as individual modules much like you would see in a Catalyst 6500 chassis. You will notice, however, that the module count and the virtual port counts associated with each module can scale up much, much higher than you would see in an isolated physical chassis.
- The Nexus 1000V was co-developed by Cisco and VMware and can be purchased from either company through resellers. It’s priced per physical CPU, essentially based on the total CPU count in each VEM (vSphere host).
- The Nexus 1000V Virtual Supervisor Module (VSM) plays much the same role in a Nexus 1000V environment as the Supervisor Engine in a Nexus 7000 or Catalyst 6500 chassis. However, the 1000V supervisor is a virtual machine hosted on an ESX server. And, as with a physical chassis, it can be implemented in a high-availability design, with a standby VSM running on a separate ESX host.
- When a VMware admin ties a VMware guest (virtual server) into the Nexus 1000V, a virtual Ethernet port is created and associated with that virtual server. That virtual Ethernet port stays with the virtual server even after the server is vMotioned to another physical server, and it is configurable just like a physical port.
- When you hear about policies tied to Nexus 1000V virtual interfaces, these policies usually consist of one or more of the following attributes: VLAN, port channels, private VLAN, ACL, port security, NetFlow, rate limiting, and QoS marking.
- Network admins are accustomed to creating port channels between network devices. Now they can create them between Nexus 1000V-enabled servers and physical network devices using exactly the same commands, even on the server side.
- Network admins can SPAN and even RSPAN traffic to a network analyzer to troubleshoot network issues down to the specific guest virtual port. This could be done before on the physical port of the ESX server, but at the cost of having to filter that traffic to single out a specific guest VM’s traffic.
- In the past, server admins worried about bottlenecks if they gave network admins access within their ESX hosts. That concern wasn’t necessarily warranted, since the network configuration tasks (VLANs, QoS, etc.) have always been required. Server admins are now dynamically presented with network configuration information through the single vSphere GUI using the vSphere/Cisco API.
- vSphere vSwitches are local to each host, as is the configuration on each switch. The distributed nature of the Nexus 1000V across all vSphere hosts now allows admins to configure VLANs (in addition to some of the newer Nexus 1000V features) once and have them available to all the hosts within vCenter.
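Several of the items above (virtual Ethernet port policies, port channels, and SPAN) come together in port-profile configuration on the VSM. The following is a sketch, not a verified configuration: all profile names, VLAN numbers, and interface IDs are hypothetical, and exact syntax varies by Nexus 1000V release.

```
! Sketch: a VM-facing port profile whose policy follows the VM on vMotion
port-profile type vethernet WebServers
  switchport mode access
  switchport access vlan 100            ! hypothetical VLAN
  no shutdown
  state enabled
  vmware port-group                     ! publishes the profile to vCenter as a port group

! Sketch: an uplink profile that bundles physical NICs into a port channel
port-profile type ethernet Uplinks
  switchport mode trunk
  switchport trunk allowed vlan 100
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
  vmware port-group

! Sketch: SPAN a single guest's virtual port to an analyzer
monitor session 1
  source interface vethernet 3 both     ! hypothetical vEth for one VM
  destination interface ethernet 3/2    ! hypothetical port toward the analyzer
  no shut
```

Note how the same port-profile construct carries both VM-facing policy and uplink port-channel behavior, which is what lets the network team manage the virtual access layer with familiar commands.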
That wraps up the 10 Things to Know. I want to send a sincere thanks to Mike, Kent, and Dave. These guys are awfully busy these days working with some of our largest clients on major projects. Your time and talents are greatly appreciated; thanks again!
(They have been in the middle of a number of Cisco UCS implementations so maybe I can coerce them into a future post: “10 Things you should know about UCS.”)