
Cisco releases Nexus 1000V virtual switch for VMware

By Colin McNamara
September 16, 2008

This afternoon Cisco released a new member of the Nexus family of switches, the Nexus 1000V. This is the first switch to take advantage of VMware opening up its ESX and ESXi platforms to third-party network device manufacturers. This switch directly addresses some pretty big pain points surrounding current virtualization implementations.

The boundary between server team and network team responsibilities has become “fuzzy”

Cisco addresses this issue by putting a switch that can be managed via the same methods common to other network devices inside the ESX cluster. This switch runs the same code that has become standard on Cisco’s Nexus series of Data Center switches - NX-OS.

Prior to the adoption of virtualization, when there was a connectivity problem with a host it was quite common for the network team to verify functionality down to the switch port. The server team would do the same. This allowed each team to focus on the areas that matched their core competency. Once we moved from a real switch port to a dumb bridge inside ESX, a lot of finger-pointing resulted.

Now, with a Nexus 1000V sitting virtually inside the ESX cluster, the boundary between the network and systems teams has been re-established. When there is a problem with a host inside an ESX cluster, the network team can use the same day-to-day troubleshooting tools available to them in other portions of the network to resolve issues faster, and with less finger-pointing.
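For example, rather than reverse-engineering what a vSwitch is doing, an engineer could run the same kinds of NX-OS show commands used everywhere else in the network. A minimal sketch (the switch name, interface number, and VLAN here are hypothetical):

    ! Verify the virtual interface backing the VM's vNIC
    nexus1000v# show interface vethernet 3
    ! Confirm the VM's MAC address is learned on the expected VLAN
    nexus1000v# show mac address-table vlan 100
    ! Review the configuration applied to that virtual port
    nexus1000v# show running-config interface vethernet 3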

Security controls have been moved further away from the hosts than we would like

A best practice for applying security policy is to apply controls as close to the source as possible. Think of this analogy - Your kids are blasting Radio Disney from their computer. Which of the following do you do?

A. Turn down the speakers at the source
B. Distribute earplugs to all members of the household

Of course, the obvious action is to go to the source and apply a control (turn down the volume, and tell the kids to clean their rooms). The same principle is valid on the networking side. The best practice is to apply security policies such as VLAN ACLs and TrustSec policies directly to the switchports your hosts connect to. Before the Nexus 1000V this was impossible to do in ESX, which forced many environments to move security controls further up into the distribution layer. The side effect was that the security stance from host to host inside ESX clusters was diminished.
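To make that concrete, here is roughly what a VLAN ACL looks like in Cisco configuration. This is a sketch only; the ACL name, map name, subnet, and VLAN are made up for illustration:

    ! Permit only web traffic toward the virtual server segment
    ip access-list WEB-ONLY
      permit tcp any 10.1.100.0/24 eq 80
    !
    ! Forward matching traffic; unmatched traffic is implicitly dropped
    vlan access-map FILTER-ESX 10
      match ip address WEB-ONLY
      action forward
    !
    ! Apply the policy to the VLAN carrying the virtual machines
    vlan filter FILTER-ESX vlan-list 100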

The Nexus 1000V brings something called port profiles to the table to address this. These are pre-configured network and security policy definitions made available to your systems administrators, who can apply them in a point-and-click fashion. Once a profile is applied to a virtualized host, it follows the host wherever it is moved in your virtual cluster.
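To give a feel for this, here is a minimal port profile sketch in NX-OS configuration (the profile name and VLAN are hypothetical):

    ! Define the policy once on the Nexus 1000V
    port-profile type vethernet WebServers
      switchport mode access
      switchport access vlan 100
      ! Publish the profile to vCenter as an ordinary port group
      vmware port-group
      no shutdown
      state enabled

The systems administrator then simply selects the WebServers port group for a VM's network adapter in vCenter, and the policy travels with the VM through VMotion.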

For most organizations, provisioning and integrating the networks of VMware ESX clusters with classic networks is challenging at best

I wrote an article in March about this specific issue in my post - Challenges integrating VMware into Cisco networks. The core of the issue is that the network integration portion of VMware ESX clusters is not really designed for server teams or for network teams. In fact, you need to be pretty savvy in both areas to successfully integrate VMware clusters into your network. In the real world, you generally find people who are good at one or the other, not both.

By putting a Nexus 1000V in your VMware clusters, you give the networking teams something they can understand without having to learn Linux and how it handles bridges (the key to understanding ESX networking). With a Cisco switch running virtually inside your clusters, network teams can follow the standard core / distribution / access model, with the access layer now residing inside the ESX clusters. They can also leverage their existing LAN switching skills to integrate the virtual switches in the clusters with the existing Data Center switching fabrics.
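For instance, the 1000V's uplinks into the physical access or distribution switches can be defined with the same familiar constructs. Another sketch, with hypothetical names and VLANs:

    ! Uplink profile bound to the hosts' physical NICs
    port-profile type ethernet DC-Uplink
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      ! Bundle the physical NICs into a port channel toward the fabric
      channel-group auto mode on
      no shutdown
      state enabled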

With these roadblocks addressed, Cisco is moving to further the DC 3.0 vision

To realize the DC 3.0 vision, the network inside of VMware clusters had to be brought under control and made to follow the same architectural guidelines that the rest of our network is subject to. With the Nexus 1000V this is now a reality. The next steps within the DC 3.0 vision are to extend virtualization and mobility throughout our storage fabrics, to continue extending virtualization to the network as a whole, and to focus on application virtualization and acceleration, truly realizing the vision of cloud computing in the data center.

On the storage virtualization side, Cisco will be using a technology called FlexAttach to enable virtual and physical hosts to change locations in the datacenter without storage team intervention (more on this in a near future post). And on the application virtualization and acceleration side, expect Cisco to continue to enhance its existing Application Control Engine (ACE) and Wide Area Application Services (WAAS), and further integrate these into their virtualization offerings.

Tags

cisco, vmware, virtualization, networking, cloud computing
