Well, it finally happened: a new VLAN, a new VM, and a migration gone wrong when I stuffed up the provisioning of the VLAN tag on one host. Only 2-3 minutes of downtime, but annoying enough.
One obvious solution: convert to my first vSphere Distributed Switch.
As I had not worked with this before, I did a bit of research and found an excellent write-up by Zod Chen on his blog, which is worth checking out: www.dashvue.com. With my networking knowledge already strong, it was much easier to follow than VMware's own guide.
Anyway, the networking conversion write-up is here: www.dashvue.com/2011/04/migrating-to-vnetwork-distributed-switch/
Cheers
EDIT 2014-09-14: Zod's site appears to have disappeared, so I've posted my archived copy below.
Standard or Distributed Switch?
I recently set up a new vSphere environment and thought about this question. The answer came quickly.
vNetwork Distributed Switch.
Although it is more complicated to understand and set up, it is worth the time and effort to deploy in your environment, especially if that environment is poised to scale to multiple ESXi hosts. A standardized switch, distributed across all your hosts, makes configuration changes easy. Most importantly, it brings network consistency and compliance to your vSphere datacenter.
First, understand the standard and distributed networking concepts. VMware has also provided a must-read guide to migrating to and configuring the distributed switch.
In this post I will walk through a guided example of migrating from the default standard switch to a distributed switch, with no downtime for your production VMs or your virtualized vCenter Server.
Before I begin, keep the following note firmly in mind:

You must maintain a connection to your ESXi VMkernel and vCenter Server (VM) at all times.
Any misstep will break your connectivity, and you may have to reset the network configuration from the physical console.
For a migration with no downtime, you need a minimum of 2 physical NICs connected to each vSwitch that carries the ESXi VMkernel and the vCenter Server VM. The idea is to migrate 1 physical NIC at a time, so that either the old standard switch or the new distributed switch has connectivity at any point while you complete the configuration change.
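If you want to sanity-check that redundancy before touching anything, the physical NICs behind each standard vSwitch are easy to list with a script. Here is a minimal pyVmomi sketch; pyVmomi is my choice for illustration (the walkthrough itself uses only the vSphere Client), and the vCenter address and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details -- substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
content = si.RetrieveContent()

# Walk every host and list the physical NICs behind each standard vSwitch.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vsw in host.config.network.vswitch:
        # Two or more pNICs means you can move one at a time with no outage.
        print(host.name, vsw.name, vsw.pnic)
view.DestroyView()
Disconnect(si)
```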
See below for my vSwitch architecture before and after the migration.
1) This is my starting vNetwork Standard Switch configuration.
2) Switch to the Home > Inventory > Networking view. Right-click the datacenter to create a vNetwork Distributed Switch.
3) A wizard pops up. I choose version 4.1.0 since this is a new deployment and all hosts will be running 4.1.0 or later.
4) Specify the number of physical uplinks. My first vDS will be for management and is backed by 2 physical NICs.
5) Add hosts and link physical NICs later.
6) Create a default port group and finish the wizard.
7) vNetwork Distributed Switch is created and appears in the left panel.
8) Enable jumbo frames on the Distributed Switch by changing the MTU to 9000.
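For reference, steps 2-8 can also be scripted. The rough pyVmomi equivalent below creates the switch with two uplinks and jumbo frames enabled; the switch name is a placeholder, and `content` comes from the connection sketch earlier:

```python
from pyVmomi import vim

# 'content' comes from the earlier connection sketch.
dc = content.rootFolder.childEntity[0]          # assumes the first inventory
                                                # item is your datacenter

cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
cfg.name = 'dvSwitch-Mgmt'                      # placeholder name
cfg.maxMtu = 9000                               # step 8: jumbo frames
cfg.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=['dvUplink1', 'dvUplink2'])  # step 4: two uplinks

task = dc.networkFolder.CreateDVS_Task(
    vim.DistributedVirtualSwitch.CreateSpec(
        configSpec=cfg,
        productInfo=vim.dvs.ProductSpec(version='4.1.0')))  # step 3
```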
9) Rename or create the port group and set its security settings.
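A scripted version of this step might look like the following, with 'dvs' being the switch created above. The port group name and the three security values are placeholders for illustration, not settings taken from the original screenshots:

```python
from pyVmomi import vim

# 'dvs' is the distributed switch created above.
sec = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
    inherited=False,
    allowPromiscuous=vim.BoolPolicy(inherited=False, value=False),
    macChanges=vim.BoolPolicy(inherited=False, value=False),
    forgedTransmits=vim.BoolPolicy(inherited=False, value=False))

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name='dvPG-Mgmt',            # placeholder port group name
    type='earlyBinding',         # static binding, the client default
    numPorts=16,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        securityPolicy=sec))

dvs.AddDVPortgroup_Task([pg_spec])
```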
10) Add hosts to the Distributed Switch.
11) Confirm dual redundant network links on the virtual switch, then migrate 1 of the physical uplinks to the Distributed Switch. Note that 1 uplink still remains on the Standard Switch to maintain connectivity. If you are doing this migration in a production environment, you might want to migrate 1 host at a time.
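In API terms, adding a host with just one of its uplinks looks roughly like this. 'host' is the ESXi host object and vmnic1 a placeholder NIC name; note the client's Add Host wizard detaches the NIC from the standard switch for you, whereas a script would have to do that first:

```python
from pyVmomi import vim

# Join 'host' to the dvSwitch with a single uplink; the second pNIC stays
# on the standard switch so connectivity is never lost.
member = vim.dvs.HostMember.ConfigSpec(
    operation='add',
    host=host,
    backing=vim.dvs.HostMember.PnicBacking(
        pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice='vmnic1')]))

dvs.ReconfigureDvs_Task(vim.DistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,   # guards against stale edits
    host=[member]))
```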
12) Next, migrate the ESXi VMkernel port group over.
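The scripted equivalent is a single call against the host's network system. 'vmk0' and the 'host'/'dvs'/'pg' handles are assumptions carried over from the earlier sketches:

```python
from pyVmomi import vim

# Repoint the management VMkernel interface (vmk0 here) at the new
# distributed port group 'pg' on switch 'dvs'.
conn = vim.dvs.PortConnection(switchUuid=dvs.uuid, portgroupKey=pg.key)
host.configManager.networkSystem.UpdateVirtualNic(
    'vmk0', vim.host.VirtualNic.Specification(distributedVirtualPort=conn))
```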
13) The next screen asks you to migrate VM Network. Do not migrate at this time.
14) Verify the settings. Note that each ESXi host's VMkernel is connected to the new port group, and that the first of the two physical uplinks is attached to this port group.
15) After the vDS is created and migrated, HA reconfiguration will take place. If the HA reconfiguration fails on any host, manually run "Reconfigure for VMware HA" on that particular host again.
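If you'd rather retrigger it from a script, the API equivalent of that right-click is a one-liner:

```python
# Equivalent of "Reconfigure for VMware HA" on a single host.
task = host.ReconfigureHostForDAS_Task()
```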
16) Next up, migrate VM networking.
17) Select your VM network source and destination. In this case, I am migrating my vCenter Server's network. If you are migrating other production networks, ensure you have gone through the prior steps and that the distributed switch uplinks are connected to the proper production network. I recommend creating a test VM in your environment and migrating it first to test connectivity on the new distributed switch.
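For a single VM (your test VM, say), the migration boils down to swapping each NIC's backing over to the distributed port group. A pyVmomi sketch, with 'vm', 'dvs' and 'pg' as assumed handles:

```python
from pyVmomi import vim

# Repoint every NIC on 'vm' at the distributed port group 'pg'.
changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        dev.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
            port=vim.dvs.PortConnection(switchUuid=dvs.uuid,
                                        portgroupKey=pg.key))
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation='edit', device=dev))

vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
```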
18) The last step is to migrate the second redundant uplink over to the distributed switch. Follow the guided wizard through.
19) The final Distributed Switch configuration should look similar to the one below.
20) Optional. The default load balancing policy, "Route based on originating virtual port", works well but can be further improved upon. I create a 2nd VMkernel port solely for vMotion traffic and use explicit failover as below. Segregating management and vMotion traffic provides better performance and isolation.
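For completeness, the explicit-failover teaming policy can also be set via the API. The sketch below pins a vMotion port group ('vmotion_pg', an assumed handle) to dvUplink2 active and dvUplink1 standby; the management port group would use the mirror-image order, so the two traffic types never share an uplink under normal operation:

```python
from pyVmomi import vim

# Explicit failover: vMotion rides dvUplink2 and falls back to dvUplink1
# only on failure -- the opposite order to the management port group.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    inherited=False,
    policy=vim.StringPolicy(inherited=False, value='failover_explicit'),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False,
        activeUplinkPort=['dvUplink2'],
        standbyUplinkPort=['dvUplink1']))

vmotion_pg.ReconfigureDVPortgroup_Task(
    vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=vmotion_pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming)))
```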