They have recently introduced hardware with 144 GB of RAM, compared to their existing standard of 64 GB. Because these hosts can take far more guests, they have run out of ports on the virtual switches.
[root@vpshere03 ~]# esxcfg-vswitch -l
Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0       32         32          32                1500  vmnic0,vmnic1

  PortGroup Name  VLAN ID  Used Ports  Uplinks
  Vlan 639        639      0           vmnic0,vmnic1
Above we can see 32 ports available and 32 used. DRS and vMotion do not check port availability before migrating a guest, so when DRS decides to vMotion a guest that uses a NIC on this vSwitch, the NIC will be disconnected after the migration.
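A quick way to spot this condition ahead of time is to compare the Num Ports and Used Ports columns of `esxcfg-vswitch -l`. The sketch below runs the check against a here-doc copy of the sample output above; on a live host you would pipe in `esxcfg-vswitch -l` instead.

```shell
#!/bin/sh
# Sketch: flag vSwitches with no free ports.
# The here-doc stands in for live output; on a real host use:
#   esxcfg-vswitch -l | awk '/^vSwitch/ { ... }'
awk '/^vSwitch/ { free = $2 - $3; if (free <= 0) print $1 " has no free ports" }' <<'EOF'
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 32 32 32 1500 vmnic0,vmnic1
EOF
```

For the sample data this prints `vSwitch0 has no free ports`, which is exactly the host you do not want DRS picking as a vMotion target.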
An immediate workaround is to vMotion that guest to another host with enough free ports, then set the NIC back to connected.
You will see errors like these if you are hitting this issue:
[root@XXX ~]# cat /var/log/vmkernel | grep resources
Mar 23 16:03:48 hostXXX vmkernel: 13:17:35:52.667 cpu12:26929)Net: 1318: can't connect device: Vlan 61: Out of resources
Mar 23 16:04:34 hostXXX vmkernel: 13:17:36:38.663 cpu15:26968)Net: 1318: can't connect device: Vlan 63: Out of resources
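To see which port groups are affected and how often, you can summarise the log instead of eyeballing it. This sketch works on a here-doc copy of the two sample lines above; on a live host you would run the same pipeline against /var/log/vmkernel.

```shell
#!/bin/sh
# Sketch: count "Out of resources" errors per port group.
# On a live host: grep 'Out of resources' /var/log/vmkernel | sed ... | sort | uniq -c
grep 'Out of resources' <<'EOF' | sed 's/.*device: \(.*\): Out of resources/\1/' | sort | uniq -c
Mar 23 16:03:48 hostXXX vmkernel: 13:17:35:52.667 cpu12:26929)Net: 1318: can't connect device: Vlan 61: Out of resources
Mar 23 16:04:34 hostXXX vmkernel: 13:17:36:38.663 cpu15:26968)Net: 1318: can't connect device: Vlan 63: Out of resources
EOF
```

A growing count against the same port group is a strong hint that every vMotion onto that host is failing to connect its NIC.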
For a long-term fix you will need to increase the number of ports on your virtual switches, which requires a reboot.
If you want to put this into your build procedure you can use:

#esxcfg-vswitch -a vSwitch3:128

which creates a 128-port vSwitch.
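A newly created vSwitch has no uplinks or port groups, so a build procedure usually needs a few more steps. The names below (vSwitch3, vmnic2, "Vlan 61") are illustrative assumptions, not values from any particular host; this is a sketch of the sequence, not a drop-in script.

```shell
# Create a vSwitch with 128 ports instead of the default
esxcfg-vswitch -a vSwitch3:128
# Link a physical uplink to it (vmnic2 is an assumed NIC name)
esxcfg-vswitch -L vmnic2 vSwitch3
# Add a port group and tag it with its VLAN ID
esxcfg-vswitch -A "Vlan 61" vSwitch3
esxcfg-vswitch -v 61 -p "Vlan 61" vSwitch3
```

Sizing the switch generously at build time avoids the reboot you would otherwise need to grow an existing vSwitch later.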
For more info about vSwitch sizing, have a look here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008040