*In light of the clarification by VMware that VXLAN is not required for vCloud Connector Datacenter Extension (DCE), I took content from my original post to create this post discussing VXLAN.*
VXLAN
Virtual eXtensible Local Area Network (VXLAN) is a network overlay and tunneling technology, co-authored by VMware, Cisco, and Arista Networks as an IETF draft standard, whereby logical networks are created that can span physical network boundaries. This is done through MAC-in-IP encapsulation so that large numbers of layer 2 VXLAN-enabled networks can co-exist across a common layer 3 infrastructure. Instead of being limited to roughly 4,000 VLANs, VXLAN's 24-bit network identifier can theoretically be used to create over 16 million layer 2 networks (a short sketch of the header follows the list below). For a more detailed explanation of VXLAN, I encourage everyone to read Kamau Wanguhu’s extremely informative primer and Scott Lowe’s blog post on “Examining VXLAN.” The benefits of VXLAN for data center administrators include the following:
- The ability to overcome a shortage of logical network segments for users constrained by the 4,096 VLAN limit
- The ability to enable large-scale multi-tenancy by creating large numbers of logical networks within a data center or cloud
- The ability to move workloads between layer 3 networks within the data center
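To put rough numbers behind the encapsulation described above, here is a minimal Python sketch (my own illustration, not code from the IETF draft) of the 8-byte VXLAN header: a flags byte, reserved fields, and the 24-bit VXLAN Network Identifier (VNI) that is the source of the "millions of networks" figure.

```python
import struct

VNI_BITS = 24  # the VXLAN Network Identifier is 24 bits wide

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from the IETF draft:
    flags (I flag set), 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2 ** VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # I flag = 1, reserved bits = 0
    vni_and_reserved = vni << 8       # VNI occupies the upper 24 bits
    return struct.pack(">II", flags_and_reserved, vni_and_reserved)

print(f"Possible VXLAN segments: {2 ** VNI_BITS:,}")  # 16,777,216
print(f"Possible VLANs:          {2 ** 12:,}")        # 4,096 (12-bit VLAN ID)
print(vxlan_header(5001).hex())                       # 0800000000138900
```

This header, together with the original frame, is then wrapped in UDP and IP, which is where the MAC-in-IP encapsulation (and the extra overhead discussed later) comes from.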
One other benefit that has been championed by some but met with skepticism by others is the use of VXLAN as a data center interconnect/layer 2 extension technology. The question is: given that VXLAN allows layer 2 traffic to be tunneled across layer 3 boundaries, why shouldn’t it be a logical choice (no pun intended) for moving workloads between data centers using features such as vSphere vMotion?
The Limitations of VXLAN
As good a technology as VXLAN may be for use within the data center, it has limitations as a data center interconnect (DCI) technology. These limitations are detailed in Omar Sultan’s deep dive post and Ivan Pepelnjak’s post on why VXLAN is not a suitable DCI technology; I would also encourage everyone to listen to the Packet Pushers podcast on VXLAN and the Nexus 1000V. Some of the limitations include the following:
- First is the requirement for IP multicast: because VXLAN does not define a control plane, it relies on multicast, flooding, and dynamic MAC learning for discovery. This is an issue since most ISPs will not carry multicast across the WAN, limiting the number of scenarios where this is even applicable.
- VXLAN potentially extends the layer 2 fault domain across multiple data centers since it does not have fault-isolation features as part of its implementation, such as the prevention of unknown unicast flooding.
- Because it uses MAC-in-IP encapsulation, VXLAN adds roughly 50 bytes of headers to every frame (outer Ethernet, IP, and UDP plus the 8-byte VXLAN header that carries the 24-bit VNI), which is why a 1,600 byte MTU is commonly recommended; see the arithmetic sketched after this list. That will be a challenge since it requires all the network switches carrying VXLAN traffic, across two data centers, to support jumbo frames, which is often not the case.
- There is also the potential “traffic trombone” issue where a VM that has been moved to the target data center needs to communicate with VMs or clients in the source data center; in this scenario, additional network traffic has to be pushed through the WAN connections between data centers.
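To make the MTU point above concrete, here is the back-of-the-envelope arithmetic as a small Python sketch; the header sizes are the standard IPv4/UDP/Ethernet values, and the 1,600 byte figure is the commonly cited recommendation rather than a hard protocol requirement.

```python
# Headers added to every frame by VXLAN encapsulation (IPv4 underlay, no VLAN tags)
OUTER_ETHERNET = 14   # new outer Ethernet header (not counted in the IP MTU)
OUTER_IPV4 = 20       # outer IP header
OUTER_UDP = 8         # outer UDP header (IANA port 4789)
VXLAN_HEADER = 8      # flags + reserved fields + 24-bit VNI

INNER_ETHERNET = 14   # the encapsulated frame's own Ethernet header
INNER_PAYLOAD = 1500  # an untouched 1,500 byte guest MTU

added = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(f"Headers added per frame: {added} bytes")         # 50

# The transport network must carry the whole inner frame plus the new headers:
underlay_mtu = INNER_PAYLOAD + INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
print(f"Minimum underlay IP MTU: {underlay_mtu} bytes")  # 1550
# hence the common 1,600 byte MTU recommendation on every switch and router
# in the path, including any WAN links between data centers.
```

If any hop between the two sites is stuck at a 1,500 byte MTU, encapsulated frames either get fragmented or dropped, which is exactly the kind of dependency that is hard to guarantee across a WAN.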
For these reasons, many do not see VXLAN as being a viable DCI technology. In fact, VMware themselves have been cautious about making such claims, but others have been pushing VXLAN as a DCI solution alongside technologies such as OTV and MPLS.
Looking Into the Crystal Ball
Let me preface this section by stating upfront that I do not possess inside information on the roadmap for VMware in this particular area. However, if we assume that VMware is well aware of the limits of VXLAN as a DCI technology, it is not too difficult to imagine them doing any of the following:
- Integrating VXLAN with a more robust DCI technology such as Cisco’s Overlay Transport Virtualization (OTV), so that the requirement for multicast over the WAN goes away while also leveraging OTV’s fault-isolation features. This has already been suggested by others like Greg Ferro over at Packet Pushers.
- Taking the Network Virtualization Platform (NVP) from their Nicira acquisition and either replacing VXLAN with it or integrating the two. The benefit of NVP is that the presence of a central controller removes the requirement for multicast. It also has fault-isolation features that, like OTV’s, make it a more viable DCI technology.
I am looking forward to getting feedback from the community, particularly from VMware and true networking engineers who understand VXLAN much better than I do. Am I missing something, or am I overstating the limitations and liabilities of VXLAN as a DCI technology?
Related articles
- Vincent Bernat: Network virtualization with VXLAN (vincent.bernat.im)
- VXLAN is not a Data Center Interconnect technology (ioshints.info)