Independent Bridging: NSX-v to NSX-T In-Parallel Migration Use Case



–  Prashant Pandey, VCAP-NV-2021


Agenda

NSX-V to NSX-T workload migration using the NSX independent bridging option.

         Different Available Migration Approaches – High Level

         Independent Bridging Approach – In Detail

         Customer Use-Case Discussion – Why Independent Bridging?

         L2 Bridging Readiness

         Validation & Testing


Why Migrate to NSX-T ?

NSX for vSphere (NSX-V) has been out of general support since January 2022 and will move out of technical guidance from January 2023 onwards.

The top-of-mind question for customers who have adopted NSX-V is: how do I upgrade my NSX-V based data center to NSX-T?

Scope of blog:

In this blog, we will discuss the available approaches for migrating from NSX-V to NSX-T, along with a use case I have worked on in the recent past.

At a high level, we have two primary approaches when migrating to NSX-T.

         In Place

         In Parallel


In Place vs. In Parallel Migration


1. In Place


     NSX-T has a built-in migration tool called Migration Coordinator.

     The Migration Coordinator tool has been available since the NSX-T 2.4 release and helps replace NSX-V with NSX-T on existing hardware.

     In this blog we will not be focusing on this approach.

2. In Parallel

      NSX-T infrastructure is deployed in parallel alongside the existing NSX-V based infrastructure.

      The management cluster for NSX-V and NSX-T can be common; however, the compute clusters running the workloads would run on separate hardware.

      The main advantage of this approach is flexibility. Workload migration can follow two broad approaches:

  • New workloads are deployed on NSX-T and the older workloads age out over time.
  • Lift and shift workloads over to the new NSX-T infrastructure.


Different In-Parallel Bridging Approaches: When running NSX-V and NSX-T in parallel, there are two approaches available for bridging the two domains:

1. NSX-T Only Bridging - In this approach, the NSX-T edge node responsible for the L2 bridge is placed on a host that is prepared for NSX-V.

This allows the bridge to take advantage of the VXLAN decapsulation that happens on the ESX host, so it only needs to perform Geneve encapsulation.

However, with this approach only one network can be bridged per edge VM.

2. Independent Bridging - In this approach, both NSX-V and NSX-T enable bridging independently of each other, but both are connected to the same VLAN-backed network. This model allows more than one network to be bridged simultaneously per edge VM, as long as the number of such networks is within the bridging scale limits of NSX-V and NSX-T.


Bridging scale limits for reference:

https://configmax.esp.vmware.com/guest?vmwareproduct=NSX%20Data%20Center%20for%20vSphere&release=NSX%20for%20vSphere%206.4.0&categories=16-0





Downsides consideration - In Parallel Migration

While the in-parallel approach has benefits, such as planning and migrating on demand based on workload requirements, there are certain downsides that need to be considered:

1. Need for separate hardware
2. Need to recreate the existing NSX-V topology in NSX-T
3. Need to manage security policies (if the current environment uses DFW/micro-segmentation)

==================================================================================================================================================

Use Case: Existing NSX-V Topology






Why did we decide to go with Independent Bridging?

Most VMs in the customer environment have multiple IPs used for different services. These IPs are part of different subnets, and hence different logical switches; so, to migrate a single workload from NSX-V to the NSX-T setup, all associated logical switches (sometimes 8-10) must be stretched to the NSX-T environment.

To facilitate this, we needed to stretch multiple VXLANs at a time.

      The NSX-T-only bridging option supports one network stretch per edge node at a time.

      With the independent bridging option, we could achieve this without deploying multiple edge nodes, using only 2 edge nodes for active & standby purposes.

Note: This use case can also be achieved with the VMware HCX tool; the above solution makes sense when HCX is not an option, whether because of license cost or any other limitation.


==========================================================================================================================================

Base build – vCenter & NSX-T

We created a new logical setup for NSX-T along with a new vCenter, within the same physical DC.

High-level steps: vSphere Front

1.         We identified less-utilized clusters & moved 3 ESX nodes out of the existing NSX-V clusters to the virtual DC level.

2.         Disconnected those nodes from the existing vCenter.

3.         Deployed a new vCenter on one of the ESX hosts.

4.         Created a virtual DC in the new vCenter.

5.         Added all 3 ESX hosts to the virtual DC.

6.         Created a cluster named "NSX-T Cluster" & moved all 3 nodes into that cluster.

7.         Created a distributed switch & added all ESX hosts to it.
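
For reference, steps 4-6 can also be scripted. Below is a minimal pyVmomi sketch under assumed hostnames and credentials (new-vcenter.lab.local and esx01-03.lab.local are placeholders); the GUI flow above achieves the same result.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical hostnames/credentials - adjust to your environment.
ctx = ssl._create_unverified_context()   # lab only; verify certificates in production
si = SmartConnect(host="new-vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Step 4: create the virtual DC
dc = content.rootFolder.CreateDatacenter(name="NSXT-DC")

# Steps 5-6: create the cluster, then add the 3 ESX hosts into it
# (adding a host directly to the cluster also lands it in the virtual DC)
cluster = dc.hostFolder.CreateClusterEx(name="NSX-T Cluster",
                                        spec=vim.cluster.ConfigSpecEx())
for esx in ("esx01.lab.local", "esx02.lab.local", "esx03.lab.local"):
    spec = vim.host.ConnectSpec(hostName=esx, userName="root", password="***",
                                force=True)   # real runs may need sslThumbprint handling
    cluster.AddHost_Task(spec=spec, asConnected=True)   # returns a task; wait as needed

Disconnect(si)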


High-level steps: NSX Front:

1.         Installed the NSX-T 3.2 OVA on one of the nodes.

2.         Configured the new vCenter as a compute manager.

3.         Configured a 3-node management cluster.

4.         Prepared the 3-node ESX cluster for NSX.

5.         Configured the new topology in NSX-T to match NSX-V.
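
As an illustration, step 2 (registering the compute manager) can also be done against the NSX-T Manager REST API. A minimal sketch, with placeholder hostnames and credentials and an elided vCenter thumbprint:

import requests

nsx = "https://nsxt-mgr.lab.local"   # placeholder NSX-T Manager FQDN
body = {
    "server": "new-vcenter.lab.local",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "***",
        "thumbprint": "AA:BB:...",   # thumbprint of the vCenter certificate (elided)
    },
}
# POST /api/v1/fabric/compute-managers registers the vCenter with NSX-T
r = requests.post(f"{nsx}/api/v1/fabric/compute-managers", json=body,
                  auth=("admin", "***"), verify=False)
r.raise_for_status()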




L2 Bridging Readiness:

Physical level prerequisites & recommendations

      The physical VLAN being used for the L2 bridge must be trunked on the NSX-T as well as the NSX-V hardware/ESX interfaces where your servers/VMs & the DLR (used for L2 bridging) reside.

      The mapping should be one to one, that is, 1 VLAN to 1 logical switch/segment.

      The bridge instance gets created on the specific ESX host where the DLR resides. However, if the ESX host containing the bridge instance fails, the NSX controller will move the bridge instance to a different ESX host and push a copy of the MAC table to the new bridge.

Since bridging happens on a single ESX server, it is always recommended to keep different bridge-enabled DLRs on different hardware to improve throughput.

NSX-V Readiness - considering the base build is already completed

      The logical switch & the DPG (physical VLAN used for the L2 bridge) should be part of the same virtual distributed switch (vDS).

      Create the logical switch.

      Attach it to the DLR with a LIF.

      Go to DLR > Manage > Bridging.

         Provide the logical switch & distributed port group name here. (Note: if the DPG & LS are not part of the same vDS, they will not show up here.)

         Once you publish your changes & bridging is enabled, the logical switch will reflect the "Routing Enabled" option, as shown in the figure.
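
For completeness, the same DLR bridge can be configured through the NSX-V REST API. The sketch below follows the bridging endpoint from the NSX-V API guide; the object IDs (edge-57, virtualwire-5, dvportgroup-41) are placeholders, so verify the IDs and XML schema against your 6.4 release.

import requests

nsxv = "https://nsxv-mgr.lab.local"   # placeholder NSX-V Manager FQDN
edge_id = "edge-57"                   # DLR edge ID
xml = """<bridges>
  <enabled>true</enabled>
  <bridge>
    <name>bridge-v1608</name>
    <virtualWire>virtualwire-5</virtualWire>      <!-- logical switch ID -->
    <dvportGroup>dvportgroup-41</dvportGroup>     <!-- DPG-V1608 moref -->
  </bridge>
</bridges>"""
# PUT /api/4.0/edges/{edgeId}/bridging/config replaces the bridge configuration
r = requests.put(f"{nsxv}/api/4.0/edges/{edge_id}/bridging/config", data=xml,
                 headers={"Content-Type": "application/xml"},
                 auth=("admin", "***"), verify=False)
r.raise_for_status()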




NSX-T Readiness - considering the base build is already completed

 Create an edge bridge profile.

 Map it to the edge cluster.

 Choose the primary & standby edge nodes used for bridging.


 Create an overlay segment and save it, in this case "Bridge-Segment".

 No need to connect a gateway & subnet, since we are going to map this to the VLAN-backed underlay.


 Edit & expand Additional Settings and add the edge bridge profile to this segment, as shown in the diagram.

 Add the edge bridge profile we created, select the VLAN-backed transport zone & mention the VLAN ID. This is the main point where we map the overlay segment to the underlay to make the L2 bridge work; a scripted sketch of these steps follows.
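
The NSX-T steps above (edge bridge profile, overlay segment, bridge mapping) can also be expressed through the Policy API. A minimal sketch, assuming placeholder edge-node and transport-zone paths (look yours up under /policy/api/v1/infra before running):

import requests

nsx = "https://nsxt-mgr.lab.local"
auth = ("admin", "***")
ep = "/infra/sites/default/enforcement-points/default"   # default enforcement point

# 1. Edge bridge profile: primary & standby edge nodes (paths are placeholders)
profile = {"edge_paths": [
    f"{ep}/edge-clusters/EC-1/edge-nodes/EN-1",   # primary
    f"{ep}/edge-clusters/EC-1/edge-nodes/EN-2",   # standby
]}
requests.put(f"{nsx}/policy/api/v1{ep}/edge-bridge-profiles/bridge-profile-1",
             json=profile, auth=auth, verify=False).raise_for_status()

# 2. Overlay segment with no gateway/subnet, mapped to VLAN 1608 via the profile
segment = {
    "transport_zone_path": f"{ep}/transport-zones/OVERLAY-TZ-UUID",
    "bridge_profiles": [{
        "bridge_profile_path": f"{ep}/edge-bridge-profiles/bridge-profile-1",
        "vlan_transport_zone_path": f"{ep}/transport-zones/VLAN-TZ-UUID",
        "vlan_ids": ["1608"],
    }],
}
requests.put(f"{nsx}/policy/api/v1/infra/segments/Bridge-Segment",
             json=segment, auth=auth, verify=False).raise_for_status()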


       Lastly, we need to log in to the vCenter environment of NSX-T.

       Turn ON Promiscuous mode & Forged transmits for the ALL-Trunk-port-group which we used for the edge nodes' uplinks during our base-build configuration.

       Without this setting, the vSwitch/port group will only forward traffic to VMs (MAC addresses) which are directly connected to the port group; it won't learn MAC addresses which, in our case, are on the other side of the bridge.
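
If you prefer to script this vDS change, here is a pyVmomi sketch (the port-group object lookup is omitted; pass in the ALL-Trunk-port-group object from your inventory):

from pyVmomi import vim

def enable_bridge_security(pg):
    """Enable promiscuous mode & forged transmits on a distributed port group
    (pg is a vim.dvs.DistributedVirtualPortgroup looked up from inventory)."""
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion     # required for reconfigure
    port_policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_policy.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
        allowPromiscuous=vim.BoolPolicy(value=True),
        forgedTransmits=vim.BoolPolicy(value=True),
    )
    spec.defaultPortConfig = port_policy
    return pg.ReconfigureDVPortgroup_Task(spec=spec)  # wait on the task as needed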


NSX Bridging Validation: Independent Bridging – logical view



This diagram only depicts a single logical switch (10.89.208.0/27) stretched from the DLR to the edge node via the physical L2 VLAN 1608.

1. Created an L2 bridge on the NSX-V side between Bridge-logical-switch & DPG-V1608.

2. Created an L2 bridge on the NSX-T side between Bridge-Segment & DPG-V1608.

3. With the above approach, we have stretched the broadcast domain of NSX-V (10.89.208.0/27) to NSX-T via the physical network (VLAN 1608).




Workload migration from NSX-V to NSX-T:

  • Shut down the VM at the source/NSX-V side.
  • Unregister the VM from the old vCenter & register the same in the new vCenter. (The same datastores should be mapped to the ESX hosts.)
  • Replace the existing logical switch (NSX-V) with the new bridged segment (NSX-T).
  • Power on the VM at the NSX-T side.

The above approach demands VM downtime; you may also do a vMotion if downtime is not an option & the vMotion prerequisites are in place.

 At this moment, the gateway still exists at the DLR.
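
Steps 1, 2, and 4 of this flow can be scripted with pyVmomi, as sketched below; src_vm comes from a session to the old vCenter and the target objects come from the new one (all placeholders), while the vNIC swap of step 3 is done in the UI or via a separate reconfigure call.

from pyVmomi import vim

def migrate_vm(src_vm, target_dc, target_pool, target_host):
    """Sketch: re-home a powered-off VM between vCenters via shared datastores."""
    src_vm.PowerOffVM_Task()                      # step 1: shut the VM (wait for task)
    vmx_path = src_vm.config.files.vmPathName     # e.g. "[datastore1] app01/app01.vmx"
    src_vm.UnregisterVM()                         # step 2a: remove from the old vCenter
    # step 2b: register on the new vCenter; the datastore must be visible there
    return target_dc.vmFolder.RegisterVM_Task(path=vmx_path, asTemplate=False,
                                              pool=target_pool, host=target_host)
    # step 3: re-attach the vNIC to the bridged NSX-T segment (UI / reconfigure)
    # step 4: PowerOnVM_Task() on the newly registered VM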


==================================================================================================================================================

MAC Learning Validation - Log in to the ESX host where the DLR resides to check MAC address learning.

net-vdr -b --mac edge-57             // (change the edge ID as per your environment)

  • MACs of VMs (connected to the NSX-V overlay) are being learned via the overlay segment.
  • MACs of VMs (connected to the NSX-T overlay) are being learned via the VLAN segment.


==================================================================================================================================================

Testing - A VM was moved to NSX-T and attached to the NSX-T overlay segment.

VM IP - 10.89.208.5

Gateway – 10.89.208.30


Result - Able to ping 10.89.208.1, which is connected to the NSX-V overlay segment.

  • Remember, we have not assigned any subnet to the NSX-T overlay segment; still, we are able to assign an IP from the NSX-V overlay subnet & ping within the broadcast domain, all happening over the L2 bridge.
==================================================================================================================================================




Network Un-Stretch / gateway migration from NSX-V to NSX-T:

  1. Once all VMs are migrated to the new environment, we are ready to move the gateway from NSX-V to NSX-T.
  2. Attach the bridge segment to the Tier-1 gateway and assign the gateway address. Do not publish this for now.
  3. Disable the LIF at the DLR (NSX-V) and then publish the gateway configuration at the NSX-T side.
  4. Route distribution for the connected gateway on the NSX-T side should already be in place, taken care of during the base build itself.

Note: This gateway migration is a disruptive action; we should plan the LS disconnect at the DLR and the segment connect at the Tier-1 accordingly to minimize the downtime.
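
Step 2 via the Policy API would look roughly like the sketch below (the Tier-1 ID is a placeholder; the gateway IP matches our example subnet). Applying this PATCH is the disruptive "publish" moment, so sequence it right after disabling the DLR LIF:

import requests

nsx = "https://nsxt-mgr.lab.local"
patch = {
    "connectivity_path": "/infra/tier-1s/T1-GW",           # target Tier-1 gateway
    "subnets": [{"gateway_address": "10.89.208.30/27"}],   # gateway moved off the DLR
}
# PATCH the existing segment; this publishes the new gateway on NSX-T
requests.patch(f"{nsx}/policy/api/v1/infra/segments/Bridge-Segment",
               json=patch, auth=("admin", "***"), verify=False).raise_for_status()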


PS: Any Improvement points or suggestions are welcome.

-----Thank You-----

Prashant Pandey
