Welcome to our presentation. My name is Paul Mancuso. I'm a Technical Product Manager in the Networking and Security Business Unit. We're here to discuss how VMware NSX is designed and deployed over a Cisco ACI underlay. Let's get started. Our agenda today: we'll first cover a little of our next-generation networking concepts and vision, then discuss NSX design ideals, the basic constructs we've been stating ever since the inception of NSX. NSX over a Cisco ACI underlay will be the main focus of the session: what our design ideals are, what is required of the ACI fabric, and which service infrastructure abstraction objects need to be created in the ACI underlay in order to normalize its fabric and then deploy NSX on top of it. Then we have a short section discussing our design ideals and the advantages of the design we've constructed for this presentation. All right. Ever since we began, NSX has essentially had this idea of a platform structure. It begins, of course, at the very top with the cloud consumption model: a management plane with vCenter and NSX Manager in our NSX for vSphere product, a control plane with the NSX Controllers, and a series of data plane components that provide high-speed switching, routing, and firewalling, plus an edge appliance with a variety of service functions that can be adapted for use within the virtual environment. These are load balancing, NAT, DHCP, as well as its own edge routing and firewall service model. All of this overlays any infrastructure, any highly interoperable switch fabric, if you will. This has been our model since the beginning, but it has grown. Our approach now extends beyond what was once delivered within a single data center site. VMware's software-based approach delivers a networking and security platform that enables our customers to connect, secure, and operate an end-to-end architecture to deliver services and applications wherever they may land within the environment. This enables designing and building the next-generation, policy-driven data center that connects, secures, and automates traditional environments such as hypervisor-based workloads, as well as the newer microservices-based platforms with containers and so on. This gives us a range of deployment targets that we can enable and utilize. We can embed security into the platform, compartmentalizing the network through micro-segmentation, encrypting in-flight data, and automatically detecting and responding to a variety of security threats. We now also deliver an SD-WAN solution that provides full visibility, metrics, control, and automation of all endpoints. This integrates with VMware's management, analytics, and automation platform, so we've essentially closed the loop of defining, deploying, and monitoring all of your business workload needs across a variety of locations, sites, and endpoints. Therefore, we've taken the traditional platform ideals you saw on the previous slide, that stacked architecture of management, control, and data planes, and extended it across a variety of locations and sites, inclusive of the public cloud. So the private cloud, as we've noted before, is any infrastructure on which we can provide a platform allowing secure application deployment, and that is now extended into the public cloud, as you can see here with Amazon, Azure, IBM, and so on.
Our control cluster still provides the same control plane service model we've talked about in the past, in case you've seen any of our previous presentations. There's the NSX Manager along with a variety of compute management capability now, inclusive of course of NSX and vCenter, and a cloud services management model we can adopt across OpenStack, or even by utilizing our own vRealize Automation. So there's a variety of toolkits to orchestrate, script, and automate your environment. This brings us back to the concept of the Virtual Cloud Network we talked about a moment ago. The Virtual Cloud Network is the network model for the digital era. We believe our customers' apps are going to be running everywhere. Virtualizing the network fabric affords the cloud teams the ability to operationalize and automate the deployment of applications, their security, and all of their day-two operational tasks. We started by building this model in a single private data center. We've now extended it across multiple data centers and sites, and now into the public cloud. So, let's discuss our basic NSX design ideals. Our NSX platform design over any underlay has afforded us the ability to tell the customer that you can leverage any underlay, any infrastructure. There's no requirement or dependency placed upon the switch fabric. If that switch fabric is a legacy architecture, that's fine; we can deploy NSX on it, and in fact even mitigate some migration concerns by deploying NSX on top of it and then moving workloads over onto a new fabric once it's been established. The idea is that we provide the same type of automation, set of service functions, and integrated NSX service deployment over any style of fabric; we don't really have a necessity or dependency upon it. All of our services are laid on top of a network virtualization layer, which is our NSX platform. We can provide virtual-to-virtual as well as physical-to-virtual integrated security services and communication needs. The inherent services the NSX platform delivers are local to the application. This is delivered through that edge service model of ours, as well as all of the security functions inherent in the platform, whether that's the distributed firewall, the context-aware firewall, or a simplified model for service insertion of higher-level security functions from our third-party ecosystem. The customer benefits from not having a dependency on the infrastructure. This gives them a choice of any underlay, as well as the ability to extend the life of their existing underlay, or of the underlays they eventually purchase and deploy NSX on top of. There's no need to worry about which hardware service functions are applicable, or about churning hardware in order to gain access to newer services that come with the next generation of fabric hardware. Our service model is deployed with the application's dependencies inside that software layer. Our basic NSX design has not changed. In other words, even though we're doing a session today on NSX over an ACI underlay, we're not changing the basic constructs of our design. What you see here is exactly what we've been telling our customers since day one.
We have three functional cluster groupings, if you will, of compute. There's a management cluster for the service needs inside the hypervisor-based environment, for managing the NSX and vCenter environments. We have a compute cluster where all the basic needs of the workloads are deployed and computed. Then of course we have an edge cluster for negotiating North-South and East-West communication. Now, these three functions don't necessarily have to be independent clusters, but we portray them that way here for simplicity. Very often, customers with smaller deployments don't need to stretch these three functional ideas across three separate clusters. Many of our customers will deploy, say, management and edge in a single cluster and then create a separate compute cluster to scale out workloads as the need arises. As I mentioned before, we have deployed NSX, on that basic design we just discussed, across a variety of different infrastructures. As you see here, we've got all three of the main ones that are out there. There's the legacy, Layer 2, VLAN-backed pod-style architecture. Next to it we have the newer generation with a leaf-spine architecture; this one does show a core on top of it, but the idea is that Layer 3 is no longer demarcated at the distribution layer, it's now demarcated at the ToR. So VLANs terminate at the top of the rack and no longer extend across the entirety of the fabric itself. And if the customer wants to utilize one of the newer VXLAN-backed fabrics, that's fine as well. That gives the customer a view of Layer 2 anywhere, even though technically Layer 3 can be demarcated at the top-of-rack switch itself. The idea is that we can deploy NSX on any one of these three prototypical environments. The majority of our deployments were initially done on that legacy-style VLAN-backed environment, and as time has gone on and customers have purchased some of the newer fabrics, they have graduated toward either Layer 3 at the top of the rack or possibly a VXLAN-based fabric. Either way, all we need is a highly interoperable switch fabric. What we ask of the fabric is really meager: just high-speed, low-latency IP communication with the ability to adjust the MTU to carry our NSX overlay, whether that's VXLAN for NSX for vSphere or Geneve for NSX-T. So, let's go into the heart of our design. What do we need to do to normalize the ACI fabric and create a set of objects inside of it to deploy NSX and connect our hypervisor hosts, whether they're vSphere or KVM if an NSX-T deployment is involved? This gives us the 10,000-foot view of what's involved, essentially from the endpoint's perspective. We're going to have a series of endpoint groups that we'll require of an ACI tenant, and we'll get to what that's about. Those endpoint groups will carry the kernel-level networking needs of the infrastructure: management, vMotion, IP storage, and the transport. The transport, of course, carries VXLAN for NSX for vSphere (NSX-V) or Geneve for NSX-T Data Center.
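Coming back to that fabric requirement for a moment: since the MTU adjustment is really the only hard ask we make of the underlay, here's a quick back-of-the-envelope sketch, not from the slides, of the encapsulation overhead involved. The helper and its numbers are purely illustrative; the 1600-byte floor reflects the usual published NSX guidance.

```python
# Rough overlay MTU arithmetic (illustrative only).
# VXLAN outer headers: Ethernet 14 (+4 with an 802.1Q tag) + IPv4 20 + UDP 8 + VXLAN 8.
# Geneve uses the same outer stack with an 8-byte base header plus optional TLVs.

def underlay_mtu_needed(guest_mtu: int = 1500,
                        outer_dot1q: bool = True,
                        geneve_opt_bytes: int = 0,
                        encap: str = "vxlan") -> int:
    """Return the minimum MTU the physical fabric should carry for this overlay."""
    outer_l2 = 14 + (4 if outer_dot1q else 0)
    outer_l3_l4 = 20 + 8                      # IPv4 + UDP
    tunnel_hdr = 8 + (geneve_opt_bytes if encap == "geneve" else 0)
    return guest_mtu + outer_l2 + outer_l3_l4 + tunnel_hdr

if __name__ == "__main__":
    for encap in ("vxlan", "geneve"):
        need = underlay_mtu_needed(encap=encap)
        # NSX documentation generally calls for at least 1600 bytes on the transport
        # network; jumbo frames (9000) give comfortable headroom, which also covers
        # ACI's own iVXLAN encapsulation on its fabric links.
        print(f"{encap}: guest 1500 -> underlay needs >= {need} (configure 1600+, ideally 9000)")
```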
Then we'll also construct a routed domain and the associated features to provide the ACI L3Outs, so that the NSX overlay is disaggregated from the infrastructure, just as our prototypical design calls for in our software network virtualization ideal. Our edges will provide that transit communication to and from the top-of-rack switches. The design supports the attachment of all hosts and defines a physical domain for host attachment. We treat all of the hosts in this design as bare metal. We don't use any of the proprietary extensions of ACI's virtual machine management, as those are not supported. VLANs, switch interfaces, and policies are all constructed inside ACI's fabric access policies, and we'll discuss those. In addition, we'll look at which domains need to be constructed; prototypically there are two, a physical one and an external one. Then we'll talk about the creation of an application profile that will contain the endpoint groups as well as all the networking needs. So, we break it down into three basic ideas. First, how do we treat the ACI fabric? As I've mentioned, it's a VXLAN-backed fabric, which essentially gives us a concept of pervasive Layer 2, because ACI treats all endpoints as /32 routed hosts of sorts. We're also going to create a single ACI tenant. Now, you could use the ACI common tenant, but we would prefer that you create your own tenant, primarily for RBAC. We can leverage ACI's role-based access management services, which work quite well, so we would definitely do so here: create a separate tenant in case you wish to control who manages the services of the NSX tenant on the ACI environment, or in case you wish to segregate some of its traffic through Cisco ACI's tenant model. Our design also utilizes very few ACI contracts. We don't really need many of them, because remember, security is going to be enforced through the inherent security services in our network virtualization platform. We will map static endpoints for the vSphere hosts, or whichever hypervisor hosts you may be using in case this is an NSX-T deployment with KVM involved. We'll also need to map the NSX Edges to the ACI border leaves. So pick two ACI leaves to become the border leaves; they will provide routed communication into the NSX overlay and also routed communication into the interior of the data center. Therefore, the basic initial set of fabric infrastructure objects we'll need to create is the following. There will be a physical domain and an external routed domain. We need at least one external routed domain; you will probably have two, one for the north side communicating into the data center itself and one communicating south into the NSX overlay. You'll create a series of VLAN pools: one for the infrastructure VLANs, in other words to provide encapsulation for those kernel elements we talked about before, management, IP storage, vMotion, and overlay transport, and then a separate VLAN pool for the external routed domain. Now, there's no hard requirement here; we could use the same VLAN pool for both, but operationally it's probably best to separate them.
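To make that concrete, here is a minimal sketch of what creating those pools and domains could look like against the APIC REST API with Python. The APIC address, credentials, names, and VLAN ranges are all placeholders, and the object classes (fvnsVlanInstP, physDomP, l3extDomP, infraRsVlanNs) are as I recall them from the ACI object model, so treat this as illustrative rather than a tested configuration script.

```python
import json
import requests

requests.packages.urllib3.disable_warnings()      # lab use only: self-signed APIC cert

APIC = "https://apic.example.lab"                  # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

def apic_login() -> requests.Session:
    """Log in to the APIC and return a session carrying the auth cookie."""
    s = requests.Session()
    s.post(f"{APIC}/api/aaaLogin.json", json=AUTH, verify=False).raise_for_status()
    return s

def post(s: requests.Session, path: str, payload: dict) -> None:
    """Push one managed-object payload to the given APIC path."""
    s.post(f"{APIC}{path}", data=json.dumps(payload), verify=False).raise_for_status()

# Static VLAN pools: one for the four infrastructure networks (mgmt, vMotion,
# IP storage, overlay transport) and one for the two edge transit VLANs.
vlan_pools = [
    {"fvnsVlanInstP": {"attributes": {"name": "NSX-Infra-Pool", "allocMode": "static"},
     "children": [{"fvnsEncapBlk": {"attributes": {"from": "vlan-1601", "to": "vlan-1604"}}}]}},
    {"fvnsVlanInstP": {"attributes": {"name": "NSX-Ext-Pool", "allocMode": "static"},
     "children": [{"fvnsEncapBlk": {"attributes": {"from": "vlan-1651", "to": "vlan-1652"}}}]}},
]

# One physical domain for the bare-metal host attachment and one external routed
# domain for the L3Out toward the NSX edges; each references its VLAN pool.
domains = [
    {"physDomP": {"attributes": {"name": "NSX-Phys-Dom"},
     "children": [{"infraRsVlanNs": {"attributes": {
         "tDn": "uni/infra/vlanns-[NSX-Infra-Pool]-static"}}}]}},
    {"l3extDomP": {"attributes": {"name": "NSX-Ext-Dom"},
     "children": [{"infraRsVlanNs": {"attributes": {
         "tDn": "uni/infra/vlanns-[NSX-Ext-Pool]-static"}}}]}},
]

if __name__ == "__main__":
    session = apic_login()
    for pool in vlan_pools:
        post(session, "/api/mo/uni/infra.json", pool)   # VLAN pools live under uni/infra
    for dom in domains:
        post(session, "/api/mo/uni.json", dom)          # domains live directly under uni
```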
Then we'll also create a single Attachable Access Entity Profile (AAEP) for all of it. This provides the negotiated abstract connection, a logical connectivity if you will, within the ACI system, so that we can attach the domains to the fabric policy objects we'll need to create. Then comes NSX over ACI. At the point where we've created all of those fabric objects, which we'll walk through shortly, you'll create a single ACI tenant. As we mentioned before, this should be a separate tenant, not the ACI common tenant. We'll create a single application profile. We have no need to create more, because we're going to establish a single infrastructure network for all vSphere, NSX, and KVM communication needs through that single tenant. We'll create four EPGs; obviously the vMotion one is only required if vSphere is involved. Along with those four EPGs, we'll also create four bridge domains, a one-to-one relationship with the endpoint groups. We'll give each of those bridge domains a specific subnet, and we'll make sure they're all associated to a single VRF, a single virtual routing and forwarding instance within ACI. We'll need at least one L3Out for the communication between ACI and the NSX overlay, and then another L3Out to establish communication into your data center fabric. Now, we don't talk much about your needs for the northbound L3Out into the data center, because you may have other workloads residing outside the NSX overlay, so your needs there will be customary to your environment. So, let's talk a little about the ACI fabric policies for NSX. There are fabric policies, which deal with the general switch fabric itself, and then there are fabric access policies. In the fabric policies we don't really have much need; I only mention them here so you realize why we don't discuss them much, because most of those are custom to your own environment. In our design we simply stipulate that you're probably going to want DNS, some management access, in-band versus out-of-band, and so on, for your ACI manageability, and we don't play a role in determining any of those needs. You'll have syslog, Network Time Protocol, and so on; you'll need to establish all of that. There is one set of system policies we'll talk about a little later that deals with BGP route reflection, but we'll leave that for later. The idea, though, is that you can leverage some of ACI's capabilities, such as SPAN. In case you wish to capture traffic to visualize what's being transmitted inside the NSX overlay, you can use SPAN source and destination groups to pick up the packets coming out of a hypervisor TEP, or tunneling endpoint, destined to another TEP, and then visualize the communication; you can strip away Cisco's VXLAN encapsulation that's placed on top of our VXLAN encapsulation. We'll have more discussion later of how the overlay is interpreted and carried across Cisco ACI's fabric. Then, there are fabric access policies.
We'll have another slide coming up with a little more detail in this regard. But essentially, we only need you to use the defaults; whatever the switch fabric defaults to, we're fine with. We don't really have any particular needs, but you still need to construct the various objects so that ACI understands where hosts are interconnected and where the domain boundaries are (ACI's domain boundaries, not ours; that is their construct and terminology), and how it will use its VLAN pools for the attachment of the hosts to the fabric. So we'll need to create those policies. We do suggest that you use at least one of the two discovery protocols, LLDP or CDP. If you're familiar with them, you know they provide inherent value when troubleshooting connectivity between a host's uplinks and your top-of-rack switches. Now, creating the fabric access policies themselves: this slide is sort of an all-in-one graph, if you will, of all the different fabric abstraction objects you're going to be required to create. What should be noted from the start is that these objects aren't something we require; they're something the ACI fabric requires you to manufacture. You would have to do this regardless of whether NSX was in the picture or not. These are required objects for attaching hosts and systems to the environment, so that the ACI fabric knows which domain, and ultimately which VLAN pool, will provide encapsulation of their communication from their uplinks into the top-of-rack switches. Starting from the right side of the slide, we have the leaf policies; we'll start with the lowest portion inside that set of concentric enclosures of slide objects. The leaf policies are those LLDP and CDP policies, as well as a variety of other switch-level protocols you may wish to enable globally on your switches. This is about establishing the global-style policies that have to be enabled. For us, again, you can leave the defaults; they're perfectly fine, we have no needs there. Next, you'll need to group those together. The object is still required, because ACI will request it as a required component to be associated to something we've called the NSX switch policy group. So make sure you establish that object. Then, the next object you create is the NSX-Leaf-Profile object. That NSX-Leaf-Profile object will be associated with something called a leaf selector object; the leaf profile requires you to assign a leaf selector. In addition, you'll also associate that switch policy group. So you'll have your global protocols (plus any others you've enabled for your own needs) from those leaf policies, you'll have the leaf selector, which determines which ACI leaf switch IDs these policies will be attached to, and then that leaf profile will be used as an association to our interface policy objects, as you see here. So let's start there. Once again, you'll have to create a series of interface policies.
Now, the leaf policies we talked about before were analogous ideas for the switch; these are per-interface. So, which interface policies do we wish to enable? We would suggest, of course, that you enable LLDP and/or CDP. Be aware that if you use them together, Cisco ACI prioritizes one over the other, but nevertheless, those are the interface policies. From there, you'll create a very similar concept to what we did with the switch, except with the access ports: you'll create a port policy group that groups all of those policies together. Then you'll create a host interface profile, which will be associated with that port policy group. You'll also create something I've named NSX host ports; sorry, NSX-Host-Ports is the policy name I gave it, but the object is the access port selector. That access port selector basically says which interfaces are in scope. Cisco has abstracted the interfaces away from the leaves themselves, so you have to know where you're going to connect the physical servers' uplinks into the leaf switch interfaces, and which ones in particular. You'll end up with, say, ESXi host one connected to port 1 of leaf 101 and port 1 of leaf 102. In essence it uses the same access port number on both of those switches, and then you assign the port selector that ensures that port is enabled, and on the leaf selector object you make sure you've selected ACI leaves 101 and 102. The idea is that it's all abstracted away: you've got policies for the leaf globally, you've got policies for the interfaces, and, as you've seen here, we only ask you to create them because ACI requires them. So you need to create that series of objects. Now, you could have created the next few objects in any order; you could have created the domain objects first if you'd liked. The domain objects essentially establish where you're connecting, and that's more of a logical "where"; in this case, the "where" is bare metal. We're basically going to say that these vSphere or KVM hosts are to be treated as bare-metal endpoint systems. So you'll create a physical domain, as opposed to any of the virtual machine management domains or the L2 or L3 domains; we'll talk about the L3 one shortly. That physical domain will contain the infrastructure systems themselves for all the clusters, so they'll all be associated to it in some way very shortly. The L3 domain you see here is going to be used for the routed external domain that we're going to create inside of the ACI tenant. All right. Next we have the NSX AAEP, the Attachable Access Entity Profile. This profile object provides the go-between for how the domains associate to the switch fabric policies we just talked about. In particular, it's essentially associated to the port policy group object, because the port policy group object is contained by the leaf interface profile object, and the leaf interface profile object is associated to the leaf profile object.
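As a rough illustration of how these access-policy objects nest, here's a sketch of the payloads that could be posted to the APIC for an LLDP interface policy, a port policy group tied to the AAEP, an interface profile with a port selector, and a leaf profile with a leaf selector. Every name, node ID, and port number is a placeholder, and the object classes are as I recall them from the ACI object model, so verify against your own APIC before using anything like this.

```python
import json

AAEP_DN = "uni/infra/attentp-NSX-AAEP"        # the single AAEP discussed earlier (placeholder DN)

access_policies = {
    # Interface-level protocol policy: turn LLDP on (defaults are fine otherwise).
    "lldp_policy": {"lldpIfPol": {"attributes": {
        "name": "NSX-LLDP-On", "adminRxSt": "enabled", "adminTxSt": "enabled"}}},

    # Port policy group: bundles the interface policies and points at the AAEP.
    "port_policy_group": {"infraAccPortGrp": {"attributes": {"name": "NSX-Host-PortGrp"},
        "children": [
            {"infraRsAttEntP": {"attributes": {"tDn": AAEP_DN}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "NSX-LLDP-On"}}},
        ]}},

    # Interface profile + access port selector: which ports (eth1/1 here) get the group.
    "interface_profile": {"infraAccPortP": {"attributes": {"name": "NSX-Host-IntProf"},
        "children": [{"infraHPortS": {"attributes": {"name": "NSX-Host-Ports", "type": "range"},
            "children": [
                {"infraPortBlk": {"attributes": {"name": "blk1", "fromCard": "1", "toCard": "1",
                                                  "fromPort": "1", "toPort": "1"}}},
                {"infraRsAccBaseGrp": {"attributes": {
                    "tDn": "uni/infra/funcprof/accportgrp-NSX-Host-PortGrp"}}},
            ]}}]}},

    # Leaf profile + leaf selector: which leaves (101-102 here) use that interface profile.
    "leaf_profile": {"infraNodeP": {"attributes": {"name": "NSX-Leaf-Profile"},
        "children": [
            {"infraLeafS": {"attributes": {"name": "NSX-Leaf-Selector", "type": "range"},
                "children": [{"infraNodeBlk": {"attributes": {
                    "name": "blk1", "from_": "101", "to_": "102"}}}]}},
            {"infraRsAccPortP": {"attributes": {
                "tDn": "uni/infra/accportprof-NSX-Host-IntProf"}}},
        ]}},
}

if __name__ == "__main__":
    # Each payload would be POSTed under uni/infra (the port policy group lives under
    # uni/infra/funcprof) after an aaaLogin, as in the earlier sketch.
    print(json.dumps(access_policies, indent=2))
```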
You can now see how you have the leaves themselves, with their various ports and the protocols you've enabled inside each of those access policy objects, all associated to the Attachable Access Entity Profile, which in turn gets associated to your specific domain objects at a later time. All right, so we'll talk more about that right now. When you create that object, you'll be required to associate it to the physical domain and any other domains you wish to use it for. Now, we talked about the use of an ACI tenant. The ACI tenant is ACI's unit of role-based access control as well as a container for all of the policies specific to a deployment within your environment. That deployment could be some business unit, et cetera, or it could simply be non-prod versus prod, or dev, for instance. Whatever it may be, for us that tenant will simply be a single container tenant for the entire NSX overlay deployment, and we would suggest you then put the prod and dev tenancy servicing inside of the NSX overlay, avoiding the necessity to create another series of all those objects we just finished talking about, and reducing complexity. Because remember, the objects we just covered, everything on this page, are required whether NSX is in the picture or not. In fact, in an ACI model without NSX, you would constantly be making modifications to these, scaling them out, and so on, on a relatively continuous basis. So we're essentially simplifying the usage of ACI's abstract model by requiring these to be created only once. The NSX deployment will then manage all of your tenancy and the services related to the applications themselves, as well as the security needs of those applications. The fabric itself provides attachment for the bare-metal hosts, and we're going into that right now. All of that will be done through a single ACI tenant. Now, that single ACI tenant we spoke of is going to need a series of objects created as well. There's an application profile; that's the top-level object inside the tenant that contains the endpoint group objects for the application, which are either the established tiers or, in our case, the necessary infrastructure communication. So we're going to take those endpoint groups, map them to VLAN values, and use each as a single grouping for all of the infrastructure communication of a respective type. For instance, we'll create one for management, one for IP storage, one for vMotion, and one for the transport, or overlay. Now, the vMotion one, as noted before, is only necessary if vSphere is part of the picture; if you're using NSX-T with KVM and vSphere, and vSphere is playing a role, then vMotion is needed. IP storage, of course, is also optional, depending on whether you're using it. Now, when you create those endpoint groups, you'll have to go back to each of them and statically assign a handful of things. First, you'll assign that domain object we spoke of earlier, that physical domain object, which I think I called NSX-ACI-Domain or something like that. You're going to assign that to the endpoint group.
So the domain tells ACI where the endpoint group is connected, and the Attachable Access Entity Profile refers to how it's connected, because it associates the domain to all of your fabric access and leaf policies, right? In addition to that, you're going to create those four bridge domains we spoke of earlier. The bridge domain is how ACI's VXLAN fabric handles BUM traffic and flooding, whether there's a hardware proxy or a flood-and-learn approach, and any other needs around multicast and things of that nature. For us, we suggest you leave the defaults on those bridge domains. So you'll create four bridge domains, leave the defaults, and create a single IP subnet for each of those four bridge domains. You're going to align each of those bridge domains to one of the four EPGs. Since creating the EPGs requires the bridge domains to exist, oddly enough it's probably best to create the networking components first, before the endpoint groups. So you'll create all four bridge domains, and for each you'll create the single subnet prefix that will contain all of the IP needs for that infrastructure network. We suggest you use a /22 or larger; in other words, use the largest prefix you feel your deployment will scale to. If you're using a private IP scheme within this location, which is most likely, you'll usually have the luxury of selecting a very large prefix, so that you can accommodate any scale even if your environment starts off with a dozen to a few dozen hosts and grows to hundreds, or something much grander than that; you'll have the necessary IP addressing available. So we would suggest a /22 at a bare minimum, or a /20, a /16, whatever fits. The idea is that you create that IP prefix, which includes the gateway value. As I mentioned, we're going to put all of the management traffic in one of those groupings, assign a bridge domain for it, leave the bridge domain defaults, and then create the gateway value that establishes the prefix used for that subnet. It also establishes the distributed gateway that ACI uses. Now, we're only using that gateway value in case there is ever a routed need outside of that infrastructure network. Management will probably have that need: you'll probably want management to route in and out of the fabric for accessibility to operational services outside the fabric, or for communication in some manner. The rest probably don't have much of a need, IP storage being the possible exception, depending on where the IP endpoints for your storage arrays sit. Of the four, it's most likely just management, which is why we suggest you simply give all four that standard distributed gateway, even though it probably has no real critical value for anything other than management. At this point you'll also have been required to create the VRF, because when you create the bridge domains you need the VRF in order to assign each bridge domain and its subnet to it. All right. So we're going to use a single VRF for all of our network needs.
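Pulling those tenant pieces together, here's a minimal sketch of how that could look as a single APIC payload: one VRF, four bridge domains each with a /22 gateway subnet, and four EPGs bound to their bridge domains and the physical domain, plus a sample static port binding (the unenforced VRF setting and the static path bindings are discussed just below). The tenant name, subnets, VLAN IDs, and port paths are invented placeholders, and the class names are as I recall them from the ACI object model, so treat it as illustrative.

```python
import json

PHYS_DOM = "uni/phys-NSX-Phys-Dom"      # physical domain created earlier (placeholder)

# name -> (gateway/prefix, static VLAN used on the host uplinks) -- all placeholders
INFRA_NETS = {
    "Mgmt":      ("10.1.0.1/22",  "vlan-1601"),
    "vMotion":   ("10.1.4.1/22",  "vlan-1602"),
    "IPStorage": ("10.1.8.1/22",  "vlan-1603"),
    "Transport": ("10.1.12.1/22", "vlan-1604"),
}

def bridge_domain(name: str, gw: str) -> dict:
    """Bridge domain with default forwarding behavior, one gateway subnet, single VRF."""
    return {"fvBD": {"attributes": {"name": f"BD-{name}"}, "children": [
        {"fvRsCtx": {"attributes": {"tnFvCtxName": "NSX-VRF"}}},
        {"fvSubnet": {"attributes": {"ip": gw, "scope": "private"}}},
    ]}}

def epg(name: str, vlan: str) -> dict:
    """EPG mapped 1:1 to its bridge domain, bound to the physical domain,
    with one example static path (ESXi host uplink on leaf 101, eth1/1)."""
    return {"fvAEPg": {"attributes": {"name": f"EPG-{name}"}, "children": [
        {"fvRsBd": {"attributes": {"tnFvBDName": f"BD-{name}"}}},
        {"fvRsDomAtt": {"attributes": {"tDn": PHYS_DOM}}},
        {"fvRsPathAtt": {"attributes": {
            "tDn": "topology/pod-1/paths-101/pathep-[eth1/1]",
            "encap": vlan, "mode": "regular"}}},
    ]}}

tenant = {"fvTenant": {"attributes": {"name": "NSX"}, "children": [
    # Single VRF; unenforced to begin with, so no contracts are needed inside the tenant.
    {"fvCtx": {"attributes": {"name": "NSX-VRF", "pcEnfPref": "unenforced"}}},
    *[bridge_domain(n, gw) for n, (gw, _) in INFRA_NETS.items()],
    {"fvAp": {"attributes": {"name": "NSX-Infra"},
              "children": [epg(n, vlan) for n, (_, vlan) in INFRA_NETS.items()]}},
]}}

if __name__ == "__main__":
    # Would be POSTed to https://<apic>/api/mo/uni.json after an aaaLogin.
    print(json.dumps(tenant, indent=2))
```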
We're going to use that same VRF later on for our L3Outs; we'll talk about that very shortly. In the very beginning, if you're just piloting NSX over this fabric, we would suggest you simply enable unenforced mode on that VRF for all communication, just to make it easier to establish connectivity. You can go back and enable enforcement later, but remember, within the endpoint groups, management-to-management communication, whether it's vCenter to the hosts, or NSX Manager, once it's deployed, to vCenter or the hosts, all stays inside that single management endpoint group. Therefore there's no necessity for contracts anyway. The contract question comes a little later on, for instance in the case we mentioned before where management needs to communicate through an L3Out to the outside environment; we'll need to establish that setup, and there are a couple of pieces required for it. We also talked about the fact that we're going to treat the hypervisor hosts as bare metal. This means that when you attach them to the fabric, you're going to need to statically assign the ports of their uplinks. So, as we mentioned, host one would go to ACI leaf 101 on port 1 and ACI leaf 102 on port 1; those would be the two uplinks from that host, and you would do this for all of the hosts you set up. You'll also enable encapsulation with a static VLAN, and you'll pick the VLAN from the VLAN pool you associated to the physical domain, because remember, this is the physical domain connectivity setup. So you'll have the endpoint groups mapped to the bridge domains, the endpoint groups mapped to that physical domain, and all of the hosts statically assigned with the respective VLAN value you'll use for each of the four infrastructure services. So, getting to the NSX Edge connectivity portion of this, this is just an overview. As we mentioned before, you'll most likely have at bare minimum two L3Outs. NSX will route into the overlay from the top-of-rack switches using the edge communication we're about to discuss, and then you'll also have the L3Out toward the DC core for external routing. Now, we don't make any suggestions for the DC core; you'll use whatever you feel is best suited for your purposes there, along with any other routing protocol concerns, needs, and policies for those particular aspects. For our side, we provide a preset of ideals we recommend for connectivity to the ACI fabric. We're going to require a single L3 routed outside (L3Out) from the ACI leaves for our edge communication, using standard eBGP connectivity. OSPF is also supported, but our model here uses eBGP. The NSX overlay essentially creates the connectivity to the ACI fabric, and the ACI fabric connects up to the DC core; the ACI fabric in this sense does become a transit network, right? But the way we provide our connectivity to it is a very simplified model.
It's essentially just a top-down approach, where the DC core, or anything within the infrastructure such as hosts, that needs to be routed or bridged toward the NSX overlay will come either through the fabric infrastructure within ACI, or from endpoints outside the data center or outside the ACI fabric through some L3Out, and then into the NSX overlay. We show you how to set this up in a very simple manner, so that there is no real necessity for complex routing protocol profiles and policies. In addition, we do suggest that you have in-band management set up in some form; you'll have to follow your own needs there, and the Cisco ACI guides will walk you through it. Also, we recommend the use of two transit VLANs for that edge communication. Those transit VLANs we talked about before will come from the external VLAN pool that's associated to your external L3 domain. You'll need to create a BGP route reflector policy. You can create timers and a Bidirectional Forwarding Detection (BFD) policy specific to the L3 routed domain we're going to set up shortly, and a single default route leak policy. So you only have to inject that default route into the overlay; we don't really require anything else. If you're going to need something beyond that, then you'll have to go a little beyond what our model shows here, because prototypically we suggest injecting just a default route; in most cases NSX is going to need communication to just about any endpoint, specifically if you have workloads that need Internet communication. This slide shows the initial setup: if you've set up an ACI fabric, you're aware that you'll need to designate two of your spines to be part of a multiprotocol BGP route reflector policy. This permits external routes learned through L3Outs to be distributed internally to ACI tenants. So you will probably have a need to do this. Now remember, because we're only using a default route for ourselves, this policy is really more for your northside communication; it doesn't have much to do with the ACI tenant you're creating for NSX, because you're just going to inject a default route and we really don't need anything else. Going a little deeper, although it looks like the same slide, this one goes into a bit more depth on what's going to be necessary. We'll concentrate and focus more on the NSX overlay connectivity with the edges in the lower portion here. You define the networks accessible by the L3Out EPG; we're going to talk about that. But remember, we really don't need to do anything other than tell the ACI fabric, when configuring it, what the IP subnet prefix for the deployment platform in the overlay will be. So if, say, your model is to use a series of 10.x infrastructure subnet prefixes for the infrastructure, and your NSX-over-ACI tenant will use anything in 172.16.0.0/16, that /16 is the single external route value that ACI really needs to be aware of, so that it can essentially communicate from the ACI fabric into those IP subnets. And an L3Out needs two things.
It needs to understand what is going to be imported and exported into the route table, and it needs a value that says which IP prefixes it's permitted to communicate to; that's what I was just talking about, the external networks for the external EPG. For the NSX overlay, that's the summary, in case there are multiple subnets that need to be summarized; but if you're constructing this from scratch, the idea is that you can consolidate them all into a single prefix. Then there's the default route leak policy: that's the single policy that injects a default route into the NSX overlay, giving it awareness of how to egress all of its communication for anything it isn't aware of within the overlay itself as far as network routing goes. Now, on those ACI border leaves, downstream you have the edges of the edge cluster that will provide ECMP communication to and from the ACI top-of-rack switches. Inside those ACI leaves, you're going to need to construct a series of switched virtual interfaces. Those SVIs provide the routing adjacency as well as the routed hop for the edges themselves for that ECMP communication. As you see here, one ACI leaf will have one set of SVIs, all bound to a specific VLAN value, one of the two transit VLANs we talked about. The SVI itself is essentially reproduced multiple times, so that the SVI policy informs ACI that it needs to establish that SVI communication through a series of two or more external interfaces. Those interfaces are where the uplinks from the hypervisor hosts connect into the top-of-rack switches. So if, as this picture shows, you have two hosts, and those hosts have two uplinks, you're going to need at least two SVIs per border leaf. If I had four hosts, again with two uplinks apiece, I would need that SVI reproduced four times on each of those border leaves. They would all have the same IP for the routed adjacency, for connectivity to and from the edges, but a different value for the port on which that communication will ingress and egress. The SVI also has an encapsulation VLAN value, and that always stays the same as well. So the IP and the VLAN value stay the same; it's the port that differs for each SVI you construct. In this case you'll construct a series of them, as shown, to permit the communication using the VLANs that encapsulate the traffic between the hosts, so that each vSphere host also has a connection to each VLAN on each top-of-rack switch. On the NSX side, we suggest you configure source-ID load balancing for the uplink teaming and avoid the use of vPC. If you recall, our whole concept is to disaggregate the entire NSX infrastructure from anything proprietary inside any of the switch fabrics, providing a commonality of configuration and communication needs, whether you have ACI in one location and a legacy switch architecture in another, or have simply chosen a different switch fabric in another location, maybe just Nexus running NX-OS, whatever.
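Before the screenshot, here's a rough sketch of how that edge-facing L3Out could be expressed through the APIC API: the VRF and external domain references, the border-leaf node profile, SVI path attachments on the transit VLANs with per-path eBGP peers, and the external EPG naming the NSX overlay summary prefix. Node IDs, IPs, VLANs, and AS numbers are placeholders, and the class names are as I recall them from the ACI object model; the default-route leak policy and the contract that actually permits the traffic are left out here and configured separately.

```python
import json

# Per border leaf: transit VLAN, SVI address, host-facing ports, NSX edge peer IPs.
# All values are placeholders; each leaf uses one of the two transit VLANs.
BORDER_LEAVES = {
    101: {"vlan": "vlan-1651", "svi": "172.16.250.1/24",
          "ports": ["eth1/3", "eth1/4"], "peers": ["172.16.250.11", "172.16.250.12"]},
    102: {"vlan": "vlan-1652", "svi": "172.16.251.1/24",
          "ports": ["eth1/3", "eth1/4"], "peers": ["172.16.251.11", "172.16.251.12"]},
}

def svi_paths(leaf: int, cfg: dict) -> list:
    """Same SVI IP and VLAN repeated once per host-facing port on a leaf,
    with an eBGP peer object for each NSX edge reachable over that VLAN."""
    peers = [{"bgpPeerP": {"attributes": {"addr": p},
              "children": [{"bgpAsP": {"attributes": {"asn": "65002"}}}]}}
             for p in cfg["peers"]]
    return [{"l3extRsPathL3OutAtt": {"attributes": {
                "tDn": f"topology/pod-1/paths-{leaf}/pathep-[{port}]",
                "ifInstT": "ext-svi", "encap": cfg["vlan"],
                "addr": cfg["svi"], "mtu": "9000"},
             "children": peers}}
            for port in cfg["ports"]]

node_profile_children = (
    [{"l3extRsNodeL3OutAtt": {"attributes": {
        "tDn": f"topology/pod-1/node-{leaf}", "rtrId": f"10.0.0.{leaf}"}}}
     for leaf in BORDER_LEAVES] +
    [{"l3extLIfP": {"attributes": {"name": "SVI-Interfaces"},
      "children": [p for leaf, cfg in BORDER_LEAVES.items() for p in svi_paths(leaf, cfg)]}}]
)

l3out = {"l3extOut": {"attributes": {"name": "NSX-Edge-L3Out"}, "children": [
    {"l3extRsEctx": {"attributes": {"tnFvCtxName": "NSX-VRF"}}},           # the same single VRF
    {"l3extRsL3DomAtt": {"attributes": {"tDn": "uni/l3dom-NSX-Ext-Dom"}}}, # external routed domain
    {"bgpExtP": {"attributes": {}}},                                       # run BGP on this L3Out
    {"l3extLNodeP": {"attributes": {"name": "BorderLeaves"},
                     "children": node_profile_children}},
    # External EPG: the single summarized prefix ACI may reach inside the NSX overlay.
    {"l3extInstP": {"attributes": {"name": "NSX-Overlay-Networks"}, "children": [
        {"l3extSubnet": {"attributes": {"ip": "172.16.0.0/16", "scope": "import-security"}}},
    ]}},
]}}

if __name__ == "__main__":
    # Would be POSTed to https://<apic>/api/mo/uni/tn-NSX.json after an aaaLogin.
    print(json.dumps(l3out, indent=2))
```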
This is a screenshot showing what will ultimately arise from the setup; let's go through what's necessary to get to that point. So, inside that routed external domain, you're going to create a series of objects. First, you'll create the L3 outside object, referred to in ACI as the L3 external out (L3Out). Beneath it you'll then create a series of other objects that provide the policy, the IP communication, and those SVIs we just spoke of. You'll create a logical node profile; that specifies the top-of-rack switches and values such as whether a loopback will be used and whether the loopback will match the router ID you assign. You'll then also create those SVIs we spoke of; those will all be contained inside that logical node profile. The logical node profile will also contain your peer connectivity objects, which you see on the left side of the depiction. It shows four BGP peer profiles attached to each of the leaf profiles, allowing the establishment of eight adjacencies, as you can see here. So in this case, you'll see that we now have full redundancy of ECMP communication for the four edge nodes you saw in the preceding picture. Each of those four edge nodes is connected to both VLANs, hence the eight BGP peer connectivity objects. All good? Then there's the external network instance profile. This is where you basically tell ACI how to handle the importing and exporting of routes. Well, we have no need for that. The only thing you really need to do is create a single object for the external networks of the external EPG, and that's the summarized prefix value that says ACI is permitted to communicate to these networks inside the NSX overlay. Now, when I say permitted, that's only one of the two things you'll need to do. You'll still need to create contracted communication to permit that transmission, because Cisco's whitelist policy will otherwise deny it, unless of course, on that VRF we talked about earlier, you selected unenforced mode, which would negate the necessity for a contract on those L3Outs as well. In addition to that, you'll then create a default route leak policy for the routed external domain. That tells the ACI border leaves to inject a default route into their routing adjacency with those edges. You'll see that on your edges; you can verify it using the CLI right on the edge nodes themselves, or, if you're using NSX for vSphere for Data Center, you can use the central CLI to see these values. So, at this point, that brings us back to what we were discussing a moment ago. All right. We've essentially now configured the border routing for the lower, southern domain. What we still have left to do is the north side. Now, we're not going to go into it, but there are some suggestions there for what needs to be done. One of them is to use the default route value (0.0.0.0/0) for the external networks of the external EPG on the north side; don't use it on the south side. Now, technically, it would work.
What can cause a problem, one that has been known inside ACI, is establishing the same subnet prefix value for the external networks of the external EPGs on two different L3Outs in the same VRF, and we are suggesting you use the same VRF on the north side. This avoids having to figure out a policy for sharing those networks between two different VRFs on two different L3Outs, or keeping the same VRF but still juggling separate L3Outs. The idea is to simplify the model: single VRF, and abide by Cisco's recommendation by not using the same prefix value on both sides. So on the south side we'll have a very specific entry that says traffic only goes to these select networks, so we permit its usage on that L3Out; for the north side, if it's anything, we don't care, go ahead and ship it out that way. Again, you'll still need contract communication set up for those flows to be permitted, so make sure you're aware that this is just one of the two things required for a flow to be permitted. So, let's talk about some of our design advantages. First off, what are we trying to do? From the very beginning, our whole idea was to provide an independent overlay and set of service functions, our NSX platform if you will, on any infrastructure, even infrastructure you don't manage, like the cloud. The idea is that the infrastructure becomes seamless to the deployment of the app and its needs. This diagram shows the ability to manage all of the infrastructure services inside the NSX model for our NSX-over-ACI design, utilizing the basic abstractions we told you to set up, whether those are the access policies or the tenant policies. This gives you that software-defined managed network, a seamless, agnostic approach to the infrastructure, and then our overlay runs through that fabric as well as between any of your sites. Another advantage is our edge cluster design. It completely disaggregates the deployment of application services from the fabric, because we have no need for any of the tenant functions we're going to deploy inside tenant edges. We're going to use a provider-based edge cluster, whether that's two, four, or eight edge nodes, all providing ECMP traffic at a high-speed rate, so you'll get the aggregation of its flow value on a north-south basis, giving you a great deal of north-south capacity in a very efficient and operationally simple manner. It also allows you to deploy all of your application needs local to the application, independent of the hardware infrastructure, without concern for any service needs or updates to the infrastructure itself. This essentially also disaggregates firmware updates, software updates, and OS updates to the fabric from any service needs inside the NSX platform; the vSphere and NSX platform, I should say, is independent of those. This also avoids a major issue that has come up, and that is virtual machine management. Cisco has its own non-partnered service that it has manufactured and used within ACI, and it does it through a process referred to as virtual machine management; that's the overall name, and it's also referred to inside the interface as virtual networking.
Constructing objects through CRUD operations (create, read, update, delete) via the public APIs for the vSphere objects and services of the distributed switch, all of that is fine and supported. The issue is how all of this is synchronized into a stateful understanding inside ACI; that's the part that is not supported by us. That would end up falling upon Cisco. This can cause somewhat of an issue at times, because if a support need arises, the only portion we can help with is the portion we're aware of: those CRUD operations issued through API calls into vCenter for the creation of the distributed switch and the modification of all the distributed port group functions, et cetera. Those are fine; we can handle and deal with all of those. But anything that deals with the manufacturing of service functions and dependencies inside ACI falls outside of any help we can provide in that regard. We would suggest avoiding these types of support situations by not utilizing virtual machine management; it is not supported inside a vSphere and NSX deployment. Now, another advantage is the fact that because our services are software-based, they're much more adept and agile at scaling as time goes on. We have no constraints imposed by an ASIC or a chipset engineered into a hardware device, one that you purchase a year or two after the engineering of that chipset has already been done. A chip designed years ago comes with a specific maximum capacity and a fixed set of accessible, usable service functions. Software, by contrast, is much easier and more agile to deploy and scale out, and service capabilities can be enhanced much more readily, as the cloud model has shown us over the past several years. So on the left here, you see the scale capacities we already support inside NSX; those are numbers that have grown substantially since the beginning of NSX's history several years back. Hardware limitations, on the other hand, are inherent in the device you have, and those are the more recent numbers. You have to be careful in quantifying their usage, because they tend not to combine in a simple additive manner; they tend to behave more geometrically, and sometimes two or more of those capacity values are tied to one another, so you don't get the full use of any single capacity value. The point still comes down to this: software is much easier to deploy, scale, and service, and it provides agility for the application to be deployed in any environment, on any endpoint, targeting any service endpoint. Another concern we've heard from people, though it was never really a cause for concern, is how we work with the VXLAN fabric overlay, that is, the ACI underlay, together with our NSX overlay. It's still the very same process. We encapsulate an Ethernet frame that's deposited onto a logical switch by a virtual machine. The tunneling endpoint at the source encapsulates essentially a few things: the VXLAN header with the VNI for our transport zone, as well as an 802.1Q VLAN tag for that transport zone. That's the VLAN value, of course, that we talked about before, the one you need to assign as one of the four infrastructure VLANs.
The ACI fabric will strip off just the VLAN header, attach its own VXLAN, or iVXLAN, header if you will, and use that to transmit between the two endpoints inside its own fabric. It uses that to identify its endpoints. The other endpoint it identifies is the destination tunneling endpoint, wherever that NSX vSphere or KVM host is found within its infrastructure, and it sends the frame to that leaf. That leaf strips off the iVXLAN header, leaves the original vSphere and NSX frame intact, which is our VXLAN (or Geneve, if it's NSX-T) encapsulated frame, and then re-encapsulates it with the corresponding VLAN value tied to the interface downstream to the host. The host receives it and does its thing: stripping off the VLAN and our VXLAN header and depositing the frame onto the appropriate logical switch, so that the end system retrieves it. This process is the same as if we had a VLAN-backed infrastructure; there's no difference. In fact, it's simple enough to see that even an open-source toolkit like Wireshark can read the frames. The only thing I needed to do was tell it the corresponding UDP port for iVXLAN, I think it's 4889 or something like that, and the NSX VXLAN value. But the point is just that: there's no issue with visibility of the communication. This capture was picked up using an ACI SPAN source and destination policy within the ACI fabric, and then, of course, deposited into a system that provided the analysis shown here. At this point, it becomes fairly obvious that our discussion continues with the idea that the NSX platform provides simple agility for any workload across any environment and any infrastructure, inclusive of a cloud infrastructure that we could also have pictured here as a third location if you will. The idea being that, if we remove dependency on the infrastructure's service capabilities and the scale limitations it would impose, your scale and service model is much more simply handled in software. As this shows here, all we need is IP connectivity between those two sites, or between a site and some cloud service provider you're engaging with NSX. So, a summary of why NSX. The NSX architecture is extremely expandable and extensible. That's what I've been talking about: it's that software-based ideal that makes it much easier to add additional scale and additional service functions. In fact, those are service functions that you, the customer, tell us about. We're constantly being told, "We need these." We come back and say, "Great, what's the use case for it?" Then we turn that into a plan for providing the corresponding updated service inside our product at a later time. Essentially, we begin to solve the exact problems you request, thanks to that software-based service model. Services are fully defined in software, load balancing, firewalling, VPN, et cetera, giving you a complete abstraction from the hardware so you can consume all of those services directly local to the application, on the same logical layer of connectivity where the application resides and where those workloads are being computed. This gives you the complete capability to do something as simple as moving from a development environment, or QA, et cetera, into your production environment.
Because we can completely abstract its IP connectivity using something as simple as NAT. The idea being that the service model is very simplified on our software-based platform. Then there's the hardware life extension and some of the other ideals we mentioned earlier in this presentation: the ability to extend the capabilities inherent in your hardware beyond the lifespan that might otherwise be cut short simply because the hardware doesn't have the necessary service capabilities. The next hardware model produced by whichever infrastructure vendor you've chosen will obviously provide enhancements to those services, but the issue is that this requires you to churn the hardware and infrastructure, creates a bit of disruption in doing so, and carries a fairly significant cost over time, whether it's operationalizing that hardware churn or just the CapEx of purchasing on a continual basis. That hardware life extension is a huge thing, and we've found value in it with our customers. It gives them the ability to feel secure in the knowledge that when they deploy NSX, all they need from the fabric is high-speed, low-latency IO. Now, the service disaggregation from the hardware that we've discussed before is provided by essentially two elements. First, the infrastructure kernel components we talked about, which negate the necessity to have anything tied to the fabric: the entire set of application workloads runs within our transport zone, all through that tunneling overlay we mentioned as one of those kernel components. The second aspect is the edge disaggregation, with ECMP communication provided by the edges. That gives high-speed communication, and that high-speed communication is accelerated by a variety of offloads, inclusive of the newest DPDK-based datapath, so we can now provide line rate on the newest interface speeds and capacities being introduced today, beyond ten gigabit. This brings us to that final idea we've stressed many times: the application delivered in a fully software-defined format, so that wherever you deploy your application workloads, in one site, another site, or even in the cloud, your delivery and operational model can be fairly, if not completely, seamless. Thank you for listening to this presentation. I look forward to seeing all of you this year at VMworld. Have a good day.