So we've mentioned Network Function Virtualization, or NFV, a couple of times now, and we're going to double click on that a little bit and talk about how we got here. Back in 2012, there was an original ETSI NFV document that came out that introduced these terms, network function virtualization and software-defined networking. Since that time, things have changed. Here we are several years down the road, this concept is in grade school now, and we've learned a lot from it. One of the things we want to talk about before we dig too deep is that the original concept talked about virtual machines. The idea of virtual machines was that we would have some type of disaggregation layer, a hypervisor, for example, sitting on top of our network, compute, and storage and a host operating system. And then we would have guest operating systems that provided resources to this collection of things that we called Virtualized Network Functions, VNFs. Now, let's not confuse NFV with VNFs; that's something we're going to spend a little time on. These virtualized network functions, then, could be a collection of processes sitting on top of a guest operating system. Fast forward to today's time frame. The idea was disaggregation, the optimal use of those compute and network resources, so that we can break with the purpose-built architecture of the past and grow with the speed of hardware development as it moves from generation to generation. Sometimes we get hung up on whether these are virtual machines, heavyweight applications sitting on top of a guest operating system supported by a hypervisor, or whether these are cloud-native or, more properly, containerized applications. We shouldn't let that be a hang-up.
We want to move forward and think about what the virtualization concept at this point really says: that we've disaggregated those purpose-built applications of the past from the underlying hardware that supports them. And that's the idea we have today when we talk about it. We haven't changed the terminology, because it's still good terminology in the sense that we're using that idea of virtualization. So the concept, let's go back to it, was to improve CAPEX through standard high-volume servers, sometimes also called commercial off-the-shelf servers, or COTS, to allow us to have that disaggregation. This hardware would be supported by some network function virtualization infrastructure, NFVI. These are the things I spoke about: a host operating system and possibly a hypervisor. And then we would have these virtual functions, the actual interesting applications, sometimes called the workload-generating type of operations. So we're doing something with this hardware other than just making heat, and we can deploy those applications on these servers in a very general way. And then over to the right, we have a management and orchestration layer, or MANO layer. The MANO layer is actually a separate set of systems that are responsible for dynamically managing the configuration, operation, and the spinning up, if you will, of these workloads as we go throughout the day. So if we need more functionality, we can spin up another VM and apply it, maybe in a longer-term, semi-stationary way. It's not going to live forever, but maybe it's going to live for a couple of hours before it gets spun down. Or maybe it's allocated for a couple of days, or even weeks or months at a time. But it's that disaggregation of these elements, so that I can buy the hardware separately from the application. That's the initial concept we're working from.
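The spin-up/spin-down behavior of that MANO layer can be sketched in a few lines. This is a toy, not an ETSI API; the class and method names (`Mano`, `instantiate`, `terminate`) are illustrative assumptions.

```python
import uuid


class Mano:
    """Toy management-and-orchestration layer: spins workloads up and
    down on demand. Names here are illustrative, not from any ETSI spec."""

    def __init__(self):
        self.running = {}  # instance_id -> VNF name

    def instantiate(self, vnf_name):
        """Spin up a new VNF instance on the shared NFVI pool and
        return a handle to it."""
        instance_id = str(uuid.uuid4())
        self.running[instance_id] = vnf_name
        return instance_id

    def terminate(self, instance_id):
        """Spin the instance back down, freeing its resources for
        the next workload."""
        self.running.pop(instance_id, None)


# A workload might live for hours, days, or months before it is spun down.
mano = Mano()
fw = mano.instantiate("virtual-firewall")
assert fw in mano.running
mano.terminate(fw)
assert fw not in mano.running
```

The point is the decoupling: the orchestrator allocates and releases workloads against a general pool of hardware, rather than against one purpose-built box per function.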
The thing that happens is that whole NFVI layer implies a lot of system integration work, and the reason we are where we are today is that there is still a significant amount of system integration work going on as we continue to transform this network. But nevertheless, this transformation of NFV, as it applies to certain functionality in the core of the network, is well underway. The ship has sailed, and we're going to continue to make progress as we move forward. Some operators are reporting significantly improved OPEX through the automation and the virtualization of specific layers in their workload. They've seen reduced power usage by migrating workloads to hardware that is more efficient and more optimized for this. And similarly, we're seeing the ever-increasing introduction of more standardization and open interfaces, so that additional functionality from multiple vendors can coexist inside these platforms. So let's look at it pictorially, breaking NFV into what we'd call pillars. Some of the functionality that is significant in the comm service provider network is packet processing, or what I call the east-to-west and west-to-east traffic flow that takes place in these elements. One of the critical elements from an operational standpoint, from a service assurance, SLA, or KPI perspective, is the service assurance element: determining whether or not these software functions, and the hardware platforms supporting them, are operating at the levels they need to be, and reporting that information back up to the management layers. So if, or when, things go poorly and we need to perform some type of maintenance activity-- software gets stuck, a system gets stuck, or heaven forbid, the system actually starts to fail-- we can report that information and take proactive countermeasures to mitigate any type of service interruption.
There are significant parts of the integration effort that go from the NFVI layer to that VNF layer and then into that MANO layer we saw in our earlier stack. And then finally, security is always a concern in today's networking. As we talked about earlier with IoT devices, these are not devices controlled by the comm service provider; rather, they're introduced devices. And we want to be able to ensure that the network itself is tightly secured, and there's a significant amount of work going on at the system integration level between these various elements-- the NFVIs, the interesting workloads on top, and the management layers of MANO-- to ensure that we have security built into the design of these elements. So let's look at service assurance, for example, and double click on this one a little bit. We start off with a fundamental platform, and there is always a question as to whether or not that platform itself natively has service assurance. Technologies from Intel have been introduced that allow us to ensure that a platform leaving manufacturing has a known good configuration, a known good quantity on it, and that this can be reproduced from a validation standpoint as the platform enters service. At certain checkpoint times-- think about a reboot-- we can validate that the system is still in its original intended state and hasn't been corrupted by any means. So that platform becomes aware at deployment time. And then there's also a significant amount of telemetry and monitoring that feeds back into those management and orchestration layers in real time, generating information that's useful to the comm service provider and allows them to take corrective action if any is necessary. And that's, in essence, what the comm service providers are looking for from the shared platform.
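The known-good-state check described above can be sketched as a simple measured-boot comparison: hash what's on the platform and compare it against a manifest recorded at manufacturing time. This is a minimal illustration of the idea, not any specific Intel technology; the manifest contents and component names are invented for the example.

```python
import hashlib

# Hypothetical "known good" measurements recorded when the platform
# left manufacturing.
GOLDEN_MANIFEST = {
    "firmware": hashlib.sha256(b"fw-image-v1").hexdigest(),
    "bootloader": hashlib.sha256(b"bl-image-v1").hexdigest(),
}


def measure(component_blobs):
    """Hash each component the way the platform would at boot or reboot."""
    return {name: hashlib.sha256(blob).hexdigest()
            for name, blob in component_blobs.items()}


def attest(measurements, manifest=GOLDEN_MANIFEST):
    """Return True only if every measurement matches the known good state."""
    return measurements == manifest


# A clean reboot matches the manifest; a tampered firmware image does not.
clean = measure({"firmware": b"fw-image-v1", "bootloader": b"bl-image-v1"})
corrupt = measure({"firmware": b"fw-image-EVIL", "bootloader": b"bl-image-v1"})
assert attest(clean) and not attest(corrupt)
```

At each checkpoint (deployment, reboot), a failed comparison is exactly the signal that would be reported up to the management layers for corrective action.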
So we're not just grabbing any old box off of any old shelf; we're able to get platforms that have a known good state as they arrive from the manufacturer, before they enter our network, and that can be guaranteed to have assurance built on the technologies we're talking about here. Obviously, the operational context of these platforms remains a critical aspect for the communication service providers. And when we look at what happens inside network function virtualization, OPNFV is creating specifications and software that allow us to generate a barometer, a pressure level, if you will, to tell us about the health of these systems. When we look at this diagram, we see a little more information coming in. We see that the basic platform still exists, with an NFVI layer wrapped around it. And we also have a local interface for corrective actions into that platform, which allows monitoring systems, if you will, to get statistics out of that platform and, in real time or on a frequent basis, report that information back to the orchestration layer. So at the orchestration layer, for example, we know this system is operating normally, with a certain amount of headroom within its specification. Then this system starts to see an increase in traffic flow; it's approaching a threshold we've established; we may need to spin up additional resources in a service chain or in a network slice in order to continue to meet the service level we need. And the orchestrator is able to do that based on the information that flows out of these systems through the analytics system. We're also able to capture the element management functionality of the interesting VNFs, those Virtualized Network Functions that have been deployed onto this system, which a vendor may provide and which are unique to the application itself. And we've introduced a new box over here.
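That threshold-driven decision at the orchestration layer can be sketched as a tiny policy function over telemetry samples. The 80% threshold and the function names are illustrative assumptions, not values from OPNFV Barometer or any real orchestrator.

```python
SCALE_OUT_THRESHOLD = 0.8  # illustrative: act at 80% of engineered capacity


def orchestrate(traffic_samples_gbps, capacity_gbps,
                threshold=SCALE_OUT_THRESHOLD):
    """Decide whether to spin up another resource in the service chain
    or network slice, based on telemetry flowing up from the platform."""
    peak_load = max(traffic_samples_gbps) / capacity_gbps
    if peak_load >= threshold:
        return "scale-out"  # approaching the threshold: add a resource
    return "steady"         # still operating normally, with headroom


# 33 Gbps peak on a 40 Gbps system is past the 80% mark.
assert orchestrate([10, 22, 33], capacity_gbps=40) == "scale-out"
assert orchestrate([10, 15, 20], capacity_gbps=40) == "steady"
```

The real value is in the feedback loop: statistics flow out of the platform frequently enough that the orchestrator can act before the service level is actually breached.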
That's the VIM layer, the Virtualized Infrastructure Manager. It's able to provide control, if you will, over a number of those NFVIs-- again, for the same purpose: to monitor and manage those functions of the platform at that layer of the architecture below. And this is just one example of the complexity that comes into play where I mentioned the system integration aspect; it's system integration beyond the obvious system functionality we looked at before. So let's put it all together with just one example-- and we're going to get into some other examples as we go on. There's this functionality known as the session border controller, or a virtualized Session Border Controller, SBC. We've talked about purpose-built platforms a couple of times. What would that be? If we're looking at a session border controller-- maybe it's a box from the past, a proper session border controller from ten or five years ago-- it's going to have a general CPU, obviously. It may also have some specific application logic in the form of digital signal processors or network adapters that are designed to provide certain bits of functionality. In addition to that, there are control management, media coding, network encryption layers, packet processing, and header manipulation capabilities that go into the session border controller. And the reason we call it that is that it provides functionality at the border of the network. So if you think about an enterprise user who has an interface with a communication service provider, one or both of those points-- the enterprise itself, and certainly the comm service provider-- would most likely have a session border controller that does things like throttling. Let's say that enterprise is allowed to have 40 gigs worth of interface.
That session border controller wouldn't allow 50 gigs to flow through. It would block the sessions that would exceed that limit, protecting, if you will, the content and functionality deeper in the network. Or it may be providing functionality such as, for example, media transcoding. There are a variety of examples. There are people who still make phone calls on this network, believe it or not. And even though they're digital phone calls, there are different protocols for how that voice is represented, and a session border controller is a very good place to handle the transition. If one end likes encoders A and B, and the other end likes encoders C and D, the session border controller sitting at the edge of the network can see that and say: I tell you what-- you use B, and you use C, and I'll take care of the problem of interfacing between B and C, and allow that call to take place. Similarly, these are great places for doing things like DDoS protection. So that's an example of a purpose-built platform that was early in the virtualization story, because a lot of it is very high-level processing. A virtualized session border controller still provides that management control, that media transcoding, that network encryption at the VNF level, the virtualized network function level, while relying on the resources of a standard high-volume server based on Intel Xeon technology, a NIC card, and possibly some additional processing capability if necessary, like an FPGA, on top of a hypervisor. And now what we can do is optimize the use of those CPU cores, which continue to grow with each release of the Intel technology, so that we have more and more capability.
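The two SBC behaviors just described-- throttling at the contracted rate and codec negotiation with fallback to transcoding-- can be sketched like this. The 40-gig limit comes from the example above; the codec names and function signatures are illustrative, not from any SBC product.

```python
LINK_LIMIT_GBPS = 40  # the enterprise's contracted interface, per the example


def admit(current_gbps, session_gbps, limit=LINK_LIMIT_GBPS):
    """Throttle at the border: reject any session that would push
    traffic past the contracted rate."""
    return current_gbps + session_gbps <= limit


def negotiate(side_a_codecs, side_b_codecs):
    """Pick one codec per side. If the two sides share a codec, pass
    media straight through; otherwise the SBC transcodes between them.
    Returns (codec_a, codec_b, transcoding_needed)."""
    common = [c for c in side_a_codecs if c in side_b_codecs]
    if common:
        return common[0], common[0], False       # no transcoding needed
    return side_a_codecs[0], side_b_codecs[0], True  # SBC bridges the two


# Throttling: a session that would total 50 gigs on a 40-gig link is blocked.
assert admit(current_gbps=35, session_gbps=4)       # 39 <= 40: allowed
assert not admit(current_gbps=35, session_gbps=15)  # 50 > 40: blocked

# Codec negotiation: no overlap, so the SBC sits in the media path.
assert negotiate(["A", "B"], ["C", "D"]) == ("A", "C", True)
```

Both checks are cheap control-plane logic, which is part of why the SBC was an early candidate for virtualization.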
So one session border controller that we bought five years ago is going to have a fixed capability, and we'd have to replace the entire purpose-built platform to grow. But if we've got it based on a standard high-volume server with this virtualized function, and we're only using 12 cores today and tomorrow we need 14, we simply spin up two more. And if we need 20 cores, we spin up eight more from the original 12, and we've got those racehorses there, available in the network. So that was just to give you some idea of the concept we're using from network function virtualization, as we separate things into that disaggregation and begin to virtualize some of these interesting functions that exist in the core of the network.
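The core-scaling arithmetic above is trivial, which is exactly the point-- on a standard server it's an allocation, not a hardware swap. A one-function sketch, using the 12/14/20 numbers from the example:

```python
def extra_cores_needed(current_cores, required_cores):
    """On a purpose-built box we'd replace hardware to grow capacity;
    on a standard high-volume server we just allocate more cores
    from the pool (and none if we already have enough)."""
    return max(0, required_cores - current_cores)


assert extra_cores_needed(12, 14) == 2  # spin up two more cores
assert extra_cores_needed(12, 20) == 8  # spin up eight more from the original 12
assert extra_cores_needed(20, 12) == 0  # shrinking: nothing extra to allocate
```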