- [Adam] Jon and I have this awesome web application we're looking forward to deploying soon, but we've been debating back and forth about which deployment approach is best, because we're not really sure if it's going to go viral or not. But we'd like to be prepared for it.

- [Jon] You may have heard about the Hello World application. In fact, almost everyone in IT has heard about it. The issue with that application is that it's boring and it's static. Instead, we want to create the Hello World version 2 application. This application will take a name as input - "Jon" - and reply back with "Hello" followed by that name - in this case, "Hello Jon." Since everyone in IT knows about Hello World, I'm pretty sure it's going to go viral.

- [Adam] Let's certainly hope so. Okay. I think the traditional deployment method is probably our best bet. One, I know it. Two, it's tried and true. Three, I can show it to you in just a few minutes.

- [Jon] Well, I'm pretty sure serverless is the right approach, but if you want to show me your architecture, go for it.

- [Adam] Okay, well let's get to work. Okay Jon, let's check out my architecture. Alright, so I've got a VPC, and that provides me logical isolation at the network level. I have an internet gateway, which allows inbound and outbound traffic. I have an ELB, which distributes the traffic across two Availability Zones. I've chosen two for high availability and fault tolerance. And then I have two EC2 instances; the ELB will distribute the load to those. And I'll show you the flow of traffic now. So, traffic comes inbound through the internet gateway to the ELB, and then is distributed to the EC2 instances. So, what do you think?

- [Jon] Wow, that was a lot for a simple Hello World application. Let me show you a different way of doing this that is a bit simpler. API Gateway is going to receive traffic from the user.
It will send that traffic to Lambda, and Lambda will send the response back down to the user. And that's it. Wasn't that much simpler? Adam? Adam, you there? (crickets chirp) Clearly, Adam is in agreement that this is the way to go, so let me discuss Lambda, since you already know API Gateway. At this point, you have learned about different serverless services. However, the category of serverless I want to discuss with you today is compute. AWS Lambda is a serverless compute service. It lets you run your code without provisioning or managing servers. You pay only for the compute time you consume. That means no charge when your code is not running. In our example here, if there are no users sending requests, there are no charges for Lambda, and there's no charge for API Gateway, either. It also means that you don't need to wake up in the middle of the night to respond to a server that crashed or do patching outside of business hours. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. With AWS Lambda, you pay only for the requests served and the compute time required to run your code for each of those requests. Billing is metered in increments of 100 milliseconds, making it cost-effective and easy to scale automatically from a few requests to thousands per second. Building serverless applications using Lambda means that developers can focus on their core products instead of worrying about managing and operating servers or runtimes. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products. Customers do care about the availability of an application. But what they really want to see are new features. So with Lambda, you can focus on those new features.
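To make the API Gateway-to-Lambda flow concrete, here is a minimal sketch of what the Hello World version 2 function could look like in Python. It assumes API Gateway's Lambda proxy integration, where the query string arrives in the event under `queryStringParameters`; the handler name, parameter name `name`, and default value are illustrative assumptions, not code from this course.

```python
import json


def lambda_handler(event, context):
    """Sketch of a Hello World v2 handler behind API Gateway
    (proxy integration assumed). Takes a name and replies with
    a greeting, e.g. "Jon" -> "Hello Jon"."""
    # Query-string parameters may be absent entirely, so fall back safely.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "World")

    # API Gateway proxy integration expects a statusCode and a string body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello {name}"}),
    }
```

Invoked with an event carrying `{"name": "Jon"}`, the body comes back as `{"message": "Hello Jon"}`; with no parameters at all, it falls back to greeting "World". Note that no server is defined anywhere in this file - API Gateway handles the HTTP layer, and Lambda runs the function only when a request arrives.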
In the next lecture, I'll go into more detail about how Lambda works. Thank you for watching.