Has your laptop, phone, or projector ever stopped working when you needed it most? If so, you aren't alone. It's happened to me more times than I care to remember, and more often than not at the worst possible time. We expect these little glitches in consumer tech, but web applications come from a team of talented engineers and should be more reliable—right?
Unfortunately, web applications are often no more reliable than your laptop or phone.
You may have some vivid memories of failed demos of web applications. You begin the demo. Your team wants to show off all its hard work, and your clients want to see their new product. The site starts to load, and . . . nothing happens. There's a blank screen. Reloading the page doesn't work either.
You stall for as long as you can, blame the WiFi, the laptop, and anything you can think of, but finally it dawns on you something serious must have happened. The server is down, the code has a bug, or there's one of a hundred other possible scenarios that means you can't access that page today. All you can do is apologize, bow your head, and walk out meekly.
What if you could solve these kinds of server maintenance problems by hiding the complexities of server maintenance?
That's what serverless aims to do. It handles the headaches of keeping the server up and running. It scales up and down to meet demand while you focus on your business.
This post describes what the term serverless means, where you can find these services, and how you can maximize their value. We'll look at the benefits of making some of your infrastructure serverless, and we'll consider some of the trade-offs you should know about.
Serverless Runs on Servers!
The term serverless is a little misleading. It contains the word "less," which suggests that no servers are involved. However, serverless runs on servers. The word "less" in this context means spending less time thinking about servers and more time developing new features for your customers.
What Are the Must-Have Characteristics of Serverless Services?
A service must have the following characteristics to be classed as truly serverless:
- The provider handles installation, configuration, and upgrades of servers, operating system, and installed software.
- There's no configuration or manual intervention, whether one person or a thousand people use the service.
- If you don't use the service, you don't pay.
- The more you use the service, the more it costs.
What Are the Most Popular Serverless Platforms?
The most popular serverless platforms fall into three categories:
1. Functions as a Service (FaaS)
Functions as a service is the most popular use of a serverless platform. This kind of service lets you upload your code and specify when to run it, while the provider handles provisioning and maintaining the servers. AWS Lambda was the first FaaS, and it's the most popular one available today. Other examples of FaaS are Google Cloud Functions and Microsoft Azure Functions.
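To make this concrete, here's a minimal sketch of what a FaaS function can look like. It follows the AWS Lambda handler shape with an API Gateway proxy-style event; the greeting logic and the `name` query parameter are illustrative assumptions, not from the original article.

```python
import json

# A minimal Lambda-style handler. The provider invokes this function for each
# request; you never touch the server it runs on.
def handler(event, context=None):
    # API Gateway proxy events carry query parameters in this field; fall back
    # to a default when none are supplied.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Upload the file, point a trigger (such as an HTTP endpoint) at `handler`, and the platform takes care of running and scaling it.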
2. Storage as a Service (SaaS)
Storage as a service was one of the first uses of a serverless platform. This kind of service allows you to store files of any type and size, with effectively unlimited storage capacity. AWS Simple Storage Service was the second service that AWS introduced, and it's still one of the most popular storage solutions. Other examples of SaaS are Azure Storage and Google Cloud Storage.
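As a small sketch of how object storage is typically addressed: objects live under string keys rather than a real directory tree. The `uploads/` prefix and bucket name below are illustrative assumptions, and the commented boto3 call is one common way to upload to S3 when credentials are configured.

```python
# Build an object key for storing a user's file. Keys are flat strings;
# using "/" simply produces a folder-like view in the provider's console.
def object_key(user_id: str, filename: str) -> str:
    return f"uploads/{user_id}/{filename}"

# With boto3 installed and AWS credentials configured, an upload looks like:
#   import boto3
#   boto3.client("s3").upload_file(
#       "report.pdf", "my-bucket", object_key("u-1", "report.pdf"))
```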
3. Database as a Service (DBaaS)
Database as a service is one of the newer categories of serverless platforms. You interact with the database entirely through an application programming interface (API), while the provider handles configuration, maintenance, and performance optimization. Types of DBaaS include variations of NoSQL, SQL, and in-memory key-value stores. Examples of popular DBaaS include AWS DynamoDB, AWS Aurora Serverless, and Azure SQL Database Serverless.
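Since all interaction goes through an API, a write is just a structured request. Here's a sketch of building a DynamoDB `PutItem` request using its low-level typed-attribute format; the `users` table name and attribute layout are assumptions for illustration.

```python
# Build the request body for DynamoDB's PutItem API. The low-level API uses
# typed attributes: {"S": ...} marks a string value.
def build_put_item(user_id: str, name: str) -> dict:
    return {
        "TableName": "users",
        "Item": {
            "user_id": {"S": user_id},  # partition key
            "name": {"S": name},
        },
    }

# With boto3 and credentials configured, you would send it like:
#   import boto3
#   boto3.client("dynamodb").put_item(**build_put_item("u-1", "Ada"))
```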
Serverless Helps You Focus on Business Logic
The trend among cloud providers Amazon, Google, and Microsoft is to help customers focus more on business logic and less on managing infrastructure. Serverless is the natural evolution of cloud computing, enabling you to write business logic or store the data that's vital to running your company without worrying about maintaining the underlying hardware.
At first, cloud providers offered the rental of entire servers or a rack of servers. Configuring bare hardware is time-consuming and prone to error, but at the time there were no other options.
Next came the ability to rent a virtual machine (VM). Multiple VMs can run on a server, saving money and adding an extra layer of abstraction that saves you time.
More recently, containers have become a more cost-efficient way of hosting multiple services on a single VM. Containers encapsulate the applications running in them, with no need to understand the underlying hardware.
Skip to the present day, and serverless is the next level of abstraction on top of containers. Serverless platforms handle the creation and scaling of containers and VMs.
Below is a graph of the evolution of cloud computing. It starts with full servers, which require significant configuration, and proceeds to the minimal configuration required for serverless.
What Are the Benefits of Serverless?
Serverless doesn't solve every problem. How can you evaluate whether serverless is right for your use case? Convert a small portion of your infrastructure. Look for quick wins, such as file storage or encapsulated, stateless logic that's easy to extract and run independently.
When you're trying out your proof of concept, keep these factors in mind: the amount of time you're spending on configuration and maintenance, and the amount you're spending as compared with a traditional server.
Here are four significant advantages of serverless.
1. You'll Reduce the Time You Spend on Configuration and Maintenance
Reducing the time you need to configure and maintain your system is arguably the greatest benefit of serverless. The salaries of operations engineers are always rising. Even more important, the demand for skilled engineers is increasing. For these reasons, finding staff who can configure and maintain complex systems is increasingly difficult.
With the introduction of serverless, you can empower developers to build and manage services. This change gives your software developers the visibility and power to deploy and monitor production code easily. Your operations staff can focus on making deployments as easy as possible, breaking down the barriers between operations and development as championed by DevOps practices.
2. You May Save Money Using Serverless Instead of Traditional Servers
The cost benefits of using serverless over traditional servers are the reason many companies convert some or all of their architecture to serverless. This diagram shows how serverless removes the upfront cost of using the traditional server route, thereby lowering the barrier to entry.
The initial cost of serverless is zero. Compare that with the high initial cost of using traditional servers (because you'd need to decide the expected volume upfront). That cost is flat until the server is fully used, but then you'd need more servers to handle the higher traffic.
The cost-benefit will vary from business to business. The savings for businesses that are growing rapidly will far outweigh those where the growth is slower or flat. If the infrastructure isn't growing, then the cost isn't really much of a benefit over traditional servers.
3. FaaS Encourages Small, Encapsulated, and Independently Deployable Services
Functions as a service (FaaS) has these key characteristics:
- Only one entry point: This is an HTTP request or notification from another server, helping you to encapsulate your logic.
- Limited memory: Assigning more memory means a higher cost, encouraging you to focus on keeping the function small.
- Limited runtime: Each function has a maximum length of time it's allowed to run, which also helps you keep the function small.
- State isn't shared between functions: State can be stored in other services only, which makes each function independently deployable.
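The last characteristic is worth illustrating: because a function may run in a fresh container on any invocation, any state it needs must live in an external service. In this sketch an in-memory dict stands in for a managed store such as DynamoDB; the counter shape and names are illustrative assumptions.

```python
# Stand-in for an external managed store (in production this would be a
# database call, since in-process memory doesn't survive across containers).
EXTERNAL_STORE = {}

def increment_counter(event, context=None):
    key = event["counter_id"]
    # The function itself holds no state between invocations; it reads and
    # writes through the external store on every call.
    EXTERNAL_STORE[key] = EXTERNAL_STORE.get(key, 0) + 1
    return {"counter_id": key, "value": EXTERNAL_STORE[key]}
```

Because the function carries no state of its own, you can redeploy or scale it independently without migrating anything alongside it.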
4. With Serverless, Your Services Are More Reliable
Serverless can help you avoid some of the most common issues with servers, such as:
- A full file system.
- The server restarting.
- A corrupt file system.
- Incorrectly configured operating system or software.
- A bug in the software that causes the server to hang.
What Are the Most Important Trade-Offs of Using Serverless?
What trade-offs should you expect to see when using serverless over traditional VMs or servers? Let's go through them one by one. Consider any limits before committing too heavily to a major restructure.
1. It's More Difficult to Monitor and Debug
One of the main complaints I hear after converting a monolithic architecture to serverless is the increased difficulty of monitoring and debugging. The most noticeable side effect of the conversion is an increase in the number of interconnected services. These services are all communicating either with one another or with the user. Monitoring output from a monolith is easy to read because the context of the application is all in one place. Tracing issues between many small services is much harder.
Some excellent resources on how to monitor your microservices are available, but it's important when creating your services to take monitoring into account. Consider new log events carefully before adding them to the code. Too many logs make it difficult to track down an issue. Too few logs make it difficult to see the context of a problem. There are some effective tools out there to help with monitoring, such as Datadog and Splunk. These tools help funnel logs from all your different services into one place. They also allow some prefiltering and let you link services, so you can understand why something failed.
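One practical habit that makes tools like these more useful is emitting structured (JSON) log lines carrying a shared request ID, so events from different services can be correlated. This is a minimal sketch; the service name, field names, and the `request_id` convention are illustrative assumptions.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")  # hypothetical service name

def log_event(request_id: str, event: str, **fields) -> str:
    # One JSON object per line: easy for an aggregator to parse and to join
    # with lines from other services that share the same request_id.
    line = json.dumps({"request_id": request_id, "event": event, **fields})
    log.info(line)
    return line
```

A single `request_id` stamped on every log line lets you reconstruct one user's path through many small services after the fact.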
2. It's Harder to Coordinate Deployment
Difficulties in coordinating deployment also come from having more services with a serverless architecture than with a monolith. You may not need to manage the underlying servers with serverless, but you do have to manage how you deploy the code and how the services communicate. Unless your system is trivially simple, managing deployment of many services manually is difficult.
If you get to the point where you need better visibility and control of deployment, then using a service may help. Plutora is a value stream management service that helps you build, test, and deploy your new features. Then it monitors the deploy to make sure the new service is working as expected.
From my experience, the businesses that have the biggest problems with deployment are those with too many interdependent services. Keep this in mind during the initial design phase, and you'll avoid lots of headaches later!
3. It's Harder to Run the Full Architecture Locally
A common complaint from developers when moving to serverless is that it's difficult to recreate the full architecture locally. Make a special effort to seek a solution that works for the whole team. You and your developers both want as little friction as possible when creating new features. I've seen many ways to approach this issue, and different solutions work for different teams. Here are the solutions I've had the most success with.
- A framework to recreate the architecture locally: AWS Serverless Application Model (SAM) and the Serverless Framework are two options for running the entire architecture locally.
- A cloud development environment for each developer: This approach allows each developer to create an environment identical to production in their own sandbox.
- Using container technology: This approach uses containers that replicate each service and run the entire architecture locally.
4. Cold Starts Can Have a Huge Impact on Speed of Processing
One of the major limitations of functions as a service (one category of serverless platforms) is the concept of cold starts. They can affect the speed with which you can respond to a user's request.
A cold start happens when your function takes much longer than usual to respond because it's not already loaded on a running VM. Your cloud provider automatically increases or decreases the number of VMs running your function depending on the number of requests. However, if no one is using your service, then you have no requests coming in—and your cloud provider will stop all the servers.
When your service receives a steady stream of requests, your cloud provider will have one or more VMs running a version of your function. Responses from these already running functions are quick because the VM doesn't have to start before processing.
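You can simulate why cold starts hurt with a sketch like the one below: code at module level (client setup, configuration loading) runs once per container, so the first request pays that cost while later requests on the same warm container don't. The 200 ms delay is an arbitrary stand-in for real initialization work.

```python
import time

# Module-level code runs once per container — this is the cold-start cost.
_start = time.perf_counter()
time.sleep(0.2)  # pretend this is loading config and opening connections
INIT_MS = (time.perf_counter() - _start) * 1000

def handler(event, context=None):
    # Per-request work is cheap; the INIT_MS cost was paid only when this
    # container started, not on every invocation.
    return {"init_ms": round(INIT_MS), "echo": event}
```

On a warm container, every call to `handler` skips the initialization entirely; only a brand-new container pays it again.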
There are ways to avoid cold starts. However, nothing is foolproof. Most of the workarounds rely on how the cloud providers scale—and that's out of your control. The best solution is to choose services with a steady load or those where the response time of the service isn't crucial to the business.