Serverless computing has emerged in recent years as a compelling architectural alternative for building and running modern applications and services. Serverless applications let developers focus on their code instead of on infrastructure configuration and management. This speeds up development and release cycles, and allows for more efficient scaling.
Serverless computing is closely tied to new architecture patterns and technologies such as Microservices and Containers. Greenfield, cloud-native applications are often microservices-based, which makes them ideal for running on containers (Docker). The further decoupling and abstraction that Serverless functions allow between the application and the infrastructure makes them an ideal pattern for developing modern microservices that can run across different environments.
As Serverless applications see wider use (with AWS Lambda being the best-known offering), we increasingly see enterprises that want to enable a Serverless experience for their engineers on on-premises infrastructure as well – meaning Serverless is moving into hybrid cloud environments next!
While Serverless offers a lot of benefits, implementing it successfully, alongside containers, comes with quite a few challenges – particularly for IT Ops. While Serverless speeds up development, on-premises Serverless frameworks typically need a Kubernetes cluster to run on, and Kubernetes is notoriously difficult to deploy and manage. Furthermore, these new technologies increase the complexity, scale, and sprawl of the IT environment, tooling, and applications that today's enterprises need to support. Add Serverless to the mixed environments that already exist – cloud resources, traditional VMs, and even bare metal – and things get even more complicated for IT.
In this series, I will cover some best practices and patterns to enable both developers and Ops teams to take advantage of Serverless computing in a way that supports their overall IT operations — and takes them to the next level!
I will discuss:
- What Serverless architectures are and what they mean for you
- Challenges and “gotchas” with current Serverless implementations and how they hinder enterprise adoption
- The impact of Kubernetes on Serverless
- How open source Serverless frameworks help
- Reference architectures for different types of business applications using Serverless functions
Introduction to Serverless Architectures
First, some definitions.
The most critical characteristic of a Cloud Native (CN) architecture is the ability to dynamically scale to support massive numbers of users, and large, distributed development and operations teams. This requirement is even more critical when we consider that cloud computing is inherently multi-tenant in nature.
Within this area, the typical requirements we need to address are:
- the ability to grow the deployment footprint dynamically (scale-up) as well as to decrease the footprint (scale-down)
- the ability to automatically handle failures across tiers that can disrupt application availability
- the ability to accommodate large development teams by ensuring that components themselves provide loose coupling
- the ability to work with virtually any kind of infrastructure (compute, storage and network) implementation
Mono, Micro and Serverless: the evolution of modern application architecture
Most enterprise (legacy) applications are Monolithic in nature, with tight coupling and interdependencies between application components, infrastructure, development teams, technologies, tooling, etc. This tight coupling poses challenges to the speed and agility of development, the adoption of new technologies or DevOps practices, as well as to the ease of scaling and operating these applications.
Microservices are a natural evolution of the Service Oriented Architecture (SOA) paradigm. In this approach, the application is decomposed into loosely coupled business functions, each mapped to one or more microservices. Each microservice is built for a specific, granular business function and can be worked on by an independent developer or team. By being a separate code artifact, it is loosely coupled not just from a tooling or communication standpoint (communication is typically over a RESTful API, with data passed around in a JSON/XML representation) but also from a build, deployment, upgrade, and maintenance perspective. Each microservice can optionally have its own localized data store. An important advantage of this approach is that each microservice can be built on a separate technology stack from the other parts of the application.
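To make the idea concrete, here is a minimal sketch of a single-purpose microservice exposing one granular business function over a RESTful endpoint with a JSON payload, using only the Python standard library. The endpoint path, service name, and data are illustrative, not from any real system.

```python
# A toy inventory microservice: one granular business function
# (report stock for a SKU), exposed over REST with JSON.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"SKU-1": 7, "SKU-2": 0}   # stand-in for the service's local data store

class InventoryService(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.rsplit("/", 1)[-1]
        if self.path.startswith("/stock/") and sku in STOCK:
            body = json.dumps({"sku": sku, "in_stock": STOCK[sku]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass   # silence per-request logging for the demo

def serve(port=8080):
    """Run the service; other microservices talk to it only via HTTP/JSON."""
    HTTPServer(("127.0.0.1", port), InventoryService).serve_forever()
```

Because the only contract is the HTTP/JSON endpoint, this service can be built, deployed, and upgraded independently of its callers, and rewritten on a different stack without affecting them.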
Containers are an efficient and effective way to run microservices, with container orchestration solutions, such as the open source Kubernetes, required to handle the runtime operations of container clusters. Microservices and containers are now an integral component of any digital transformation strategy, as they allow for easier builds, independent development and deployment, and better scaling.
Serverless is the next step in this evolution.
Serverless Computing is a new paradigm in software development where the application developer is focused on coding the application functions and is freed from having to invest time and effort in configuring and managing the resources required to deploy and run these applications.
In this paradigm, the cloud or infrastructure provider needs to do all the required plumbing for the application to be instantiated once a request has been received for it. The developer focuses on the code of the application, and it is the responsibility of the underlying data center provider to manage all associated resources required to run it, reliably.
Functions as a Service (FaaS) is the most common type of serverless computing where the application is developed as a pure software function aimed at a granular use case (an extremely fine-grained microservice, if you will.) Multiple functions can then be composed together and optionally used in conjunction with a microservices application to perform business functionality. For the rest of this article series, I will use both terms – Serverless and FaaS – interchangeably.
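The idea of composing fine-grained functions into a larger business operation can be sketched in a few lines of plain Python. Each function handles one granular task, and `compose` chains them; all names here are illustrative, not a real FaaS framework API.

```python
# Fine-grained functions, each aimed at one granular task.
def validate(order):
    # Reject malformed orders.
    if order.get("qty", 0) <= 0:
        raise ValueError("quantity must be positive")
    return order

def price(order):
    # Compute the order total.
    order["total"] = order["qty"] * order["unit_price"]
    return order

def compose(*functions):
    """Chain functions so each one's output feeds the next."""
    def pipeline(payload):
        for fn in functions:
            payload = fn(payload)
        return payload
    return pipeline

# A business operation assembled from the individual functions.
checkout = compose(validate, price)
```

In a real FaaS platform the chaining is done by the platform (for example via event triggers or a workflow service) rather than in-process, but the composition principle is the same.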
Serverless and FaaS allow for fine-grained billing. Because idle functions consume little to no CPU or memory under FaaS, cluster resource usage becomes proportional to actual usage rather than to deployment size. When functions are idle, server resources are not instantiated, and so cost is reduced. Keep in mind, however, that the billing advantage depends quite sensitively on actual usage patterns. A function that runs for one second out of every 10 seconds is cheaper on Lambda, but if it runs for two seconds out of every 10 seconds it is cheaper on EC2. You have to model your usage accurately and evaluate whether Serverless will actually save you money.
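A toy cost model makes the trade-off above easy to explore. The per-GB-second and per-hour rates below are illustrative placeholders chosen to show a crossover, not real cloud prices; plug in your provider's actual rates before drawing conclusions.

```python
# Toy comparison of per-invocation (FaaS) vs always-on (VM) billing.
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_faas_cost(busy_fraction, memory_gb, price_per_gb_second):
    # FaaS bills only for the time the function actually runs.
    return busy_fraction * memory_gb * SECONDS_PER_MONTH * price_per_gb_second

def monthly_vm_cost(price_per_hour):
    # A VM bills around the clock regardless of load.
    return price_per_hour * 30 * 24

faas_rate, vm_rate = 7e-5, 0.05                    # placeholder prices
light = monthly_faas_cost(0.10, 1.0, faas_rate)    # busy 1 s in every 10 s
heavy = monthly_faas_cost(0.20, 1.0, faas_rate)    # busy 2 s in every 10 s
vm    = monthly_vm_cost(vm_rate)
```

With these placeholder rates, the lightly used function is cheaper than the VM while the busier one is not, which is exactly the kind of crossover the paragraph above warns about.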
The Characteristics of Serverless Architectures
Serverless Architectures, or Functions as a Service (FaaS), conform to the following broad principles:
- They’re at the extreme end of loosely coupled architectures. While the applications using them are typically webapps or rich client applications, they also see heavy usage from ‘headless’ sources such as IoT devices, streaming data, etc.
- These applications natively leverage cloud-based 3rd party services such as events/messaging, authentication services, etc.
- They support a variety of data models, with NoSQL being the dominant one. Functions in a Serverless architecture are lightweight and stateless, which makes managed, schemaless NoSQL stores, accessible with minimal connection and setup overhead, a natural fit.
- They demand horizontal scaling as a fundamental tenet. The application should automatically scale up or down depending on throughput.
- From a developer standpoint, the flexibility afforded by serverless frameworks makes them a fast and cost-effective way for developing digital applications
- The FaaS application needs to be abstracted completely from the underlying infrastructure stack. This is key as development teams can focus solely on the application code and do not need to worry about the maintenance of the underlying OS/Storage/Network.
- The provision-deploy-scale cycle of cloud-native Serverless applications is managed automatically, from both scaling and failover perspectives, to support 24/7 operations at any load.
Thus, the overall flow of a FaaS architecture is fairly straightforward:
- An event (e.g. an online order at a retailer) is received by an API Manager.
- The Manager issues an HTTP request that results in a function being launched
- The function is first instantiated in a container. The container has all the configuration the function needs to run, including its dependencies
- The function processes the request
- The container is then automatically destroyed
- The user only pays for the resources consumed (RAM, CPU, Disk etc.) during the time the function ran.
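The six steps above can be condensed into a runnable sketch, with plain Python standing in for the platform pieces (API manager, container lifecycle). `handle_order` and `gateway` are illustrative names, not a real platform API.

```python
import json

def handle_order(event):
    # Step 4: the user-supplied function processes the request.
    order = json.loads(event["body"])
    return {"status": "accepted", "order_id": order["id"]}

def gateway(event):
    # Steps 1-2: the API manager receives the event and triggers a launch.
    # (Step 6, billing for the elapsed run time, happens in the platform.)
    container = {"memory_mb": 128, "deps": ["json"]}   # step 3: per-call config
    try:
        return handle_order(event)                     # step 4: run the function
    finally:
        container.clear()                              # step 5: teardown

response = gateway({"body": json.dumps({"id": 42})})
```

The key point the sketch illustrates is that all of the context around the function is created per invocation and discarded afterwards; only `handle_order` is the developer's concern.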
Serverless computing solves the biggest challenge developers face (even with a PaaS or a container orchestration platform such as Kubernetes): the need to own, scale, and manage infrastructure. The containers running these Serverless workloads are neither provisioned, monitored, nor otherwise managed by the developer. Developers get to build applications that just run, without having to worry about servers.
A note on Serverless vs. PaaS
Serverless frameworks differ from PaaS technology in four ways:
- If one thinks of a monolith as being composed of hundreds of microservices, then a microservice itself can be decomposed into hundreds of functions. Unlike with a PaaS, when using Serverless frameworks DevOps teams are freed from worrying about updates, application scale-up/down events, idle costs, complex build/deploy operations, etc. Everything underlying the core function and its logic is handled by the infrastructure provider.
- Serverless technologies are not opinionated about the DevOps methodology used to develop, test and deploy them – unlike a PaaS which is typically stringent about the developer workflow.
- Serverless functions have very low startup latency compared to applications deployed on a PaaS.
- Serverless frameworks, however, are feature limited compared to PaaS. The increased simplicity does come with feature disparity.
Serverless frameworks run better on Kubernetes than on a PaaS. Most PaaS technologies have earned a reputation for being too rigid in several areas – developer workflow, architecture, and pricing – which makes them a poor fit for serverless workloads. In the words of Brendan Burns, “the first generation of Platforms as a Service (PaaS) was squarely aimed at enabling developers to adopt “serverless” architectures. The trouble was, that as is the case in many first wave products, too many overlapping concepts were mixed into a single monolithic product. In the case of most first-generation PaaS, developer experience, serverless and pricing model (request-based) were all mixed together in an inseparable monolith. Thus, a user who might have wanted to adopt serverless, but perhaps not the developer experience (e.g. a specific programming language) or who wanted a more cost-efficient pricing model for large applications, was forced to give up serverless computing also.”
In the next post in this series, we’ll discuss some of the challenges and key considerations for choosing the right Serverless solution.
Originally published here.