Serverless computing is a new-age computing technology that lets users write code and deploy their applications without worrying about the underlying infrastructure. It provides serverless architectures as application designs by incorporating third-party services called “Backend as a Service” (BaaS).
To provide such architectures, applications run in what are called “stateless compute containers”. The server-side logic is still written by the developer, but in contrast to traditional architectures, the code runs in containers that are “event-triggered”: each invocation of the function lasts only for that particular call. These capabilities are provided and fully managed by third-party services, which are described below.
What BaaS provides: In serverless computing, BaaS supplies ready-made backend services such as databases and storage. Custom code, by contrast, runs in containerized environments on “Function as a Service” (FaaS) platforms; at the cloud level, the code execution and deployment offerings are FaaS in nature.
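As a concrete illustration of the stateless, event-triggered model, here is a minimal handler sketch in Python. The `(event, context)` signature follows the common FaaS convention; everything else is illustrative, not any specific provider's API.

```python
# A minimal sketch of the FaaS model: a stateless function invoked once
# per event. No state survives between invocations; all inputs arrive
# with the event (persistent data would live in a BaaS service).
def handler(event, context):
    name = event.get("name", "world")
    return {"greeting": f"hello {name}"}

print(handler({"name": "serverless"}, None))  # → {'greeting': 'hello serverless'}
```

Because the function keeps no state of its own, the platform is free to spin up or tear down as many containers as the event volume requires.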
Advantages of Serverless Computing:
- Low Cost – In short, pay-as-you-go: the more cloud resources you use, the more you pay, instead of paying up front for infrastructure you may not need.
- Scalability – Resources scale based on requirements; serverless keeps you ready for any amount of growth.
- Backend Calls – Functions can run independently. Separate functions can be coded for separate callbacks, custom invocations of API calls, etc.
- Turnarounds – New features help developers modify code efficiently; this principle also aids scaling and agile application development.
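The pay-as-you-go point can be made concrete with a small worked calculation. The rates below are illustrative example numbers, not an actual price quote from any provider; typical FaaS billing combines a per-request charge with a charge per GB-second of compute.

```python
# Illustrative pay-as-you-go arithmetic (rates are examples, not a quote):
requests = 2_000_000
per_million = 0.20            # example: $ per 1M requests
duration_s, memory_gb = 0.1, 0.5   # 100 ms at 512 MB per invocation
per_gb_second = 0.0000166667  # example compute rate per GB-second

gb_seconds = requests * duration_s * memory_gb
cost = requests / 1e6 * per_million + gb_seconds * per_gb_second
print(round(cost, 2))  # → 2.07
```

With zero traffic the bill is zero, which is the essential contrast with pre-provisioned infrastructure.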
Disadvantages of Serverless Computing:
- Vendor Lock-in – The vendor controls the operations, i.e., users must adhere to the rules and restrictions imposed by the vendor. Porting from one cloud architecture to another is also a problem. Language support is limited too: historically, mainly Python and Node.js developers could choose freely among existing serverless offerings.
- Learning Curve – Large functions must be split into small ones, since serverless works best with microservices rather than monoliths. While migrating from a monolith to microservices, developers must learn serverless computing, then the rules and restrictions of the associated vendor, and finally how to build microservices.
- Long-Running Tasks – Serverless computing is built for short executions, whereas media-related computations, for example, take a long time to run. There is always a risk of the application breaking mid-task. In such conditions, serverless computation is not recommended.
Types of Serverless Computational Patterns:
- Microservice Pattern – Each function the user creates is isolated.
Fig: Microservice Patterns
In other words, each function runs in its own process when invoked. This is also called functional decomposition of an application, and it helps achieve a high degree of cohesion when composing multiple services into higher-level services.
- Service Pattern – Each function is tied to a data model or a shared infrastructure dependency.
Fig: Service Patterns
- Monolithic Pattern – The entire application is developed within a single function. A single Lambda function, for example, is far easier and more feasible to manage; it also enables quick deployments and greatly reduces cold-start issues.
Fig: Monolithic Patterns
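To make the contrast between the patterns concrete, the monolithic pattern can be sketched as a single handler that routes every operation itself; in the microservice pattern, each entry of this routing table would instead be its own deployed function. Action names here are illustrative.

```python
# Sketch of the monolithic pattern: one function handles every operation
# and dispatches internally on an "action" field in the event.
def lambda_handler(event, context):
    routes = {
        "create_user": lambda e: {"created": e.get("name")},
        "list_users":  lambda e: {"users": []},
    }
    route = routes.get(event.get("action"))
    if route is None:
        return {"statusCode": 400, "body": "unknown action"}
    return {"statusCode": 200, "body": route(event)}

print(lambda_handler({"action": "create_user", "name": "ada"}, None))
```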
- Serverless computing is ephemeral and stateless in nature because of the large number of small functions in the cloud.
- Serverless computing is a close partnership between DevOps, developers, and security.
- Serverless computations can be BaaS or FaaS. Major FaaS offerings in the cloud are:
- AWS Lambda
- Microsoft Azure Functions
- Google Cloud Functions
Now let’s get into AWS Lambda functions, which AWS promotes as “Zero Administration” for its developers.
- AWS attaches IAM policies/roles to its Lambda functions for security.
- AWS Lambda has many advantages, as we saw above, but once developers started executing and deploying code, they found duplication of commonly used libraries, binaries, dependencies, etc.
- AWS serverless computing offers two advancements –
- Lambda Layers
In general, irrespective of the programming language a Lambda function uses, large functions carry libraries, dependencies, binaries, etc., that are used frequently. Previously, these libraries, dependencies, and binaries had to be imported into each and every Lambda function, which led to code redundancy. To avoid this, Lambda Layers were introduced: layers hold reusable libraries, dependencies, binaries, and so on, which are imported into Lambda functions and can be used anywhere. Lambda Layers are secured by attaching IAM roles.
Fig: Lambda Layers
Common libraries, binaries, dependencies, frequently used modules, custom runtime APIs, etc., are archived and uploaded; at run time they are extracted under the standard directory /opt, organized by package name and layer name.
E.g., /opt/python3.6/common_libs/. Each Lambda function can use a maximum of 5 layers, and the total unzipped size of the function and all its layers is capped at 250 MB. Lambda Layers are used extensively in monitoring, security, and application management.
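A minimal, hypothetical packaging sketch: for Python, content placed under a `python/` prefix inside the layer archive is unpacked under /opt in the function and ends up on the import path. The module path below is illustrative.

```python
# Build a layer archive in memory: shared code goes under "python/" so it
# lands on the import path under /opt inside the Lambda function.
import io
import zipfile

def build_layer_zip(buf, members):
    """members: mapping of archive path (under python/) -> file bytes."""
    with zipfile.ZipFile(buf, "w") as zf:
        for arc, data in members.items():
            zf.writestr(f"python/{arc}", data)
    return buf

buf = build_layer_zip(io.BytesIO(), {"common_libs/__init__.py": b"# shared helpers\n"})
print(zipfile.ZipFile(buf).namelist())  # → ['python/common_libs/__init__.py']
```

The resulting .zip is what gets uploaded when publishing a layer version; any function that attaches the layer can then import the shared module directly.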
- Lambda Runtime API: The Lambda Runtime API enables communication between source code written in an alternative language and the AWS Lambda environment. Communication happens over an HTTP-based interface that retrieves an event's data and posts the corresponding response from the Lambda function. A custom runtime is driven by an executable file known as bootstrap, which is responsible for managing responses and error handling, context creation, and function execution. Data about the endpoints is exposed through environment variables.
- Environment variables in the Lambda Runtime API:
- AWS_LAMBDA_RUNTIME_API – Hostname:Port
- _HANDLER – Script_Name.Function_Name
- LAMBDA_TASK_ROOT – The directory containing the function code.
- Lambda Runtime Interactions (each is an endpoint + resource path):
- GET (NEXT) – poll for the next event
- POST (RESPONSE) – return the invocation result
- POST (ERROR) – report an invocation error
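These interactions can be sketched as URL builders; the paths follow the documented 2018-06-01 Runtime API version, and the bootstrap's polling loop is indicated only in comments.

```python
# Sketch of the endpoints a custom-runtime "bootstrap" talks to.
import os

API_VERSION = "2018-06-01"

def next_invocation_url(host):
    """GET: block until the next event is available."""
    return f"http://{host}/{API_VERSION}/runtime/invocation/next"

def response_url(host, request_id):
    """POST: send the handler's result for a given invocation."""
    return f"http://{host}/{API_VERSION}/runtime/invocation/{request_id}/response"

def error_url(host, request_id):
    """POST: report a handler failure for a given invocation."""
    return f"http://{host}/{API_VERSION}/runtime/invocation/{request_id}/error"

# A real bootstrap would loop: GET next, run the function named by _HANDLER
# (found under LAMBDA_TASK_ROOT), then POST the response or the error.
host = os.environ.get("AWS_LAMBDA_RUNTIME_API", "127.0.0.1:9001")
print(next_invocation_url(host))
```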
Advantages: Handy, and saves the time otherwise spent installing/configuring tools.
Disadvantages: If layers are updated and their versions change, the updated libraries might break your code. And if a versioned layer is deleted, you won't be able to redeploy a function that references the deleted version.
VT always strategizes design and development based on customer requirements and the fitment of the technology, to achieve the transformation from traditional application development to digital. We at Virtue Tech aim to reach customer goals by bringing innovation. With our AWS partnership, our strength in solving the major building blocks of our customers' technology advancement has grown. For our customers, we bring a unique approach to implementing serverless architecture, as below:
- We implemented layers for the creation of EC2/container instances and RDS instances using Python and Terraform.
- Using the Boto3 package, we created custom functions in Python covering creation of EC2/RDS/container instances, starting/stopping instances, listing EC2/RDS/container instances, creation of IAM users, updating roles/policies for IAM users, etc.
- The resulting .py files were then compressed into .zip format and uploaded to the custom layer we created in AWS Lambda Layers.
- The created custom layer is versioned as “version 1” after selecting the Python version; we chose Python 3.6.
- The custom function is then created with Python 3.6 and the lambda_basic_execution role.
- We then added the layer to the Lambda function by selecting from the list of compatible layers and their versions.
- In this way, we add the custom-created layers to our Lambda function; a single Lambda function can have a maximum of 5 layers.
- Now, in the Lambda function, import the .py file.
- Define a handler with lambda_handler(event, context):
- Then access the custom layer's Python function that we imported earlier, save the Lambda function, and test it.
- These custom layers are created once, and we can add them to any Lambda function wherever we want to use them.
- By implementing this, we solve the problem of code duplication and save time.
- Code is created once and used anywhere as a shared resource, which is at the heart of serverless computing.
Adopting serverless can deliver many benefits, but the road to serverless can be challenging depending on the use case. Like any new technology, serverless architectures will evolve en route to becoming a well-established standard. While serverless architecture may not be the solution to every IT problem, it surely represents the future of many kinds of computing solutions in the coming years.