How we use AWS Lambda to build a scalable platform

Technology
October 4, 2020
In a previous blog post, we discussed the benefits of the Serverless computing model. AWS Lambda is a platform built on the Serverless computing model in which we only need to code the business logic and don't have to worry about servers.

In this article, we will discuss the benefits of using AWS Lambda, how we use it at Clappia, some of the challenges with Lambda and how we overcame those.

Benefits of AWS Lambda

Cost

Cost is one of the major benefits of using the Serverless model. With AWS Lambda, we pay only for the resources consumed while serving requests. As there are no dedicated servers, there are no upfront costs. If our application doesn't receive a single request in a day, there are no Lambda charges for that day.

In addition, AWS Lambda provides a free tier in which the first one million requests are free every month. This helped us save a lot of cost in the initial days of our setup.

More focused development

With AWS Lambda, a lot of maintenance hassles just go away. We don't have to deal with issues like excessive CPU utilisation, running out of heap memory, or server and process restarts. This means that the developers can focus only on solving the business problems and Lambda will take care of the rest.

Inherently Scalable

As with any other Serverless platform, AWS Lambda is inherently scalable. Once we deploy the Lambda functions, we won't have to revisit them when our traffic increases by even a thousand times.  

How we use AWS Lambda at Clappia

The beauty of AWS Lambda lies in its ability to integrate with a lot of other AWS Services. At Clappia, we use AWS Lambda with services like AWS API Gateway, Step Functions, SNS, S3, DynamoDB Streams etc.

AWS Lambda with API Gateway

API Gateway is a service provided by AWS that lets us deploy secure APIs. It acts as a gateway between client-facing applications and the backend logic implemented in AWS Lambda.
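To make this concrete, here is a minimal sketch of a Lambda function behind an API Gateway proxy integration. The route, field names, and response payload are illustrative, not Clappia's actual API:

```python
import json

def lambda_handler(event, context):
    # API Gateway's proxy integration passes the HTTP request as a JSON event.
    # Here we read a hypothetical path parameter from the event.
    user_id = (event.get("pathParameters") or {}).get("userId", "unknown")
    body = {"userId": user_id, "message": "Hello from Lambda"}
    # The proxy integration expects statusCode/headers/body in the response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

API Gateway maps the HTTP method and path to this handler, so the function itself stays free of any routing logic.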

At Clappia, we follow a Microservices architecture in which we have separate services for managing different entities - Users, Apps, Workplaces, Workflows, Notifications etc. All the functions in each of these services are implemented as functions in AWS Lambda and exposed to the front-end clients using API Gateway.  

All the APIs hosted on API Gateway are secured using Lambda-based Authorizers that allow only authenticated users to access these APIs according to their permissions in the system.
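A Lambda-based (token) authorizer boils down to a function that inspects the incoming credentials and returns an IAM policy. The sketch below stubs out the token check; a real authorizer would verify a JWT or session token, and the principal IDs and ARNs are placeholders:

```python
def authorizer_handler(event, context):
    token = event.get("authorizationToken", "")
    # Hypothetical check: replace with real token verification (e.g. JWT).
    if token.startswith("Bearer "):
        principal_id, effect = "user-123", "Allow"
    else:
        principal_id, effect = "anonymous", "Deny"
    # API Gateway caches and enforces the returned IAM policy.
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```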

AWS Lambda with Step Functions

AWS Step Functions is used as an orchestrator to combine multiple Lambda functions to achieve a complex use case. It supports functionalities like condition-based branching, waiting, error handling and parallel execution of functions.

We use Step Functions to power the Clappia Workflows. All Clappia Apps can have multiple complex workflows which can involve actions like sending emails, mobile notifications, SMS, WhatsApp messages, integrating with external APIs and databases, sending data to other Clappia Apps and IF/ELSE logic. We translate these user-defined Workflows and generate a state machine in Step Functions. Know more about Clappia Workflows here.


Step Functions gives us the capability to add many more similar Workflow actions in no time. We just need to implement their corresponding Lambda functions and Step Functions will take care of including them in the orchestration.
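The translation step can be sketched as a small generator that turns a user-defined sequence of actions into an Amazon States Language (ASL) state machine. The action names, ARN pattern, and linear chaining below are illustrative, not Clappia's real workflow schema:

```python
def workflow_to_state_machine(actions):
    """Translate an ordered list of action names into an ASL definition."""
    states = {}
    for i, action in enumerate(actions):
        name = f"Step{i + 1}"
        state = {
            "Type": "Task",
            # Hypothetical ARN for the Lambda implementing this action.
            "Resource": f"arn:aws:lambda:us-east-1:123456789012:function:{action}",
        }
        if i == len(actions) - 1:
            state["End"] = True          # last state terminates the machine
        else:
            state["Next"] = f"Step{i + 2}"  # chain to the following action
        states[name] = state
    return {"StartAt": "Step1", "States": states}
```

The resulting dictionary can be serialized to JSON and passed to the Step Functions CreateStateMachine API; condition-based branching would add Choice states on top of this linear chain.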

AWS Lambda with SNS

We have some use cases which involve tasks that can run asynchronously without making the end user wait for them to complete. For example, when an App admin assigns a Clappia App to another user, there is a non-critical task of sending an email to that user. When a user makes a submission in a Clappia App, we need to execute the Workflow associated with that App, but this execution can happen asynchronously.

For such use cases, we use SNS. SNS uses the Pub-Sub messaging mechanism to send messages between different Lambda functions. So when an App is assigned to a user, the first Lambda function updates some entries in a database to reflect the permission changes, then publishes an SNS Message and returns a response to the user. The SNS Message gets consumed by a second Lambda function that sends out the email. With this approach, we reduce the latencies of user-facing business logic by off-loading portions of the logic to other Lambda functions.
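Both sides of that flow can be sketched as follows. The topic ARN and payload fields are placeholders; `sns_client` would be a boto3 SNS client in production, and is injected here only to keep the sketch self-contained:

```python
import json

# Producer: after updating permissions, publish a message and return
# immediately instead of sending the email inline.
def assign_app(sns_client, user_email, app_id):
    # ...update permission entries in the database here...
    sns_client.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:app-assigned",
        Message=json.dumps({"email": user_email, "appId": app_id}),
    )
    return {"status": "assigned"}

# Consumer: a second Lambda, subscribed to the topic, sends the email.
def send_email_handler(event, context):
    recipients = []
    for record in event["Records"]:
        payload = json.loads(record["Sns"]["Message"])
        # ...send the notification email to payload["email"] here...
        recipients.append(payload["email"])
    return recipients
```

The producer returns to the user as soon as the publish succeeds; delivery to the consumer is SNS's responsibility.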

How did we address the challenges with AWS Lambda?

Cold Start

After serving a request, the Lambda execution container becomes inactive if it doesn't receive more requests for some time. Any request arriving after that faces the Cold Start problem: the container has to be provisioned again and the deployment package loaded into it before the function can execute. As a result, we notice latency spikes for requests that come after a delay.

We followed a couple of approaches to mitigate this problem.

  • Reducing the deployment package size: The initialization of the execution container depends on the size of the deployment package. So we tried to reduce this size by removing unused Lambda functions, dead code, and unused dependencies from the package. In some cases, one Microservice had ~15 APIs, so we also split it into two separate Lambda Applications so that each of them can load faster.
  • Utilizing the Lambda execution context: There are some reusable variables like SDK clients, database connections that can be used across multiple executions of Lambda. By moving these from the Lambda handler to the execution context, we ensure that these variables are available to future executions that happen on the same container before the container becomes inactive.
  • Sending dummy requests to Lambda: Sending dummy requests at regular intervals reduces the chances of the Lambda execution container going inactive, thereby allowing the actual executions to start immediately.
  • Provisioned concurrency for Lambda: This approach can result in the most significant improvement in the initial load time. Provisioned concurrency allows Lambda functions to be always ready to execute with no separate provision time. However this comes at an additional cost. Refer to pricing for Lambda provisioned concurrency.  
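The execution-context technique from the list above amounts to initializing expensive resources at module level, outside the handler. The "connection" below is a stand-in for an SDK client or database connection:

```python
def create_connection():
    # In production this might be boto3.client("dynamodb") or a DB
    # connection; here it is a placeholder object.
    return {"handle": object()}

# Module-level code runs once per container, not once per request, so this
# connection is reused by every warm invocation.
CONNECTION = create_connection()

def lambda_handler(event, context):
    # Reuses CONNECTION instead of reconnecting on every invocation.
    return {"connectionId": id(CONNECTION["handle"])}
```

Two warm invocations on the same container see the same connection object, which is exactly the saving this technique buys.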

No Caching

With AWS Lambda, we cannot cache the responses of any API calls that we make. So if there is a Lambda function which makes an API call to get the user name using an email address, this call has to be made every time the Lambda gets executed, even if the user name is not likely to change frequently. This leads to increased latencies of the Lambda functions and also increased costs because of the number of invocations of the dependency API.

We mitigate this problem in two ways:

  • Using the Lambda execution context: We can use the execution context as a cache for user data also. Data which is not likely to change very frequently can be put in an in-memory cache in the Lambda execution context. This cache is available for subsequent calls that come to the same container before it becomes inactive. This approach can solve our problem only if the Lambda is getting enough requests to always keep the containers active.
  • Using an external caching service: For some use-cases, we also use AWS ElastiCache which works as an in-memory data store to provide sub-millisecond latency for retrieving objects from the cache.
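The in-memory approach from the first bullet can be as simple as a TTL cache stored in the execution context. The cache only lives as long as the container stays warm; the field names, TTL value, and `fetch_fn` callback are illustrative:

```python
import time

_CACHE = {}          # survives across warm invocations of this container
TTL_SECONDS = 300    # how long a cached entry is considered fresh

def get_user_name(email, fetch_fn):
    """Resolve email -> name, caching results in the execution context.

    fetch_fn stands in for the downstream API call we want to avoid
    repeating on every invocation.
    """
    entry = _CACHE.get(email)
    if entry is not None and time.time() - entry[1] < TTL_SECONDS:
        return entry[0]  # cache hit: skip the downstream call
    name = fetch_fn(email)
    _CACHE[email] = (name, time.time())
    return name
```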

Vendor Lock

Once we start implementing our business logic in AWS Lambda and integrating Lambda with other AWS Services, we risk being tied to AWS indefinitely, unable to try out products from other Cloud providers that might suit us better.

We have addressed this concern by keeping the business logic decoupled from the Lambda handler. That way, if we plan to move away from Lambda in the future, the effort will be smaller as we won't have to touch the business logic.
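In practice this means the handler is a thin shim that only translates the Lambda event, while the logic itself uses no AWS-specific types. The function names and event fields below are illustrative:

```python
def assign_app_to_user(user_email, app_id):
    # Pure business logic: no `event`, no `context`, no AWS types,
    # so it can run behind any other provider's function runtime.
    if not user_email or not app_id:
        raise ValueError("user_email and app_id are required")
    return {"email": user_email, "appId": app_id, "status": "assigned"}

def lambda_handler(event, context):
    # Lambda-specific shim: unpack the event, delegate to portable logic.
    return assign_app_to_user(event.get("userEmail"), event.get("appId"))
```

Moving to another provider then means rewriting only the shim, not the logic it wraps.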

We also use the Serverless framework, which allows us to have separate configuration files for different Cloud providers but common handlers for the business logic.

Author
Sarthak Jain, Co-founder & CTO of Clappia
He can be reached at sarthak@clappia.com.

We are building a revolutionary No Code platform on which creating complex business process apps is as easy as working with Excel sheets. Visit Clappia to give it a try.
