Author: Filip Stefanovski, Software Developer
Traditional web applications give us complete control over server resources, infrastructure, load balancing, uptime monitoring, and server security updates. As the business grows, we also need a scaling plan and vision so we can easily scale up the application.
Because Agile planning directs most focus towards functional deliverables, it is a common malpractice in a great number of organizations to underprioritize infrastructure and scaling technical debt.
Serverless computing provides an opportunity to focus on the functional part of the code and leave the infrastructure and resources to the serverless provider. This approach is often called “Functions as a Service” (FaaS). In essence, we only push the function code to the serverless provider, while on-demand infrastructure is provisioned automatically whenever the function is invoked. We see serverless functions as separate functional parts of an application. Moreover, some microservice endpoints can be treated as serverless functions.
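To make the FaaS idea concrete, here is a minimal sketch of a serverless function, modeled on the AWS Lambda `handler(event, context)` convention (the event shape and return format here are illustrative assumptions, not a specific provider's contract):

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style entry point: the provider calls this
    function on demand and tears the container down when it is idle."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# The provider would invoke the handler with an event payload:
print(handler({"name": "serverless"}))
```

Only this function is deployed; there is no server process, web framework, or infrastructure code around it.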
Serverless functions and services are called/triggered by event-driven actions (an HTTP request, an incoming email, a new file upload on an FTP server, etc.), and serverless providers execute our functions in sandboxed containers. We can have a practically unlimited number of function containers running in parallel*.
In order to understand the benefits of serverless functions, think of this scenario:
Imagine we have a service where users can attach/upload documents that need to be processed and converted to PDF. Some days only a couple of documents are uploaded; other days thousands arrive in a single batch.
By using a serverless architecture, we gain the following advantages:
- We can process thousands of documents in parallel (each document upload event triggers a separate function container)
- We are billed only for function execution time (when the function is idle, there is no active billing for those hours)
- We can configure each function instance's resources to fit our computing needs
- The execution duration of one function does not affect the timing of the others
This lets us handle the maximum number of parallel events as quickly as possible.
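The document-conversion scenario above can be sketched as a single-document handler: one invocation processes exactly one upload, so a batch of thousands of uploads fans out into parallel containers. The `convert_to_pdf` helper is hypothetical; a real function would call a conversion library or service at that step:

```python
def convert_to_pdf(document_bytes: bytes) -> bytes:
    # Hypothetical conversion step; a real implementation would invoke
    # a converter (e.g. a headless office suite) here.
    return b"%PDF-" + document_bytes

def handle_document(event):
    """One invocation processes exactly one uploaded document, so a
    large batch of uploads fans out into parallel function containers."""
    pdf = convert_to_pdf(event["body"])
    return {"key": event["key"] + ".pdf", "size": len(pdf)}

print(handle_document({"key": "report.docx", "body": b"hello"}))
```

Because each invocation is independent, there is no queue to drain and no worker pool to size: the provider scales the number of containers with the number of upload events.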
Since serverless function containers do not natively share state with one another, we should analyze which types of problems are a good fit for serverless. Sometimes we might keep some application parts on more traditional infrastructure (Docker, etc.) and provide serverless functions only for offloaded, event-based actions.
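The statelessness constraint can be sketched as follows: any data that must survive across invocations has to live in an external store. In the snippet below, a plain dict stands in for such a store (e.g. Redis or DynamoDB); that substitution is an assumption for illustration only:

```python
# Function containers share no state, so cross-invocation data must
# live in an external store. This dict stands in for a real store
# such as Redis or DynamoDB (an assumption for this sketch).
STORE = {}

def count_documents(event, store=STORE):
    """Track how many documents of a batch have been processed.
    A fresh container has no memory of earlier invocations, so the
    counter is read from and written back to the external store."""
    job_id = event["job_id"]
    store[job_id] = store.get(job_id, 0) + 1
    return store[job_id]

count_documents({"job_id": "batch-42"})
print(count_documents({"job_id": "batch-42"}))  # 2
```

If the counter were kept in a module-level variable instead, each parallel container would see its own copy, and the totals would be wrong.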