August 2019, by Max Ved
Several years ago, commercial application development was an expensive process that required capital investment in hardware and software before a single line of code was written. When cloud computing came to the rescue, it introduced online services with hardware and software resources to fit all needs and budgets. However, those earlier cloud services tended to be monolithic chunks of code designed to run on a single server, even if hosted in the cloud. Because of this, the hardware still needed to be provisioned, configured, and paid for in order to execute the application. The new concept of serverless computing claims to address this issue.
What is serverless computing?
Serverless computing is a method of providing backend services on an as-used basis. Of course, it still requires servers for apps — or functions — to run. However, the architecture is designed in such a way that the developer doesn’t need to worry about server management or capacity planning. Users write and deploy code without paying attention to the underlying infrastructure. A company that gets backend services from a serverless vendor is charged based on its actual computation and does not have to reserve and pay for a fixed amount of bandwidth or number of servers, as the service is auto-scaling.
- Cost-effectiveness — probably the main advantage, and the main difference from traditional cloud computing: the customer doesn’t pay for underutilized resources. The serverless computing service takes functions as input, performs the logic, returns the output, and then shuts down — and the user is billed only for the resources used during this period.
- Simplified scalability — or, to put it more accurately, no worries about scalability. Developers are spared from scaling their code, since the serverless vendor handles all of the scaling. An application can be scaled automatically, or its capacity can be adjusted by toggling units of consumption rather than units of individual servers.
- Simplified backend code — in the sense that developers can create simple functions that independently serve a single purpose, like making an API call.
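As a minimal sketch of such a single-purpose function: the snippet below translates an incoming event into an outbound API request and nothing else. The function name, event shape, and endpoint URL are invented for illustration — they are not part of any vendor’s API.

```python
import json

def build_weather_request(event):
    """Turn an incoming event into an outbound API request.

    `event` mimics the JSON payload a serverless platform would pass in.
    The endpoint and parameter names here are hypothetical.
    """
    city = event.get("city", "London")
    url = f"https://api.example.com/v1/weather?city={city}"
    return {"method": "GET", "url": url}

# The function does one thing only: no server setup, no shared state.
print(json.dumps(build_weather_request({"city": "Berlin"})))
```

Because the function is stateless and does exactly one job, the platform can spin up as many copies as incoming events demand.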
- Automated high availability — serverless computing provides built-in availability and fault tolerance. Therefore, developers don’t need to architect for these capabilities since they are provided by default.
- Quicker turnaround — serverless architecture can shorten time to market as you can add and modify code on a piecemeal basis without a complicated deployment process to roll out bug fixes and new features.
- Reduced packaging and deployment complexity — all you need to do is pack your code into a zip file and upload it. Moreover, if you’re just getting started, you don’t even need to upload anything, as you can write your code directly in the vendor’s console.
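To make the “pack your code into a zip file” step concrete, here is a small sketch using Python’s standard `zipfile` module to build the kind of deployment package most FaaS vendors expect. The file and archive names are illustrative, not vendor requirements.

```python
import pathlib
import zipfile

# Write a one-function handler to disk, then zip it as a deployment
# package with the code at the archive root.
src = pathlib.Path("handler.py")
src.write_text("def handler(event, context):\n    return {'ok': True}\n")

with zipfile.ZipFile("function.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(src, arcname="handler.py")  # no leading directories

print(zipfile.ZipFile("function.zip").namelist())  # → ['handler.py']
```

The resulting `function.zip` is what you would upload through the vendor’s console or CLI.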
Serverless computing does have many benefits, but, like any concept, it comes with significant inherent trade-offs that cannot be entirely eliminated.
- Vendor control — using a third-party service means handing over control of part of your system to a third-party vendor. This entails the risks of system downtime, unexpected limits, cost changes, loss of functionality, forced API upgrades, etc.
- Multitenancy problems — where multiple instances of software for several different customers (or tenants) run on the same machine, and possibly within the same hosting application, a number of problems may arise: security (one customer being able to see another’s data), robustness (an error in one customer’s software causing a failure in a different customer’s software), and performance (a high-load customer causing another to slow down).
- Vendor lock-in — whatever serverless features you’re using from one vendor will be implemented differently by another vendor. Therefore, if you want to switch vendors you’ll almost certainly need to update your operational tools, change your code and even your design or architecture.
- Security issues — using third-party services obviously entails risks for system security, and the number of security implementations — read: the surface area for malicious attacks — increases with each serverless vendor that you use. When using a BaaS database directly from your mobile platform, you sacrifice the protective barrier of a server-side application. On a FaaS platform, you may experience a Cambrian explosion of FaaS functions across your company, each representing another vector for problems.
When we speak about serverless computing, we should speak of architectures rather than a single architecture. In essence, it encompasses two different but overlapping areas:
- Backend as a Service and
- Function as a Service
Initially, serverless computing depended on third-party cloud services that managed server-side logic and state. These were — and still are — typically “rich client” applications, such as single-page web apps or mobile apps, that use the vast ecosystem of cloud-accessible databases, authentication services, and so on. Such services are described as “(Mobile) Backend as a Service”, or “BaaS”.
The idea underlying BaaS is that both web and mobile apps require a similar set of backend features, such as push notifications, server code, user and file management, social networking integration, and location services, each with its own API. BaaS uses a unified API and SDK to bridge such backends and the application frontend. In this way, developers don’t have to build a separate backend for each service that their applications use or access.
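The “unified API” idea can be sketched as follows. The client class and method names below are entirely hypothetical — no real BaaS SDK is being modeled — but they show how one SDK surface can stand in for several backend features at once.

```python
class BaasClient:
    """Hypothetical BaaS SDK: one client object exposes features that
    would otherwise each need their own backend and API."""

    def __init__(self, api_key):
        self.api_key = api_key
        self._users = {}  # in-memory stand-in for the vendor's user store

    def sign_up(self, email):
        # User management, handled by the vendor behind one call.
        self._users[email] = {"email": email}
        return self._users[email]

    def push(self, email, message):
        # Push notifications through the same unified surface.
        return {"to": email, "message": message, "queued": True}

client = BaasClient(api_key="demo")
client.sign_up("dev@example.com")
print(client.push("dev@example.com", "Welcome!"))
```

The frontend talks to one SDK; the vendor decides how users, files, and notifications are actually implemented.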
BaaS became popular with Facebook’s Parse, which provided a backend platform for developers to prototype rapidly. Though not scalable and quite expensive (300–1,000 requests per minute cost USD 1,000/month), Parse helped share native models across server-side code and multiple clients. It is mainly helpful for front-end mobile developers who have few backend skills or resources but need a simple backend for their mobile apps. At present, there are BaaS platforms for enterprises, for specific industries, and for mobile applications:
- Buddy — a pioneer BaaS and IoT platform
- Kumulos — mBaaS
- Appcelerator — a (m)BaaS platform working with the Titanium SDK
- Kinvey — mBaaS
- AWS Amplify — mBaaS
- Red Hat — Enterprise mBaaS
- Skygear.io — an open-source BaaS
Serverless computing can also mean applications where the server-side logic is written by the application developer but runs in stateless compute containers that are event-triggered, may last for only one invocation, and are fully managed by a third party. The most popular way to think of this is “Functions as a Service”, or “FaaS”. While in a microservice architecture monolithic apps are broken down into smaller services, FaaS goes even deeper, breaking apps down to the level of events and functions. This means that you can upload modular chunks of functionality into the cloud and they will be executed independently, splitting the server into a set of functions that can be scaled automatically and independently.
AWS Lambda is currently one of the most popular implementations of a Functions-as-a-Service platform. Launched in 2014 by Amazon Web Services, AWS Lambda is an event-driven, serverless computing platform. All code in Lambda is executed upon a configured event trigger. That means that instead of loading your application code into a virtual machine or a container, you upload it into Lambda, where it sits dormant until some external event triggers it and the Lambda service brings your app out of quiescence and executes it. Lambda marked a major departure from the path of computing that stretches back to mainframes and continues through today’s infrastructure-as-a-service (IaaS) cloud computing offerings.
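A minimal sketch of this execution model: AWS Lambda’s Python runtime invokes a handler with an `event` payload and a `context` object when the trigger fires. The handler signature below follows Lambda’s documented convention; the event shape and response body are illustrative assumptions modeled on an API Gateway-style trigger.

```python
import json

def lambda_handler(event, context):
    """Entry point the Lambda runtime calls when an event arrives.

    `event` carries the trigger payload (here, a hypothetical JSON
    body); `context` carries runtime metadata. Between invocations
    the code stays dormant and nothing is billed.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local simulation of what the runtime does on a trigger:
print(lambda_handler({"name": "serverless"}, None))
```

Locally this is just a function call; on the platform, the same function is woken, scaled, and retired entirely by the vendor.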
At the same time, there are many other platforms, including those by other tech giants:
- Microsoft/Azure Functions
- Google Cloud Functions / Firebase
- IBM Cloud Functions
- Iron.io — an open-source and cloud-agnostic platform
Nowadays, with more cloud-based “as a service” tools emerging on the market, it may well be time to opt for a faster and easier-to-implement solution, sparing your resources for other tasks — provided, of course, that you are fine with giving someone else some control over your application.