Research: Serverless computing holds promise for tech leaders, but results are mixed
A recent Tech Pro Research poll shows that despite enthusiasm for serverless computing, some enterprises experience issues with the services.
By Melanie Wolkoff Wachsman on May 1, 2019
This ebook, based on the latest ZDNet / TechRepublic special feature, examines the returns and efficiencies businesses are seeing with serverless computing, how to create a serverless architecture, and the top vendors.
Serverless computing holds promises of flexibility and efficiency as well as cost and time savings, but does this technology measure up in the eyes of tech leaders?
In April 2019, ZDNet’s sister site Tech Pro Research surveyed 159 tech professionals to find out why companies use — or do not use — serverless computing services. Survey questions covered a range of topics, including cloud service providers and the benefits and drawbacks of using serverless computing.
According to the survey, 47 percent of respondents currently use serverless computing services, while 9 percent plan to use the services within the next six months. Current users are taking advantage of the services for web app development, business logic, database changes, batch jobs or scheduled tasks, IoT, and multimedia processing.
However, despite the myriad functions serverless computing provides, 28 percent of survey respondents have no plans at this time or in the future to use the services, and 16 percent of respondents are waiting until sometime beyond the next 12 months to use the services.
As with many innovations, security concerns topped the list of reasons why companies are not implementing serverless computing services. More than 20 percent of respondents either have no apparent business need for serverless computing services or are uncertain of how to apply the technology effectively. Cost concerns rounded out the list of reasons preventing respondents from using serverless computing services.
SEE: Top cloud providers 2019: A leader’s guide to the major players (Tech Pro Research)
Of the respondents who currently use these services, 16 percent have not experienced any issues with serverless computing; however, the remaining respondents indicated challenges with the services. A third of respondents listed vendor lock-in as their biggest issue, while other respondents found serverless computing more difficult to work with than expected. Respondents also noted difficulties in testing applications, additional development complexity, and lack of specific code or language support.
There are numerous serverless computing services providers, most of which are big-name players in the cloud services field like AWS, Microsoft Azure, Google Cloud Platform, Oracle Cloud Service, and IBM Cloud.
The infographic below contains selected details from the research. To read more findings, plus analysis, download the full report, Prepare for serverless computing 2019: IT leaders need more convincing to use serverless computing services (Tech Pro Research subscription required).
What serverless computing really means, and everything else you need to know
Serverless architecture is a style of programming for cloud platforms that’s changing the way applications are built, deployed, and – ultimately – consumed. So where do servers enter the picture?
By Scott Fulton III | April 9, 2019
Serverless computing is not, despite its name, the elimination of servers from distributed applications. Serverless architecture refers to a kind of illusion, originally made for the sake of developers whose software will be hosted in the public cloud, but which extends to the way people eventually use that software. Its main objective is to make it easier for a software developer to compose code, intended to run on a cloud platform, that performs a clearly defined job.
If all the jobs on the cloud were, in a sense, “aware” of one another and could leverage each other’s help when they needed it, then the whole business of whose servers are hosting them could become trivial, perhaps irrelevant. And not having to know those details might make these jobs easier for developers to program. Conceivably, much of the work involved in attaining a desired result might already have been done.
“What does serverless mean for us at [Amazon] AWS?” asked Chris Munns, senior developer advocate for serverless at AWS, during a session at the re:Invent 2017 conference. “There’s no servers to manage or provision at all. This includes nothing that would be bare metal, nothing that’s virtual, nothing that’s a container — anything that involves you managing a host, patching a host, or dealing with anything on an operating system level, is not something you should have to do in the serverless world.”
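A concrete way to see what is left for the developer: in the Python runtime, the entire deliverable is a single handler function that the platform calls with an event and a context. The sketch below is illustrative only; the event’s "name" field is an invented example, not part of any AWS contract.

```python
# A Lambda-style handler: the platform supplies the event payload and
# a context object; the developer never touches the host beneath it.
def handler(event, context):
    # "name" is an illustrative key, chosen by whatever emits the event.
    name = event.get("name", "world")
    # The return value is handed back to the invoking service; there is
    # no server, socket, or process for the developer to manage.
    return {"statusCode": 200, "body": f"Hello, {name}"}
```

Everything Munns lists as “not something you should have to do” — provisioning, patching, the operating system — sits below this function, out of the developer’s sight.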
AWS’ serverless, functional service model is called Lambda. Its name comes from the lambda calculus, a long-standing formal system in mathematics in which an abstract symbol (the Greek letter λ) represents an anonymous function.
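That heritage is visible in most modern programming languages, where anonymous functions survive as a first-class construct; Python even spells the keyword `lambda`:

```python
# In the lambda calculus, λx.x*x denotes a nameless function of one
# argument. Python's lambda keyword is a direct descendant:
square = lambda x: x * x

# Functions are values, so they can be passed around and applied
# without ever being given a name of their own:
doubled = list(map(lambda x: 2 * x, [1, 2, 3]))
```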
The pros and cons
Serverless computing has been pitched to developers as a means for them to produce code more like it was done in the 1970s, and even the ’60s, when everything was stitched together in a single system. But that’s not a selling point that enterprises care much about. For the CIO, the message is that serverless changes the economic model of cloud computing, with the hope of introducing efficiency and cost savings.
Improved utilization — The typical cloud business model, which AWS championed early on, involves leasing either machines — virtual machines (VMs) or bare-metal servers — or containers (such as Docker or OCI containers) that are reasonably self-contained entities. Virtually speaking, since they all have network addresses, they may as well be servers. The customer pays for the length of time these servers exist, in addition to the resources they consume. With the Lambda model, what the customer leases is instead a function — a unit of code that performs a job and yields a result, usually on behalf of some other code (which may be a typical VM or container, or conceivably a web application). The customer leases that code only for the length of time in which it’s “alive” — just for the small slices of time in which it’s operating. AWS charges based on the size of the memory space reserved for the function, for the amount of time that space is active, which it calls “gigabyte-seconds.”
Separation of powers — One objective of this model is to increase the developer’s productivity by taking care of the housekeeping, bootstrapping, and environmental matters (the dependencies) in the background. This way, at least theoretically, the developer is freer to concentrate on the specific function he’s trying to provide. This also compels him to think about that function much more objectively, thus producing code in the object-oriented style that the underlying cloud platform will find easier to compartmentalize, subdivide into more discrete functions, and scale up and down.
Improved security — By constraining the developer to using only code constructs that work within the serverless context, it’s arguably more likely the developer will produce code that conforms with best practices, and with security and governance protocols.
Time to production — The serverless development model aims to radically reduce the number of steps involved in conceiving, testing, and deploying code, with the aim of moving functionality from the idea stage to the production stage in days rather than months.
Uncertain service levels — The service-level agreements (SLAs) that normally characterize public cloud services have yet to be ironed out for FaaS and serverless. Although other Amazon Compute services have clear and explicit SLAs, AWS has actually gone so far as to characterize the lack of an SLA for Lambda functions as a feature, or a “freedom.” In practice, the performance patterns for FaaS functions are so indeterminate that it’s difficult for the company, or its competitors, to decide what’s safe to promise.
Untested code can be costly — Since customers typically pay by the function invocation (for AWS, the standard arbitrary maximum is 100), it’s conceivable that someone else’s code, linked to yours by way of an API, may spawn a process in which the entire maximum number of invocations is consumed in a single cycle, instead of just one.
Monolithic tendency — Lambda and other functions are often brought up in conversation as an example of creating small services, or even microservices, without too much effort expended in learning or knowing what those are. (Think of code that’s subdivided into very discrete, separated units, each of which has only one job, and you get the basic idea.) In practice, since each organization tends to deploy all its FaaS functions on one platform, they all naturally share the same context. But this makes it difficult for them to scale up or down as microservices were intended to do. Some developers have taken the unexpected step of melding their FaaS code into a single function, in order to optimize how it runs. Yet that monolithic choice of design actually works against the whole point of the serverless principle: If you were going to go with a single context anyway, you could have built all your code as a single Docker container, and deployed it on Amazon’s Elastic Container Service for Kubernetes, or any of the growing multitude of cloud-based containers-as-a-service (CaaS) platforms.
Clash with DevOps — By actively relieving the software developer from responsibility for understanding the requirements of the systems hosting his code, one of the threads necessary to achieve the goals of DevOps — mutual understanding by developers and operators of each other’s needs — may be severed.
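The “gigabyte-seconds” billing unit mentioned under Improved utilization is simple arithmetic. A back-of-the-envelope sketch follows; the per-GB-second price is an assumed, illustrative figure, not a quoted AWS rate.

```python
def lambda_cost(memory_mb, duration_ms, invocations, price_per_gb_s):
    """Estimate a FaaS bill from reserved memory and actual run time.

    price_per_gb_s is an assumed unit price; check your provider's
    current price sheet before relying on a figure like this.
    """
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000) * invocations
    return gb_seconds * price_per_gb_s

# A 512 MB function running 200 ms, a million times a month, at an
# assumed $0.0000166667 per GB-second: roughly $1.67 for the month.
monthly = lambda_cost(512, 200, 1_000_000, 0.0000166667)
```

The point of the model is visible in the numbers: a function that runs for slices of a second bills for those slices only, not for the hours in between.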
THE GROWING FUNCTIONS-AS-A-SERVICE MARKET
More than any other commercial or open source organization, AWS has taken the lead in defining serverlessness with respect to consumers and the serverless business model. But its entry into the field immediately triggered the other major cloud service providers to enter the FaaS market (whether or not they adopt the serverless motif in its entirety): Azure Functions is Microsoft’s approach to the event-driven model. Google Cloud Functions is that provider’s serverless platform. And IBM Cloud Functions is IBM’s approach to the open source OpenWhisk serverless framework.
The Lambda calculus
Another phrase used by Amazon and others in marketing their serverless services is functions-as-a-service (FaaS). From a developer’s perspective, it’s a lousy phrase, since functions in source code have always been, and always will be, services. But the “service” that’s the subject of the capital “S” in “FaaS” is the business service, as in cloud “service” provider. The service there is the unit of consumption. You’re not paying for the server but for the thing it hosts, and that’s where AWS has stashed the server.
Amazon uses the terms “serverless” and “FaaS” interchangeably, and for purposes of the customers who do business in the realm of AWS, that’s fair. But in the broader world of software development, they are not synonymous. Serverless frameworks can, and more often in recent days do, span the boundaries of FaaS service providers. The ideal there is, if you truly don’t care who or what provides the service, then you shouldn’t be bound by the rules and restrictions of AWS’ cloud, should you?
PROMISE VS. DELIVERY
“The idea is, it’s serverless. But you can’t define something by saying what it’s not,” explained David Schmitz, a developer for Germany-based IT consulting firm Senacor Technologies, speaking at a recent open source conference in Zurich.
Citing AWS’ definition of serverless from its customer web site, Schmitz said, “They say you can do things without thinking about servers. There are servers, but you don’t think about them. And you are not required to manually provision them, to scale them, to manage them, to patch them up. And you can focus on whatever you are really doing. That means, the selling point is, you can focus on what matters. You can ignore everything else.
“You will see that this is a big lie, obviously,” he continued.
DISTINGUISHING SERVERLESS FROM FAAS
In his recent O’Reilly book Designing Distributed Systems, Microsoft Distinguished Engineer and Kubernetes co-creator Brendan Burns warns readers not to confuse serverless with FaaS. While it is true that FaaS implementations do obscure the host server’s identity and configuration from the customer, it is not only possible but, in certain circumstances, desirable for an organization to run a FaaS service on servers that it not only manages explicitly, but optimizes especially for FaaS. FaaS may appear serverless from the customer’s angle, but only from that angle.
A truly serverless programming model and a serverless distribution model, some advocates are saying, would not be bound to, of all things, a single server — or, any single service provider.
YOU WONDER WHERE THE SERVER WENT
Serverless is supposed to be an open-ended cloud workshop. Optimistically, it should incite developers to build, for instance, services that respond to commands, such as “Call up my grocery store and have them hold two K.C. strip steaks for me.” The process of building such a service would leverage already written code that handles some of the steps involved.
The developer-oriented serverless ideal paints a picture of a world where a software developer specifies the elements necessary to represent a task, and the network responds by providing some of those elements. Suddenly the data center is transformed into something more like a kitchen. Whereas a chef may have a wealth of resources open to her, most everyday folks cook with vegetables that come from their refrigerators, not their gardens. That doesn’t make gardens somehow bad or wrong, but it does mean a whole lot more people can cook.
In practice, “serverlessness” (a term I invented) is more of a variable. Some methodologies are more serverless than others.
THE ROLE OF EVENT-DRIVEN PROGRAMMING
You may have already surmised that a distributed application hosted in the cloud is hosted by servers. But servers in this context are places in a network. So a distributed application may rely on software resources that exist in places other than the host from which it was accessed. Imagine a system where “place” is irrelevant — where every function and every resource that the source code uses, appears to be “here.” Imagine instead of a vastly dispersed internet, one big location where everything was equally accessible.
At the recent CloudNativeCon Europe event in Copenhagen, Google Cloud Platform developer advocate Kelsey Hightower presented a common model of a FaaS task: One that would translate a text file from English to Danish, perhaps by way of a machine learning API. For the task to fit the model, the user would never need to see the English-language file. Once the text file became available to the server’s object store, a trigger attached to that store would invoke an internal function, which would in turn set the translation process in motion.
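Hightower’s example can be sketched as an object store that fires registered triggers on every upload. Everything below is a hypothetical stand-in for the managed services he describes: the store, the trigger API, and the translation stub are all invented for illustration.

```python
translations = {}  # destination for the (pretend) translated files

class ObjectStore:
    """Toy object store that invokes triggers when an object lands."""
    def __init__(self):
        self.objects = {}
        self.triggers = []

    def on_put(self, fn):
        # Register a function to run whenever an object is stored.
        self.triggers.append(fn)

    def put(self, key, data):
        self.objects[key] = data
        # The event invokes the functions; nothing addressed a server.
        for fn in self.triggers:
            fn(key, data)

def translate_to_danish(key, text):
    # Stand-in for a call to a machine-learning translation API.
    translations[key] = f"[da] {text}"

store = ObjectStore()
store.on_put(translate_to_danish)
store.put("letter.txt", "Hello")  # the upload itself starts the translation
```

The user’s only action is the `put`; the translation happens because the function was wired to the event, not because anyone called it.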
An event procedure does not have to be explicitly called, which means it does not have to be addressed — a process that often involves identifying its location, including its server. If it’s set up to respond to an event, it can be left unattended, like a mousetrap or a DVR.
In distributed applications, services are typically identified by their location — specifically, by a URI that begins with http:// or https://. Naturally, the part of the URI that follows the HTTP protocol identifier is the primary domain, which is essentially the server’s address. Since an event-driven program is triggered passively, that address never has to be passed, so the server never needs to be looked up. And in that sense, the code becomes “serverless.”
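The location-addressed half of that contrast is easy to see with the standard library: a conventional HTTP service URI carries its server’s address right in the string, and the caller has to know it.

```python
from urllib.parse import urlparse

# A conventionally addressed service: the caller must know, and
# resolve, the host embedded in the URI before anything can run.
uri = "https://api.example.com/v1/translate"
parts = urlparse(uri)

host = parts.netloc    # "api.example.com": the server, in plain sight
scheme = parts.scheme  # "https"
```

An event-driven function has no such string anywhere in the caller’s code, which is precisely the sense in which it is “serverless.”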
SERVERLESS’ CAPTIVE AUDIENCE
“This is beautiful — this is like the dream come true!” said Google’s Hightower. He presented his audience with three choices: “You can destroy all your code; you could do no code, but that’s a little extreme; or you could do this serverless thing. This is how it’s sold. Anyone see the problem with this?”
After a few hints, Hightower revealed what he characterizes as a flaw in the model: Its dependence upon a single FaaS framework, operating within a single context, within the constraints of a single cloud provider. The reason you don’t see so many servers in such a context is because you’re inside, from its perspective, the only one there is.
Put another way, you’re stuck in Amazon’s house.
Hightower is an advocate for an emerging framework, being developed under the auspices of the Cloud Native Computing Foundation (CNCF, also responsible for Kubernetes), called CloudEvents. Its goal is to come up with a common method for registering an event — an occurrence that hosts should watch for, even if it emerges from elsewhere on some other system or platform. This way, an activity or method on one cloud platform can trigger a process on another. For instance, a document stored in Amazon’s S3 storage can trigger a translation process into Danish on Google Cloud.
“The goal here is to define a few things,” he told the audience. “Number one, the producer owns the type of the event. We’re not going to try to standardize every event that can be emitted from every system. That is a fool’s errand. What we want to do, though, is maybe standardize the envelope in which we capture that event — a content type, [and] what’s in the body. And then we need to have some decision, and one of those decisions so far is, maybe we can use HTTP to transport this between different systems.”
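The “envelope” Hightower describes looks, in the CloudEvents specification, like a small set of required context attributes (specversion, id, source, type) wrapped around a producer-defined payload. A minimal sketch, with the event type and source values made up for illustration:

```python
import json

# A CloudEvents-style envelope: the spec standardizes the wrapper,
# not the payload, which remains the producer's to define.
event = {
    "specversion": "1.0",
    "id": "1234-5678",                    # unique per event
    "source": "s3://example-bucket",      # invented producer URI
    "type": "com.example.object.stored",  # the producer owns this type
    "datacontenttype": "application/json",
    "data": {"key": "letter.txt"},        # payload, not standardized
}

# Transporting it between systems can be as plain as HTTP plus JSON:
wire = json.dumps(event)
```

Any consumer that understands the envelope can route the event, even if it has never heard of the producer’s payload format.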
A bit of background for what Hightower’s talking about here: The earliest attempts at distributed systems — among them, DCOM and CORBA — imposed some type of centralized regimen where the context of jobs being processed was resolved at a high level by some mutually agreed-upon authority. Something was in charge. This would be the opposite of the serverless ideal; this would ensure that there’s always a principal host at the top of the food chain.
This concept does not work at large scale, because that host would need some kind of all-encompassing directory of contexts, like Windows’ System Registry, to specify what each type of data meant, and to whom it would belong. That type of authority is just fine, if you happen to be the maker of a platform that wants to be the only cloud in town.
REVENGE OF THE SERVERS
But that might not be the type of framework that developers in the field, like Senacor’s Schmitz, would like to see. From his perspective and experience, one of the main benefits of serverless computing as he practices it is the promise of the lack of a framework or protocol for these types of inter-cloud communications. In fact, the very presence of such a framework would imply that there were entities that need to communicate at all — in effect, servers.
“We all love frameworks, runtimes, and tools. And there are many,” Schmitz told his audience. “There are things like Serverless [Framework] which abstract away Lambda. There are things like Chalice which does something in a similar way. There’s Serverless Express where you can wrap an existing application.
“Ye-u-u-gh,” he uttered, in a single syllable, like a brown bear uncovering an empty dumpster. “We don’t need that. Really, you do not need a framework to work with AWS. They have an SDK. Apply sane practices, and you will be fine.”
Schmitz conceded that staying within the AWS Lambda paradigm does result in the production of code that is somewhat monolithic and inflexible, difficult if not impossible to scale, and a bear to secure properly. In exchange for these concessions, he said, Lambda gives the developer instantaneous deployment, code that is simple enough to produce, and a learning curve that is not very steep at all.
Schmitz and Hightower are on opposite sides of the evolutionary path of serverless computing in the data center. Throughout the history of this industry, simplification and distribution have stared each other down across this moving barricade.
THE PURE BUBBLE
It has been the goal of the DevOps movement to break impasses like this one, and to incite coordination between software developers and network operators to work together toward a mutual solution. One of serverless advocates’ stated goals has been to devise the means to automate such processes as conformance, handshaking, security, and scalability without all that cumbersome human interaction. The end result should be that the manual processes of provisioning resources elsewhere in the cloud — processes that are susceptible to human error — are substituted with routines that take place in the background, so discreetly that the developer can ignore the server even being there. And since the end user shouldn’t have to care either, it may as well be truly serverless.
Serverless architectures, they insist, should free the developer from having to be concerned with the details of the systems that host her software — to make the Ops part of DevOps irrelevant to the Dev part. So doesn’t serverless work against DevOps?
“There is no doubt that, as you move to higher levels of abstraction of platforms, there’s operational burdens that go away,” responded Nigel Kersten, chief technical strategist for CI/CD resource provider Puppet. “You adopt virtualization, [and] a lot of your people don’t need to care as much about their metal. You adopt infrastructure-as-a-service in the cloud, [and] you’re not needing to worry about the hypervisors any more. You adopt a PaaS, and there are other things that essentially go away. All become ‘smaller teams’ problems.
“You adopt serverless, and for developers to be successful in developing and architecting applications that work on these platforms,” Kersten continued, “they also have to learn more of the operational burden. And it may be different to your traditional sysadmin who is racking and stacking hardware, and having to understand disk speed and things like that, but the idea that developers get to operate in a pure bubble and not actually think about the operational burden at all, is completely deluded. It just isn’t how I’m seeing any of the successful serverless deployments work. The successful ones are developers who have some operational expertise, have some idea of what it’s like to actually manage things in production, because they’re still having to do things.”
PLUGGING IN CONTINUOUS INTEGRATION
The development patterns Kersten sees emerging in the serverless field, he told ZDNet, are only now emerging as a result of evolutionary paths bunching themselves up against the edges of this proverbial bubble. New logic is required to resolve the adaptability burdens facing FaaS-optimized code, once it becomes encumbered by the stress of customer demand at large scale. Configuration management systems on the back end can only go so far. The simple act of updating a function requires the very type of A/B comparisons against older versions that a serverless context, with its lack of contextual boundaries, would seek to abolish.
There’s also the issue of the deployment pipeline. In organizations that practice continuous integration and continuous delivery (CI/CD), the pipeline is the system of testing and quality control each code component receives, before it’s released to production for consumer use. The very notion of staging implies compartmentalization — again, against the serverless ideal of homogeneity.
“I still think there needs to be test environments, there still needs to be staging environments,” argued JP Morgenthal, CTO for applications services at DXC Technology. “And I’m still of the firm belief that somebody should be responsible for validating something moving into production.
“I know there are some schools of thought that say, it’s okay for the developer to push directly into production. Netflix does that,” Morgenthal told ZDNet. “Somebody not getting their movies, sure, that’s a bad thing because you want customers to be happy. But it’s a lot different when you let somebody issue a new function inside of a banking application without appropriate validation at multiple levels — security, ethics, governance — before that code gets released. That is still DevOps, because that still has to go from the developer developing, deploying, in a test environment, to somebody testing it and ensuring that those things hold, before it can go the rest of the way in the pipeline into production deployment.”
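Morgenthal’s argument amounts to a gate between pipeline stages. A deliberately simplified sketch of that kind of promotion check follows; the gate names and the checks themselves are invented for illustration, not drawn from any real CI/CD tool.

```python
def promote(artifact, checks):
    """Run every validation gate; refuse production on any failure."""
    failures = [name for name, check in checks.items()
                if not check(artifact)]
    if failures:
        raise RuntimeError(f"blocked from production: {failures}")
    return f"{artifact} deployed to production"

# Invented gates standing in for security, governance, and test review.
# Here "signing" is a toy proxy for a passed security review.
checks = {
    "tests_pass": lambda a: True,
    "security_review": lambda a: a.endswith(".signed"),
}
```

However the gates are implemented, the structural point stands: something between the developer and production must be able to say no.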
Giving developers the appearance of operating in a “pure bubble” — a cushioned, comfy, safe haven where all is provided for them — and giving these same people a way to integrate themselves and their roles with everyone else in IT, seem to be two gifts for competing holidays.
Sure, we may yet devise new automated methods to achieve compliance and security that developers can comfortably ignore. But even then, the pure bubble of serverlessness could end up serving as a kind of temporary refuge, a virtual closed-door office for some developers to conjure their code without interference from the networked world outside. That may work for some. Yet in such circumstances, it’ll be difficult for employers and the folks whose jobs are to evaluate developers’ work, to perceive the serverless architectural model as anything other than a coping mechanism.
Enterprise serverless computing providers: Comparing the top contenders
If your business needs backend services for its websites and apps, IT pros say these are the serverless computing providers to turn to.
By Macy Bayern on May 1, 2019
Serverless computing is a category of cloud computing that is sweeping the enterprise. The main appeal of this platform-as-a-service (PaaS) is its pay-per-use and hands-off nature—meaning the user is billed only when their code is running, and there is no physical or virtual infrastructure for anybody to manage.
Organizations that benefit most from serverless computing are those running websites and apps that need backend services or analytics. Because the user is charged only when the code is run, this form of computing can prove to be very affordable for the right organizations, reported TechRepublic’s Nick Heath in his serverless computing cheat sheet.
SEE: Serverless architectures: 10 serious security problems (free TechRepublic PDF)
The popularity of this technology has brought a slew of serverless computing vendors to the market. “[These providers] abstract the developer from the lower level implementation details of the systems that they are building,” said Jeffrey Hammond, vice president and principal analyst serving CIO professionals at Forrester.
However, a function PaaS (fPaaS) isn’t enough to build a complete application on its own, said Arun Chandrasekaran, distinguished vice president analyst at Gartner.
“Developers need other services, such as an API gateway, various event sources, analytic engines, content management services, persistence services, and orchestration tools to support application development,” Chandrasekaran said. Cloud and serverless computing providers can combine these functionalities and present a comprehensive platform experience.
The top serverless computing providers
While countless serverless computing providers have surfaced in the enterprise, three stand out among the rest: AWS Lambda, Microsoft Azure, and Alphabet’s Google Cloud Platform. All three vendors are top-notch, with similar advantages. But there are qualities that make each one special in its own right.
SEE: Serverless computing: A guide for IT leaders (Tech Pro Research)
AWS LAMBDA
“AWS was a pioneer in offering serverless computing through the AWS Lambda product,” Chandrasekaran said.
As the first major vendor of affordable cloud services, AWS continues to build upon its services with the ebbs and flows of the industry. As serverless computing gained ground, out came Lambda, which is the backbone of its serverless offerings, Hammond said.
Among the advantages of using AWS is the prolific number of services the user can easily integrate with one another. However, there are disadvantages as well.
“Some of the disadvantages we’ve heard developers complain about is cold boot time,” Hammond said. “One of the other challenges is that it’s hard to take your Lambdas and go run them on any other platform because they are proprietary and distinct to Amazon’s Cloud.”
MICROSOFT AZURE
Similar to other serverless computing providers, Azure has a usage-based billing policy, which is great for companies trying to stay on budget. For organizations that already rely on Microsoft technology, Azure can be easy to integrate and adopt, as Azure uses proprietary Microsoft technologies.
“For example, say you’re already using Active Directory, and you need to migrate applications, and you still wanna use Active Directory,” Hammond said. “You can start using Azure Active Directory and access that right from within the functions you write.”
Microsoft Azure also “lacks upfront costs or an appreciable time delay in resource provisioning—capacity is available on demand,” reported TechRepublic’s James Sanders in his Microsoft Azure cheat sheet.
GOOGLE CLOUD PLATFORM
Google Cloud Functions are similar to Azure’s, Hammond said. However, Google just introduced its Cloud Run service, allowing developers to write functional code in addition to the other capabilities.
“[Cloud Run] uses a project called Knative, which is a specification that allows you to run functions on top of Kubernetes clusters,” Hammond said. “Even though right now they’re running that in Google’s Cloud, there’s the promise that you can take those functions to any Kubernetes cluster, including ones that might be deployed on premises. It’s still early, but that’s the direction that Google is headed.”
Google Cloud Functions have a good lifecycle, Hammond said. The platform also has some integration with DevOps tools, making the functions easier to deploy.
How to choose
When choosing which service to use, Hammond said you have to start by looking at your workload. Serverless-style platforms allow organizations to try things quickly, while not spending too much money. He suggested event-driven workloads or quick prototyping as great cases for serverless architecture.
Chandrasekaran outlined the following considerations for choosing a platform:
- Give preference to serverless PaaS when seeking improved operational productivity and cost-efficiency, while retaining sufficient control of application design.
- Deploy fPaaS framework software in a private context if multi-cloud deployment or vendor lock-in are concerns.
- Assemble an all-serverless suite of services to gain the full effect of a serverless cloud experience.
- Avoid overdependence on immature serverless offerings. Subject their use to scrutiny to discover limitations before they manifest as problems, and be ready for technology change.
For a comprehensive side-by-side look at these three vendors, check out this vendor comparison download on Tech Pro Research (subscription required).
How to build a serverless architecture
A serverless architecture can mean lower costs and greater agility, but you’ll still need to make a business case and consider factors like security and storage before migrating selected workloads.
By Joe McKendrick on May 1, 2019
Serverless computing promises to free developers and operations people alike from the shackles of underlying hardware, systems and protocols. In making the move to a serverless architecture, the good news is that the move can often be made quickly and relatively painlessly. However, IT managers still need to pay close attention to the same components in the stack in which their current applications are built and run.
How is a serverless architecture like previous, more traditional technology architectures, and how does it differ? Despite the name, a serverless architecture is not entirely devoid of servers: rather, it’s a cloud-based environment often referred to as either Backend-as-a-Service (BaaS), in which underlying capabilities are delivered by third parties, or Function-as-a-Service (FaaS), in which capabilities are spun up on demand on a short-lived basis.
In a FaaS environment, “you just need to upload your application code to the environment provided, and the service will take care of deploying and running the applications for you,” says Alex Ough, CTO architect for Sungard Availability Services.
A serverless architecture “still requires servers,” says Miguel Veliz, systems engineer at Schellman and Company. “The main difference between traditional IT architecture and serverless architecture,” he adds, “is that the person using the architecture does not own the physical or cloud servers, so they don’t pay for unused resources. Instead, customers will load their code into a platform, and the provider will take care of executing the code or function, only charging the customer for execution time and resources needed to run.”
Or, as Chip Childers, CTO of Cloud Foundry, prefers to define serverless, “computing resources that do not require any configuration of operating systems by the user.”
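A minimal sketch of what “uploading your code” amounts to in practice: the whole deliverable is a single entry point the platform invokes, shown here with the AWS Lambda-style `handler(event, context)` signature. The event field below is a hypothetical example.

```python
import json

def handler(event, context):
    """Entry point the platform invokes on each trigger.

    `event` carries the trigger payload (its shape depends on the provider
    and trigger type); `context` carries runtime metadata. No server setup,
    OS configuration, or process management appears anywhere in the code.
    """
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test; in production the provider calls handler() directly.
if __name__ == "__main__":
    print(handler({"name": "serverless"}, None))
```

Everything below the function signature is ordinary application logic; the operating-system configuration Childers mentions simply never enters the picture.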
So, with everything managed or spun up through third parties, there isn’t as much of a need to worry about annoying details such as storage, processing and security, right? Not quite. These are all factors in the migration from traditional development and operations settings to cloud-based serverless environments. Here are some further considerations you’ll need to weigh when developing a serverless architecture:
Before anything else is initiated in a serverless architecture development process, the business case needs to be weighed, to justify the move. The economics of serverless may be extremely compelling, but still need to be evaluated in light of architectural investments already made, and how it will serve pressing business requirements. “Serverless adoption must be a business and economic decision,” says Dr. Ratinder Ahuja, CEO at ShieldX. “The presumption is that over time and across functions, paying for a slice of computing for the short period of time that a piece of logic executes is more economical than a full stack virtual machine or container that stays online for a long time. This approach should be validated before organizations embark on a serverless journey.”
Migration – and blending
As serverless computing is inherently a cloud-based phenomenon, the best place to start is looking at what cloud providers have to offer. “If lock-in is not a concern, and you want to start quickly, a fully managed solution like the ones provided by the major cloud providers is one way to start,” says William Marktio Olivera, senior principal product manager for Red Hat.
However, as a serverless architecture expands from there, Olivera recommends additional approaches such as container technology, to assure the seamless movement of code and applications between environments. “As soon as you start considering running your application on more than one cloud provider, or you might have a mix of workloads running on-premises and on a hybrid cloud, Kubernetes becomes a natural candidate for infrastructure abstraction and workload scheduling, and that’s consistent across any cloud provider or on premises,” he says. “If you already have Kubernetes as part of your infrastructure, it makes even more sense to simply deploy a serverless solution on top of it and leverage the operational expertise. For those cases, Knative is an emerging viable option that has the backing of multiple companies sharing the same building blocks for serverless on Kubernetes, making sure you have consistency and workload portability.”
Serverless functions are running in containers, and “these containers appear ephemeral and invisible to the application designer,” says Scott Davis, VP of software development at Limelight Networks and former CTO at VMware. “Under the covers there is a pool of reusable containers managed by the infrastructure provider and used on demand to execute a serverless function. When a function completes, the host container is reset to a pristine state and readied for its next assignment. Since serverless functions only live for a single API call, any persistent state must be stored externally for subsequent functions that need it.”
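Because the host container can be reset after every call, anything kept in process memory or on the local filesystem is lost between invocations. A sketch of the external-state pattern Davis describes, with a plain dict standing in for the external store (in a real deployment this would be a managed database or object store reached over the network):

```python
# Stand-in for an external persistent store (a managed database, object
# storage, etc.). The dict is purely illustrative: in production this
# would be a network call to a service that outlives the container.
EXTERNAL_STORE = {}

def handler(event, context):
    """Counts invocations per user by persisting the count externally.

    Local variables vanish when the container is reset to its pristine
    state, so any state a later invocation needs must be written out
    before the function exits.
    """
    user = event["user"]
    count = EXTERNAL_STORE.get(user, 0) + 1
    EXTERNAL_STORE[user] = count  # persist before returning
    return {"user": user, "invocations": count}
```

The key design point is that the function itself stays stateless: every invocation reads what it needs, does its work, and writes the result back out.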
While a transition from on-premises assets to serverless can be accomplished relatively swiftly, the move to serverless should be taken with deliberation. Not everything will be ready to go serverless at once. “Legacy software is anything you’ve already deployed, even if that was yesterday,” Childers says. “Changes take time in any non-trivial enterprise environment, and often a rush to re-platform without rethinking or redesigning software can be a wasted effort. Software with pent-up business demand to make changes — or new software projects — are the logical projects to consider more modern architectures like serverless environments.”
Not every workload “is a perfect candidate for serverless workloads,” Olivera agrees. “Long running tasks that can be split into multiple steps or ultra-low-latency applications are good examples of workloads that may take a few years to be considered as good candidates for serverless platforms. I recommend approaching the migration as an experiment — a new API that is being implemented with a reasonable SLA or a new microservice are good candidates. Then you can progress to single-page applications and some other backend functionalities of web applications. The learnings from running those experiments at scale should be enough to inform the next steps and prove the benefits of serverless architecture.”
This blending of legacy environments with serverless will likely go on for some time. “Organizations should embrace a different path forward, combining their existing — and often monolithic — applications with modern APIs, which can be used from newer serverless components as functionality engines,” says Claus Jepsen, deputy CTO and chief architect at Unit4. “Serverless architectures can complement and enrich the existing architecture by providing a new abstraction that supports building new services and offerings.”
To a large extent, serverless takes many security headaches off the table. Traditional on-premises IT architectures that present fairly large attack surfaces — such as the network, host operating systems, services, application libraries, business logic, data and the user — notably shrink in serverless settings. Still, even serverless environments require due diligence and vigilance, says Ahuja. “Security teams must take into account the function code, what services that function can access, data access and misuse and certain types of denial-of-service attacks. The cloud provider that is hosting the function as a service is responsible for securing the underlying infrastructure.”
Security worries don’t necessarily go away — they just change. “Because you will not be behind your own firewalls, you need to observe security protocols that you didn’t have to worry about with on-premises computing and data storage,” says David Friend, CEO of Wasabi. “Such things as protecting your encryption keys become very important. Almost all data stored in serverless cloud environments is encrypted, so even if someone hacks in, in theory they will only find useless encrypted data. But people are careless with their keys because they are not used to encryption keys being so important.”
Storage is another area in which the serverless computing shifts the dynamic. Storage “is usually the trickiest part of serverless,” according to Gwen Shapira, software engineer at Confluent. “Cloud providers have a large variety of storage options, some are advertised as ‘serverless,’ although you aren’t limited to those.” Scalability of serverless storage is a critical factor, she continues. “It’s important to remember that the scalability of storage systems is influenced by the data model and design. So you need to choose both storage and data model that will fit your scalability expectations.” The cost model is another consideration, she adds. “With serverless apps, you only pay for what you use. But storage introduces ongoing cost for the data you store, and sometimes fixed cost for provisioned capacity, so you need to take those into account and, if necessary, optimize the amounts of data you store.”
Serverless requires new thinking about data storage. “This is because a serverless system needs an external storage plan to manage state and ensure data at rest is protected by other means,” says Greg Arnette, technology evangelist at Barracuda. “Serverless is all ephemeral, with processes firing up and shutting down in seconds- or minutes-long bursts of activity. Serverless functions need to read and write data from other sources that offer persistence and API access.”
Ultimately, storage “becomes a service in these environments,” says Childers. “Storage services are network accessible and may take the form of a fully managed database offering or a newer API-based storage capability.”
Serverless architecture is following the lead of cloud architecture, and that means lower costs and greater agility. “With the exception of certain edge cases that require very specialized services, cloud-based computing and storage are both becoming commodities,” says Friend. “There is little reason for anyone to run their own storage or compute if they have reasonable bandwidth connectivity. If you aren’t generating your own electricity or digging up the streets to lay your own fiber, why would you want to own your own storage or compute infrastructure? Most people don’t realize it yet, but IT needs to focus on the strategic uses of data and not the hardware infrastructure.”
Serverless computing vs platform-as-a-service: Which is right for your business?
Understanding the difference between serverless computing and PaaS is the first step in deciding which is best for your organization.
By James Sanders on May 1, 2019
Serverless computing platforms, like AWS Lambda and Google Cloud Functions, are popular among app developers. Likewise, platform-as-a-service — a collection of tools that allow developers to deploy applications without handling the underlying hardware that powers them — is also gaining traction.
For organizations looking toward a hardware refresh, adopting PaaS middleware is a relatively easy step to modernizing IT deployments. Serverless computing, however, requires rearchitecting existing applications, or building entirely new applications, to gain the full value and promise that serverless platforms offer.
What’s the difference between serverless computing and PaaS?
Serverless platforms and PaaS fundamentally exist to enable developers to spend time writing code, rather than focusing on the platform on which that code is run. There are three primary differences between the two models, however. For PaaS, scaling is more manual, while “in a serverless environment, scaling is a lot more automated and automatic,” Arun Chandrasekaran, a Gartner analyst for technology innovation, told ZDNet. “The second is when does an application really spin up … I think, in a serverless environment, the function can be invoked in a much more agile manner than in a comparable PaaS environment. The final difference really is the control. In a PaaS environment you have a lot more control over the development environment, whereas in serverless it’s a little bit of a black box.”
SEE: Serverless computing: A guide for IT leaders (Tech Pro Research)
How should you decide between serverless computing and PaaS?
For event-driven programs, which have specific, delineated events that can be used to trigger specific actions, “serverless is going to be a great fit for that,” Chandrasekaran said. “However, if you’re doing much more conventional application development and you want a very prescriptive way of doing application development as a platform, you would want to have a lot more control over the underlying environment — if any of these factors is true, you’ll most likely want to be using a PaaS environment.”
Conversely, Jeffrey Hammond, principal analyst at Forrester, sees the two as being on a collision course. “Look at where Pivotal is going with Pivotal Cloud Foundry. They’ve got the Pivotal Application Service, which is a PaaS, and then they’ve got the Pivotal Function Service … at the core of their serverless efforts. But there are going to be applications that are built on PaaS that may call functions in PFS. So integration is certainly something that’s on the radar.”
SEE: AWS Lambda: A guide to the serverless computing framework (free TechRepublic PDF)
Yes, you can deploy serverless using an on-premises server
As zany as the prospect may seem, the release of Google’s Knative serverless platform middleware for Kubernetes, combined with GKE On-Prem or another Kubernetes platform, theoretically allows serverless applications to be deployed using servers that live on-premises. “So the idea of grafting a serverless platform onto Kubernetes and wrapping it with a service fabric is something which is starting to get a lot of traction,” Hammond said, pointing to Kubeless and OpenFaaS as other frameworks that enable this deployment model.
There are use cases for which this makes sense, such as financial services and healthcare. “Their expectations from a scalability standpoint are not so high, but they do want to run something that is closer to where the internet is happening. They probably want to do image processing, where they really want to run this close to where the data has been generated,” Chandrasekaran said. “You may lose some benefits of serverless by running it in a private IP environment, particularly the scalability aspect of it, and to a lesser extent even integration of data has not really been etched out at this point in time. Serverless platforms do not integrate with data at on-premises environments, so these things have to mature, in my opinion.”
What public cloud provider should I use?
Buoyed by first-mover advantage, AWS Lambda is used more frequently for serverless application deployment. Lambda also has a richer development ecosystem of third-party services and integrations. While those third parties are extending their reach to Google Cloud Functions and Microsoft Azure Functions, to cover more of the market, third-party integrations typically come first to AWS.
Likewise, different vendors support different trigger types that can be used to execute functions, with AWS offering more event sources than other public cloud serverless platforms. These trigger types are likely to expand as additional use cases and scenarios are devised and as serverless functionality from public cloud providers matures.
Executive’s guide to serverless architecture
If you’re wondering whether serverless computing functions are right for your business needs, learn about this cost saving, code-simplifying service.
By Brandon Vigliarolo on May 1, 2019
From online word processing to scaling virtual servers, cloud computing has become an integral part of the business world.
If your business is looking for cloud computing services and needs more power than online document editing but not an always-on, and frequently expensive, cloud server, you may find a happy medium in serverless architecture.
Also called serverless computing, serverless cloud services offer businesses the ability to run self-contained snippets of code in the cloud without paying for a virtual server. The cost savings and performance boosts can be immense — but only if it’s a good fit for your computing needs.
Executive summary (TL;DR)
What is serverless architecture? Serverless architecture is a pay-per-use approach to running small chunks of self-contained code in the cloud. Instead of paying for an always-on virtual server, users only pay for compute time. It eliminates the need to pay for computing overhead and shifts the burden of hardware management onto the cloud provider.
How does serverless architecture work? Serverless code snippets, commonly called functions, are stored and run on servers managed by cloud providers. Functions are dormant until a particular input condition is met, at which time they spin up, execute, and then shut down again.
What are the potential benefits of serverless architecture? Serverless architecture can save businesses money by cutting infrastructure and cloud computing costs, scales automatically with demand, can reduce latency, and simplifies development, among other benefits.
Who is serverless architecture designed for? Serverless computing is incredibly flexible, making it useful for a wide range of applications. Websites, web apps, analytics, data filtering, and automating routine computing tasks are just some of the ways to use serverless computing.
What are the biggest serverless architecture platforms? AWS Lambda was the first serverless platform and continues to be the largest, but there are other options available from Google, IBM, Microsoft, and Oracle.
How does a business get started with serverless architecture? If you think serverless architecture is a good fit for your cloud computing needs, you can sign up for a serverless computing platform online, and in most cases get a good deal of use for free before having to invest money in it.
What is serverless architecture?
When you first hear the term ‘serverless architecture’ or ‘serverless computing,’ it’s understandable that there would be a bit of confusion — especially about the lack of a server. Surely, if code is running in the cloud, there’s a server involved at some point, right?
The fact is that, yes, servers are involved no matter which serverless platform an organization uses. The term serverless architecture is best understood as a description of what customers are getting: the ability to run code without paying for an always-on server.
In fact, all of the costs of hardware maintenance, electricity, support, and other incidental tech expenses are handled by the serverless computing provider. The only thing customers pay for is the time spent using server resources, which are shared among the other serverless functions running on the same machines.
In short, serverless computing is a cloud service that allows organizations to run snippets of on-demand code without having to pay for the hardware necessary to host and run that code.
- Serverless computing: A cheat sheet (TechRepublic)
- Why serverless computing is one of the biggest threats to containers (TechRepublic)
- NoOps: How serverless architecture introduces a third mode of IT operations (TechRepublic)
How does serverless architecture work?
The most important thing to know about the workings of serverless architecture is the code snippet. Serverless code snippets, also known as functions, are what an organization writes to be executed on a serverless computing platform.
Functions are commonly written in Python, Java, Node.js, Go, PowerShell, C#, PHP, and Ruby, with many serverless platforms adding support for additional languages as time goes on.
Regardless of what language they’re written in, serverless functions all have to meet one condition: They must be self-contained, with no need to pull in additional code at runtime. Because everything a function needs ships with it, it can be activated quickly, execute its task, and shut down without reaching out to outside sources or additional libraries.
Functions are built to be triggered by a particular condition: A photo is uploaded to a website, an API request needs to be authenticated, an e-commerce order is placed, and so on. The use cases for serverless functions are nearly unlimited, provided the code is self-contained and can be activated by an API call.
When a serverless function is activated, it performs its task, shuts down upon completion, and awaits its trigger condition to run again. While that function sits dormant, its owner isn’t charged a thing.
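That dormant-until-triggered lifecycle can be sketched as a toy event router: functions are registered against event types and run only when a matching event arrives. All names here are illustrative, not any provider’s actual API.

```python
REGISTRY = {}  # event type -> function: a toy model of provider-side routing

def on(event_type):
    """Decorator that registers a function as the handler for an event type."""
    def register(fn):
        REGISTRY[event_type] = fn
        return fn
    return register

@on("order.placed")
def confirm_order(event):
    # Runs only when an 'order.placed' event fires; dormant otherwise.
    return f"confirmation sent for order {event['order_id']}"

def dispatch(event_type, event):
    """Provider side: look up the matching function, run it, then 'shut down'.

    When no function matches, nothing runs, and nothing is billed.
    """
    fn = REGISTRY.get(event_type)
    return fn(event) if fn else None
```

In a real platform the registry and dispatch loop belong to the provider; the customer only ever writes and uploads the decorated functions.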
- Serverless computing: 6 things you need to know (TechRepublic)
- Why serverless computing makes Linux more relevant than ever (TechRepublic)
What are the potential benefits of serverless architecture?
Cost is one of the biggest benefits to serverless architecture — who wants to pay for unused computing resources? This fact alone is often enough to create a use case for serverless computing.
One important thing to know about paying for serverless architecture is that pricing can be confusing, particularly the unit of measurement commonly used in pricing schemes: GB-seconds.
A GB-second isn’t a measure of gigabytes or of seconds alone: it’s the product of the two. The value of a GB-second is derived by multiplying the memory allocated to a serverless function (in GB) by the time in seconds the function runs. Understanding this, and knowing how to work out the GB-second usage of the functions you need to run, is a key part of understanding the potential costs.
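The arithmetic above is easy to sketch. The per-GB-second rate below is an illustrative figure, not a quoted price from any provider:

```python
def gb_seconds(memory_mb, duration_s, invocations=1):
    """GB-seconds = allocated memory (in GB) x execution time (in seconds),
    summed over every invocation."""
    return (memory_mb / 1024) * duration_s * invocations

# Example: a function allocated 512 MB, running for 0.2 s, invoked a
# million times in a month: 0.5 GB x 0.2 s x 1,000,000 = 100,000 GB-seconds.
usage = gb_seconds(512, 0.2, 1_000_000)
RATE = 0.0000166667  # illustrative $/GB-second rate, not a real price list
print(f"{usage:,.0f} GB-seconds comes to about ${usage * RATE:.2f}")
```

Note that halving the memory allocation or the run time halves the bill, which is why tuning both is a routine part of serverless cost control.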
In many cases, however, serverless computing can be completely free if you stay under a certain usage limit. All of the major serverless computing platforms offer a free tier that, at minimum, includes 1,000,000 invocations per month and 400,000 GB-seconds of compute, with Google being the only one to offer more invocations (2,000,000 per month).
Serverless computing also saves money on infrastructure: There’s no hardware to purchase or maintain, less space is needed in the server room, electricity use is reduced, and more. The cost savings are hard to calculate exactly — there are potential savings everywhere you look.
Scalability is another advantage of serverless architecture: As long as you have the budget to pay for it, your serverless functions can run once or 10 million times per billing cycle. The computing resources needed to execute a serverless function are minimal, and the data centers where serverless computing hardware is located are distributed around the world, so your functions will always be able to scale up as needed.
Speaking of distribution, serverless functions can greatly reduce latency for the users whose actions trigger them. Instead of data having to travel to a central location, serverless functions can be activated at the nearest data center, reducing travel time and latency. On top of that, distributing the locations users connect to leaves a lot less potential for congestion, which can also drive up latency.
Serverless functions can also result in much less code complexity. As mentioned above, serverless functions have to be self-contained, which means they have to be able to run on any hardware, anywhere, and without needing to reach out to external sources for supplemental code. All of those restrictions mean serverless functions have to be built simply, making the barrier to building them significantly lower for developers.
- How secure is serverless computing? (ZDNet)
- Serverless computing highlights new security challenges in hybrid IT (TechRepublic)
- How can serverless computing be cost-justified? (ZDNet)
Who is serverless architecture designed for?
It’s difficult to think of a business cloud computing use case that couldn’t be translated into a serverless function — take AWS Lambda’s case study page as an example. The organizations included in the case studies had various reasons for using Lambda functions.
The flexibility of serverless computing means it can be used in a wide variety of applications, such as:
- Website scaling: By building a website or a web app around serverless functions, a website can be stood up faster and scale to a larger user base without interruption.
- Image processing: Images filtered through a serverless function can be categorized and sorted using machine learning, resized, reformatted, and more.
- Internet of Things (IoT) sensor input: Data received from IoT sensors and devices can be filtered, logged, and responded to automatically.
- Extract, transform, and load tasks: ETL software can get expensive, but its work can largely be handled by serverless functions.
- Event streaming and logging: One of the toughest things about troubleshooting IT systems is tracking down specific events that cause problems. Functions can be built to log events and return alerts when specific conditions are met.
- Multilingual applications: Instead of having to pick one particular programming language, serverless functions can be strung together to execute tasks in multiple languages, allowing developers to stick to what they know best.
- Scheduled task automation: Tasks that need to be performed at certain intervals, or at particular times, can be automated using serverless functions.
- Data movement: If data is uploaded in one particular application, but needs to be transferred to another for whatever reason, a serverless function can take care of it.
- Big data processing: Trying to filter out particular types of data can be tough, but serverless functions can take care of it by being built to trigger when certain inputs are detected.
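As a concrete instance of the last item, a filtering function can be triggered with a batch of records and pass along only those matching a condition. The field names and threshold here are hypothetical:

```python
def filter_records(event, context):
    """Triggered with a batch of records; keeps those above a threshold.

    In a real pipeline the surviving records would be written onward to
    storage or a queue; here they are simply returned.
    """
    threshold = event.get("threshold", 0)
    records = event.get("records", [])
    return [r for r in records if r.get("value", 0) > threshold]
```

Because each batch is processed independently, the platform can run many copies of this function in parallel as data volume grows.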
This list of use cases isn’t exhaustive. If you’re not sure your serverless needs fall into one of these categories, it’s best to reach out to the serverless provider to see what they can offer.
- Survey: so far, so good with serverless computing (ZDNet)
- How to create a serverless computing function app in Microsoft Azure (TechRepublic)
What are the biggest serverless architecture platforms?
If you’re considering going serverless, there are a number of vendors to consider.
First and foremost, there’s AWS Lambda. Amazon’s offering is the oldest, the largest, and the most popular serverless computing platform. It can take care of most serverless computing needs, and for customers of Amazon’s other AWS offerings, it’s a no-brainer to choose Lambda as a serverless provider.
Lambda also features tight integration with Amazon’s other compute and machine learning services, allowing its serverless function to be triggered by other AWS services along with HTTP and API triggers. There’s also a robust library of tutorials to make adjusting to the serverless world of AWS Lambda easier.
Not to be outdone, Google has built its own serverless computing platform called Google Cloud Functions, which works similarly to AWS Lambda. As an added bonus, Google Cloud Functions offers twice the number of free invocations per billing cycle as AWS and its other competitors, giving 2,000,000 to everyone else’s 1,000,000. Its GB-seconds limit is the same, however, which means that extra million may not matter that much.
Google Cloud Functions integrates tightly with its other cloud services, making it a great fit for those already invested in Google’s cloud platform.
Microsoft Azure Serverless Computing offers similar services, as does IBM. The only serverless computing platform that differs from what’s offered by Amazon, Google, Microsoft, and IBM is Oracle, whose Fn Project throws a wrench into what’s typically thought of as serverless computing.
The Fn Project is open source and container native, allowing it to be run on any server, anywhere. It’s not a typical serverless architecture platform, as it requires access to either a local server or a cloud-based one, but it does have the potential to eliminate vendor lock-in associated with the other platforms.
If you want to build your own serverless computing platform from scratch, the Fn Project may be your best option, provided you’re ready to take on a lot of added responsibility without offloading any of the complications traditionally eliminated by other function-as-a-service platforms.
- What serverless computing really means, and everything else you need to know (ZDNet)
- AWS Lambda, a serverless computing framework: A cheat sheet (TechRepublic)
- Amazon Web Service’s API Gateway: Why it could be a big deal (ZDNet)
- MongoDB Stitch: Serverless compute with a big difference (ZDNet)
How does a business get started with serverless architecture?
One of the best things about serverless computing is how low the bar to entry is. You don’t need to do anything aside from signing up for an account by visiting the get started link for AWS Lambda, Google Cloud Functions, Azure Serverless Computing, or IBM Cloud Functions. The Fn Project requires some manual work, which you can find out more about on its GitHub getting-started page.
Once you’re signed up, it’s easy to get started, provided you know what you want to build and how to build it. Be sure to take advantage of the tutorials offered by all the major vendors — those guides will go a long way to getting you settled and familiar with the intricacies of each platform.