What is Serverless Computing and Why Would I Need It?

Posted by Dustin Keib, Head of Cloud Enablement

Feb 05, 2020


Serverless computing continues to gain momentum toward wider enterprise adoption. In fact, both of our infrastructure partners, AWS and Google Cloud Platform, now offer serverless computing solutions for enterprise cloud applications.

But what is serverless computing? Do you know the answer, or understand how it works?

Google Cloud calls serverless computing a “paradigm shift in application development.” What does this mean, exactly? Great things if you’re a busy IT pro.

I’m here to give you a high-level view of this option in the cloud and answer the question, “What is serverless computing?” In fact, consider this blog to be a helpful chapter straight out of a Cloud Adoption 101 textbook. After all, serverless computing is yet another way cloud solutions support a future-proof digital workplace.

What are the Benefits of Serverless Computing?

Serverless computing makes life easier and reshapes the way you build and run apps. That’s because cloud providers like Google and AWS manage code execution. This gives you more freedom to develop apps because you aren’t deploying them to physical servers.

It means you can now scale quickly, efficiently, and as large as you need without maintaining all that infrastructure yourself. No more managing, provisioning, and maintaining servers for backend components when you deploy application code. Your public cloud does all of the heavy lifting for you.

This is possible thanks to something known as “functions-as-a-service,” or FaaS. As described in a Computerworld article, this means developers break down code into “stateless chunks” that can be executed without the need for a server.
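To make “stateless chunks” concrete, here is a minimal sketch of what a FaaS-style handler might look like. The function name, event shape, and return format are illustrative assumptions, not any provider’s actual API; the point is that every input arrives in the event and nothing is kept between invocations.

```python
import json

def handle_request(event):
    """A stateless FaaS-style handler (hypothetical shape).

    All input arrives in the `event` dict, and the result is handed
    back to the platform. Nothing is stored between invocations, so
    the platform can run each call on any available instance.
    """
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    return {"statusCode": 200, "body": json.dumps(body)}

# Each call is independent of every other call.
print(handle_request({"name": "Onix"}))
```

Because the handler carries no state of its own, the provider is free to spin up many copies in parallel when traffic spikes and tear them all down when traffic stops, which is exactly what makes the scaling story above possible.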

Cloud functions, messaging and queuing services, and data-pipelining tools are making real-time infrastructure analytics and metrics trivial to build. They’re also making observability gaps and manual remediation a thing of the past.

Well-oiled DevOps teams can deliver new features many times a day now, rather than by following antiquated manual deployment processes.

What’s more, serverless computing can deliver easily realized ROI. When used in appropriate patterns, that ROI can happen quickly if workloads are short-lived, sit dormant for long periods or are best consumed in a per-usage model.

What Types of Organizations are Best Suited to Serverless Computing?

The best candidates are cloud-native organizations that are able to move quickly and take advantage of the new development patterns in serverless or are already taking advantage of these technologies in production today.

Organizations large and small are beginning to use serverless patterns in novel ways across data pipelines, back-end APIs, application controller logic, and many other facets of their business domains.

Is Serverless Computing Ready for Real-World Adoption?

In many cases, absolutely! In enterprise cloud environments, serverless computing has already started making inroads toward adoption.

Findings from a recent O’Reilly survey reveal that demand for serverless computing will grow in the near term as more organizations determine it’s a worthwhile fit for their cloud computing strategy.

In fact, 40% of 1,500 respondents from a wide range of organizations indicated they’ve already adopted some form of serverless computing and are enjoying reduced operating costs and automatic scaling.

The important thing is to weigh the features of the serverless technology in question against your use case and demand patterns. Let's say you have a serverless function that runs your game leaderboard, and millions of people hit it all the time, around the clock, looking at their stats. Because it's never going to go dormant, it might not be the best use case for serverless.

On the other hand, let's say you have a month-end report processing pipeline that gets hit very hard at odd hours once a month, then sits around waiting until the next month. That's a perfect fit.
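A rough back-of-the-envelope model shows why demand shape matters so much under per-usage billing. The prices below are made-up, illustrative numbers, not any provider’s real rates, and the workloads echo the two examples above.

```python
# Hypothetical, illustrative prices -- NOT any provider's real rates.
PRICE_PER_MILLION_INVOCATIONS = 0.20   # dollars
PRICE_PER_GB_SECOND = 0.0000166        # dollars
ALWAYS_ON_SERVER_PER_MONTH = 70.00     # dollars, comparable dedicated server

def serverless_monthly_cost(invocations, avg_seconds, memory_gb):
    """Per-usage cost model: you pay only while the function runs."""
    compute = invocations * avg_seconds * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    return compute + requests

# Month-end report pipeline: 10,000 short runs, dormant the rest of the month.
burst = serverless_monthly_cost(10_000, avg_seconds=2.0, memory_gb=0.5)

# Always-busy leaderboard: 100 million calls spread around the clock.
steady = serverless_monthly_cost(100_000_000, avg_seconds=0.1, memory_gb=0.5)

print(f"bursty pipeline: ${burst:.2f}/mo vs ${ALWAYS_ON_SERVER_PER_MONTH}/mo always-on")
print(f"steady leaderboard: ${steady:.2f}/mo vs ${ALWAYS_ON_SERVER_PER_MONTH}/mo always-on")
```

With these assumed numbers the bursty pipeline costs pennies next to an always-on server, while the never-dormant leaderboard ends up costing more than the server it replaced, which is exactly the pattern the two examples illustrate.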

Are There Any Drawbacks to Serverless Computing?

Understanding which patterns are a good fit is important for avoiding unnecessary challenges or issues in a serverless computing environment.

Workloads that need quick "hot-start" responsiveness sometimes don't do well in a serverless environment, as provisioning the resources under a dormant function (a "cold start") can take a little while, though there are plugins and provider features that help mitigate this issue in some use cases.
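One common mitigation is a scheduled "keep-warm" ping that keeps an instance provisioned so real requests skip the cold start. The sketch below simulates that pattern in plain Python; the event shape and the `warmup` flag are hypothetical, not a real provider feature.

```python
# Module-level flag: in many FaaS runtimes, module state survives across
# invocations on the same instance, so this is True only on the first
# call after a fresh (cold) provision.
COLD_START = True

def do_work(event):
    """Stand-in for the real business logic."""
    return event.get("payload", "").upper()

def handler(event):
    global COLD_START
    if event.get("warmup"):
        # A scheduled ping (e.g. every few minutes) exercises the function
        # so an instance stays provisioned between real requests.
        COLD_START = False
        return {"warmed": True}
    was_cold = COLD_START
    COLD_START = False
    return {"cold_start": was_cold, "result": do_work(event)}

print(handler({"warmup": True}))       # keep-warm ping arrives first
print(handler({"payload": "report"}))  # real request now skips the cold start
```

In practice the ping would come from a scheduler (a cron-style trigger), and some providers now offer a built-in alternative that reserves pre-warmed capacity for you, at extra cost.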

Also, there are instances where serverless computing can create extra work for an already busy IT department. Refactoring a large monolithic application into loosely coupled serverless systems can be very complex. The benefits often outweigh the costs, but that’s very specific to the application and its value to the business / ROI for the refactor.

What Else Should I Know about Serverless Computing?

The O’Reilly survey has some interesting results that suggest there’s a learning curve that comes with the transition to serverless computing.

Of the 1,500 respondents, 50% of those who had used serverless computing for three or more years described their implementation as a success. Among those who had spent a year or less in a serverless environment, only 35% reported the same.

Taken together, the share of respondents who said their serverless computing transition was successful is encouraging. It means that this exciting technology is growing in reach and moving well beyond being a fad.

Serverless technology is another exciting tool for cloud migration and transformation, offering some highly useful features, including the ability to scale almost without limit and without needing to worry about maintenance, power, or other hardware constraints.

As long as you’re careful to apply sensible patterns that address the per-use cost model of serverless and take advantage of the unique feature set, serverless computing can be a great choice for a modern cloud-native application strategy. It’s just another amazing way cloud solutions drive digital transformation.



Dustin Keib, Head of Cloud Enablement

Dustin is a software engineer, systems architect, and cloud scalability expert at Onix. His deep understanding of the full SaaS and PaaS stack comes from 20+ years of enterprise IT experience. Dustin is a Certified Google Cloud Solutions Architect, AWS Solutions Architect - Associate, and Puppet Professional and has a deep knowledge of infrastructure automation, containers, and CI/CD system design and implementation.
