Why Use Serverless Computing? | Pros and Cons of Serverless

Serverless computing offers web developers a number of benefits, including scalability, faster time to market, and lower costs. In some cases, however, other concerns may outweigh these benefits.

Why use serverless computing?

Serverless computing offers a number of advantages over traditional cloud-based or server-centric infrastructures. For many developers, serverless architectures offer greater scalability, greater flexibility, and faster time to production, all at a lower cost. With serverless architectures, developers don’t have to worry about buying, sizing, and managing backend servers. However, serverless computing is not a silver bullet for all web application developers.

How does serverless computing work?

Serverless computing is an architecture in which a cloud provider supplies backend services on an as-needed basis. To learn more, see What is serverless computing?
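To make that concrete, here is a minimal sketch of what a serverless function can look like, written in TypeScript. The handler shape is illustrative, not any particular vendor's API: the developer writes a single stateless function, and the provider takes care of running it whenever a request arrives.

    // A minimal serverless function: a single stateless handler that the
    // provider invokes once per incoming request. Routing, scaling, and
    // teardown are entirely the provider's concern. The handler shape is
    // illustrative (hypothetical), not tied to any one vendor's API.
    export async function handler(request: Request): Promise<Response> {
      const url = new URL(request.url);
      const name = url.searchParams.get("name") ?? "world";
      return new Response(`Hello, ${name}!`, {
        headers: { "content-type": "text/plain" },
      });
    }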

What are the advantages of serverless computing?

1. No server management required

Although “serverless” computing does take place on servers, developers never have to deal with the servers themselves; the provider manages them. This can reduce the investment needed in DevOps, which lowers expenses, and it frees developers to build and extend their applications without being constrained by server capacity.

2. Developers are only charged for the server space they use, reducing costs

As with a pay-as-you-go phone plan, developers are charged only for what they use. Code runs only when the serverless application needs its backend functions, and it scales up automatically as needed. Provisioning is dynamic, precise, and real-time. Some services are so granular that they bill in 100-millisecond increments. In contrast, in a traditional “full-server” architecture, developers must predict in advance how much server capacity they will need and purchase it, whether they end up using it or not.
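As a back-of-the-envelope illustration, the sketch below estimates a monthly bill under this model. The prices are placeholder numbers, not any real vendor's rates; the point is that cost tracks actual invocations and execution time, and an idle month costs nothing.

    // Hypothetical pay-per-use pricing (placeholder numbers, not real rates):
    const PRICE_PER_MILLION_REQUESTS = 0.20; // USD per 1M invocations
    const PRICE_PER_GB_SECOND = 0.0000166;   // USD per GB-second of compute

    function monthlyCost(
      requestsPerMonth: number,
      avgDurationMs: number, // billed in 100 ms increments by some vendors
      memoryGb: number,
    ): number {
      const billedMs = Math.ceil(avgDurationMs / 100) * 100; // round up to 100 ms
      const gbSeconds = requestsPerMonth * (billedMs / 1000) * memoryGb;
      const requestCost = (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
      return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
    }

    // 2M requests/month, 120 ms average duration, 128 MB of memory:
    console.log(monthlyCost(2_000_000, 120, 0.125).toFixed(2)); // "1.23"
    // An idle month (zero requests) costs exactly 0, unlike a reserved server.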

3. Serverless architectures are inherently scalable

Imagine if the postal service could magically add and remove delivery trucks at will, growing its fleet ahead of mail peaks (say, just before Mother’s Day) and shrinking it during periods when fewer deliveries are needed. This is essentially what serverless applications can do.

Applications built on a serverless infrastructure scale automatically as the user base grows or usage increases. If a function needs to run in multiple instances, the provider’s servers start, execute, and terminate them as needed, often using containers. (Functions start faster if they have been run recently; see “Performance may be affected” below.) A serverless application can therefore handle an exceptionally large number of requests just as smoothly as it handles a single request from a single user. A traditionally structured application with a fixed amount of server capacity, by contrast, can be overwhelmed by a sudden spike in usage.
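A practical consequence of this scaling model is that any single instance of a function can be created or destroyed at any moment, so functions should stay stateless and keep durable data in external storage. In the sketch below, kvStore is a hypothetical client standing in for whatever storage service a given provider offers:

    import { kvStore } from "./storage"; // hypothetical key-value client

    // The provider may run many copies of this function at once and tear
    // each one down after use, so nothing important is kept in memory:
    // every invocation reads and writes shared state via external storage.
    export async function countVisit(request: Request): Promise<Response> {
      const count = await kvStore.increment("visits"); // assumed API
      return new Response(`You are visitor number ${count}`);
    }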

4. Rapid deployments and updates are possible

With a serverless infrastructure, there is no need to upload code to servers or do any backend configuration in order to release a working version of an application. Developers can upload small pieces of code very quickly and launch a new product. They can upload the code all at once or one function at a time, since the application is not a single monolithic stack but a collection of functions provisioned by the provider.

This also makes it possible to quickly update, patch, or add new features to an application. It is not necessary to push changes to the entire application; instead, developers can update it one function at a time.
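As a sketch of what "one function at a time" can look like in practice, each endpoint below lives in its own small module, so shipping a fix to one of them never requires redeploying the other. The file layout and handler shape are illustrative assumptions, not a specific vendor's convention.

    // functions/get-user.ts -- deployable on its own
    export async function getUser(request: Request): Promise<Response> {
      const id = new URL(request.url).searchParams.get("id");
      return new Response(JSON.stringify({ id, name: "Ada" }), {
        headers: { "content-type": "application/json" },
      });
    }

    // functions/health.ts -- a separate unit; updating it never touches get-user
    export async function health(): Promise<Response> {
      return new Response("ok");
    }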

5. Code can run closer to the end user, reducing latency

Since the application is not hosted on an origin server, its code can run from anywhere. Depending on the provider, it is therefore possible to run the application’s functions on servers close to the end user. This reduces latency because user requests no longer have to travel all the way to an origin server.

What are the disadvantages of serverless computing?

1. Testing and debugging become more difficult

It is difficult to replicate the serverless environment in order to see how the code will actually behave when deployed. Debugging is more complicated because developers don’t have visibility into backend processes and because the application is split into separate, smaller functions. 
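One common mitigation is to write handlers as plain functions, so that at least the application's own logic can be unit-tested locally; what remains hard to reproduce is the platform behavior around it (scaling, timeouts, permissions). A minimal sketch, reusing the hypothetical handler from earlier:

    import { handler } from "./handler"; // the hypothetical handler sketched earlier

    // The handler is just an async function, so it can be called directly
    // with a synthetic Request. This exercises the code's own logic, but
    // not provider-side behavior such as cold starts, timeouts, or quotas.
    async function testGreetsByName(): Promise<void> {
      const response = await handler(new Request("https://example.test/?name=Ada"));
      const body = await response.text();
      console.assert(body === "Hello, Ada!", `unexpected body: ${body}`);
    }

    testGreetsByName();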

2. Serverless computing brings new security concerns

When vendors manage the entire backend, it may not be possible to fully verify their security, which can be particularly problematic for applications that process personal or sensitive data.

Because companies are not assigned their own separate physical servers, serverless providers often run code from several of their customers on a single server at any given time. Sharing machines with other parties in this way is known as “multitenancy”; think of several companies trying to rent and work out of a single office at the same time. Multitenancy can affect application performance, and if multitenant servers are not configured correctly, it can lead to data exposure. Multitenancy has little or no impact on networks that are well sandboxed and have a sufficiently strong infrastructure.

3. Serverless architectures are not designed for long-running processes

This limits the types of applications that can operate economically in a serverless architecture. Because serverless vendors charge for code execution time, it can be more expensive to run an application with long-running processes in a serverless infrastructure than in a traditional one.

4. Performance may be affected

Since it doesn’t run all the time, serverless code may need to “start up” when it is invoked. This startup time can degrade performance. However, if code is used regularly, the serverless provider keeps it ready to be activated; a request for this ready code is called a “warm start”. A request for code that has not been used for a while is called a “cold start”.
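Developers often work with this behavior by doing expensive setup at module scope, so that a warm instance reuses it across invocations and only a cold start pays the full price. In the sketch below, openDatabase is a stand-in for any slow initialization, not a real API:

    import { openDatabase, type Db } from "./db"; // hypothetical slow-to-open client

    // Module scope runs once per cold start. A warm instance keeps this
    // connection alive between invocations; only a cold start rebuilds it.
    const dbPromise: Promise<Db> = openDatabase();

    export async function handler(request: Request): Promise<Response> {
      const db = await dbPromise; // near-instant on a warm start
      const user = await db.get("user:123"); // assumed query API
      return new Response(JSON.stringify(user));
    }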

5. Vendor lock-in is a risk

Allowing one vendor to provide all of an application’s backend services inevitably increases dependence on that vendor. Building a serverless architecture around a single vendor can make it difficult to switch vendors later if necessary, especially since each vendor offers slightly different features and workflows.
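A common way to soften this risk is to keep business logic behind a small provider-neutral interface and confine vendor-specific types to a thin adapter, as in the sketch below (all names here are illustrative):

    // Provider-neutral core: no vendor types appear here, so it can move as-is.
    export interface AppRequest { path: string; query: Record<string, string>; }
    export interface AppResponse { status: number; body: string; }

    export function greet(req: AppRequest): AppResponse {
      return { status: 200, body: `Hello, ${req.query["name"] ?? "world"}!` };
    }

    // Thin adapter: the only part that changes when switching vendors.
    export async function handler(request: Request): Promise<Response> {
      const url = new URL(request.url);
      const res = greet({
        path: url.pathname,
        query: Object.fromEntries(url.searchParams),
      });
      return new Response(res.body, { status: res.status });
    }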

Who should use a serverless architecture?

Developers who want to reduce their time to market and create lightweight, flexible applications that can be extended or updated quickly can benefit greatly from serverless computing.

Serverless architectures reduce costs for applications with irregular usage, where peak periods alternate with periods of little or no traffic. For such applications, purchasing a server or block of servers that is always running and always available, even when unused, can be a waste of resources. A serverless setup responds instantly when needed and incurs no costs when idle.

Additionally, developers who want to move some or all of their application’s functions closer to end users to reduce latency will need an at least partially serverless architecture, since doing so requires moving some processes off the origin server.

When should developers avoid using a serverless architecture?

In some cases, it makes more sense, from both a cost and a system architecture perspective, to use dedicated servers that are either self-managed or offered as a service. For example, large applications with a fairly constant, predictable workload may be better served by a traditional setup, which in that case is likely to be less expensive.

In addition, it can be extremely difficult to migrate existing applications to a new infrastructure with a completely different architecture.
