Serverless Computing: A Security Viewpoint

Padmavathi Vurubindi, Sujatha Canavoy Narahari
Copyright: © 2024 | Pages: 16
DOI: 10.4018/979-8-3693-1682-5.ch013

Abstract

Recently, serverless computing has become a prominent platform for the cloud-based deployment of applications and has overtaken other contemporary computing platforms. It offers two significant advantages over those platforms. First, it enables software developers to delegate all infrastructure maintenance and operational chores to cloud service providers, allowing them to concentrate solely on the business logic of their applications. Second, it follows a strict pay-per-use business model, in which customers are charged only for the resources they utilize. Despite these advantages, researchers have over the past few years been closely examining the actual security guarantees offered by the defenses currently employed to safeguard container-based infrastructures. This scrutiny has uncovered significant flaws in the security controls used for network security and process isolation. In this chapter, the authors highlight the security attacks and issues affecting the investigated serverless architecture platforms and suggest potential countermeasures.

1. Introduction

Serverless computing has transformed cloud application development by removing the burden of managing servers and infrastructure, allowing developers to focus solely on writing code and business logic. In this model, application logic is broken down into a collection of small, short-lived, stateless functions, each of which executes in its own execution environment, such as a container, and interacts with other functions and with cloud services, such as storage, to complete its task. Developers deploy their application code to containers controlled by a cloud service provider. The cloud provider handles infrastructure provisioning and scaling, as well as routine maintenance duties including security management, operating system updates, capacity planning, and system monitoring. This approach simplifies development, increases productivity, and reduces the difficulties of server management.
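To make this model concrete, the sketch below shows what such a small, stateless function might look like in the style of an AWS Lambda handler that reads an object from a storage service and returns a short summary. It is a minimal illustration only: the event structure, bucket name, and object key are assumptions made for the example, not part of any particular platform studied in this chapter.

```python
# Minimal sketch of a stateless serverless function (AWS Lambda style).
# The event structure, bucket name, and object key are illustrative assumptions.
import json
import boto3

s3 = boto3.client("s3")  # client created once; reused across warm invocations

def handler(event, context):
    # Read the input passed by the platform (e.g., via an API gateway or queue).
    bucket = event.get("bucket", "example-bucket")
    key = event.get("key", "example-object.json")

    # Interact with a managed storage service instead of keeping local state.
    obj = s3.get_object(Bucket=bucket, Key=key)
    payload = json.loads(obj["Body"].read())

    # Return a small, self-contained result; no state survives the invocation.
    return {
        "statusCode": 200,
        "body": json.dumps({"records": len(payload)}),
    }
```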

The pay-as-you-go pricing model of serverless computing differs significantly from traditional server-based approaches. Users are billed for actual computation and resource consumption, which eliminates the need to reserve and pay for predetermined quantities of bandwidth or server resources. Automatic scaling ensures efficient resource allocation, making it a cost-effective alternative. This contrasts with the traditional approach, in which over-provisioning of server capacity to absorb projected traffic spikes is common and results in unnecessary expense. Figure 1 illustrates the various components of a serverless architecture.
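The back-of-the-envelope calculation below makes the pricing contrast concrete by comparing a pay-per-use charge with a flat monthly server fee. The per-GB-second rate, per-request fee, and server cost are illustrative placeholders chosen for the example, not any provider's actual prices.

```python
# Back-of-the-envelope comparison of pay-per-use vs. reserved-server cost.
# All rates below are illustrative placeholders, not real provider pricing.

invocations_per_month = 2_000_000
avg_duration_s = 0.2          # average function run time in seconds
memory_gb = 0.5               # memory allocated to the function

price_per_gb_second = 0.0000166  # assumed compute rate
price_per_request = 0.0000002    # assumed per-invocation fee
reserved_server_monthly = 150.0  # assumed flat cost of an always-on server

gb_seconds = invocations_per_month * avg_duration_s * memory_gb
serverless_cost = (gb_seconds * price_per_gb_second
                   + invocations_per_month * price_per_request)

print(f"Serverless (pay-per-use): ${serverless_cost:,.2f} per month")
print(f"Reserved server:          ${reserved_server_monthly:,.2f} per month")
```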

While the term “serverless” may seem misleading, it accurately describes the development experience. Backend services are still provided by physical servers, but developers are shielded from server-related concerns. This frees them from the constraints of managing servers and allows for a more efficient, simplified development process. Serverless computing represents a paradigm shift in application development, meeting modern development needs with simplicity, scalability, and cost-efficiency.

The breakdown of application logic into a constellation of tiny, stateless functions is at the foundation of serverless computing. These functions run in discrete execution environments, frequently in the form of containers, and communicate smoothly with other functions and cloud services. This architectural paradigm enables developers to more effectively exploit cloud capabilities than ever before.

1.1 Security in Serverless Computing

Serverless computing is an increasingly popular approach for building and deploying applications due to its ability to abstract away infrastructure management and its promise of enhanced security. However, as the examination of serverless platform security defenses reveals, there are still notable concerns that demand a closer look.

One such concern is network security. In serverless systems, functions communicate over networks and APIs, and the integrity and confidentiality of data traversing these channels must be guaranteed. Although serverless platforms commonly provide encryption for both data in transit and data at rest, developers must still take care when configuring and operating secure network profiles. Inadequate network configuration may leave confidential information exposed to intrusion.
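As a hedged illustration of the kind of care this requires, the sketch below makes an outbound call from a function while keeping TLS certificate verification enabled and reading the credential from the runtime environment rather than the code. The endpoint URL, header, and environment variable name are assumptions made for the example.

```python
# Sketch of a function-to-service call that protects data in transit.
# The endpoint URL, header, and environment variable name are illustrative.
import os
import requests

API_ENDPOINT = "https://internal-api.example.com/v1/records"  # assumed endpoint

def fetch_records():
    # Pull the credential from the runtime environment (injected by the
    # platform or a secrets manager) rather than hard-coding it.
    token = os.environ["SERVICE_TOKEN"]

    # verify=True (the default) enforces certificate validation; disabling it
    # to "fix" connection errors would expose the channel to interception.
    resp = requests.get(
        API_ENDPOINT,
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
        verify=True,
    )
    resp.raise_for_status()
    return resp.json()
```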

Another subject under investigation is process isolation. Serverless platforms attempt to separate functions from one another in order to stop attackers from moving laterally within the system. It is therefore crucial to test and evaluate the effectiveness of this isolation on a regular basis in order to find and fix potential flaws.
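One simple way to probe this in practice is to check whether data written to the writable scratch space during one invocation is still visible to a later invocation that lands on a reused (warm) container. The sketch below assumes a Lambda-style /tmp directory and is a test probe for evaluation purposes, not a defense.

```python
# Probe for state leaking between invocations via the writable scratch space.
# Assumes a Lambda-style /tmp directory; intended as a test, not a defense.
import os
import uuid

MARKER = "/tmp/isolation_probe.txt"

def handler(event, context):
    leaked = None
    if os.path.exists(MARKER):
        # A previous invocation's data survived: the container was reused.
        with open(MARKER) as f:
            leaked = f.read()

    # Leave a marker for the next invocation that lands on this container.
    with open(MARKER, "w") as f:
        f.write(str(uuid.uuid4()))

    return {"reused_container": leaked is not None, "previous_token": leaked}
```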

Serverless computing's security posture is strengthened by layers of protection such as automatic scaling and a reduced attack surface. Still, the shared responsibility model is critical: developers and organizations must take proactive steps to secure their serverless applications through continuous monitoring, frequent code audits, and least-privilege access, among other measures.
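As one hedged example of least privilege under this shared responsibility model, the sketch below expresses an execution-role policy that grants a function read access to a single storage prefix and nothing else. The bucket name and prefix are placeholders chosen for the example.

```python
# Sketch of a least-privilege execution-role policy for a single function.
# The bucket name and prefix are illustrative placeholders.
import json

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only the read action the function actually needs.
            "Action": ["s3:GetObject"],
            # Scoped to one prefix in one bucket, not the whole account.
            "Resource": "arn:aws:s3:::example-bucket/app-data/*",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```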

In closing, serverless computing offers a desirable security architecture, but it also has drawbacks. Ensuring that serverless applications remain resilient and robust in the face of constantly evolving threats requires a comprehensive approach that addresses network security, process isolation, and the shared responsibility model.

Figure 1. Serverless architecture
