When you go serverless, the serverless provider (e.g. AWS Lambda, Google Cloud Functions) is responsible for securing the underlying cloud components such as the data centre, network, servers, operating systems and their configurations. However, this merely reduces the security burden shouldered by the developer; it doesn't eliminate it. On the application side, the developer is still responsible for the application logic, code, data and application-layer configurations, making security a shared responsibility.
The Shared Security Responsibilities Model for Serverless Architectures
Serverless brings with it new security challenges for developers. Here are 10 of the top security risks we’ve encountered in serverless architectures.
Injection flaws are some of the most devastating vulnerabilities out there. They occur when untrusted input is passed directly to an interpreter and gets executed or evaluated. Most serverless architectures provide a multitude of event sources that can trigger the execution of a serverless function. This abundant set of event sources increases the potential attack surface and introduces complexity when attempting to protect serverless functions against event-data injection. The problem is exacerbated by the fact that serverless architectures are not nearly as well understood as web environments, where developers know which message parts shouldn’t be trusted (e.g. GET/POST parameters, HTTP headers, and so forth). These are some of the most common types of injection flaws in serverless architectures:
Serverless applications follow a microservice-like system design and often contain hundreds of distinct serverless functions, each with its own purpose. Some may expose public web APIs, while others may serve as a proxy to other functions or processes. It's mandatory to apply robust authentication schemes that provide proper access control and protection for every relevant function, event type and trigger. An example of such an attack would be “Exposing an Unauthenticated Entry Point via an S3 Bucket with Public Access”.
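As a rough illustration of closing off that kind of unauthenticated entry point, the sketch below assumes AWS with boto3 and a hypothetical bucket name, and enables S3's public access block so the bucket can't be read or written anonymously:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket used as an event source for a serverless function.
BUCKET = "my-serverless-events-bucket"

# Block every form of public access so the bucket can't become an
# unauthenticated entry point into the application.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```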
Serverless architecture is still new and offers many customisation and configuration settings for each specific need, task and environment. The probability of misconfiguring critical settings is quite high, and such mistakes can lead to catastrophic data loss. It's vital to keep functions stateless when designing serverless architectures and to make sure that sensitive data isn't exposed to any unauthorised personnel. It's also recommended to make proper use of cloud hardening methods and correctly configured ACLs.
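One simple hardening check, sketched below with boto3 (assuming AWS credentials are already configured), is to scan every bucket's ACL for grants to the global "AllUsers" or "AuthenticatedUsers" groups and flag anything exposed publicly:

```python
import boto3

s3 = boto3.client("s3")

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

# Flag any bucket whose ACL grants access to everyone or to any AWS account.
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            print(f"Potential misconfiguration: {bucket['Name']} "
                  f"grants {grant['Permission']} publicly")
```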
It's always wise to follow the principle of least privilege, which means serverless functions should only be granted the privileges necessary to perform their intended logic. Provisioning excessive privileges to a serverless function could end up being abused to perform unintended operations, such as executing system functions.
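To make least privilege concrete, a minimal sketch (assuming boto3, with a hypothetical table and role name) might scope a function's role to a single DynamoDB action on a single table, rather than granting `dynamodb:*` or `*:*`:

```python
import json
import boto3

iam = boto3.client("iam")

# Grant only the one action the function actually needs, on one resource.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}

iam.put_role_policy(
    RoleName="orders-reader-function-role",   # hypothetical function role
    PolicyName="orders-read-only",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```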
From a security standpoint, it's critical to log and monitor security-related events in real time, as this helps detect an intruder’s actions and contain the situation much more effectively. It also helps prevent breaches as they happen. One of the key aspects of serverless architectures is that monitoring and logging reside in a cloud environment, outside the organisational data centre perimeter. While it's true that many serverless vendors provide extremely capable logging facilities, in their basic, out-of-the-box configuration these logs aren't always suitable for providing a full security event audit trail. In order to achieve adequate real-time security event monitoring with a proper audit trail, serverless developers and their DevOps teams need to stitch together logging logic that fits their organisational needs. For example:
This often requires you to first store the logs in an intermediary cloud storage service. The SANS “Top 6 Categories of Critical Log Information” recommends that the following log reports be collected:
Technically, a serverless function should be a small piece of code that performs a single discrete task. At times, in order to perform this task, the function will need to depend on third-party software packages and open-source libraries, and even consume third-party remote web services through API calls. It is wise to vet third-party dependencies before importing their code, as a vulnerable dependency can make the entire serverless application susceptible to cyber attacks.
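A very small first step, sketched below, is to refuse any dependency that isn't pinned to an exact version, so a later audit against vulnerability databases has a fixed set of packages to check (the file name and the pinning policy here are assumptions, not a prescribed tool):

```python
from pathlib import Path

def find_unpinned(requirements_file: str = "requirements.txt") -> list[str]:
    # Flag any dependency that isn't pinned to an exact version; unpinned
    # packages can silently pull in a newer, vulnerable release at build time.
    unpinned = []
    for line in Path(requirements_file).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "==" not in line:
            unpinned.append(line)
    return unpinned

if __name__ == "__main__":
    for dep in find_unpinned():
        print(f"Unpinned dependency: {dep}")
```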
As applications grow in scale and complexity, the need to store and maintain application secrets becomes critical. These include:
One of the most frequently committed mistakes is storing application secrets in plain text within configuration files, database configurations and the like. Any user with read permission can gain access to these secrets. It's always advisable to encrypt secrets such as API private keys and passwords rather than store them in plain text. Environment variables are a common way to persist data across serverless function executions, and in certain cases such variables can leak data to unauthorised entities.
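Rather than reading an API key from a plain-text environment variable or config file, a function can fetch it at runtime from a dedicated secrets store. The sketch below assumes AWS Secrets Manager and a hypothetical secret name:

```python
import boto3

secrets = boto3.client("secretsmanager")

def get_api_key() -> str:
    # The key is stored encrypted in Secrets Manager and fetched at runtime,
    # instead of living in plain text inside the deployment package or config.
    response = secrets.get_secret_value(SecretId="payments/api-key")  # hypothetical name
    return response["SecretString"]
```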
Denial of Service (DoS) attacks can also target serverless architectures, and because serverless follows a pay-per-execution model, such attacks can cause both financial damage and resource unavailability. To avoid these disasters and service downtime, it is vital for the application developer to properly define execution limits when deploying the serverless application in the cloud. Some resources to be limited are:
Some attack vectors are:
Manipulating an application’s flow can help an attacker subvert the application logic to bypass access controls, elevate user privileges or even cause denial of service. Application flow manipulation is not unique to serverless architectures and is found in many types of software. However, serverless applications are distinctive in that they often follow a microservices design paradigm: discrete functions are coupled together in a specific order that implements the overall application logic. Because functions are chained, invoking one function may invoke another, and the order of invocation is critical for achieving the desired logic.
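One way to make the invocation order harder to subvert is to have each function pass along a signed token naming the step it just completed, and have the next function verify that token before running. The sketch below uses an HMAC for this; the step names and key handling are purely illustrative, and in practice the key would come from a secrets store:

```python
import hmac
import hashlib

SIGNING_KEY = b"replace-with-a-key-from-a-secrets-store"  # illustrative only

def sign_step(step_name: str) -> str:
    # Each function emits a token proving which step produced the event.
    return hmac.new(SIGNING_KEY, step_name.encode(), hashlib.sha256).hexdigest()

def verify_previous_step(event: dict, expected_step: str) -> None:
    # The next function in the chain refuses to run unless the event really
    # came from the step it expects, preserving the intended flow.
    token = event.get("step_token", "")
    if not hmac.compare_digest(token, sign_step(expected_step)):
        raise PermissionError("Unexpected invocation order")
```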
In serverless applications, performing line-by-line debugging is more complicated and limited compared to standard applications. This often pushes developers to rely on verbose error messages and debugging environment variables, and then to forget to clean up the code when moving it to the production environment. Verbose error messages, such as stack traces or syntax errors, expose the internal logic of the serverless function, revealing potential weaknesses, flaws or sensitive data.
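A common mitigation is to catch exceptions at the handler boundary, log the full details to the internal log stream only, and return a generic message to the caller. Below is a minimal sketch of an AWS Lambda-style handler; the event fields and response shape are assumptions, not a specific framework's API:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    try:
        # Hypothetical business logic; anything in here may raise.
        order_id = event["order_id"]
        return {"statusCode": 200, "body": json.dumps({"order_id": order_id})}
    except Exception:
        # Full stack trace goes to the internal log stream only.
        logger.exception("Unhandled error while processing event")
        # The caller sees a generic message, not stack traces or internal paths.
        return {"statusCode": 500, "body": "Internal server error"}
```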