Container technology is officially a phenomenon and is being hailed as the next big technology platform across the world today. In fact, at we45, we use containers in unique and creative ways, such as vulnerability scanning, application security testing and so on. My team uses Docker (a popular container technology product) to scale vulnerability management and testing as a cluster.
Before we begin, let’s quickly understand container technology and Docker. The Linux operating system provides users with the ability to virtualize numerous Linux operating system deployments on a single Linux kernel, i.e. one can run several Linux operating systems on a single kernel. The kernel provides the isolated namespaces, processes and environment for hosting applications. While this is similar to system virtualization (where users deploy multiple guest operating systems on a single hypervisor and its underlying hardware), containerization allows the deployment of Linux operating systems as isolated containers on a single Linux host.
Interestingly, container technology is NOT virtualization in the traditional sense, which is critical from a security standpoint. Virtualization allows multiple operating systems to leverage the same hardware using a hypervisor. Based on configuration parameters, the guest systems typically operate as independent operating systems, leveraging the hypervisor or the underlying OS as interfaces to hardware components. A container, however, is an abstraction one level higher than a virtualized host. The container uses the resources of the kernel it runs on, so an application’s runtime environment typically runs within protected namespaces and control groups. Therefore, even when the container runs as “root”, the possibility of an attacker escalating privileges from a container to the host and/or other containers is remote.
Nevertheless, through our experience, we’ve identified some simple yet effective security practices for Docker deployments. We will follow some of these areas through more granular articles, but this is a good starting point for one to consider when looking at security for container deployments.
- Know what you pull
- Limit Interaction
- Secret Sprawl
- One Primary Function per Container
1. Know what you Pull
One of the major benefits of the Docker ecosystem is the Docker Hub. The Docker Hub consists of pre-built images for several applications and operating systems, ranging from officially supported images for apps like MySQL and Elasticsearch to tons of user-built images for these apps and operating systems. This is great, as it allows us to start deploying container environments in a matter of minutes. However, it is also a potential security risk.
For instance, consider that you need a Docker image for the Nginx web server. Typically, you would look for the Nginx image in the Docker Hub, identify the image right for your needs, then download and deploy it. However, if you pull an image running a version lower than Nginx 1.11.4, which is vulnerable to cache poisoning attacks, you introduce that vulnerability into your deployment. Worse still, the image may be intentionally malicious: imagine a shell script baked into the image that summarily transmits your database information to a third-party host. Several Docker images come pre-built with custom code added by the user who created the image.
Applications and operating systems of all types are rife with vulnerabilities across different versions. If you’re not careful, you could be wilfully weakening your environment. There are a few solutions to scan for security issues, like Docker’s own Security Scanner (available for commercial users), Docker Bench and so on, but this area is still evolving, and knowledge and vigilance are of the highest order for container security. In addition, ensure that you harden the Linux host that contains all the containers you are running. Running Docker containers in an insecure Linux environment is a recipe for disaster.
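As a hedged illustration of knowing what you pull, the commands below sketch one way to pin an image to an explicit version tag (or, stricter still, an immutable digest) rather than the mutable `latest` tag. The image name and version are examples only, and the daemon-dependent commands are shown as comments since they need a running Docker daemon.

```shell
# Pin to an explicit version tag rather than "latest", so the image you
# deploy is the one you actually reviewed. The tag below is illustrative.
IMAGE="nginx:1.25.3"

# Pulling and digest-pinning require a running Docker daemon, so they are
# shown here as comments only:
#   docker pull "${IMAGE}"
#   docker images --digests nginx        # note the sha256 digest column
#   docker pull nginx@sha256:<digest>    # immutable, byte-for-byte pin
echo "pull target: ${IMAGE}"
```

Pinning by digest is the strictest option, since a tag can be re-pushed to point at different content, while a digest cannot.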
2. Limit Interaction
While it’s ideal to run Docker as a ‘read-only’ container, it’s often not possible or feasible to do so. Networking and exposing volumes (for persistence) are usually required for most containers. In addition, Docker also disables sensitive system calls through its default secure computing (seccomp) profile, which blocks 44 of the 300+ system calls otherwise available to containers.
Dangerous calls such as reboot (which would reboot the host) and pivot_root (which allows changes to the root file system) are disabled by Docker by default. However, you might want to increase the level of security by adding your own seccomp profiles to your containers. This ensures that only specific calls are allowed on the host by the container. We will be detailing this in a dedicated blogpost in the near future.
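To make the idea concrete, here is a minimal sketch of a custom seccomp profile. The whitelist below is deliberately tiny and would break any real workload; in practice you would start from Docker’s default profile and remove the calls you don’t need. The file name and syscall list are assumptions for illustration.

```shell
# Write a whitelist-style seccomp profile: deny every syscall by default
# (SCMP_ACT_ERRNO) and explicitly allow only the ones listed.
cat > my-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

# Applying it requires a running Docker daemon, so it is shown as a comment:
#   docker run --rm --security-opt seccomp=my-seccomp.json <image> <cmd>
echo "profile written to my-seccomp.json"
```

Combined with `--read-only` (and `--cap-drop` for Linux capabilities), a tailored profile like this shrinks what a compromised container can ask the kernel to do.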
3. Secret Sprawl
Most applications utilize secrets extensively. Secrets include passwords, API tokens, encryption keys and other sensitive data. One of the key challenges in any DevOps environment is maintaining the confidentiality of secrets such as these. A common (and bad) practice is to maintain these secrets as Linux environment variables, which are referenced at runtime. This is a dangerous practice, especially in Docker deployments, as users can easily gain access to environment variables by simply running “docker inspect” (figure below). In addition, malicious insiders may be able to leverage this knowledge to decrypt sensitive customer information, among several other attack possibilities.
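To illustrate the exposure, the hypothetical commands below show how anyone with access to the Docker socket can read environment-variable secrets in clear text. The container name, image and password are made up, and the daemon-dependent commands are shown as comments.

```shell
# Illustrative only (requires a Docker daemon):
#   docker run -d --name app -e DB_PASSWORD=s3cr3t myorg/app:1.0
#   docker inspect --format '{{json .Config.Env}}' app
# The inspect command prints every -e value in clear text, along the
# lines of:  ["DB_PASSWORD=s3cr3t", "PATH=/usr/local/sbin:..."]
MSG="docker inspect prints Config.Env in clear text"
echo "${MSG}"
```

Environment variables are also inherited by child processes and often end up in crash dumps and logs, which compounds the problem.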
The best way to handle secrets is to use a comprehensive secret management tool like Vault or Keywhiz, or to leverage the equivalent tools provided by cloud providers, such as Amazon’s KMS or Azure’s Key Vault.
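As a simple sketch of the file-based alternative these tools enable, the commands below keep a secret in a tightly-permissioned file rather than an environment variable; the swarm-mode `docker secret` commands that would consume it are comments, since they need a running daemon in swarm mode. The file name and secret value are made up.

```shell
# Store the secret in a file readable only by its owner, instead of
# exporting it as an environment variable. The value is a placeholder.
umask 077
printf 's3cr3t-value' > db_password.txt

# Swarm-mode delivery (requires a Docker daemon in swarm mode; comments only):
#   docker secret create db_password db_password.txt
#   docker service create --name app --secret db_password myorg/app:1.0
# Inside the container, the secret is mounted at /run/secrets/db_password
# and never appears in "docker inspect" output.
echo "secret stored in db_password.txt"
```

A dedicated secret manager goes further still, adding rotation, audit logging and access control on top of this basic file-delivery model.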
4. One Primary Function per Container
One of the common mistakes people make is to use a container as a full application stack. We’ve seen deployments where a web server, database server, message queue broker and a search DB have been crammed into the same Docker container! This is a bad practice from a pure deployment perspective, negating the benefits of distributed management, and it is also egregious from a security standpoint. By running multiple services in the same container, security over each of these elements ends up diluted and splintered, resulting in a larger attack surface, especially with un-patched components. It’s ideal to implement one service per container, which is easier to manage, scale and secure.
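A hedged sketch of what one-service-per-container looks like in practice: each service gets its own container on a shared user-defined network, so each can be patched, scaled and monitored independently. The image tags and names below are illustrative, and the daemon-dependent commands are shown as comments.

```shell
# One primary service per container, joined by a user-defined bridge
# network (requires a Docker daemon; comments only):
#   docker network create appnet
#   docker run -d --name db    --network appnet postgres:16
#   docker run -d --name cache --network appnet redis:7
#   docker run -d --name web   --network appnet -p 8080:80 nginx:1.25
# A vulnerability in one image now exposes only that one service, and
# each image can be updated on its own schedule.
SERVICES="db cache web"
echo "one service per container: ${SERVICES}"
```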
In conclusion, container technology like Docker is taking the world by storm, and we are confident that you and your organization are considering it in the short to long term. However, as with any new technology paradigm, security must be considered extensively before diving headlong into implementation.