- Fast application deployment – containers carry only the minimal runtime requirements of the application, which keeps them small and lets them be deployed almost instantly. The underlying IaaS VM is likewise easy to deploy and manage.
- Transferable across machines – an application and all of its dependencies can be bundled into a single container that is independent of the host's Linux kernel version, platform configuration, or deployment type. This container can be moved to another machine that runs Docker and run there without compatibility issues. We can use a VM image to deploy identical VMs and move containers across them.
- Version control and component reuse – we can track successive versions of a container, inspect differences, or roll back to earlier versions. Containers reuse components from the preceding image layers, which keeps them remarkably lightweight. A dedicated Azure Container Registry can be used to store the container images.
- Sharing – we can use a remote registry such as Azure Container Registry to share our containers with others, or configure our own private repository. We can use RBAC to grant users management access over these IaaS VMs.
- Light and minimal overhead – Docker images are typically very small, which speeds up delivery and reduces the time to deploy new application containers.
- Containers don’t run at bare-metal speeds – containers use resources more efficiently than virtual machines, but they are still subject to performance overhead from overlay networking, the interface between containers and the host system, and so on. Even on an IaaS VM we may not get the expected performance.
- Limited cross-platform compatibility – the major issue is that an application designed to run in a Docker container on a Windows IaaS VM cannot run on a Linux VM, and vice versa. Virtual machines are not subject to this limitation, which makes Docker less attractive in highly heterogeneous environments composed of both Windows and Linux servers.
- Docker and VM security problems – in simple terms, we need to evaluate the Docker-specific security risks and make sure we can handle them before moving workloads to Docker. Docker creates new security challenges, such as the difficulty of monitoring the many moving pieces within a large-scale, dynamic Docker environment. IaaS VM security adds its own overhead on top of this.
- The Docker container ecosystem is fractured – although the core Docker platform is open source, some container products don’t work with others, usually because of competition between the companies that back them. For example, OpenShift, Red Hat’s container-as-a-service platform, only works with the Kubernetes orchestrator.
- Persistent data storage is complicated – by design, all of the data inside a container disappears forever when the container shuts down, unless we save it somewhere else first. There are ways to persist data in Docker, such as Docker data volumes, but this is arguably a challenge that has yet to be addressed seamlessly. We have to use Azure Disks or other options to store application data.
- Graphical applications don’t work well – Docker was designed for deploying server applications that don’t require a graphical interface. While there are some creative strategies (such as X11 forwarding) for running a GUI app inside a container, these solutions are clunky at best.
- Not all applications benefit from containers on a VM – in general, only applications designed to run as a set of discrete microservices stand to gain the most from containers. Otherwise, Docker’s only real benefit is that it simplifies application delivery by providing an easy packaging mechanism.
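The version-control and sharing advantages above come down to a few registry commands. As a sketch, pushing a local image to Azure Container Registry might look like the following, where `myregistry` and `myapp` are placeholder names and the Azure CLI and Docker are assumed to be installed and signed in:

```shell
# Authenticate the local Docker client against the registry
az acr login --name myregistry

# Tag the local image with the registry's login server name, then push it
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0

# On any other VM running Docker, pull and run the same image
docker pull myregistry.azurecr.io/myapp:1.0
```

Because each push only uploads layers the registry does not already have, successive versions of the image stay cheap to store and transfer.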
Standard practices for Docker deployments
- Keep container images small and secure the Docker runtime
Small images are faster to pull over the network and faster to load into memory when starting containers or services. Choosing an appropriate base image and using a multi-stage Dockerfile help keep the image size down. Make sure we know the composition of our containers at runtime as well as at build time; the main way to change a container is to edit the container image and deploy a new container. Creating a runtime security policy helps define appropriate response actions: if suspicious behavior is detected, the policy triggers alerts and remediation.
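A multi-stage Dockerfile is the standard way to keep build tooling out of the final image. This hypothetical example for a Go service (the module and paths are illustrative) compiles in one stage and ships only the binary on a small base:

```dockerfile
# Build stage: full Go toolchain, never shipped to production
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: only the compiled binary on a minimal base image
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

The resulting image contains the Alpine base plus a single static binary, typically a few megabytes instead of the hundreds of megabytes of the full build image.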
- Where and how to persist application data
Avoid storing application data in the container’s writable layer through storage drivers. Doing so increases the size of the container and is less efficient from an I/O perspective than using volumes or bind mounts. Instead, store data in volumes.
One case where it is appropriate to use bind mounts is during development, when we may want to mount our source directory or a freshly built binary into the container. For production, use a volume instead, mounted at the same location where the bind mount was used during development.
For production, use secrets to store sensitive application data used by services, and use configs for non-sensitive data such as configuration files. If we currently run standalone containers, consider migrating to single-replica services so that we can take advantage of these service-only features.
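The development/production distinction above maps directly onto the `--mount` flag. As a sketch (image names and paths are illustrative, and Docker is assumed to be running locally):

```shell
# Development: bind-mount the working source tree into the container,
# so edits on the host are visible inside immediately
docker run -d --mount type=bind,src="$(pwd)/src",dst=/app myapp:dev

# Production: create a named volume and mount it at the same path,
# so data survives container replacement and stays out of the writable layer
docker volume create app-data
docker run -d --mount type=volume,src=app-data,dst=/app myapp:1.0
```

Keeping the mount path identical in both environments means the application code never needs to know which mechanism is behind it.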
- Limiting resources
The ability to run as many containers as needed gives us a lot of flexibility in production, but it also creates major risks if containers are compromised. Make sure we monitor container activity and limit resource use. Design errors, software bugs, or malware attacks can lead to a denial of service. We can shrink the attack surface by limiting the system resources allotted to each container, and the number of containers per VM.
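Docker exposes these limits as flags on `docker run`. A sketch, with illustrative values that should be tuned to the VM size and workload:

```shell
# Cap each container's resources so one compromised or buggy container
# cannot starve the rest of the VM:
#   --memory / --memory-swap : hard memory cap with no additional swap
#   --cpus                   : at most 1.5 CPU cores
#   --pids-limit             : caps process count, guarding against fork bombs
docker run -d \
  --memory=512m --memory-swap=512m \
  --cpus=1.5 \
  --pids-limit=100 \
  myapp:1.0
```

With the per-container caps known, the safe number of containers per VM follows from simple arithmetic against the VM's total memory and cores, minus headroom for the host OS and Docker daemon.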
- Use CI/CD for testing and deployment and Complete lifecycle management
When we check in a change to source control or create a pull request, use Docker Hub or another CI/CD pipeline to automatically build, tag, and test a Docker image. Take this further by requiring our development, testing, and security teams to sign images before they are deployed into production; that way, an image reaches production only after it has been tested and signed off by, for instance, development, quality, and security teams. Tools can help us monitor, manage, and analyze every aspect of the container infrastructure. By scanning for vulnerabilities throughout the delivery lifecycle, we can prevent deployment of contaminated containers. Complete lifecycle management ensures containers remain secure through every stage of development and deployment.
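As one of many possible pipelines, a minimal GitHub Actions workflow that builds and pushes an image on every push to `main` might look like this. The registry name and secret names are placeholders, and the pipeline assumes ACR credentials are stored as repository secrets:

```yaml
# Hypothetical CI workflow: build the image and push it to ACR, tagged
# with the commit SHA so every deployment is traceable to a revision.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: myregistry.azurecr.io
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: myregistry.azurecr.io/myapp:${{ github.sha }}
```

Image signing and vulnerability scanning steps would slot in between the build and the push, gating promotion to production.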
- Granular access management
Docker access management solutions help reduce Docker security risks by enabling granular RBAC. Authorized access management solutions such as Active Directory let us run containers with minimal privileges and manage access across teams and development lifecycle stages.
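On Azure, registry access can be scoped with built-in roles. As a sketch (the user and registry names are placeholders, and the Azure CLI is assumed to be signed in with sufficient rights), granting a user pull-only access looks like:

```shell
# Assign the built-in AcrPull role, scoped to a single registry,
# so the user can pull images but cannot push or delete them
az role assignment create \
  --assignee user@example.com \
  --role AcrPull \
  --scope "$(az acr show --name myregistry --query id --output tsv)"
```

Separate roles such as AcrPush exist for teams that need to publish images, keeping least-privilege boundaries between build and deploy stages.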
- Monitoring container activity
Containers and VMs can be monitored using tools such as Azure Log Analytics, Azure Monitor, Scout, Datadog, and Prometheus. Monitoring systems can help us identify attacks, send alerts, and even automatically apply fixes. Periodically review the log data generated by containers and use it to derive preventive security insights.
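For the log review above to stay practical, container logs need retention limits so they neither vanish nor fill the VM disk. A common daemon-wide setting in `/etc/docker/daemon.json` (values are illustrative) rotates the default JSON log files:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This keeps up to three 10 MB log files per container; a remote log driver or an agent shipping to Log Analytics or Prometheus can then consume them centrally.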
- Differences in development and production environments
There should be isolation between the production and development environments, and proper access management should be enforced at both the compute and the container level.