Containers are inextricably linked to all things cloud—and enterprises are taking note. Container strategies are proliferating in classic IT scenarios, but it’s not plug-and-play. It’s incredibly complex to do at scale, and many organizations have some catching up to do.
Containers and microservice-based architectures are key threads in the fabric of next-gen tools and technologies that promise to modernize the enterprise. The approach is replacing traditional, monolithic app development with a modern development ecosystem supported by the cloud, API-based services, CI/CD pipelines, and smart, cloud-native storage.
But how easy is adoption of these next-gen tools? Do you have the foundation you need so that your container strategy is sustainable for the long haul?
Why enterprises are embracing the container era more than ever
“The Kubernetes community and software are moving fast. However, enterprise IT does not typically move at the same speed, which presents both technical and cultural challenges to the adoption and effective use of cloud-native technology and methodology.”
–The Rising Wave of Stateful Container Applications in the Enterprise
For several reasons, containers hold promise well beyond the DevOps testing scenarios where they’ve flourished. Naturally, enterprises want to get on board. Massive corporations like Capital One, Tesla, and Intel have made the shift, with some even building their own proprietary container orchestration tools. So what’s the draw?
Containers are all about efficiency and agility. These small, individual environments share the host operating system (OS) rather than bundling their own. This makes them very lightweight (the opposite of the monolith) and easy to deploy across multiple platforms without issues. They have a smaller footprint than virtual machines (VMs), they’re shareable and portable, and they can be deployed and deleted in a blink. If you’re looking to go hybrid, containers are cloud-native and one of the most efficient ways to get there.
But legacy infrastructure may stand in the way of getting the most from containers. When it comes to launching a container strategy for mission-critical, legacy workloads, consider the following to ensure you’re ready for the shift.
Do you have the IT support to implement a container strategy?
For technologies like Docker and Kubernetes, a lack of deep familiarity and expertise can be an obstacle to getting apps into production. Kubernetes is a helpful tool for orchestration at scale, but it can’t do everything.
You’ll want to make sure your IT team is prepared and informed to support Kubernetes with the underlying infrastructure it requires. For stateful workloads in particular, you’ll need to tap the expertise of skilled IT architects to ensure full resiliency, optimize infrastructure footprint, handle capacity planning, and implement backup and restore measures. Also, ensure DevOps collaborates closely with the IT security team to keep containers secure from the ground up.
Be sure teams also have the capabilities to incorporate containers into your infrastructure seamlessly, and that there’s bandwidth to continuously improve and adjust container-deployment strategies and processes. You might implement self-service platforms to remove operational bottlenecks, as the team at VMware did.
The good news: While Docker and Kubernetes skills are in high demand, they’re also on the rise as must-have skills for modern DevOps roles.
What apps do you plan to containerize?
Containers haven’t always been the perfect fit for every kind of application. The very qualities that make containers beneficial were once limitations for more dynamic applications. By design, containers weren’t meant to be persistent (or stateful). That’s what made them a better fit for stateless scenarios like web-scale workloads and test environments.
However, that’s changing. Early on, less than 20% of workloads deployed to containers were stateful. Today, more than half are stateful. (Or, rather, made stateful by combining them with storage workarounds.)
There are different approaches to this. Docker offers support for persistent data volumes that can be attached to a container, but more organizations are now deploying stateful containers using CSI drivers through container orchestrators such as Kubernetes. Storage providers like Pure Storage® have developed native Kubernetes integrations that simplify storage delivery to stateful applications. This allows you to confidently deploy more traditional three-tier, stateful web apps with containers.
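As a rough sketch of what consuming persistent storage through an orchestrator looks like, the snippet below builds a standard Kubernetes PersistentVolumeClaim manifest as a plain Python dictionary (standard library only). The claim name and the CSI-backed storage class name `fast-block` are hypothetical placeholders, not vendor product names.

```python
import json

# A minimal PersistentVolumeClaim: how a stateful app asks Kubernetes for
# storage from a CSI-backed class. In a real cluster you would apply the
# resulting JSON/YAML with kubectl or a client library.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},          # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],            # one node mounts it read-write
        "storageClassName": "fast-block",            # hypothetical CSI StorageClass
        "resources": {"requests": {"storage": "50Gi"}},
    },
}

manifest = json.dumps(pvc, indent=2)
print(manifest)
```

The point of the claim abstraction is that the app declares what it needs (size, access mode, class) while the CSI driver decides where and how the volume is actually provisioned.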
Take stock of your app landscape and identify those workloads that can be optimized and improved with containers. Then, ensure you have the storage capabilities and integrations to enable containers to consume persistent storage on-demand and seamlessly interface with file systems and object, block, and software-defined storage.
Will your data storage be sufficient?
Containers aren’t your average software ecosystem, which means their underlying storage can’t be either. Storage has to meet the unique demands of complex, distributed, and persistent workloads—and without being a bottleneck.
Consider this: Stateless containers can be spun up and down within seconds, which means you could be dealing with thousands of containers at any given time. Kubernetes and other orchestrators provide the most effective way to manage this at scale, but your storage has to be smart enough to communicate with them. Stateful containers, on the other hand, can have longer life spans and larger data requirements—a mix of block or file storage, or pipelines outside the container environment. This makes shared storage services and dedicated, cloud-native storage a better fit for these workloads than direct-attached storage (DAS), which can’t provide scalable, predictable performance under these circumstances.
“The future belongs to storage offerings that are hybrid-cloud enablers, automated, container-aware, and highly scalable and offer rich data services for multiple workloads across heterogeneous platforms. [This] fits an era where enterprises take advantage of emerging application deployment technologies and approaches.”
– Containerizing Key Business Workloads: Evaluating the Approaches to Meet Persistent Storage Demands in Containers, IDC
Legacy storage infrastructures can’t keep up with these demands. It’s critical to have an enterprise-class storage solution that not only unifies your environments—VMs, containers, multicloud, or on-premises—but also simplifies them.
Look for the following attributes when selecting storage for your container workloads:
- Is enterprise-grade. To containerize mission-critical, stateful applications, you’ll want the same performance and availability you depend on for your traditional, Tier-1 apps. All-flash solutions with rich data services can offer enterprise-level reliability, resilience, and security.
- Offers support for a heterogeneous mix of platforms. A unified interface brings familiarity and simplicity to data management and makes it DevOps- and developer-friendly.
- Provides automation. Microservice-based environments can scale fast and you’ll want fast, automated provisioning to help you manage them. Look for a solution that automates container deployment and storage needs for self-service delivery of persistent storage.
- Supports both file and block storage. You’ll want to make sure your storage ecosystem can offer you container-specific storage offerings—whether it’s software-defined storage, specialized appliances, or block storage. In particular, block storage can be beneficial for both durability and performance of containers.
- Delivers cloud-native design. The solution should give you the flexibility to leverage storage consistently across hybrid environments. The beauty of containers is that they can run under different conditions, but you’ll want a storage solution that provides underlying compatibility with consistent data services, APIs, and as-a-service consumption.
- Has standardized integrations. The solution should integrate with container runtime engines, automation tools, and orchestration systems such as Docker and Kubernetes. Platform-native plug-ins are proving to be mature, effective solutions for stateful container apps.
Tip: Want to go all-in on containers with ease? Portworx® by Pure Storage® provides a fully integrated Kubernetes data services platform that makes it easy to get persistent storage, data protection, and automated capacity management for Kubernetes workloads.
How will teams effectively manage and orchestrate containers?
Be ready to efficiently manage container storage and head off “container sprawl.” Because it’s so easy to spin them up and down, it’s not uncommon to have thousands of containers at any given time. Staying on top of things—and leveraging automation where you can—is critical.
Kubernetes is the real key to containers at scale in the enterprise. Integrating an orchestrator like Kubernetes with your storage solution allows you to manage, deploy, delete, recover, and provision containers quickly and easily. A tool like Pure Service Orchestrator™ can seamlessly integrate storage with a Kubernetes orchestration environment. It can act as an abstracted control plane for your data, automating persistent storage for containers on demand.
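The piece an orchestrator-integrated storage solution typically contributes is a StorageClass: once one exists, Kubernetes can provision volumes on demand instead of an administrator pre-creating them. The sketch below builds one as a Python dictionary; the class name and the provisioner `csi.example.com` are placeholders for whatever a vendor’s CSI driver actually registers.

```python
import json

# Sketch of a StorageClass, the hook for on-demand (dynamic) provisioning.
# "csi.example.com" stands in for a real vendor CSI driver name.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "db-tier"},                 # hypothetical class name
    "provisioner": "csi.example.com",                # placeholder CSI driver
    "reclaimPolicy": "Delete",                       # remove volumes with their claims
    "volumeBindingMode": "WaitForFirstConsumer",     # provision where the pod lands
}

print(json.dumps(storage_class, indent=2))
```

With a class like this in place, every PersistentVolumeClaim that names it triggers the driver to carve out a volume automatically—the self-service delivery described above.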
How will you address security concerns with containerized apps at scale?
Security is a top concern for many organizations launching a container strategy. In a survey of 400 IT professionals, 34% noted security wasn’t adequately addressed in their container strategies. When you’re adding new technologies to your stack, you often run into fresh security considerations. By design, containers promote security of applications by isolating their components and enforcing a “least privilege” principle. But you’ll need to take extra steps to avoid larger-scale “privilege escalation” attacks.
One storage option, container-native storage, may also be too untested for many organizations to entrust with business-critical apps. That’s fair, and certain enterprise security concerns may in fact be amplified by containers.
To help you minimize risk, here are some best practices and tactics. Be sure to implement them during the initial design phase—not after the fact.
- Ensure that container security is a shared responsibility between DevOps and ITSec.
- Protect containerized apps with Kubernetes-aware disaster recovery and backups. Starting at the storage level can do a lot to protect your data, no matter where it is. When you’re containerizing more traditional, business-critical workloads, be sure to have sufficient encryption, resilience, and backup and recovery measures in place.
- Stay true to the inherent design principle of containers. When possible, configure container apps to run on a “no privilege” access level. Another way to further protect sensitive data? Keep it out of containers and opt for access via API.
- Implement a real-time monitoring solution like Pure1® with advanced log analytics so you can run a better forensic investigation of attacks. This will help you keep a closer eye on data when containers’ short life spans can make it hard to trace an attack.
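The “no privilege” guidance above maps to a handful of standard Kubernetes securityContext fields. The sketch below shows a container spec locked down accordingly; the container name and image are hypothetical placeholders.

```python
import json

# A least-privilege container spec per the guidance above.
# The securityContext fields are standard Kubernetes API fields;
# the name and image are placeholders for illustration.
container = {
    "name": "web",
    "image": "example/web:1.0",                      # hypothetical image
    "securityContext": {
        "runAsNonRoot": True,                        # refuse to start as root
        "allowPrivilegeEscalation": False,           # block setuid-style escalation
        "readOnlyRootFilesystem": True,              # no writes to the container FS
        "capabilities": {"drop": ["ALL"]},           # shed all Linux capabilities
    },
}

print(json.dumps(container, indent=2))
```

Defaults like these, applied at design time rather than retrofitted, shrink the blast radius if a single container is compromised.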
To embrace multicloud and hybrid cloud and truly modernize your IT organization, you’ll also want to embrace containers—but only fools rush in. If your existing environment, infrastructure, or storage capabilities make it difficult to pivot quickly and easily adopt containerized workloads, take a step back and ensure you’ve got what you need. Adopt containers in a way that allows you to leverage what’s beautiful about them—without toppling your stack.