A Brief History of the Explosive Growth of Containers
A couple of years ago, Linux containers began to gain popularity among devs and ops folks alike. It was a win-win scenario, as the benefits of containers to both dev and ops are clear. As adoption of containers in production workloads exceeded many people's expectations, the world needed a way to manage container lifecycles at scale.
Enter Kubernetes. While it is not the only proven open source container manager out there (see Docker Swarm and Mesos), it certainly has the majority share of voice in the community. Originally a Google project that grew out of lessons learned from their internal Borg system, it has since been open sourced and embraced by companies like Microsoft and Red Hat (OpenShift is powered by Kubernetes). Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Models for Consuming Kubernetes
Roll your own
Like any other open source project, there is always the option of running Kubernetes yourself. Of course, this comes with its own learning curve above and beyond container runtime environments. Kubernetes is a complex system with multiple core components (the API server, etcd, the scheduler and the controller manager, plus the kubelet on every node). The availability and performance of these Kubernetes components will need to be managed just as carefully as the applications that run on top of them.
Pros: Free software, set up the way that you want it.
Cons: A brand new learning curve that will undoubtedly come with some pains. You will be responsible for running and managing Kubernetes itself as well as the workloads that run on top of it. Community support only, which may or may not be a con for some.
Who is it for: An ops-heavy organization whose core competency is building and supporting software. They may want the ability to make all of their own technological decisions all the way through the stack, and investing in learning the inner workings of Kubernetes may be considered a competitive advantage for the organization.
Containers-as-a-Service
All three of the major public cloud providers (AWS, Azure and Google) offer their own native containers-as-a-service product. Amazon has EC2 Container Service (which is not Kubernetes based), Google offers its Container Engine and Azure has Azure Container Service (the latter two are powered by Kubernetes). You essentially feed these systems your containers, and they manage scale and availability on your behalf based on the requirements that you define for your workloads.
Pros: Zero effort in managing scale and availability of your containerized apps
Cons: You are essentially locked into the provider offering the service
Who is it for: An organization that is heavy in the dev department, lighter in the ops department, and okay with having one cloud vendor manage all of its containers.
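Defining "the requirements of your workloads" for one of these services typically means writing a declarative manifest. As a minimal sketch (the app name, image and resource figures here are hypothetical, not from any particular provider), a Kubernetes Deployment might look like this:

```yaml
# Hypothetical example: a minimal Kubernetes Deployment manifest.
# The platform reads this and keeps the declared state true on your behalf.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical workload name
spec:
  replicas: 3                  # availability: keep three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # assumed image location
          resources:
            requests:               # the scheduler places pods based on these
              cpu: "250m"
              memory: "128Mi"
            limits:                 # hard ceilings for the container
              cpu: "500m"
              memory: "256Mi"
```

Whether you run Kubernetes yourself or consume it as a service, this declarative model is the same; what differs is who is responsible for the machinery that makes the declaration come true.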
BYOCaaS (Bring your own Container-as-a-Service)
Maestro is a full-stack application management platform from our friends at Cloud 66. It supports containers (backed by Kubernetes) as well as the non-container parts of your infrastructure, including firewalls and networking, databases (provisioning, monitoring, backups and replication), security and ACL access control, OS- and server-level security monitoring, deployment workflow management, native DB and storage components, and more.
The thing we like about Maestro is its support for multi-cloud, which blends the first two options for consuming Kubernetes – a sort of Bring-your-own-Containers-as-a-Service (BYOCaaS?). Cloud 66 supports AWS, Azure, Google, DigitalOcean, Packet and, of course, Cloud-A. Furthermore, you can deploy Maestro to dedicated physical servers and/or your own servers on-premises.
Pros: Multi-cloud, near-hands-off container management, and the option of integrating with Cloud 66's SkyCap, a container-native CI pipeline. Native support for Cloud-A 😉
Cons: Less control than roll-your-own, with equal or greater control than the as-a-service model.
Who is it for: An organization that is heavy in the dev department, lighter in the ops department, and wants the ability to deploy containers on multiple clouds and/or its own servers. These might be organizations whose customers require that their apps and data be resident in a particular jurisdiction for performance or regulatory purposes.
Beginning your Kubernetes Journey
There isn't an easy button for deciding which model for consuming Kubernetes is best for your workloads. This should be an organizational decision that considers the requirements of your devs, your ops folks and, most importantly, your customers. If you have any questions, feel free to reach out to our team of cloud engineers, or reach out to one of our partners directly.