The "OpenShifting" series covers some history of Docker, Kubernetes and OpenShift though its main focus is on the real-world implementation of these technologies. As PaaS is a rapidly evolving and constantly redefined sector of the technology spectrum, it's likely that some of this series will be out of date by the time it is completed. However, I will do my best to post updates and corrections and accommodate reasonable feedback.
Most of the solutions discussed here should be accessible to any developer or infrastructure engineer with a small budget. Examples will include both physical server deployments and "fringe" deployments in low-cost, non-ideal cloud hosting environments. One of my goals is to showcase the flexibility of Linux-based container orchestration platforms and to motivate even the most cautious among us to embrace this paradigm.
The first time I deployed a Docker container, I realized that everything about web application hosting had changed. Of course, by the time I came to that realization, major industries had already moved to containerized, "serverless" application deployment and hosting methods years earlier. As often happens with new technology, it takes time for the cutting edge to reach the mainstream. Docker by itself appeared to be a wonderful tool for consistent, repeatable application deployments, but something was missing. If all developers did was deploy and manage individual containers, why not just stick with the trusty Linux hypervisor and virtual machine?
Unbeknownst to me, Google had solved this problem many years before. The history of Kubernetes is already a footnote in the trend towards distributed, cloud-agnostic, "serverless" infrastructure. K8s (shorthand for Kubernetes) was originally designed by Google before being donated to the Cloud Native Computing Foundation [1]. Red Hat took immediate interest in k8s, shifting the underlying platform for OpenShift from its proprietary orchestrator to k8s. Red Hat quickly became a major code contributor to k8s, supporting the platform which, along with Docker, became a core component of OpenShift.
As I write, I can't say that I have a specific plan for this series. I've been learning about, deploying, and developing against OpenShift for the past couple of years. Even in that comparatively short period, there have been major changes to not just OpenShift but all the contributing components of this PaaS. If there is one lesson to be learned here, it's that change, at least in this realm of technology, is constant. That can be a bitter pill to swallow for professionals seeking to plant their flag on a profitable career. Similar things can be said, of course, about any emerging technology. Lest we forget, application containerization and orchestration are constantly evolving. Winners and losers are crowned and dethroned quarterly, all while enterprise DevOps managers and CIOs struggle to predict whether their next investment will carry them through the next fiscal year.
Worry not, for all is not lost! Now that we have the concerns of uncertainty behind us, we can focus on what really matters. Of course there has to be a focus, not because no one wants to read a meandering blog series, but because it is my core belief that our work should deliver specific results within defined constraints. Leaning on my experience managing a DevOps team, I can say with conviction that results trump ideas. A good solution delivered today beats an excellent solution promised tomorrow. Perfection is the enemy of completion... you get the picture. Pedestrian pontificating aside, the point is that we now have the tools to build working, trusted solutions today.
The next part in this series will introduce my work on building an OpenShift Origin cluster lab using Intel NUC (Skull Canyon) micro form-factor PCs. This will include a deep dive into the hardware configuration and preparation required before deploying OpenShift with Ansible playbooks. Later in the series I will also review my work on developing a multi-node cluster on Scaleway.com bare-metal servers.
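For readers unfamiliar with Ansible-driven OpenShift Origin (3.x-era) deployments, the process is driven by an inventory file that describes the cluster's hosts and roles. The sketch below is a hedged, minimal example of what such an inventory might look like for a small single-master lab; the hostnames and SSH user are hypothetical placeholders, and the exact variables vary between openshift-ansible releases, so consult the version-matched documentation before use.

```ini
# Hypothetical minimal openshift-ansible inventory for a single-master lab.
# Hostnames and the SSH user are placeholders, not real infrastructure.

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

[masters]
master.lab.example.com

[etcd]
master.lab.example.com

[nodes]
master.lab.example.com
node1.lab.example.com
node2.lab.example.com
```

With an inventory like this in place, the deployment itself is typically a matter of running the playbooks shipped in the openshift-ansible repository against it; the specifics will be covered in the hands-on parts of this series.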
Future articles will cover:
- Single-Master Clusters on Scaleway Baremetal servers
- Single-Master Clusters on vSphere ESXi Hosts
- Multi-Master Clusters on CentOS 7 KVM/libvirtd VMs