Workflow for Kubernetes DevOps

Developers, application/cluster operators, architects, and security teams all want to contribute to the Kubernetes YAML continuously, keeping the infrastructure matched to the evolving organization strategy and policy. That demands a workflow.

Arun Ramakani
FAUN — Developer Community 🐾


The software systems we build are always expected to be in a state of dynamic equilibrium. This dynamic equilibrium plays out in two dimensions: a) fast-paced technology change and b) a rapidly evolving business landscape. Artificial intelligence, Docker, cloud, Kubernetes, Istio and other service meshes, Helm, the microservices architecture pattern, and serverless are some examples of technology change. Airbnb, the iPhone, Amazon, Zomato, Netflix, cloud kitchens, Uber, and COVID-19 are some examples of business-landscape change. These changes result in continuous change to organization strategies, structures, policies, and business models.

While the code needs to evolve continuously to adapt to these changes, there is also a need for a way in which developers, testers, application/cluster operators, architects, and security teams can collaborate efficiently on that evolution.

This blog argues that a workflow of “Helm >> Kustomize >> GitOps >> Admission Controller” can be the base DevOps infrastructure on which these people collaborate seamlessly.

Helm >> Kustomize >> GitOps >> Admission Controller

Many call Kubernetes “the Linux of the cloud”: the next abstraction after Linux, and one that is going to stay for some time. Read more on this at Kubernetes The Universal Abstraction.

Now, with the assumption that Kubernetes is going to stay for some time, we have to introduce an application development process that evolves continuously, supporting changes to the technology and business landscape. We will look at how a workflow of “Helm >> Kustomize >> GitOps >> Admission Controller” serves this purpose.

Stage 1: Declarative Application Description, Packaging, Dependency and Lifecycle Management

With Kubernetes, the relationship between the application and the underlying infrastructure is purely declarative. You declare what you want from the infrastructure in YAML manifests. The implementation details of these YAMLs are abstracted away by the underlying Kubernetes cluster with controllers, schedulers, KubeDNS, kube-proxy, operators, and so on. This lets us move away from the traditional “Infrastructure as Code” to “Infrastructure as Data”. Read more on this at GitHub. The key here is that every declarative tag needed for application description, packaging, dependency, and lifecycle management is enriched into the YAML in the continuous delivery pipeline and pushed into stage 2.
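To make “Infrastructure as Data” concrete, here is a minimal Deployment manifest (the application name is hypothetical): we only declare the desired state, and the cluster's controllers and schedulers work out how to realize it.

```yaml
# A minimal Deployment: we declare "3 replicas of this container"
# and the cluster reconciles reality toward that desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
        - name: demo-api
          image: nginx:1.25 # any image; nginx used for illustration
          ports:
            - containerPort: 80
```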

Helm is an awesome tool for this use case, with a big community behind it. There are also many other tools that may fit your specific ecosystem; a longer list of tools is available here.
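As a sketch of how Helm parameterizes such a manifest (the chart layout and value names here are hypothetical), a template pulls environment-specific settings out of values.yaml:

```yaml
# values.yaml: per-environment knobs of a hypothetical chart
replicaCount: 3
image:
  repository: nginx
  tag: "1.25"
---
# templates/deployment.yaml: fragment; Helm fills in the {{ }} placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-api
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-api
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-api
    spec:
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm template my-release ./chart` renders this into plain YAML that flows into stage 2.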

Stage 2: Customize the YAML to add Policy

The next stage is to look for a tool that helps application/cluster operators, architects, and security teams customize the YAML to add policies, tools, and resource-segmentation tags. Why do we need a separate tool? Can't Helm do this? Yes, Helm can, but doing it there comes with certain problems.

Generally, these policy and supporting-tool tags are cross-cutting in nature, so introducing them into the Git repo where developers maintain Helm charts will create multiple issues:

  1. Cost Of Maintenance — Git Fork or Copy of Helm Charts
  2. Leaky Abstraction
  3. Coupling Between Different Cross-cutting Concerns

More on this at Helm Is Not Enough, You Also Need Kustomize. After customizing the rendered Helm charts with a tool like Kustomize, we have production-ready YAML.
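For example (the file layout and label values are hypothetical), an overlay maintained by the operations or security team can add a segmentation label and a memory-limit patch on top of the rendered Helm output, without forking the developers' chart:

```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # the rendered Helm output from stage 1
commonLabels:
  team: payments            # hypothetical segmentation tag
patches:
  - path: memory-limit-patch.yaml
    target:
      kind: Deployment
---
# overlays/production/memory-limit-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api
spec:
  template:
    spec:
      containers:
        - name: api
          resources:
            limits:
              memory: 100Mi
```

Running `kubectl kustomize overlays/production` emits the final, production-ready YAML.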

Stage 3: Auto-sync the YAML to Cluster

The next step is to apply this production-ready YAML to the Kubernetes cluster with a GitOps tool. We could simply apply the YAML changes to the cluster ourselves, but GitOps tools bring many advantages. They make Git the true “source of truth”, with zero manual changes to the Kubernetes cluster, by continuously watching the Git repositories and reflecting any state change into the cluster. Argo CD, Flux, and Flagger are some of the popular GitOps tools in the market. The three-stage flow now looks like the reference architecture below.
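As a sketch (the repository URL, paths, and namespaces are hypothetical), an Argo CD Application resource that keeps a cluster namespace in sync with a Git path could look like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-configs  # hypothetical repo
    targetRevision: main
    path: overlays/production          # the Kustomize output from stage 2
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift in the cluster back to Git
```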

Read more on this at Continuous GitOps, the way to do DevOps in Kubernetes.

Stage 4: Fitness Functions with Admission Controller

The final stage is specialized for architects and security engineers to run validations. In evolutionary architecture practice, fitness functions play a critical role in providing guided change to a continuously evolving software product; this amounts to automated architecture governance. Let's say we have a rule across teams that the maximum memory allocation for a Pod is 100Mi. We can use a validating admission controller to check that every deployed YAML matches the rule. Another example of a validation could be a rule requiring all API pods to be exposed via a specific port.
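The core check such a validating webhook performs can be sketched in Python (the AdmissionReview HTTP plumbing is omitted, and the function and field names are illustrative):

```python
# Sketch of the check a validating admission webhook could run against an
# incoming Pod spec. A real webhook would unwrap this from an AdmissionReview.
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "M": 10**6, "G": 10**9}

def parse_memory(quantity: str) -> int:
    """Convert a Kubernetes memory quantity like '100Mi' to bytes."""
    for suffix, factor in UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count, no suffix

def validate_pod(pod_spec: dict, max_bytes: int = 100 * 1024**2) -> tuple[bool, str]:
    """Reject the Pod if any container is missing a memory limit or exceeds the cap."""
    for container in pod_spec.get("containers", []):
        limit = container.get("resources", {}).get("limits", {}).get("memory")
        if limit is None:
            return False, f"container {container['name']!r} has no memory limit"
        if parse_memory(limit) > max_bytes:
            return False, f"container {container['name']!r} exceeds the 100Mi cap"
    return True, "ok"
```

A Pod requesting `64Mi` passes, while one requesting `256Mi` is rejected with a reason the API server returns to the client.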

One important thing to note here: while the YAML flowing through stages 1 to 3 is declarative, mutation at admission time is effectively imperative. Hence we should use the admission controller only for policy enforcement through validation. Mutating the YAML with an admission controller is late binding and may fail in ways that are hard to predict. Above all, mutating the YAML goes against the principles of GitOps (stage 3).

I hope you found this useful. Let's create some powerful declarative workflows 🚀. See you in an upcoming article 🏄
