Kubernetes Architecture

It helps to understand Kubernetes at a basic level. The system is designed to run and coordinate containerized applications across a number of machines, managing the full life cycle of those applications in a predictable, declarative way.

In its most basic form, Kubernetes groups individual physical or virtual machines into clusters that share a network and can communicate with one another. The cluster is the physical layer on which all the pieces, tools and workloads of Kubernetes are configured.

Each machine in a given cluster is delegated a role by Kubernetes to support the application. One server functions as the master server, acting as the brain of the cluster: it health-checks the other servers, exposes the API and acts as the gateway for users and clients. The master server is the centralized point of action and information from which Kubernetes operates.

The other machines within the cluster act as supporting servers (or nodes) that accept and run the workloads assigned to them, using both local and external resources to complete these jobs in the background. Kubernetes runs applications in containers, which isolate workloads from the host machine and from one another, so each node must be equipped with a container runtime. The master server hands out tasks to the nodes, creating and destroying containers as needed to coordinate traffic and resources.

The user interacts with the cluster by communicating with the master server through the API. Typically, the user has no visibility into when the master server adjusts the nodes or containers, since this is all done seamlessly behind the scenes of the app. The user might work with the API directly or through client libraries.

The user defines the application with a declarative plan, written in JSON or YAML, that determines what is created and how the app is managed. The master server then uses this plan, together with the requirements of the app and the current state of the cluster's resources, to determine the best path to the declared goal and automate a functional infrastructure. The final layer of Kubernetes is the user-facing set of controls that make the running app accessible.

Kubernetes Objects and Concepts

Within a given cluster, Kubernetes establishes its entities as objects. These objects make up the working layer and base data of the app.

Kubernetes Pods

Containers are the major mechanism within Kubernetes, but the platform also provides abstractions above the container interface that handle scaling, healing and life cycle management without being confined to individual containers directly.

The most basic unit of Kubernetes is the pod, which is made up of one or more containers. A pod is not itself a container; it is a group of containers that operate as a single application, work closely together within the same life cycle, and share networking and storage.

The containers within a pod are always scheduled onto the same node and are managed as a unit, sharing IP space and volumes. A pod usually has a main container that handles the primary workload, optionally supported by helper containers that facilitate related tasks.

Pods should not be confused with nodes: nodes are the machines in the cluster (one acting as the master server, the rest as workers), while pods are the units of work that Kubernetes schedules onto those nodes.

In the majority of cases, users should not manage pods directly, because a bare pod offers little in the way of healing or scaling. Instead, users typically work at a higher level with controllers that manage pods and offer more comprehensive functionality.
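As a minimal sketch, here is what a multi-container pod manifest might look like; the names (multi-container-pod, app, log-helper) and images are illustrative, not taken from any particular deployment:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod    # hypothetical name
spec:
  volumes:
    - name: shared-logs        # storage shared by both containers
      emptyDir: {}
  containers:
    - name: app                # main container handling the primary workload
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-helper         # helper container reading the shared volume
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs

Both containers share the pod's IP address and the shared-logs volume, which is what makes the tight helper pattern described above possible.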

Master Server Components

The master server is the control center of Kubernetes and the main contact point for administrators and users. It provides a central point for accepting requests, identifying workloads, analyzing capacity, examining resources and assigning tasks to the nodes. Its components work together to accommodate user requests, select the best ways to schedule workloads onto containers, and continuously adjust, network, manage and health-check the cluster within the constraints the creator has established.

These responsibilities may occur within a single server or be distributed across a number of machines. These are the individual components that make up the master server.

Etcd: A globally accessible configuration store is vital for Kubernetes functionality. The etcd project, created by CoreOS, offers a key-value store that can span multiple nodes and is accessible from any node within the cluster. The store can run on the master server or be distributed across multiple machines, as long as it is reachable over the network from each Kubernetes machine.

Kube-apiserver: The central management point for the whole cluster, the kube-apiserver validates and processes API requests and persists the cluster's state in etcd. It bridges the other components, which watch it for changes, allowing service details to be communicated across the cluster and the desired state to be maintained.

Kube-controller-manager: This control plane component covers a number of responsibilities, but is primarily in charge of regulating the state of the cluster, retrieving information about all nodes through the API. The controller manager manages workload life cycles and performs routine tasks, including scaling, adjusting endpoints, replicating and more, to support changes coming through the API server.

Kube-scheduler: The default scheduler, operating within the control plane, reads each workload's operating requirements, assigns workloads to nodes, and tracks capacity to ensure scheduling doesn't conflict with available resources. In this way it organizes the workloads allocated to each node.

Cloud-controller-manager: This component embeds cloud-specific control logic, linking the cluster to a private, public or hybrid cloud provider's API. Because it is separated out, cloud-specific features can be released at a different pace from the kube-controller-manager. It runs as part of the control plane and allows different cloud platforms to integrate with Kubernetes through a structured plugin mechanism.

These components act as glue, allowing Kubernetes to interact in a consistent way with providers that offer different features, capabilities and controls. The master server relies on these components to structure workloads and manage outcomes.

Node Server Components

Within the node servers are the components that configure containers and run workloads as they are assigned.

Container runtime: This requirement is typically satisfied by a separate project. Docker is a common choice, and alternative projects such as rkt and runc can perform the function as well. Kubernetes uses the selected runtime on each node to create and manage the containers that make up pods, so the runtime is what actually executes the workloads required of the cluster.

Kubelet: Each node has a main contact point for relaying information, called the kubelet. It is how the control plane services and etcd send information to, and receive it from, the node. The kubelet gathers workload instructions and operating parameters, then directs the container runtime to construct, launch or destroy containers to scale as required.

Kube-proxy: This small service runs within each node server, forwarding requests to the correct containers, performing basic load balancing, and ensuring the networking remains accessible. The kube-proxy provides isolation where necessary and keeps communication predictable between containers and with the wider network.

Labels

Creating labels within Kubernetes provides tags that mark objects as part of a group, enabling easy selection that targets those items for management or routing tasks. Labels identify the pods that controller-based objects operate on, or tell services where they should route requests. Created as simple key-value pairs, labels can apply to multiple units, and every unit can have more than one label. Labels may clarify the development stage, accessibility settings, app version and more.

This is very important for marking out nodes and objects for specific tasks or exceptions. Labels offer you an opportunity to attach notes to the pieces, but specifically identifying, logical information that selectors can act on, not just whatever you feel like including.
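As a sketch of how this works in practice (the label keys and the service name are illustrative), labels are attached under metadata, and a service selects on them:

apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  labels:
    app: frontend              # which application the pod belongs to
    tier: web                  # role within the app
    environment: staging       # development stage
spec:
  containers:
    - name: web
      image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend              # route requests only to pods carrying these labels
    tier: web
  ports:
    - port: 80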

Annotations

Similar to labels, annotations within Kubernetes let you attach information to an object within the app. But where labels hold logical, identifying information that selection criteria can act on, annotations are free-form: they are where you store less structured notes and data. This is a good place for metadata that wouldn't support selection purposes but still holds value within the app.
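For example (the annotation keys below are hypothetical), free-form notes ride along under metadata.annotations and are never used for selection:

apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod
  annotations:
    example.com/build-commit: "9f3b2c1"              # unstructured metadata
    example.com/on-call-contact: "team-web@example.com"
spec:
  containers:
    - name: web
      image: nginx:1.25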

Kubernetes Patterns

In Kubernetes, developers and architects are given guidelines for building complete systems. It is up to the creators to utilize Kubernetes and the library of APIs to build successful apps that meet their needs and goals. Patterns capture reusable architecture for container-based apps, so creators can build on proven designs and ensure things work as they are meant to within the app.

Learning on your own through trial and error takes a lot of time and is exhausting. Because a pattern identifies a recurring class of problem, the developer or architect doesn't have to find and fix each issue individually: a whole class of similar issues can be solved at once, since the pattern captures what they have in common. There are a number of pattern categories that help build the system.

Foundational patterns: Covering the basics of Kubernetes, these patterns embody the core principles for building cloud-native, container-based apps.

Behavioral patterns: Sitting on top of the foundational patterns as overlays, these add a deeper layer of concepts for running various kinds of container apps and for the platform interactions they have.

Structural patterns: Organizing the containers within a given pod, these patterns form the backbone of the app.

Configuration patterns: Offering a fresh viewpoint on app configuration, these patterns help connect apps to their configuration data.

Advanced patterns: Deeper concepts that may call for unusual solutions, such as extending the platform itself by building operators.

Controllers

Certain controllers are used within Kubernetes to manage and provide workloads for the pods and containers. They speed up redundant tasks and improve efficiency: you declare the desired behavior once and set it in motion without a lot of ongoing effort or involvement.

Replication Controllers and ReplicaSets

With most workloads, pods are replicated so that more work can be performed simultaneously. Pods are created from pod templates and scaled by replication controllers or ReplicaSets, which act on instructions telling them to replicate the pods needed to support the app correctly and to destroy unnecessary pods to avoid congestion.

Replication controller: The ReplicationController (often abbreviated rc) uses a pod template and parameters to create identical copies of a pod and distribute the workload across them. Whenever pods fail, the controller creates replacements to make up the shortfall; extra pods are destroyed. The replication controller supervises these processes across multiple nodes to meet the demands placed on its pods.

ReplicaSets: Similar to replication controllers, ReplicaSets always ensure a specified number of pods is maintained, but they use more expressive, set-based selection criteria. Because a ReplicaSet does not itself handle rolling updates, it is typically driven by a higher-level Deployment, and the Deployment, rather than the ReplicaSet, tends to be what the user deals with unless custom updates are needed.

Users rarely deal with replication controllers or ReplicaSets directly, because they are low-level and lack the management capabilities preferred for user control. Nevertheless, this replication machinery operates at the base level, creating and destroying pods according to the instructions set in place by the developer or creator.
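For reference, a minimal ReplicaSet sketch (all names are illustrative) looks like this; the selector defines the set of pods the controller owns, and the template is used to stamp out replicas:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                  # always maintain three identical pods
  selector:
    matchLabels:
      app: web                 # the set of pods this ReplicaSet owns
  template:                    # pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25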

Kubernetes Deployment

Kubernetes has resource objects that tell the system how you want the workload of a cluster to look. One such object, the Kubernetes Deployment, lets you establish and describe the life cycle of an application: which images are used, how many pods should stay in existence and how updates occur.

Managing that life cycle by hand can be highly time consuming and tedious. These manual updates are exactly what a Kubernetes Deployment simplifies.

The Deployment is typically how the user or developer declares updates, building on pods and ReplicaSets (or replication controllers). As one of the most common workload types, Deployments start with ReplicaSets and add flexible management of life cycles and functionality.

Deployments can keep replicas serving traffic while rolling updates are implemented. This makes it easier to create and manage workloads directly, even after network failures, while rolling back failed changes or in the middle of an update. As Kubernetes objects, Deployments are managed directly by the user to adjust ReplicaSets, manage app version transitions and retain event history without interruption.

Rolling update strategy: While the app is in use, a Deployment can perform a phased replacement of pods that always leaves a minimum number active. By default, no more than 25% of pods may be unavailable, and no more than 25% extra may be created, at any point. The Deployment will not kill old pods until enough new ones are ready, and will not create new pods until enough old pods have been removed, so the required activity level is maintained. There is no downtime during a rolling update, but the app briefly runs two versions of the same container side by side, and conflicts between them can cause issues.

Recreate strategy: The other form of deployment removes all old pods before creating new ones. The existing containers are destroyed, and the new series of containers is started simultaneously. This causes a pause of downtime, but users won't run into two conflicting versions of the container running at once.

To use a Deployment, developers write a manifest, in JSON or YAML, that is applied to the cluster with kubectl. Once the manifest is written, future updates only require changing details within the template. The Deployment's rollout history is kept by the system in case you need to roll back the deployment for stability issues.
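As a sketch (the names and image are illustrative), here is a Deployment manifest with the rolling update settings made explicit; it would be applied with kubectl apply -f deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%      # never take more than a quarter of the pods down
      maxSurge: 25%            # never run more than a quarter extra pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # change this tag and re-apply to trigger a rollout

Changing the image tag and re-applying the manifest starts a rolling update; kubectl rollout undo deployment/web-deploy rolls it back using the stored history.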

StatefulSets

Specialized pod controllers called StatefulSets provide ordering guarantees and a unique identity for each pod. This number-based unique name remains intact even if the pod is moved to another node, which aids stability and supports transferring storage volumes and rescheduling workloads. Since volumes persist after a pod is deleted, accidental data loss can be prevented. StatefulSets give you more control over execution order and improve predictability.

Like a Deployment, a StatefulSet manages pods based on a container spec, but a StatefulSet keeps the identity of its pods across rescheduling and provides persistence through storage volumes. Individual pods may fail, but the identifiers put in place by the StatefulSet make it easier to replace the failed pods and match them to their volumes.
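A hedged sketch of a StatefulSet (the names are illustrative, and a matching headless service called db-headless is assumed to exist) shows how each pod is paired with its own volume:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless     # headless service giving pods stable network identities
  replicas: 3                  # pods get stable names: db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each pod receives its own persistent volume claim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi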

DaemonSets

This pod controller is another specialized tool that creates a copy of a pod on every node in the cluster, or on a specified subset. The convenient thing about a DaemonSet is that the pods it creates can also be deleted as a batch. This helps for running cluster storage, log collection and node monitoring across the fleet.

These sets bypass normal pod scheduling to keep essential services running throughout the fleet. This can sometimes be an issue, since the set won't consider pod priority in scheduling decisions, and the resulting inconsistent pod behavior can lead to confusing results for the user when there is a conflict.

When collecting logs, forwarding data, gathering metrics or starting services that increase node capabilities, a tool like the DaemonSet ensures the pods are deployed to every relevant Kubernetes node and deleted efficiently after their activity is complete.

The DaemonSet is usually written in YAML, and its pods have historically been placed by the DaemonSet controller rather than the default Kubernetes scheduler, which is how they can land even on nodes that normal scheduling would skip.
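A minimal log-collection DaemonSet might look like the sketch below (the name and image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: collector
          image: fluentd:v1.16   # one copy runs on every node
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log       # read the node's own log directory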

Jobs

Kubernetes offers a workload type that acts as a batch processor for specific tasks. A Job tracks completion and recognizes the end of its cycle; when the task is done, deleting the Job cleans up any pods created for it. A Job can run a single pod or multiple pods, splitting the same work into more than one section.

Job: Rather than running for the app's whole life cycle, a Kubernetes Job runs until completion. Typically the Job is a one-task controller, but there is another type of job that activates at specified times.

CronJob: A Kubernetes CronJob is used for tasks that need to run periodically. While a Job is a one-time command, a CronJob is scheduled as a repeating task. CronJobs are useful for running backups or sending automated emails, and for scheduling jobs at times when the app is less taxed and the cluster is likely to be idle.
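As a sketch (the schedule and the placeholder command are hypothetical), a nightly backup CronJob might be declared like this:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 3 * * *"          # standard cron syntax: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["sh", "-c", "echo running backup"]   # stand-in for a real backup step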

Garbage Collection

There are sometimes free-roaming pods or objects that have no owner. The role of the Kubernetes garbage collector is to search out and delete those rogue objects. Objects created by a DaemonSet, ReplicaSet or other pod-creating controller carry an owner reference field that identifies the owner and signifies the object has a place, so it isn't automatically selected for removal.

The garbage collector can also be commanded to remove an object along with any dependents. There are two ways to complete this cascading deletion (deleting dependents).

Foreground cascading deletion: In this case the owner object remains visible with a deletion timestamp set. The garbage collector deletes the dependents first, before deleting the owner object. An admission controller can be put in place to stop the deletion of the owner object or to remove blocks that are slowing the deletion process.

Background cascading deletion: Sometimes it is more efficient to delete the owner first, with the cascading dependents deleted as a follow-up in the background.

If deletion controls are not set correctly, pods can be orphaned. When using cascading deletes with Deployments, the propagationPolicy: Foreground option has to be used so that the ReplicaSets are destroyed along with their pods, not just the ReplicaSets alone.
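To illustrate the mechanism, this is roughly what an owner reference looks like on a pod created by a ReplicaSet (the names and UID are placeholders; controllers set this field automatically):

apiVersion: v1
kind: Pod
metadata:
  name: web-rs-abc12             # hypothetical pod stamped out by a ReplicaSet
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: web-rs               # the owning object
      uid: 11111111-2222-3333-4444-555555555555   # placeholder UID
      controller: true
      blockOwnerDeletion: true   # consulted during foreground cascading deletion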

Scheduling and Eviction

When an app runs into issues with resources, Kubernetes can help adjust the pods and nodes. Kubernetes schedules pods by matching them to nodes, where they are operated by the kubelet. When a node becomes starved for resources, one or more pods are selected to be evicted, purposely failed as a proactive effort toward keeping the node healthy.

Node affinity: This pod property attracts pods to particular sets of nodes, based on node labels.

Taints: A marking on a node that lets it repel an unwanted set of pods, ensuring pods don't land on an inappropriate node. Creators can add taints with a kubectl taint command.

Tolerations: Applied to pods, tolerations allow them to be scheduled onto nodes carrying matching taints. Creators specify the toleration rules within the PodSpec, as shown in the sketch below.
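As a hedged example, suppose a node has been tainted with kubectl taint nodes <node> dedicated=batch:NoSchedule (the key and value are hypothetical); a pod that tolerates that taint would declare:

apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  tolerations:
    - key: "dedicated"           # must match the taint's key
      operator: "Equal"
      value: "batch"             # and its value
      effect: "NoSchedule"       # and its effect
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo working"]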

It’s really important that your application knows how to adapt to the available resources. Without this adjustment, your app could be very sluggish or freeze up.

TTL Controller

Objects should have a limited lifetime before their resources are reclaimed. The TTL (time to live) controller establishes that life cycle for Jobs: it cleans the resource up a set number of seconds after the task completes. Whenever the TTL expires, the controller removes the resource in a cascading manner, deleting any dependent objects as well. All life cycle guarantees, like finalizers, are honored by the TTL controller.

TTL values might be based on labels, resource status, timelines or other qualifiers. This allows a Job to be cleaned up automatically without a later pass by the garbage collector. The TTL controller uses timestamps within Kubernetes to evaluate expiration, so time skew is a risk; all nodes need to run NTP to keep clocks in sync, and developers might decide to avoid non-zero TTLs for this very reason.
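A sketch of the mechanism (the 100-second value is arbitrary): a Job that the TTL controller will remove, along with its pods, 100 seconds after it finishes:

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  ttlSecondsAfterFinished: 100   # clean up this Job and its pods 100s after completion
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo task complete"]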