Kubernetes Up and Running — Chapter 1

Sathish
5 min read · Dec 1, 2020

Recently, I started learning Kubernetes (K8s) by following the book Kubernetes Up and Running. I felt it would take 2–3 hours to go through each chapter in detail, so I have captured the important points chapter-wise. Today we will discuss Chapter 1.

Introduction

Kubernetes is an open source orchestrator for deploying containerized applications. It was originally developed by Google, inspired by a decade of experience deploying scalable, reliable systems in containers via application-oriented APIs.

It has become the standard API for building cloud-native applications, a proven infrastructure for distributed systems that is suitable for developers of all scales.

There are many reasons why people use Kubernetes, but all of them can be traced back to one of these benefits:

· Velocity

· Scaling (of both software and team)

· Abstracting your infrastructure

· Efficiency

1. Velocity

Velocity is measured in terms of the number of things you can ship while maintaining a highly available service. Kubernetes provides velocity using the following core concepts.

· Immutability

· Declarative configuration

· Online self-healing systems

1.1 The Value of Immutability

Kubernetes encourages developers to build distributed systems that adhere to the principles of immutable infrastructure.

With a traditional mutable system, the current state of the infrastructure is not represented as a single artifact, but rather as an accumulation of incremental updates and changes over time. Furthermore, in any system run by a large team, these changes are performed by many different people, which makes it very hard to roll back when an error occurs and hard to keep a record of what was changed.

In contrast, in an immutable system, an update means building an entirely new image and replacing the old image with the new one in a single operation. There are no incremental changes, which makes rollback easy.
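
As a hedged illustration (the image name and tags below are invented), in a Kubernetes pod spec the running software is identified by an immutable image reference; an update swaps that reference for a new artifact rather than patching the old one in place:

```yaml
# Fragment of a hypothetical pod spec: the container image is a versioned,
# immutable artifact. Updating means building example.com/frontend:v2 and
# replacing the reference below; the running container is never patched in place.
containers:
  - name: frontend
    image: example.com/frontend:v1   # swap to :v2 to update, back to :v1 to roll back
```

Because every version remains available as its own artifact, rolling back is just pointing the spec at the previous image again (for Deployments, kubectl rollout undo automates this).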

1.2 Declarative Configuration

Everything in Kubernetes is a declarative configuration that represents the desired state of the system. Kubernetes ensures the actual state matches this desired state.

Much like mutable versus immutable infrastructure, declarative configuration is an alternative to imperative configuration: imperative commands define actions to take, while declarative configurations define state.

To understand these two approaches, consider the task of producing three replicas of a piece of software. With an imperative approach, the configuration would say “run A, run B, and run C.” The corresponding declarative configuration would be “replicas equals three.”
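
As a minimal sketch of the declarative version (the name, labels, and image below are placeholders), a Deployment manifest states the desired replica count and leaves the "how" to Kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # hypothetical application name
spec:
  replicas: 3                    # the declared desired state: "replicas equals three"
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example.com/demo-app:v1   # placeholder image
```

Applying this (for example with kubectl apply -f deployment.yaml) declares the outcome; Kubernetes derives the imperative steps ("run A, run B, run C") needed to reach it.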

1.3 Self-Healing Systems

Kubernetes is an online, self-healing system. When it receives a desired state configuration, it does not simply take a one-time set of actions to bring the current state in line with it; it continuously monitors the system to ensure that the current state keeps matching the desired state.
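
Self-healing operates at several levels: a Deployment continuously replaces pods that disappear so the count stays at the declared number, and within a pod a liveness probe lets Kubernetes restart a container that has stopped responding. A small sketch, assuming a hypothetical /healthz endpoint on port 8080:

```yaml
# Fragment of a pod spec with a liveness probe: if GET /healthz stops
# succeeding, Kubernetes restarts the container automatically.
containers:
  - name: demo-app
    image: example.com/demo-app:v1
    livenessProbe:
      httpGet:
        path: /healthz           # assumed health endpoint
        port: 8080               # assumed container port
      initialDelaySeconds: 5     # wait before the first check
      periodSeconds: 10          # check every 10 seconds
```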

2. Scaling Your Service and Your Teams

Kubernetes achieves scalability of both the software and the teams that develop it by favouring decoupled architectures.

2.1 Decoupling

In a decoupled architecture, each component is separated from other components by defined APIs and service load balancers. Decoupling components via load balancers makes it easy to scale the programs that make up your service, because increasing the size (and therefore the capacity) of the program can be done without adjusting or reconfiguring any of the other layers of your service.

Decoupling servers via APIs makes it easier to scale the development teams because each team can focus on a single, smaller microservice with a comprehensible surface area.
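
A sketch of the load-balancer side of this decoupling (the name, labels, and ports are illustrative): a Service gives a microservice a stable name and spreads traffic across however many pods currently back it, so consumers never need reconfiguring when it scales:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app                 # stable DNS name other components talk to
spec:
  selector:
    app: demo-app                # traffic is load-balanced across all matching pods
  ports:
    - port: 80                   # port clients connect to
      targetPort: 8080           # port the pods actually listen on
```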

2.2 Easy Scaling for Applications and Clusters

Kubernetes makes scaling trivial to implement due to its immutable and declarative config. Scaling your service upward is simply a matter of changing a number in a configuration file, asserting this new declarative state to Kubernetes, and letting it take care of the rest. Alternatively, you can set up autoscaling and let Kubernetes take care of it for you.
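
Concretely, scaling up means editing replicas: 3 to, say, replicas: 10 in the Deployment's manifest and re-applying it. For the autoscaling alternative, a HorizontalPodAutoscaler can own that number instead; the names, bounds, and CPU threshold below are only example values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app
spec:
  scaleTargetRef:                # the Deployment whose replica count is managed
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add replicas when average CPU exceeds 80%
```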

2.3 Scaling Development Teams with Microservices

A development team can be decoupled into small, service-oriented teams, each responsible for building a single microservice that is consumed by other small teams. The aggregation of all of these services ultimately provides the implementation of the overall product’s surface area.

Kubernetes provides numerous abstractions and APIs to build these decoupled microservice architectures:

• Pods, or groups of containers, can group together container images developed by different teams into a single deployable unit.

• Kubernetes services provide load balancing, naming, and discovery to isolate one microservice from another.

• Namespaces provide isolation and access control, so that each microservice can control the degree to which other services interact with it.

• Ingress objects provide an easy-to-use frontend that can combine multiple microservices into a single externalized API surface area (sketched below).
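
As a sketch of that last point (the hostname and service names are invented), a single Ingress can present two independently owned microservices as paths of one external API:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api               # hypothetical externalized API surface
spec:
  rules:
    - host: api.example.com      # assumed public hostname
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders     # microservice owned by one team
                port:
                  number: 80
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users     # microservice owned by another team
                port:
                  number: 80
```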

2.4 Separation of Concerns for Consistency and Scaling

The decoupling and separation of concerns produced by the Kubernetes stack lead to significantly greater consistency for the lower levels of your infrastructure.

This makes it possible to scale infrastructure operations: a single small, focused team can manage many machines.

The container orchestration API becomes a crisp contract that separates the responsibilities of the application operator from those of the cluster orchestration operator. The application developer relies on the service-level agreement (SLA) delivered by the container orchestration API without worrying about the details of how that SLA is achieved. Likewise, the reliability engineer responsible for the container orchestration API focuses on delivering its SLA without worrying about the applications that are running on top of it.

3. Abstracting Your Infrastructure

Abstracting your infrastructure with application-oriented container APIs like Kubernetes has two concrete benefits:

· Separates developers from specific machines

· Makes the machine-oriented IT role easier, since machines can simply be added in aggregate to scale the cluster. In the cloud, it also enables a high degree of portability, since developers consume a higher-level API that is implemented in terms of the specific cloud infrastructure’s APIs.

Kubernetes has a number of plug-ins that can abstract you from a particular cloud. For example, Kubernetes services know how to create load balancers on all major public clouds. Likewise, Kubernetes PersistentVolumes and PersistentVolumeClaims can be used to abstract applications away from specific storage implementations.
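
For example, a PersistentVolumeClaim describes the storage an application needs without naming any cloud-specific disk product; which storage class actually satisfies it is decided per cluster. A minimal sketch, with an assumed size and class name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce              # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi              # what the application needs, not how the cloud provides it
  storageClassName: standard     # cluster-specific; omit to use the cluster's default class
```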

Kubernetes’s application-oriented abstractions ensure that the effort put into building, deploying, and managing your application is truly portable across a wide variety of environments.

4. Efficiency

Efficiency can be measured by the ratio of the useful work performed by a machine or process to the total amount of energy spent doing so.

Kubernetes provides tools that automate the distribution of applications across a cluster of machines, ensuring higher levels of server utilization.

A further increase in efficiency comes from the fact that a developer’s test environment can be quickly and cheaply created as a set of containers running in a personal view of a shared Kubernetes cluster (using a feature called namespaces).
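
A minimal sketch of such a personal environment, assuming a made-up namespace name: creating a namespace and applying the same manifests into it keeps a developer's copies isolated from everyone else's on the shared cluster:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-sandbox              # hypothetical per-developer namespace
```

The application's manifests can then be deployed into it with kubectl apply -n dev-sandbox -f ..., and the whole environment can be torn down cheaply by deleting the namespace.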


Sathish

Software Architect ★ Developer ★ Troubleshooter