Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (sometimes abbreviated to just OKE) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.
But as the number of deployable components grows and datacenters get ever larger, it becomes increasingly difficult to configure, manage, and keep the whole system running smoothly. This is where Kubernetes comes in.
Container Engine for Kubernetes uses Kubernetes - the open-source system for automating deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes groups the containers that make up an application into logical units (called pods) for easy management and discovery. OKE uses versions of Kubernetes certified as conformant by the Cloud Native Computing Foundation (CNCF).
In this article, we will cover the basics of Kubernetes and Docker (which is the container runtime used in OKE).
OKE Capabilities
You can access Container Engine for Kubernetes to define and create Kubernetes clusters using the Console and the REST API. You can access the clusters you create using:
- Kubernetes command line (kubectl)
- Kubernetes Dashboard
- Kubernetes API
OKE is integrated with Oracle Cloud Infrastructure Identity and Access Management (IAM), which provides easy authentication with native Oracle Cloud Infrastructure identity functionality.
For an introductory tutorial, see Creating a Cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes and Deploying a Sample App.
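As a minimal sketch of the access methods listed above, the commands below show how a kubeconfig for an OKE cluster is typically obtained with the OCI CLI and then used with kubectl. The cluster OCID is a placeholder, and the commands that require OCI credentials or a live cluster are shown commented out:

```shell
# Sketch: enabling kubectl access to an OKE cluster.
# Assumes the OCI CLI is installed and configured; the cluster OCID below
# is a placeholder, so credential-dependent commands are commented out.
mkdir -p "$HOME/.kube"

# oci ce cluster create-kubeconfig \
#   --cluster-id ocid1.cluster.oc1..exampleuniqueID \
#   --file "$HOME/.kube/config" \
#   --token-version 2.0.0

export KUBECONFIG="$HOME/.kube/config"

# With the kubeconfig in place, standard kubectl commands work:
# kubectl get nodes
# kubectl cluster-info
echo "kubeconfig will be written to $KUBECONFIG"
```

The same kubeconfig also serves the Kubernetes Dashboard and direct Kubernetes API access, since all three go through the cluster's API server endpoint.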
Microservices-Based Application Architectures
Microservices (typically implemented as containers) are loosely coupled services that are combined within a software development architecture to create a structured application. Because microservices are decoupled from each other, they can be developed, deployed, updated, and scaled individually. This enables you to change components quickly and as often as necessary to keep up with today’s rapidly changing business requirements.
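To make the "deployed and scaled individually" point concrete, here is a sketch of two microservices expressed as separate Kubernetes Deployments. The service names, images, and replica counts are illustrative placeholders, not from any particular application:

```shell
# Sketch: two independently deployable microservices as separate Deployments.
# Names and image references are hypothetical.
cat > microservices.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 4        # scaled independently of the orders service
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        image: example.com/payments:2.3
EOF
# Each service can then be rolled out or scaled on its own, e.g.:
# kubectl apply -f microservices.yaml
# kubectl scale deployment payments --replicas=8
```

Because each Deployment has its own image tag and replica count, one team can ship a new version of `payments` without touching `orders` at all.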
Here are some of the key features that are built into Kubernetes:[3]
- Self-healing
- Kubernetes controller-based orchestration ensures that containers are restarted when they fail, and rescheduled when the nodes they are running on fail.
- User-defined health checks allow users to make decisions about how and when to recover from failing services, and how to direct traffic when they do.
- Service discovery
- Kubernetes is designed from the ground up to make service discovery simple without needing to make modifications to your applications.
- Each instance of your application gets its own IP address, and standard discovery mechanisms such as DNS and load balancing let your services communicate.
- Scaling
- Kubernetes makes horizontal scaling possible at the push of a button, and also provides autoscaling facilities.
- Deployment orchestration
- Kubernetes not only helps you to manage running applications, but has tools to roll out changes to your application and its configuration.
- Its flexibility allows you to build complex deployment patterns for yourself or to use one of a number of add-on tools.
- Storage management
- Kubernetes has built-in support for managing the underlying storage technology on cloud providers, such as OCI Block Volume Service, as well as other standard networked storage tools, such as NFS.
- Cluster optimization
- The Kubernetes scheduler automatically assigns your workloads to machines based on their requirements, allowing for better utilization of resources.
- Batch workloads
- As well as long-running workloads, Kubernetes can also manage batch jobs, such as CI, batch processing, and cron jobs.
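Several of the features above (self-healing via controllers, user-defined health checks, and push-button scaling) come together in an ordinary Deployment manifest. The sketch below is illustrative; the name and image are placeholders:

```shell
# Sketch: a Deployment that exercises self-healing, health checks, and scaling.
# The nginx image and all names are illustrative.
cat > web-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the Deployment controller keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        livenessProbe:         # user-defined health check; failing pods restart
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF
# kubectl apply -f web-deployment.yaml
# Horizontal scaling "at the push of a button", and autoscaling:
# kubectl scale deployment web --replicas=6
# kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80
```

If a pod crashes or its liveness probe fails, the controller replaces it; if a node dies, its pods are rescheduled elsewhere, with no change to this manifest.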
Figure 1. The components that make up a Kubernetes cluster
Architecture of a Kubernetes Cluster[4]
Control Plane
The Control Plane is what controls the cluster and makes it function. It consists of multiple components that can run on a single master node or be split across multiple nodes and replicated to ensure high availability. These components are:
- Kubernetes API Server
- The central component that you and the other Control Plane components communicate with
- Most operations can be performed through the kubectl command-line interface or other command-line tools, such as kubeadm, which in turn use the API. However, you can also access the API directly using REST calls.
- Scheduler
- Schedules your apps (assigns a worker node to each deployable component of your application)
- The Kubernetes scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity.
- Controller Manager
- Performs cluster-level functions, such as replicating components, keeping track of worker nodes, handling node failures, and so on
- In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.
- etcd
- A reliable distributed data store that persistently stores the cluster configuration.
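The Scheduler's resource-aware placement can be seen in an ordinary pod spec: the `requests` values are what it matches against each node's unallocated capacity, and a `nodeSelector` adds a topology constraint. The pod name, image, and values below are illustrative:

```shell
# Sketch: a pod spec showing the inputs the Scheduler uses for placement.
# All names and resource figures are hypothetical.
cat > scheduled-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:            # the Scheduler fits these against free node capacity
        cpu: "500m"
        memory: "256Mi"
      limits:              # the Kubelet enforces these at runtime
        cpu: "1"
        memory: "512Mi"
  nodeSelector:            # optional topology constraint for placement
    kubernetes.io/arch: amd64
EOF
# kubectl apply -f scheduled-pod.yaml
```

A pod whose requests cannot be satisfied by any node stays Pending until capacity appears, which is exactly the "cluster optimization" behavior described earlier.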
Worker Nodes
The worker nodes are the machines that run your containerized applications. The task of running, monitoring, and providing services to your applications is done by the following components:
- Docker (or rkt)
- Runs your containers
- Kubelet
- Talks to the API server and manages containers on its node
- Kubernetes Service Proxy (kube-proxy)
- Load-balances network traffic between application components
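kube-proxy's load-balancing role is easiest to see through a Service object: the Service's label selector picks out a set of pods, and kube-proxy spreads connections to the Service's address across them. The names and ports below are illustrative:

```shell
# Sketch: a ClusterIP Service; kube-proxy balances traffic across the
# pods matching the selector. Names and ports are hypothetical.
cat > web-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web          # kube-proxy routes to pods carrying this label
  ports:
  - port: 80          # the Service's stable port
    targetPort: 80    # the container port on each backing pod
EOF
# kubectl apply -f web-service.yaml
```

This is also the standard service-discovery mechanism mentioned earlier: other pods reach the backends through the stable DNS name `web` rather than individual pod IPs.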
Figure 2. A basic overview of the Kubernetes architecture and an application running on top of it
Running an Application in Kubernetes
When the API server processes your app’s description, the Scheduler schedules the specified groups of containers onto the available worker nodes based on computational resources required by each group and the unallocated resources on each node at that moment. The Kubelet on those nodes then instructs the Container Runtime (e.g. Docker) to pull the required container images from an image registry (e.g., Oracle Cloud Infrastructure Registry) and run the containers.
Pulling Images from Registry during Kubernetes Deployment
During the deployment of an application to a Kubernetes cluster, you'll typically want one or more images to be pulled from a Docker registry. In the application's manifest file you specify the images to pull, the registry to pull them from, and the credentials to use when pulling the images. The manifest file is commonly also referred to as a pod spec, or as a deployment.yaml file (although other filenames are allowed).
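A minimal sketch of such a manifest is shown below. The registry secret is created once with `kubectl create secret docker-registry` and then referenced via `imagePullSecrets`; the secret name, region key, tenancy namespace, and image path are all placeholders:

```shell
# Sketch: pulling a private image during deployment.
# The secret is created out-of-band (requires credentials, so commented out):
# kubectl create secret docker-registry ocir-secret \
#   --docker-server=iad.ocir.io \
#   --docker-username='<tenancy-namespace>/<username>' \
#   --docker-password='<auth-token>'
cat > app-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: iad.ocir.io/<tenancy-namespace>/app:1.0   # image + registry
      imagePullSecrets:
      - name: ocir-secret                                # pull credentials
EOF
# kubectl apply -f app-deployment.yaml
```

The three pieces called out in the text map directly onto the manifest: the image to pull (`image:`), the registry to pull from (the `iad.ocir.io` prefix), and the credentials to use (`imagePullSecrets`).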
Oracle Cloud Infrastructure Registry is an Oracle-managed registry that enables you to store, share, and manage development artifacts like Docker images.
You can use Oracle Cloud Infrastructure Registry as a:
- Private Docker registry
- For internal use, pushing and pulling Docker images to and from the Registry using the Docker V2 API and the standard Docker command line interface (CLI).
- Public Docker registry
- Enabling any user with internet access and knowledge of the appropriate URL to pull images from public repositories in Oracle Cloud Infrastructure Registry.
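For the private-registry case, images are addressed as `<region-key>.ocir.io/<tenancy-namespace>/<repo>:<tag>`. The sketch below composes such a path; the region key, tenancy namespace, and repo name are placeholders, and the docker commands are commented out because they need a Docker daemon and OCIR credentials:

```shell
# Sketch: tagging and pushing a local image to Oracle Cloud Infrastructure
# Registry. All identifiers below are hypothetical.
REGION_KEY="iad"                      # e.g. iad = Ashburn region
TENANCY_NAMESPACE="mytenancy"         # placeholder tenancy namespace
REPO="myapp"
OCIR_IMAGE="${REGION_KEY}.ocir.io/${TENANCY_NAMESPACE}/${REPO}:1.0"

# docker login "${REGION_KEY}.ocir.io" -u "${TENANCY_NAMESPACE}/jdoe@example.com"
# docker tag myapp:1.0 "$OCIR_IMAGE"
# docker push "$OCIR_IMAGE"
echo "$OCIR_IMAGE" | tee ocir-image.txt
```

The same fully qualified path is what a Kubernetes manifest's `image:` field would reference when pulling the image back out of the registry during deployment.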
References
1. Overview of Container Engine for Kubernetes (OCI)
2. Microservices on Kubernetes
3. Kubernetes on AWS
4. Kubernetes in Action
5. Kubernetes Reference
6. Downloading a kubeconfig File to Enable Cluster Access (OCI)
7. Overview of Registry (OCI)
8. Example: Setting Up an Ingress Controller on a Cluster (OCI)
9. NGINX Ingress Controller for Kubernetes
10. Docker Registry V2 (SlideShare)
11. Oracle Cloud Infrastructure (redthunder.blog)
12. More articles on OCI (XML and More)
13. WebLogic Server Kubernetes Operator