# Kubernetes on CoreOS with Generic Install Scripts

This repo is not in alignment with current versions of Kubernetes, and will not be active in the future. The CoreOS Kubernetes documentation has been moved to the tectonic-docs repo, where it will be published and updated.

For tested, maintained, and production-ready Kubernetes instructions, see our Tectonic Installer documentation. The Tectonic Installer provides a Terraform-based Kubernetes installation. It is open source, uses upstream Kubernetes and can be easily customized.

This guide will set up Kubernetes on CoreOS in a similar way to the other tools in this repo. The main goal of these scripts is to be generic and work across many different cloud providers and platforms. The notable difference is that these scripts are intended to be platform-agnostic and therefore don't automatically set up the TLS assets on each host beforehand.

While we provide these scripts and test them through the multi-node Vagrant setup, we recommend using a platform-specific install method if one is available. If you are installing to bare metal, you might find our baremetal repo more appropriate.

## Generate TLS Assets

Review the OpenSSL-based TLS instructions for generating your TLS assets for each of the Kubernetes nodes.

Place the files in the following locations:

| Controller Files    | Location                                  |
|---------------------|-------------------------------------------|
| API Certificate     | `/etc/kubernetes/ssl/apiserver.pem`       |
| API Private Key     | `/etc/kubernetes/ssl/apiserver-key.pem`   |
| CA Certificate      | `/etc/kubernetes/ssl/ca.pem`              |

| Worker Files        | Location                                  |
|---------------------|-------------------------------------------|
| Worker Certificate  | `/etc/kubernetes/ssl/worker.pem`          |
| Worker Private Key  | `/etc/kubernetes/ssl/worker-key.pem`      |
| CA Certificate      | `/etc/kubernetes/ssl/ca.pem`              |
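As a minimal sketch (assuming the file names above and root access on the node), the assets can be copied into place and the private keys locked down like this:

```sh
# On a controller node; workers take worker.pem / worker-key.pem instead.
sudo mkdir -p /etc/kubernetes/ssl
sudo cp apiserver.pem apiserver-key.pem ca.pem /etc/kubernetes/ssl/

# Restrict the private keys to root only.
sudo chown root:root /etc/kubernetes/ssl/*-key.pem
sudo chmod 600 /etc/kubernetes/ssl/*-key.pem
```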

## Network Requirements

This cluster must adhere to the Kubernetes networking model. By default, nodes created by the generic scripts listen on, and identify themselves by, the IP set in the `ADVERTISE_IP` environment variable. If this isn't set, the scripts will source it from `/etc/environment`, specifically using the value of `COREOS_PUBLIC_IPV4`.
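The fallback behaves roughly like the sketch below (illustrative only; consult the scripts themselves for the exact logic):

```sh
# Fall back to COREOS_PUBLIC_IPV4 from /etc/environment when ADVERTISE_IP is unset.
if [ -z "${ADVERTISE_IP}" ] && [ -f /etc/environment ]; then
  source /etc/environment
  ADVERTISE_IP="${COREOS_PUBLIC_IPV4}"
fi
echo "Advertising on ${ADVERTISE_IP}"
```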

### Controller Requirements

Each controller node must set its `ADVERTISE_IP` to an IP that accepts connections on port 443 from the workers. If using a load balancer, it must accept connections on port 443 and forward them to the pool of controllers.

For the complete list of environment variables, see the top of the `controller-install.sh` script.
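A quick way to confirm that a controller (or the load balancer in front of the pool) accepts connections on port 443 is to probe it from a worker. The address below is hypothetical:

```sh
# Even an HTTP 401/403 response confirms the port is reachable and TLS is being served.
curl -k -o /dev/null -w '%{http_code}\n' https://10.0.0.10:443/version
```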

### Worker Requirements

In addition to identifying itself with `ADVERTISE_IP`, each worker must be configured with the `CONTROLLER_ENDPOINT` variable, which tells it where to contact the Kubernetes API. For a single controller, this is the `ADVERTISE_IP` mentioned above. For multiple controllers, it is the IP of the load balancer.

For the complete list of environment variables, see the top of the `worker-install.sh` script.
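For example (hypothetical addresses), the variable differs only in what it points at:

```sh
# Single controller: use that controller's ADVERTISE_IP.
export CONTROLLER_ENDPOINT=https://10.0.0.10

# Multiple controllers: use the load balancer fronting them instead.
export CONTROLLER_ENDPOINT=https://k8s-lb.example.com
```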

## Optional Configuration

You may modify the kubelet's unit file to enable additional features.
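The exact flags depend on the feature you want. On a systemd host, a generic way to apply such a change (not specific to these scripts) is a drop-in override followed by a restart:

```sh
# Create a drop-in override for the kubelet unit, then apply it.
sudo systemctl edit kubelet        # opens an editor for an override snippet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```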

## Boot etcd Cluster

It is highly recommended that etcd be run as a dedicated cluster, separate from the Kubernetes components.

Use the official etcd clustering guide to decide how best to deploy etcd into your environment.
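Once the etcd cluster is up, it is worth confirming that it is healthy before booting any Kubernetes nodes. The endpoint below is hypothetical, and the command assumes the etcd v2 `etcdctl`:

```sh
# All members should report as healthy before continuing.
etcdctl --endpoints=http://10.0.0.50:2379 cluster-health
```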

## Boot Controllers

Follow these instructions for each controller you wish to boot:

  1. Boot CoreOS.
  2. Download and copy `controller-install.sh` onto disk.
  3. Copy the TLS assets onto disk.
  4. Execute `controller-install.sh` with the required environment variables set (see the sketch after this list).
  5. Wait for the script to complete. About 300 MB of container images will be downloaded before the cluster is running.
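A combined sketch of steps 2–4 is shown below. The IP addresses are hypothetical and `ETCD_ENDPOINTS` is an assumed variable name, so check the top of `controller-install.sh` for the authoritative list:

```sh
# Run on the controller node after copying the TLS assets into /etc/kubernetes/ssl.
export ADVERTISE_IP=10.0.0.10                 # this controller's reachable IP (hypothetical)
export ETCD_ENDPOINTS=http://10.0.0.50:2379   # assumed variable name; your dedicated etcd cluster
sudo -E bash controller-install.sh            # -E preserves the exported variables under sudo
```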

## Boot Workers

Follow these instructions for each worker you wish to boot:

  1. Boot CoreOS.
  2. Download and copy `worker-install.sh` onto disk.
  3. Copy the TLS assets onto disk.
  4. Execute `worker-install.sh` with the required environment variables set (see the sketch after this list).
  5. Wait for the script to complete. About 300 MB of container images will be downloaded before the cluster is running.
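A combined sketch of steps 2–4 for a worker (hypothetical addresses; verify the variable names at the top of `worker-install.sh`):

```sh
# Run on the worker node after copying the TLS assets into /etc/kubernetes/ssl.
export ADVERTISE_IP=10.0.0.21                 # this worker's reachable IP (hypothetical)
export CONTROLLER_ENDPOINT=https://10.0.0.10  # controller's ADVERTISE_IP or the load balancer
sudo -E bash worker-install.sh
```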

## Monitor Progress

The Kubernetes cluster will be up and running after the scripts complete and the containers finish downloading. To take a closer look, SSH to one of the machines and monitor the container downloads:

```sh
$ docker ps
```

You can also watch the kubelet's logs with `journalctl`:

```sh
$ journalctl -u kubelet -f
```
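Once the control plane containers are running on a controller, a simple check is to query the API server locally. The insecure localhost port used here is an assumption, so adjust it if your install differs:

```sh
# Run on a controller node; a JSON version response means the API server is up.
curl http://127.0.0.1:8080/version
```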

Did your containers start downloading? Next, set up the `kubectl` CLI for use with your cluster.

Yes, ready to configure `kubectl`