You’ve mastered the Swarm. Now it’s time to master the Helm
I’ve never looked at Kubernetes because Swarm gave me everything I needed for container orchestration. Besides being straightforward to use, it shone in a world where container orchestrators like Mesos and Kubernetes were difficult to set up.
But now in 2018 the story is quite different: All three major cloud providers (AWS, Google Cloud and Azure) are now betting on Kubernetes with managed Kubernetes as a Service. This is big, because it takes all the complexity of managing a cluster (which is the main pain point in K8S, in my opinion) and puts it in the Cloud Provider’s hands. Not to mention the fact that the new versions of Docker Enterprise and Docker for Mac & Windows will come bundled with Kubernetes out of the box.
The size of the community is also a big point in this story. Every time I had a problem with Docker Swarm it took me a while to find a solution. In contrast, even with more features and configuration possibilities, simple Google searches and asking questions on Slack helped me to solve all the problems I had so far with Kubernetes. Don’t get me wrong here: The Docker Swarm community is great, but not as great as the Kubernetes one.
This point isn’t Docker Swarm’s fault: The reality is that Kubernetes is under active development by companies like Google, Microsoft, Red Hat, IBM (and Docker, I suppose), as well as individual contributors. Taking a look at both GitHub repositories reveals that Kubernetes is in fact a lot more active.
But hey! This was supposed to be a guide, so let’s start by comparing how to achieve similar scenarios in both Swarm and K8S.
Disclaimer: This guide is not meant to provide production-ready scenarios. I kept it simple to illustrate the similarities between Swarm and K8S more easily.
Starting a cluster (1 Master & 1 Worker)
To keep things simple, let’s build a simple cluster with 1 Master and 1 Worker.
Starting a Cluster — Docker Swarm
Starting a cluster in Docker Swarm is as simple as it gets. With Docker installed on the machine, simply do:
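A minimal sketch (the advertise address is an example value):

```shell
# Initialize a new Swarm; this node becomes a manager
docker swarm init --advertise-addr 192.168.0.1
```

The command prints a `docker swarm join` command, including a token, to be run on worker nodes.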
Then, on another machine in the same network, paste the aforementioned command:
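The join command printed by `docker swarm init` looks roughly like this (the token and address are placeholders):

```shell
# Run on the worker node; token and address come from the init output
docker swarm join --token SWMTKN-1-<token> 192.168.0.1:2377
```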
Starting a Cluster — Kubernetes (using kubeadm)
I mentioned a few times that setting up a Kubernetes cluster is complicated. While that remains true, there is a tool (which is still in beta) called kubeadm that simplifies the process. In fact, setting up a K8S cluster with kubeadm is very similar to Docker Swarm. Installing kubeadm is easy, as it can be installed with most package managers (brew, apt, etc)
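With kubeadm installed, initializing the master is a single command. A sketch (the CIDR flag shown here matches Calico's default pod network range and is an assumption; other pod networks may need different flags):

```shell
# Initialize the Kubernetes control plane on this node
kubeadm init --pod-network-cidr=192.168.0.0/16
```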
The command takes a while to complete, because Kubernetes relies on the setup of external services like etcd to function. All of this is automated with kubeadm.
As with Swarm, to join another node one must simply run the outputted command in another node:
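The printed join command looks roughly like this (address, token and hash are placeholders):

```shell
# Run on the worker node; values come from the kubeadm init output
kubeadm join 10.0.0.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```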
So far, the cluster creation process is nearly identical in both solutions. But Kubernetes needs an extra step:
Installing a pod network
Docker Swarm comes bundled with a service network that provides networking capabilities inside the cluster. While this is convenient, Kubernetes offers more flexibility in this space, letting you install a network of your choice. The official implementations include Calico, Canal, Flannel, Kube-Router, Romana and Weave Net. The installation process is much the same for all of them, but I’ll stick with Calico for this tutorial.
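Installing Calico amounts to applying its manifest with kubectl (the URL below is illustrative; check the Calico documentation for the version matching your cluster):

```shell
# Install the Calico pod network into the cluster
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```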
For more information about using kubeadm, check here
Starting a Cluster — Kubernetes (using minikube)
If you want to experiment with Kubernetes on your local machine, there is a great tool called minikube that spins up a Kubernetes cluster inside a Virtual Machine. I won’t go into much detail here, but you can run minikube on your system by doing:
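With minikube installed, one command is enough:

```shell
# Start a local single-node Kubernetes cluster inside a VM
minikube start
```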
For more information about minikube, check here
Running a service
Now that we have a cluster running, let’s spin up some services! While there are some differences under the hood, doing so is very similar in both orchestrators.
Running a Service — Docker Swarm (inline)
To run a service with an inline command, simply do:
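Something like the following starts two replicas of nginx and publishes port 80 on every node (the service name and image are example values):

```shell
# Create a replicated service and expose port 80 on the routing mesh
docker service create --name nginx --publish 80:80 --replicas 2 nginx
```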
Running a Service — Kubernetes (inline)
As you may imagine, doing the same thing in Kubernetes is not that different:
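A rough equivalent with the kubectl of that era (`kubectl run` created a Deployment back then; in newer versions it creates a bare Pod instead):

```shell
# Create a deployment with two replicas, then expose it on a NodePort
kubectl run nginx --image nginx --replicas=2
kubectl expose deployment nginx --port 80 --type NodePort
```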
As seen above, we needed two commands to replicate Swarm’s behavior. The main difference between both orchestrators is that in the case of Swarm, we explicitly exposed the port 80 on the host. In Kubernetes, the port is randomly selected from a pre-configured range of ports. We can select the target port with a flag, but it needs to be within that range. We can query the selected port using:
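One way to read the assigned port back out, assuming the service is named `nginx` as above:

```shell
# Print the NodePort chosen for the service's first port
kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'
```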
Running a Service — Docker Swarm (YAML)
You can define services (as well as volumes, networks and configs) in a Stack File. A Stack File is a YAML file that uses the same notation as Docker-Compose, with added functionality. Let’s spin up our nginx service using this technique:
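A minimal Stack File for the nginx service might look like this (the file name and service name are example values):

```yaml
# nginx-stack.yml — deploy with: docker stack deploy -c nginx-stack.yml nginx
version: "3.4"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 2
```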
As we didn’t specify any network, Docker Swarm created one for us. Keep in mind that this means the nginx service cannot be accessed via service name from another stack’s service. If we want that, we can either define all the services that need to communicate with each other in the same YAML (along with a network), or import a pre-existing overlay network in both stacks.
Running a Service — Kubernetes (YAML)
Kubernetes allows you to create resources via a Kubernetes Manifest File. These files can be written in either YAML or JSON. YAML is the recommended choice, because it’s pretty much the standard.
Because it’s built around a more modular architecture, Kubernetes requires two resources to achieve the same functionality that Swarm has: a Deployment and a Service.
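A sketch of such a manifest, with both resources in one file (names and labels are example values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Applied with `kubectl apply -f <file>`.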
A Deployment pretty much defines the characteristics of a service. It is where containers, volumes, secrets and configurations are defined. Deployments also define the number of replicas, and the replication and placement strategies. You can see them as the equivalent of a stack definition in swarm, minus load balancing.
In fact, Deployments are a higher-level abstraction over lower-level Kubernetes resources such as Pods and Replica Sets. Everything in the template part of the deployment definition defines a pod, which is the smallest unit of scheduling that Kubernetes provides. A pod is not the same as a container: it’s a set of resources that are meant to be scheduled together, for example a container and a volume, or two containers. In most cases a pod will contain only one container, but it’s important to understand the difference.
The second part of the file defines a Service resource, which can be seen as a way to refer to a set of pods on the network and load balance between them. The type NodePort tells Kubernetes to assign an externally-accessible port on every node of the cluster (the same on all nodes). This is what Swarm did as well. You tell Services what to load balance between by using selectors, which is why labeling is so important in Kubernetes.
In this case, Kubernetes is much more powerful: For example, you can define a service of type LoadBalancer, which will spawn a Load Balancer in your cloud provider (prior configuration), such as an ELB in AWS, which will point to your service. The default service type is ClusterIP, which defines a service that can be accessed anywhere in the cluster on a given port, but not externally. Using ClusterIP is equal to defining a service without an external mapping in Swarm.
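A sketch of a LoadBalancer-type Service (the name is an example value, and cloud-provider configuration is assumed to be in place):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
```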
Creating volumes
Volumes are needed to maintain state and provide configuration. Both orchestrators provide simple ways to define them, but Kubernetes takes the lead with far more capabilities.
Creating volumes — Docker Swarm
Let’s add a volume to our nginx service:
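For instance, a named volume mounted into the container (the volume name and mount path are example values):

```yaml
version: "3.4"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - nginx-data:/usr/share/nginx/html
volumes:
  nginx-data:
```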
This is the simplest case, and obviously this kind of volume doesn’t provide much benefit here, but it’s enough for a demonstration.
Creating volumes — Kubernetes
Doing the same in K8S is pretty easy:
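A pod template fragment using an emptyDir volume might look like this (the volume name and mount path are example values):

```yaml
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: scratch
          mountPath: /usr/share/nginx/html
  volumes:
    - name: scratch
      emptyDir: {}
```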
The emptyDir volume type is the simplest type of volume Kubernetes provides. It maps a folder inside the container to a folder on the node, which disappears when the pod is removed.
Kubernetes comes with 26 types of volumes, so I think it pretty much covers any use case. For example, you can define a volume backed by an EBS Volume in AWS.
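An EBS-backed volume definition looks roughly like this (the volume ID is a placeholder; the volume must exist in the same AWS availability zone as the node):

```yaml
volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
```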
That’s it
There are certainly more resources than services and volumes, but I’ll leave them out of this guide for now. One of my favorite resources in Kubernetes is the ConfigMap, which is similar to Docker Configs but provides better functionality. I’ll make an effort to write another guide comparing those two, but for now, let’s call it a day.
Conclusion
Using Kubernetes the same way as Swarm is easier than ever. It will take a while for us to make the decision to migrate all our infrastructure to Kubernetes. At the time of this writing, Swarm gives us all we need, but it’s nice to know that the entry barrier to K8S is lowering over time.
I’m a Software Engineer based in Buenos Aires, Argentina. Currently working as a Platform Engineer at Etermax, the leading Mobile Gaming company in Latin America.
The beta of Docker for Mac with Kubernetes is now available, so I took it for a spin.
The documentation is here:
https://docs.docker.com/docker-for-mac/#kubernetes
Installation
You need to install the Edge version of Docker for Mac.
File path:
https://download.docker.com/mac/edge/Docker.dmg
After installing, About Docker looks like this.
Signing in
You need to sign in to Docker, so sign in from the menu.
Starting Kubernetes
In Preferences, select Kubernetes, check Enable Kubernetes, and click Apply.
The kubectl command
Once Kubernetes is running, the kubectl command becomes available.
- Before starting
- After starting
If you have already installed kubectl via Homebrew or similar and /usr/local/bin/kubectl exists, it seems you need to remove it first.
Changing the context
Check the kubectl contexts.
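This can be done with the standard config subcommand:

```shell
# List all contexts known to kubectl; the current one is marked with *
kubectl config get-contexts
```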
Change the context to docker-for-desktop.
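The switch itself is one command:

```shell
# Point kubectl at the Docker for Mac cluster
kubectl config use-context docker-for-desktop
```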
Verification
With that, kubectl is ready to use.
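A quick way to confirm (output will vary with your setup):

```shell
# The local single-node cluster should report one node
kubectl get nodes
```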
System containers
By default, the Kubernetes system containers are hidden.
If you check Show system containers in Preferences, they become visible.