Docker swarm mode is great. I’ve been using it in production since its inception in Docker 1.12. The reason is simple: its usage was very straightforward and required no additional software, while other orchestration solutions like Mesos and Kubernetes seemed difficult to set up and their benefits weren’t clear to me.
The truth is, I never looked at Kubernetes because Swarm gave me everything I needed in terms of container orchestration. But now, in 2018, the story is different: looking at the offerings of the three major cloud providers (AWS, Google Cloud and Azure), one notices that Kubernetes has won the orchestration battle. While AWS’s Elastic Kubernetes Service was announced a few weeks ago and is still in beta (I haven’t had the chance to try it yet), both Google and Azure offer some form of managed Kubernetes as a service. This is big, because one of the things that made people run away from Kubernetes in the first place was the pain of its setup and management. Handing all of that complexity over to the cloud provider takes the problem away (or most of it).
There is also the fact that Docker is going to ship Kubernetes in upcoming versions of Docker Enterprise Edition and Docker for Mac & Windows.
Another big point is the community: every time I had a problem with Docker Swarm, it took me a while to find a solution. In contrast, despite Kubernetes having more features and configuration possibilities, simple Google searches and questions on Slack helped me solve every problem I’ve had with it so far. Don’t get me wrong here: the Docker Swarm community is great, but not as great as the Kubernetes one.
This point isn’t Docker Swarm’s fault: the reality is that Kubernetes is under active development by companies like Google, Microsoft, Red Hat, IBM (and Docker, I suppose), as well as individual contributors. Taking a look at both GitHub repositories reveals that Kubernetes is in fact a lot more active.
But hey! This was supposed to be a guide, so let’s start by comparing how to achieve similar scenarios in both Swarm and K8S.
Disclaimer: this guide is not meant to provide production-ready scenarios. I kept it simple to illustrate the similarities between Swarm and K8S more easily.
Starting a cluster (1 Master & 1 Worker)
To keep things simple, let’s build a simple cluster with 1 Master and 1 Worker.
Starting a Cluster — Docker Swarm
Starting a cluster in Docker Swarm is as simple as it gets. With Docker installed on the machine, simply do:
> docker swarm init
Swarm initialized: current node (x5hmcwovhbpxrmthesxd0n1zx) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5agb6u8svusxsrfisbpiarl6pdzfgqdv1w0exj8c9niv45y0ya-9eaw26eb6i4yq1pyl0a2zdvjz 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Then, on another machine in the same network, paste the aforementioned command:
> docker swarm join --token SWMTKN-1-5agb6u8svusxsrfisbpiarl6pdzfgqdv1w0exj8c9niv45y0ya-9eaw26eb6i4yq1pyl0a2zdvjz 192.168.65.3:2377
The node joined the swarm as a worker
Starting a Cluster — Kubernetes (using kubeadm)
I mentioned a few times that setting up a Kubernetes cluster is complicated. While that remains true, there is a tool (still in beta) called kubeadm that simplifies the process. In fact, setting up a K8S cluster with kubeadm is very similar to Docker Swarm. Installing kubeadm is easy, as it is available in most package managers (brew, apt, etc.).
> kubeadm init
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
The command takes a while to complete, because Kubernetes relies on external services like etcd to function. All of this is automated by kubeadm.
As with Swarm, to join another node one simply runs the output command on that node:
> kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
So far, the cluster creation process is nearly identical in both solutions. But Kubernetes needs an extra step:
Installing a pod network
Docker Swarm comes bundled with a service mesh that provides networking capabilities inside the cluster. While this is convenient, Kubernetes offers more flexibility in this space, letting you install a pod network of your choice. The official options include Calico, Canal, Flannel, Kube-Router, Romana and Weave Net. The installation process is much the same for each of them, but I’ll stick with Calico for this tutorial.
> kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
For more information about using kubeadm, check here
Starting a Cluster — Kubernetes (using minikube)
If you want to experiment with Kubernetes on your local machine, there is a great tool called minikube that spins up a Kubernetes cluster inside a virtual machine. I won’t go into much detail here, but you can run minikube on your system by doing:
> minikube start
For more information about minikube, check here
Running a service
Now that we have a cluster running, let’s spin up some services! While there are some differences under the hood, doing so is very similar in both orchestrators.
Running a Service — Docker Swarm (inline)
To run a service with an inline command, simply do:
> docker service create --publish 80:80 --name nginx nginx:latest
Running a Service — Kubernetes (inline)
As you may imagine, doing the same thing in Kubernetes is not that different:
> kubectl run --image=nginx:latest --port=80 --expose=true nginx
service "nginx" created
deployment "nginx" created
I will explain later what Deployments and Services are; for now, note that this command didn’t have quite the same outcome as the Docker Swarm one. Our nginx service in Swarm exposes port 80 on the node, permitting outside access.
In Kubernetes, the combination of the expose and port flags only exposed the container port within the cluster, and there isn’t a way to expose it on the host with the inline command (or I didn’t find one). You will need YAML for that, but that’s what you’d be doing in production anyway.
Running a Service — Docker Swarm (YAML)
You can define services (as well as volumes, networks and configs) in a Stack File: a YAML file that uses the same notation as Docker Compose, with added functionality. Let’s spin up our nginx service using this technique:
> cat nginx.yml
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    deploy:
      mode: replicated
      replicas: 1
> docker stack deploy --compose-file nginx.yml nginxstack
Creating network nginxstack_default
Creating service nginxstack_nginx
As we didn’t specify any network, Docker Swarm created one for us. Keep in mind that this means the nginx service cannot be reached by its service name from a service in another stack. If we want that, we can either define all the services that need to communicate with each other in the same YAML (along with a network), or attach a pre-existing overlay network in both stacks.
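As a sketch of that second option, you could create an overlay network up front (e.g. docker network create --driver overlay --attachable shared-net, where shared-net is a hypothetical name) and then reference it as external in each stack file:

```
version: '3'

services:
  nginx:
    image: nginx:latest
    networks:
      - shared-net        # join the pre-existing overlay network

networks:
  shared-net:
    external: true        # created beforehand with `docker network create`
```

Any stack that declares the same external network can then reach nginx by its service name.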
Running a Service — Kubernetes (YAML)
Kubernetes lets you create resources via a Kubernetes Manifest File, which can be written in either YAML or JSON. YAML is the recommended choice, as it’s pretty much the standard.
> cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
> kubectl apply -f nginx.yml
service "nginx" created
deployment "nginx" created
Because it’s built around a more modular architecture, Kubernetes requires two resources to achieve what a single Swarm service provides: a Deployment and a Service.
A Deployment pretty much defines the characteristics of a service: it is where containers, volumes, secrets and configurations are declared. Deployments also define the number of replicas, and the replication and placement strategies. You can see them as the equivalent of a service definition in a Swarm stack, minus load balancing.
In fact, Deployments are a higher-level abstraction over lower-level Kubernetes resources such as Pods and ReplicaSets. Everything in the template part of the Deployment definition describes a Pod, which is the smallest unit of scheduling that Kubernetes provides. A Pod is not the same as a container: it’s a set of resources that are meant to be scheduled together, for example a container and a volume, or two containers. In most cases a Pod will contain only one container, but it’s important to understand the difference.
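To make the Pod concept concrete, here is a minimal sketch (the names and sidecar are hypothetical) of a Pod with two containers sharing an emptyDir volume; both containers are always scheduled onto the same node and can exchange files through the shared mount:

```
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # hypothetical name
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer      # hypothetical sidecar writing into the shared volume
      image: busybox
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}
```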
The second part of the file defines a Service resource, which can be seen as a way to refer to a set of Pods on the network and load balance between them. The NodePort type tells Kubernetes to assign an externally-accessible port on every node of the cluster (the same port on all nodes), which is what Swarm did as well. You tell a Service what to load balance over using selectors, and this is why labeling is so important in Kubernetes.
Here Kubernetes is much more powerful: for example, you can define a Service of type LoadBalancer, which (given prior configuration) will spawn a load balancer in your cloud provider, such as an ELB in AWS, pointing at your service. The default Service type is ClusterIP, which makes the service reachable anywhere inside the cluster on a given port, but not externally. Using ClusterIP is equivalent to defining a Swarm service without an external port mapping.
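As a sketch (assuming a cloud provider has already been configured for the cluster), a LoadBalancer Service for the same Deployment looks just like the NodePort one with only the type changed; the name below is hypothetical:

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb            # hypothetical name
spec:
  type: LoadBalancer        # asks the cloud provider for an external load balancer
  selector:
    app: nginx
  ports:
    - port: 80
```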
Creating volumes
Volumes are needed to maintain state, and also for configuration. Both orchestrators provide simple ways of defining them, but Kubernetes takes the lead with far more capabilities.
Creating volumes — Docker Swarm
Let’s add a volume to our nginx service:
> cat nginx.yml
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - nginx-volume:/srv/www
    deploy:
      mode: replicated
      replicas: 1

volumes:
  nginx-volume:
This is the simplest case: obviously such a volume provides no real benefit here, but it’s enough for a demonstration.
Creating volumes — Kubernetes
Doing the same in K8S is pretty easy:
> cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /srv/www
              name: nginx-volume
      volumes:
        - name: nginx-volume
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
The emptyDir volume type is the simplest volume Kubernetes provides: it maps a folder inside the container to a folder on the node that disappears when the Pod is stopped. Kubernetes ships with 26 volume types, which pretty much covers any use case. For example, you can define a volume backed by an EBS volume in AWS.
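For instance, swapping emptyDir for an EBS-backed volume in the Pod spec would look roughly like this; the volume ID is a placeholder for a pre-created EBS volume (which must live in the same availability zone as the node):

```
      volumes:
        - name: nginx-volume
          awsElasticBlockStore:
            volumeID: <volume-id>   # placeholder: ID of a pre-created EBS volume
            fsType: ext4
```

Unlike emptyDir, the data on the EBS volume survives Pod restarts and rescheduling.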
That’s it
There are certainly more resources than services and volumes, but I will leave them out of this guide for now. One of my favorite resources in Kubernetes is the ConfigMap, which is similar to a Docker Config but provides better functionality. I will make an effort to write another guide comparing those two, but for now, let’s call it a day.
Conclusion
Using Kubernetes the same way as Swarm is easier than ever. It will take us a while to make the decision to migrate all our infrastructure to Kubernetes; at the time of this writing, Swarm gives us all we need. But it’s nice to know that the entry barrier to K8S keeps getting lower.
I’m a Software Engineer based in Buenos Aires, Argentina. Currently working as a Platform Engineer at Etermax, the leading Mobile Gaming company in Latin America.
A Kubernetes guide for Docker Swarm users was originally published in Hacker Noon on Medium.