Deploying ADOP to Kubernetes

ADOP is an open source platform that glues together a set of open source development tools using Docker containers, providing a good starting point for creating CI/CD pipelines.

If you don't yet know ADOP, I encourage you to give it a try before you continue reading this article.

This article is aimed at those who want to move forward with DevOps tools, explore new technologies, and help others elevate their DevOps knowledge. It is a high-level guide to getting the DevOps infrastructure and CI/CD pipeline running on Kubernetes.

Prerequisites:

  1. A good understanding of the ADOP framework.

  2. A good understanding of Kubernetes clusters.

  3. A Kubernetes cluster up and running on AWS infrastructure. (The installation and configuration of a Kubernetes cluster are not covered in this article.)

Initial understanding

ADOP was created to be launched with Docker Compose, a tool for defining and running multi-container Docker applications using a simple YAML file. When ADOP is launched, several things happen behind the scenes:

  1. Environment variable values are generated. Depending on the environment and the values provided by the user, a bash script is executed that assigns the corresponding values to the variables.

  2. The applications are started in a specific, mostly synchronized order. On Kubernetes we should start the applications in the same order.

  3. SSH keys are generated and provisioned to the containers to grant specific rights between them.

  4. Some other tweaks are applied to the applications, such as logging configuration, Nginx configuration, etc.

Converting the Docker Compose scripts to Kubernetes scripts

You can generate your Kubernetes scripts using a tool named Kompose, which takes a Docker Compose file and translates it into Kubernetes resources.

The translation is not exact, but it goes a long way toward creating Kubernetes scripts that you only need to tweak afterwards.

First, you have to provide values for all the environment variables used by ADOP, so they can be picked up when the Kubernetes scripts are generated.
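As an illustration, the conversion might look something like the sketch below; the env.config file holding the ADOP variables and the output directory are assumptions for illustration, not part of the ADOP repository:

# Load the ADOP environment variables so Kompose can substitute them
# (env.config is a hypothetical file with KEY=VALUE pairs).
export $(cat env.config | xargs)
# Translate the Compose file into Kubernetes resource files.
kompose convert -f docker-compose.yml -o kubernetes/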

Creation of Pod files

A Pod is a specification, in either JSON or YAML format, of one or more containers to be deployed on the Kubernetes cluster. The containers in a Pod are always co-located and co-scheduled, and run in a shared context.

Kompose only creates Service and Deployment files, not Pod files. It is convenient to first create your Pods and validate that each of them can be properly started. To save time, you can copy the Pod specification from the Deployment file created by the Kompose tool.

It is better to start by creating Pods, since you can easily test the basic unit of deployment in Kubernetes before moving on to something more complex. If you do things this way, it is better to create the Pods manually so you have more control over the tweaks.
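As an illustration, a minimal Pod manifest might look something like the sketch below; the image name, label and port are assumptions for illustration rather than the exact ADOP values:

apiVersion: v1
kind: Pod
metadata:
  name: jenkins
  labels:
    app: jenkins                      # label that a Service selector can match later
spec:
  containers:
    - name: jenkins
      image: accenture/adop-jenkins   # image name is an assumption
      ports:
        - containerPort: 8080         # Jenkins web UI port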

The following Pods were created in my example:

Creation of Services

It is important to note that we are now working in a different context than Docker Compose on a single Docker host, where the applications were visible on the same local network by their container names. On Kubernetes, we don't know on which node a container will be deployed, so the containers won't see each other unless we make use of service discovery. Kubernetes provides its own service discovery.

To handle this, you have to create Service scripts that let Kubernetes associate a service name with a specific Pod selector, so that the Pod becomes visible to the other services.

Each Service links a specific Pod label to the service name through the selector field.
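A minimal sketch of such a Service, assuming the Pod carries the label app: jenkins as in the earlier Pod sketch (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: jenkins          # other containers can now reach this Pod as "jenkins"
spec:
  selector:
    app: jenkins         # matches the label on the Jenkins Pod
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 8080   # port the container listens on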

This service discovery lets a container communicate with another container simply by using its service name.

The following is a list of the services created:

Create an NFS volume for the DevOps applications

Since Kubernetes is a cluster that manages all your resources, you have to provide it with information about the storage to be used by the containers. In this case this is specified in the Pod:

spec:
  volumes:
    - name: "nginx-config"
      nfs:
        server: <efs server>   # the URL of the EFS server created on AWS
        path: /nginx_config

In my case I chose to use an NFS volume where all the data of the ADOP containers is stored. It is just a matter of mapping the container directory to the NFS volume, as shown in the example above.
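To complete the mapping, the containers section of the same Pod mounts that volume at the desired directory. A sketch, assuming an Nginx-based proxy container and a mount path chosen purely for illustration:

spec:
  containers:
    - name: proxy
      image: nginx                       # illustrative image
      volumeMounts:
        - name: "nginx-config"           # must match the volume name declared below
          mountPath: /etc/nginx/conf.d   # directory inside the container backed by NFS
  volumes:
    - name: "nginx-config"
      nfs:
        server: <efs server>
        path: /nginx_config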

Starting the services in Kubernetes

Once you have your YAML files for the services, you can create them in Kubernetes. There is no requirement to do that in any special order. You can create them by issuing the following command:

for file in *.yml; do kubectl create -f "$file"; done

Starting the Pods in Kubernetes

You can start your Pods in the same way as the services, but you have to take into account that the proxy container won't start until some other applications are up and running, namely Kibana, Jenkins, Sonar, Sensu-Uchiwa and Nexus. Once all of them are running properly, you will be able to see the ADOP dashboard.
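A sketch of how you might do this, assuming the Pod manifests follow a *-pod.yml naming pattern (the pattern is an assumption):

# Create the Pods, then watch them until the proxy's dependencies are Running.
for file in *-pod.yml; do kubectl create -f "$file"; done
kubectl get pods --watch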

Conclusion and final thoughts

Once you have your scripts, it is very simple to create an environment in Kubernetes. Usually what you would do is create a namespace for each of your environments, isolating your deployments in different namespaces.
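For example, a hypothetical "dev" environment could be isolated like this (the namespace name is illustrative):

# Create a namespace for the environment and deploy all manifests into it.
kubectl create namespace adop-dev
kubectl create -f . --namespace adop-dev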

For people who want to start a journey into DevOps, this is a very illustrative exercise and gives a great idea of what DevOps means.
