Automated Containerization and Migration of On-premise Applications to Cloud Platforms

Containerized applications are becoming more popular with each passing year, and enterprises increasingly adopt container technology as they modernize their IT systems. Migrating your applications from VMs or physical machines to containers brings multiple advantages: optimal resource utilization, faster deployment times, easy replication and cloning, and less vendor lock-in. Container orchestration platforms like Kubernetes, Google Container Engine (GKE), and Amazon EC2 Container Service (Amazon ECS) enable quick deployment and easy management of your containerized applications. But to use these platforms, you either need to migrate your legacy applications into containers or rewrite and redeploy them from scratch with a container-first approach. Rearchitecting your applications for containers is preferable, but is that feasible for complex legacy applications? Can your deployment team document every detail of your application's deployment process? Do you have the patience to author a Dockerfile for each component of your complex application stack?

Automated migrations!

Velotio has been helping customers with automated migration of VMs and bare-metal servers to various container platforms. We have developed automation that runs these migrated applications as containers on container deployment platforms like GKE, Amazon ECS, and Kubernetes. In this blog post, we will cover one such migration tool developed at Velotio, which migrates an application running on a VM or physical machine to Google Container Engine (GKE) with a single command.

Migration tool details

We have named our migration tool A2C (Anything to Container). It can migrate applications running on any Unix or Windows operating system.

The migration tool requires the following information about the server to be migrated:

  • IP address of the server
  • SSH user and SSH key/password of the application server
  • A configuration file containing data paths for the application/database/components (more details below)
  • The desired name of your Docker image (the image that will be created for your application)
  • GKE container cluster details

To store persistent data, volumes can be defined in the container definition. Data written to a volume path persists even if the container is killed or crashes. A volume is essentially a filesystem path on the host machine running your container, an NFS share, or cloud storage. The container mounts the host filesystem path, so data changes are written to the host machine's filesystem instead of the container's filesystem. Our migration tool supports data volumes, which can be defined in the configuration file. It automatically creates disks for the defined volumes and copies data from your application server to these disks in a consistent way.

The configuration file mentioned above is a YAML file containing filesystem-level information about your application server. A sample of this file can be found below:

CODE: https://gist.github.com/velotiotech/0383563268930c993878e1f4ee7eab93.js

The configuration file contains three sections: includes, volumes, and excludes:

  • The includes section contains filesystem paths on your application server that you want to add to your container image.
  • The volumes section contains filesystem paths on your application server that store your application data. Filesystem paths containing database files, application code, configuration files, and log files are generally good candidates for volumes.
  • The excludes section contains filesystem paths that you don’t want to make part of the container. This may include temporary filesystem paths like /proc and /tmp, as well as NFS-mounted paths. Ideally, you would include everything by giving “/” in the includes section and exclude specifics in the excludes section.

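As an illustration of how such include/exclude rules might be resolved, the following is a simplified sketch (not A2C's actual matching logic, which this post does not show): every path under an include prefix is kept unless it also falls under an exclude prefix.

```python
# Simplified sketch of include/exclude path resolution.
# The rules and example paths are illustrative, not A2C's implementation.

def resolve_paths(all_paths, includes, excludes):
    """Return the paths that fall under an include but not under an exclude."""
    def under(path, prefixes):
        # "/" matches everything; otherwise match the prefix itself
        # or anything below it in the directory tree.
        return any(p == "/" or path == p or path.startswith(p.rstrip("/") + "/")
                   for p in prefixes)
    return [p for p in all_paths
            if under(p, includes) and not under(p, excludes)]

paths = ["/etc/httpd/conf/httpd.conf", "/var/lib/mysql/ibdata1",
         "/proc/cpuinfo", "/tmp/build.log"]
# Include everything via "/", then carve out /proc and /tmp.
selected = resolve_paths(paths, includes=["/"], excludes=["/proc", "/tmp"])
print(selected)  # → ['/etc/httpd/conf/httpd.conf', '/var/lib/mysql/ibdata1']
```

This mirrors the recommended pattern above: a single "/" include with a short list of excludes is easier to maintain than enumerating every application directory.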
The Docker image name given as input to the migration tool is the Docker registry path where the image will be stored, followed by the name and tag of the image. A Docker registry is like a GitHub for Docker images, where you can store all your images; different versions of the same image can be stored by giving each a version-specific tag. GKE also provides a Docker registry. Since in this demo we are migrating to GKE, we will store our image in the GKE registry as well.
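For example, an image name of the form `gcr.io/<project>/<name>:<tag>` can be split into its registry, repository, and tag parts. The helper below is a hypothetical illustration of that naming convention (it ignores edge cases like registries with port numbers):

```python
# Illustrative parser for a registry-qualified image name such as
# gcr.io/my-project/lamp-app:v1. A hypothetical helper, not part of A2C.

def parse_image_name(image):
    """Split 'registry/repository:tag' into parts; the tag defaults to 'latest'."""
    name, _, tag = image.partition(":")
    registry, _, repository = name.partition("/")
    return {"registry": registry,
            "repository": repository,
            "tag": tag or "latest"}

ref = parse_image_name("gcr.io/my-project/lamp-app:v1")
print(ref)  # → {'registry': 'gcr.io', 'repository': 'my-project/lamp-app', 'tag': 'v1'}
```

Here `gcr.io` identifies the registry, `my-project/lamp-app` the stored image, and `v1` the version-specific tag; omitting the tag implies `latest`.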

The GKE container cluster details given as input to the migration tool include GKE-specific information like the project name, container cluster name, and region name. A container cluster can be created in GKE to host containerized applications. We have a separate set of scripts to perform cluster creation; it can also be done easily through the GKE UI. For now, we will assume a three-node cluster has been created in GKE, which we will use to host our application.

Tasks performed under migration

Our migration tool (A2C) performs the following activities to migrate an application running on a VM or physical machine to a GKE container cluster:

1. Install the A2C migration tool with all its dependencies on the application server to be migrated.

2. Create a Docker image of the application server, based on the filesystem-level information given in the configuration file.

3. Capture metadata from the application server, such as configured services, port usage, network configuration, and external services.

4. Push the Docker image to the GKE container registry.

5. Create a disk in Google Cloud for each volume path defined in the configuration file and prepopulate the disks with data from the application server.

6. Create a deployment spec for the container application in the GKE container cluster, which opens the required ports, configures the required services, adds multi-container dependencies, attaches the prepopulated disks to containers, and so on.

7. Deploy the application. Afterward, your application runs as containers in GKE with the application software in a running state. The new application URLs are given as output.

8. Configure load balancing and high availability (HA) for your application.
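To make steps 5 and 6 concrete, a deployment spec of the kind A2C generates might look like the following. This is a hand-written minimal sketch, built as a plain dictionary; the names (lamp-app, migrate-lamp-0, the image path, the mount path) are illustrative, not output from the tool.

```python
# Minimal sketch of the kind of Kubernetes Deployment spec generated in step 6.
# All names (lamp-app, migrate-lamp-0, gcr.io/my-project/...) are illustrative.
import json

def make_deployment(name, image, port, disk_name, mount_path):
    """Build a minimal Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # Step 6: open the required port captured in step 3.
                        "ports": [{"containerPort": port}],
                        "volumeMounts": [{"name": "data",
                                          "mountPath": mount_path}],
                    }],
                    # Step 6: attach the pre-populated GCE persistent disk
                    # created in step 5.
                    "volumes": [{
                        "name": "data",
                        "gcePersistentDisk": {"pdName": disk_name,
                                              "fsType": "ext4"},
                    }],
                },
            },
        },
    }

spec = make_deployment("lamp-app", "gcr.io/my-project/lamp-app:v1",
                       80, "migrate-lamp-0", "/var/lib/mysql")
print(json.dumps(spec, indent=2))
```

The generated spec ties the earlier steps together: the image pushed in step 4 goes in `image`, the disk created in step 5 is attached via a `gcePersistentDisk` volume, and the port captured in step 3 becomes the `containerPort`.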

Demo

For demonstration purposes, we will deploy a LAMP stack (Apache + PHP + MySQL) on a CentOS 7 VM and run the migration utility against the VM, which will migrate the application to our GKE cluster. After the migration, we will show our application running on GKE, preconfigured with the same data as on our VM.

Step 1

We set up a LAMP stack using Apache, PHP, and MySQL on a CentOS 7 VM in GCP. The PHP application can be used to list, add, delete, or edit user data, which is stored in a MySQL database. We added some data through the application, and the UI showed the following:

Step 2

Now we run the A2C migration tool, which migrates this application stack running on a VM into a container and auto-deploys it to GKE.



CODE: https://gist.github.com/velotiotech/2a2cee94af577ad8fe09edcef7f7f9e4.js

CODE: https://gist.github.com/velotiotech/63bb4011b60b08a909ae4720cdd6dfac.js



Deploying to GKE

CODE: https://gist.github.com/velotiotech/b0756f36565f1b05e1d3c9f614c353fb.js

CODE: https://gist.github.com/velotiotech/64b33f141d2c1d75f2de925b4a82520c.js

CODE: https://gist.github.com/velotiotech/4f3032b5ba1ef7f74ec0c9c0586c8454.js

You can access your application using the above connection details!

Step 3

Access the LAMP stack on GKE using the IP 35.184.53.100 on the default port 80, just as on the source machine.

Here is the Docker image being created in GKE Container Registry:

We can also see that disks named migrate-lamp-x were created as part of this automated migration.

A load balancer was also provisioned in GCP as part of the migration process.

The following service and deployment files were created by our migration tool to deploy the application on GKE:

CODE: https://gist.github.com/velotiotech/c3e09cf14dcb35b2c387ee60d0de6175.js



CODE: https://gist.github.com/velotiotech/cf4206b46fb396979a45a99b6d6d7259.js

Conclusion

Migrations are always hard for IT and development teams. At Velotio, we have been helping customers migrate to cloud and container platforms using streamlined processes and automation. Feel free to reach out to us at contact@velotio.com to learn more about our cloud and container adoption/migration offerings.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
