The client needed a fully managed Kubernetes solution. With their applicant database growing rapidly and their teams working in isolated environments, the company reached out to us for a cost-effective solution that could offer greater flexibility, handle the increasing workload, and enable faster deployments.
The client is a veteran HR tech company with offices in New York, San Francisco, and Los Angeles. Established in 1992, it was one of the pioneers in the space to adopt technology, moving its operations away from a traditional offline model in 1998.
Rapid growth over the years forced the client to batch-process an unprecedented number of applicants in their database, which soon became cumbersome on the outdated IT infrastructure they had been running since their last overhaul.
Although their IT setup was fairly extensive, all their teams, including software development, security, QA, system administration, and operations, worked in silos. This made it difficult to scale environments efficiently and slowed down every stage of development, from application prototyping to testing and final deployment. The high cost of running disparate applications in different environments was another concern.
The client needed a fully managed Kubernetes solution that would improve the portability of their applications across clouds and internal environments and deliver the flexibility and speed required for faster, more efficient deployments.
“We needed to set up a sophisticated Kubernetes environment and we were genuinely impressed by Velotio’s ability to deliver the complex but robust system in record time. Their DevOps capability and extensive knowledge about Kubernetes were essential for the success of this project.”
Velotio’s team re-architected the tightly coupled monolithic system into a modern, microservices-based architecture by decoupling services to reduce dependencies. To build the microservices model, we used shared-nothing datastores, established common API standards, and applied key scheduling and orchestration concepts such as liveness and readiness probes, circuit breaking, load balancing, and health checks. The team is now laying the foundation for a robust, scalable, self-service IT infrastructure to leverage the SRE approach in the coming quarters.
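As an illustration, liveness and readiness checks of the kind described above are typically declared on each Kubernetes Deployment. The sketch below is a minimal, hypothetical example; the service name, image, ports, and endpoint paths are placeholders, not the client's actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service   # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: app
          image: registry.example.com/example-service:1.0  # placeholder image
          ports:
            - containerPort: 8080
          # Liveness probe: Kubernetes restarts the container if this check fails,
          # recovering from hung processes automatically.
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          # Readiness probe: the pod receives no traffic from Services until this
          # check passes, so dependencies can warm up safely.
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```

Separating the two probes is what lets the platform distinguish "temporarily not ready for traffic" from "broken and in need of a restart".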
Velotio built a Kubernetes platform-as-a-service on Cloud Native Services (CNS) such as Block Storage and Databases. The entire infrastructure was upgraded to a container-native cloud platform with all the core competencies of a Kubernetes setup: fast compute, networking, and storage. This ensured a 99.9% SLA across all environments, with improved elasticity and performance for managing application workloads. The client can now apply universal concepts across all IT teams.
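On such a platform, applications typically consume the provider's Block Storage through a Kubernetes StorageClass, so teams request storage declaratively instead of provisioning disks by hand. A minimal sketch, assuming a hypothetical `block-storage` class exposed by the platform:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: block-storage  # assumed class backed by the provider's Block Storage
  resources:
    requests:
      storage: 20Gi
```

Any pod that mounts this claim gets a dynamically provisioned volume, which is one of the "universal concepts" that works the same way across every environment and team.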
The team approached Kubernetes as a framework of unified concepts and built a common language for app development, infrastructure setup, deployment, and reliability. The system can now run production-grade Kubernetes anywhere, with key technical capabilities such as remote monitoring, cluster installation, package management, container image provenance, CI/CD, container- and host-based security, and local development.
The time to onboard teams onto new environments decreased by almost 75%. Control validations were built into the templated pipeline that could easily be duplicated.
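A templated pipeline with built-in control validations might look like the following sketch, written in a GitLab CI style purely for illustration; the stage names, policy tool (OPA Conftest), and variables are assumptions, not the client's actual pipeline:

```yaml
# Hypothetical reusable pipeline template; duplicate it per team or environment.
stages:
  - validate
  - build
  - deploy

validate-config:
  stage: validate
  script:
    # Server-side dry run: the API server validates manifests without mutating the cluster.
    - kubectl apply --dry-run=server -f k8s/
    # Policy checks as a control validation (e.g., OPA Conftest rules for labels, limits).
    - conftest test k8s/

build-image:
  stage: build
  script:
    - docker build -t "$IMAGE:$CI_COMMIT_SHA" .
    - docker push "$IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  script:
    - helm upgrade --install example-service ./chart --set image.tag="$CI_COMMIT_SHA"
```

Because the validations live in the template rather than in each team's copy, every duplicated pipeline inherits the same controls by default.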
We improved the administrator-to-server ratio from 1:20 to 1:100 by focusing on Kubernetes automation, cloud scalability, and Infrastructure as Code (IaC), which improved efficiency and reduced the operational costs of distributed systems.
The consistency and security of environment configuration improved with programmatic reproducibility. The security, compliance, and audit teams were finally able to shift their focus to building their own automation and scanning tools, which lowered overall production costs.