Velotio Technologies is a product engineering company working with innovative startups and enterprises. We have provided core product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS & Machine Learning spaces. Our team of 175+ elite software & DevOps engineers solves hard technical problems while transforming customer ideas into successful products.
As part of the Cloud Data Platform team, your focus is to operate and support Kafka and other infrastructure (Spark, Flink, Glue, Airflow, Redshift etc.) with best practices and surrounding tooling. You’ll be joining a team that is responsible for building a truly global, self-service platform to enable the growing number of engineering teams to build, test, deploy, and manage the complete operational life cycle of their services in a fully autonomous fashion.
Roles & Responsibilities:
- You will help drive technical decision-making, particularly with regard to operations and the architectural direction of the platform.
- As an expert in your area, you will help set the tone for how your team operates. You’ll champion modern, rigorous software development practices that emphasize testability, repeatability, and self-service automation. You’ll conduct code reviews and mentor junior developers. You’ll openly collaborate with other teams’ leads and help raise the bar of engineering excellence across the entire organization.
- Your role will focus on the development of the common platform services.
- You’ll solve problems related to complex cloud-infrastructure automation, multi-region networking, global message buses and other common services across many AWS accounts.
- You’ll architect platform APIs for other teams to build on top of, develop Kubernetes operators, and design processes and workflows, all in a collaborative team environment.
Desired Skills & Experience:
- 4+ years’ experience in a Cloud, DevOps, SRE and/or software engineering role.
- Experience leading the development of large-scale projects, e.g. breaking down tasks, delegating work, and assisting in the creation of roadmaps and work-back plans.
- Deep expertise in operating at least one data platform at scale (e.g. Kafka, Bigtable, Spark, Redshift, EMR).
- Proficiency in at least one programming language, e.g. Go, Java, Scala, Python, or Ruby.
- Deep understanding of distributed systems running on Kubernetes.
- Hands-on experience with multiple IaC tools (e.g. AWS CDK, Terraform, Crossplane, AWS CloudFormation).
- Experience with the development and operation of high-throughput, low-latency systems.
- Hands-on experience with automating development workflow pipelines (CI/CD)
- Operational experience (i.e. on-call rotation, incident response)
- Ability to collaborate effectively with remote peers across disparate geographies and time zones.
- Excellent written and verbal communication skills, with particular emphasis on technical documentation (including diagramming).
- Strong CS fundamentals.
Bonus Points if you have...
- Experience building a global Kafka architecture with regional clusters, mirroring, and backups, having solved for different use cases such as high-traffic telemetry (logs & metrics), business-critical events, and analytics (e.g. Kafka Streams).
- Experience with Distributed Systems Development (e.g. asynchronous communication patterns, consensus algorithms, distributed transactions) and/or Micro Service Architecture (e.g. Event Driven, Global Scale)
- Experience developing Kubernetes operators in Golang
Note: Currently, all interview and onboarding processes at Velotio are carried out remotely through virtual meetings until further notice.