As a DevOps Engineer, you will build out a Data Prep & ETL system inside DataRobot using distributed frameworks such as Spark, Hadoop, and Kubernetes. You will work on the availability, security, scalability, and resiliency of data-processing solutions. You will also be responsible for designing and integrating pipelines for all kinds of automated testing.
Ideally, the candidate can bring new ideas from concept to implementation, write quality, testable code, and participate in design and development discussions. We value engineers who are familiar with DevOps tools and practices, who do not believe any problem is too hard, and who are willing and eager to chase problems down no matter where they lead.
- A passion for automating everything
- A passion for collaborating and tearing down communication silos
- 3+ years of experience in DevOps focused on Big Data environments
- 3+ years of experience scripting in Bash, Java, Scala, Python or similar
- 3+ years of experience with Linux (Ubuntu, Red Hat, or similar)
- Experience with Docker and/or container orchestration (Kubernetes, Mesos, or similar)
- Experience in configuring and setting up Kubernetes and Hadoop clusters
- Experience in setting up Kerberos authentication and impersonation
- Experience with the Hadoop ecosystem/stack (Hadoop, Spark, Hive, etc.)
- Experience with the Kubernetes ecosystem/stack (Kubeflow, EKS)
- Good coding skills. In the interview process, you will be evaluated on your performance in a number of coding and design scenarios - be prepared to think!
- Good writing and communication skills
- Experience launching and managing computing resources in AWS, Azure, or similar
- Experience with CI/CD and infrastructure automation tools (Jenkins, TeamCity, Ansible, Terraform, or similar)
- Familiarity with DevOps methodologies
- Familiarity with both Cloud Deployment and On-Premise Release Workflows
- Familiarity with application-level metrics and observability tooling (ELK stack, Instana, Grafana, Prometheus, distributed tracing)
- Experience creating automated build pipelines