We are looking for a motivated Senior Platform Engineer to build and maintain our big data infrastructure.
Technical stack: Hadoop, Kafka, Spark, Trino, Tableau, Linux, Bash, Ansible, Terraform, Prometheus, Grafana, Elastic Stack, MinIO
Responsibilities:
– Deploy, monitor, and maintain the Hadoop ecosystem, automating configuration with Ansible
– Deploy, manage, and tune the performance of Kafka, Spark, and other big data tools
– Troubleshoot platform performance issues and apply operational best practices
– Identify and define system security requirements
– Participate in architectural decision-making in support of scalability and maintainability
– Identify and drive improvements in infrastructure reliability, performance, monitoring, and overall platform stability
– Collaborate with other teams to build, test, and roll out systems
– Train and mentor other team members
Requirements:
– 5+ years of experience deploying and managing medium- to large-scale distributed systems
– 5+ years of experience working in Linux environments
– 2+ years of experience administering the Hadoop platform
– Strong scripting skills in Bash or an equivalent language
– Solid engineering and administrative knowledge of Apache Hadoop and related projects is a must
– Working experience solving HDFS, YARN, Kafka, and Hive performance issues
– Working experience developing automation with Ansible
– Working experience integrating BI tools such as Tableau with Hadoop
– Working experience monitoring Hadoop with Prometheus, Grafana, and similar tools
– Experience engineering modern analytics and big data technologies, with expertise in Presto/Trino and Druid/Pinot at scale, is a plus
– Experience with container technologies such as Docker and Kubernetes is a plus
– Fluency in English is required
We look forward to meeting you in person to discuss the role in detail and hear about your career goals. Please apply for the vacancy by clicking the “Apply for job” button below.