Senior Platform Engineer, Big Data

July 20, 2022
Full Time Armenia, Philippines

Webb Fontaine Holding

Senior Platform Engineer, Big Data | Position Summary

People matter. The Webb Fontaine culture focuses on its employees’ success and happiness, ensuring they feel valued across the organization. Our team members are known for their passion and enthusiasm for excellence and innovation, delivering results while developing their skills. Being a team player is key: we care for and support each other with integrity and openness.

We welcome a motivated Senior Platform Engineer to build and maintain our big data infrastructure.

Technical stack: Hadoop, Kafka, Spark, Trino, Tableau, Linux, Bash, Ansible, Terraform, Prometheus, Grafana, Elastic Stack, MinIO


What you will do:

– Deploy, monitor, and maintain the Hadoop ecosystem, and automate its configuration using Ansible

– Deploy, manage, and improve the performance of Kafka, Spark, and other Big Data tools

– Solve platform performance issues and apply best practices

– Identify and define system security requirements

– Participate in architectural decision-making in support of scalability and maintainability

– Identify and drive improvements in infrastructure and system reliability, performance, monitoring, and overall stability of the platform

– Work with other teams to build, test, and roll out systems

– Train and mentor other team members


What you will bring:

– 5+ years of experience deploying and managing medium/large scale distributed systems.

– 5+ years of experience in the Linux environment.

– 2+ years of experience in Hadoop platform administration.

– Strong scripting experience in Bash or an equivalent language.

– Solid engineering and administrative knowledge of Apache Hadoop and related projects is a must.

– Working experience solving HDFS, YARN, Kafka, and Hive performance issues.

– Working experience developing automation with Ansible.

– Working experience with BI Tool integration with Hadoop.

– Working experience with Hadoop monitoring using Prometheus, Grafana, etc.

– Experience engineering modern analytics and Big Data technologies, with expertise in Presto/Trino and Druid/Pinot at scale, is a plus.

– Experience with container technologies such as Docker and Kubernetes is a plus.

– Fluency in English is a requirement.


How to apply

We look forward to meeting you in person to discuss the role in detail and hear about your career goals. Please apply for the vacancy by pressing the “Apply for job” button below.

Upload your CV/resume or any other relevant file (max. file size: 130 MB), along with a motivation letter.