• Note that the application deadline for this posting may have passed. Read the posting carefully before proceeding with your application.

Job Summary:
We are seeking an experienced Big Data Operations Engineer to administer and scale our multi-petabyte Hadoop clusters and their related services. The role focuses primarily on provisioning, ongoing capacity planning, monitoring, and management of the Hadoop platform and the applications/middleware that run on it. This is an onsite role in Malmö.

Job Description:
Hands-on experience managing production clusters (Hadoop, Kafka, Spark, and more).
Strong development/automation skills; very comfortable reading and writing Python and Java code.
10+ years of overall experience, including at least 5 years of production Hadoop experience on medium to large clusters.
Tools-first mindset: you build tools for yourself and others to increase efficiency and make hard or repetitive tasks quick and easy.
Experience with configuration management and automation.
Organized; focused on building, improving, resolving, and delivering.
A good communicator within and across teams, able to take the lead.
Education:
Bachelor's or Master's degree in Computer Science or a similar technical field.

Responsible for maintaining and scaling production Hadoop, HBase, Kafka, and Spark clusters.
Responsible for the implementation and ongoing administration of Hadoop infrastructure including monitoring, tuning and troubleshooting.
Provide hardware architecture guidance, plan and estimate cluster capacity, and deploy Hadoop clusters.
Improve scalability, service reliability, capacity, and performance.
Triage production issues, as they occur, together with other operational teams.
Conduct ongoing maintenance across our large scale deployments.
Write automation code for managing large Big Data clusters.
Work with development and QA teams to design ingestion pipelines and integration APIs, and to provide Hadoop ecosystem services.
Participate in the occasional on-call rotation supporting the infrastructure.
Troubleshoot incidents hands-on: formulate theories, test hypotheses, and narrow down possibilities to find the root cause.

You are a responsible, self-driven person who takes ownership of tasks.

You work very well with teams of developers and have excellent communication skills.

You will work from our office in central Malmö and also carry out assignments at our many clients in the region.

This is a job posting titled "Big Data Operations Engineer" from the company Prodata Consult International AB, published on webbjobb.io on 16 January 2019 at 00:00.

How to apply

Apply by e-mail to [email protected]. Please use the subject line/reference "Big Data Operations Engineer".
