Do you want to shape the data infrastructure of Klarna?
We need to merge all our data sources into one coherent stream, constantly massaging, crunching and aggregating. We want data from all our services to be accessible to data consumers - whether they're engineers building a new product or Credit Risk analysts training predictive models. To do this we need to build highly reliable and scalable data pipelines that process and feed data into our organisation. And hopefully this is where you come in.
We have a Hadoop cluster in place which is growing rapidly. It's now up to us to make the most of it and go beyond: investigating new approaches and technologies, and closing the gap to true real-time processing. Together with skilled engineers across the organisation, you will set the foundation and build something that can scale over time. If you want to make a difference, this is the place to be!
Required qualifications:
Exceptional interest in distributed computation and technical curiosity
Strong programming skills in Java, Scala or another relevant programming language
Strong knowledge in a range of database systems and query languages
Good understanding of computer science fundamentals, data structures and algorithms
Preferred qualifications:
Building and managing large Hadoop clusters, data pipelines or NoSQL environments
Proficiency in Kafka, Spark, Storm or Cassandra
Proficiency with Hadoop ecosystem tools such as Crunch, Cascading, Pig, Flume or Oozie
Experience with continuous deployment and configuration management systems
We are passionate about data, and what matters most is your motivation to build data systems at scale. Sounds interesting? Up for a challenge?
Great, we should talk!
Location
Stockholm