• Note that the application deadline for this ad may have passed. Read the ad carefully before proceeding with your application.

The main tasks this person will support are:

Integrate into an agile Scrum team and apply agile ways of working to software development, including rapid prototyping and outcome-based evaluation.

Work with and contribute to a DevOps setup (continuous integration, etc.) for test-driven development on Google Cloud Platform.

Write complex SQL queries that extract, transform, and load (ETL) data from storage into an appropriate format, using joins, nesting, and similar operations as an integral component of data processing pipelines that support reporting and analytics.
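As a rough illustration of the kind of query described above, the following sketch uses Python's built-in sqlite3 module to run an ETL-style query combining a join, an aggregation, and a nested subquery. All table and column names are invented for the example.

```python
import sqlite3

# Hypothetical tables standing in for raw storage; names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, country TEXT);
    CREATE TABLE events (user_id INTEGER, amount REAL);
    INSERT INTO users  VALUES (1, 'SE'), (2, 'DE');
    INSERT INTO events VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# Extract and transform in one pass: join events to users, aggregate
# per country, and keep only countries whose total exceeds the overall
# average amount (computed in a nested subquery).
rows = conn.execute("""
    SELECT u.country, SUM(e.amount) AS total
    FROM events e
    JOIN users u ON u.id = e.user_id
    GROUP BY u.country
    HAVING SUM(e.amount) > (SELECT AVG(amount) FROM events)
""").fetchall()

print(rows)  # [('SE', 15.0)]
```

In a real pipeline the result set would be loaded into a reporting table rather than printed, but the join/aggregate/subquery shape is the same.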

Produce all required design specifications and drive the implementation of appropriate data models, schemas, and their associated processing and transformation pipelines, in order to satisfy changing business requirements while keeping operations continuously stable.

Set up monitoring for data quality: implement data validation checks and data quality dashboards to ensure a high degree of trust in the data residing in the systems, as well as in the decisions supported by insights extracted from it.


Desired knowledge, competence, experience, skills etc.:

  • Broad knowledge of SQL for data processing and analysis
  • Broad knowledge of programming languages (e.g. Java, Go, Python, or Scala), including concepts from functional and object-oriented programming paradigms
  • Good knowledge of new and emerging tools for extracting, ingesting, and processing large datasets (Apache Spark, Apache Beam, Kafka, or equivalent)
  • Good knowledge of digital product development principles and the importance of rapid validated learning cycles for optimising long-term performance
  • Good knowledge of collaborative software engineering practices (Agile, DevOps), in which solutions evolve through the effort of self-organising cross-functional teams

Three most important things:

1. Broad knowledge of SQL for data processing and analysis

2. Broad knowledge of programming languages (e.g. Java, Go, Python, or Scala), including concepts from functional and object-oriented programming paradigms

3. Good knowledge of new and emerging tools for extracting, ingesting, and processing large datasets (Apache Spark, Apache Beam, Kafka, or equivalent)


This is a job ad titled "Senior Data Engineer" at the company Berlin Information Technology AB, published on webbjobb.io on 3 June 2021 at 19:14.

How to apply for the job

Applications are submitted by email to [email protected]. Please use the subject line/reference "Senior Data Engineer".
