Data at Wolt

As Wolt's scale has grown rapidly, we are introducing new users to our data platform every day, and we want this to be a coherent and streamlined experience for everyone, whether they are Analysts or Data Scientists working with our data, or teams bringing new data to the platform from their applications. We aim both to provide new platform capabilities across batch, streaming, orchestration and data integration to serve our users' needs, and to build an intuitive interface that lets users solve their use cases without having to learn the details of the underlying tools.

In the context of this role, we are hiring an experienced Senior Software Engineer to provide technical leadership and individual contribution in one of the following workstreams:

Data Governance

Wolt’s Data Group has already developed initial foundational tooling in the areas of data management, security, auditing, data cataloging and quality monitoring, but through your technical contributions you will ensure our Data Governance tooling is state of the art. You’ll improve the current Data Governance platform, making sure it can be further integrated with the rest of the Data Platform and Wolt services in a scalable, secure and compliant way, without significant disruption to the teams.

Data Experience

We want to ensure our Analysts, Data Scientists, and Engineers can discover, understand, and publish high-quality data at scale. We have recently released a new data platform tool which enables simple yet powerful creation of workflows via a declarative interface. You will help our users succeed in their work by developing effective, polished internal user-facing tooling and curating our documentation to the highest standards. Best of all, you get to work closely with enthusiastic users, gathering continuous feedback on released features while supporting them and onboarding them to new workflows.

Data Lakehouse

We recently started this workstream to manage the data integration, organization, and maintenance of our new Iceberg-based data lakehouse architecture. Together, we build and maintain ingestion pipelines that efficiently gather data from diverse sources, ensuring seamless data flow. We create and manage workflows that transform raw data into structured formats, guaranteeing data quality and accessibility for analytics and machine learning.

When you join, we’ll match you with one of these workstreams based on our needs and your skills, experience and preferences.

How we work

Our teams have a lot of autonomy and ownership in how they work and solve their challenges. We value collaboration, learning from each other and helping each other out to achieve the team’s goals. We create an environment of trust, in which everyone’s ideas are heard and where we challenge each other to find the best solutions. We have empathy towards our users and other teams. Even though we’re working in a mostly remote environment these days, we stay connected and don’t forget to have fun together building great software!


Our tech stack

Our primary programming language is Python. We deploy our systems on Kubernetes and AWS, and we use Datadog for observability (logging and metrics). We have built our data warehouse on top of Snowflake and orchestrate our batch processes with Airflow and Dagster. We are heavy users of Kafka and Kafka Connect, and our CI/CD pipelines rely on GitHub Actions and Argo Workflows.

Our humble expectations

The vast majority of our services, applications and data pipelines are written in Python, so several years of experience shipping production-quality Python software in high-throughput environments is essential. You should be very comfortable with typing, dependency management, and unit, integration and end-to-end testing. If you believe that software isn’t just a program running on a machine, but the solution to someone’s problem, you’re in the right place.

Previous experience planning and executing complex projects that touch multiple teams and stakeholders across a whole organization is a big plus. Good communication and collaboration skills are essential: you shouldn’t shy away from problems, but be able to discuss them constructively with your team and the Wolt Product team at large.

Familiarity with parts of our tech stack is definitely a plus, but we hire for attitude and the ability to learn over knowledge of any specific technology.

The tools we are building inside of the data platform ultimately serve our many stakeholders across the whole company, whether they are Analysts, Data Scientists or engineers in other teams that produce or consume data.

We want all of our users to love the tools we’re building and that is why we want you to focus on building intuitive and user friendly applications that enable everyone to use and work with data at Wolt.

Next steps

The compensation is a combination of monthly pay and equity. The latter makes it exceptionally easy to be excited about our company growing and doing well, as you'll own a piece of the pie. 🙌

📍This role can be based in one of our tech hubs in Helsinki or Stockholm, or you can work remotely anywhere in Finland, Sweden or Estonia. Read more about our remote setup here.


The position will be filled as soon as we find the right person, so make sure to apply as soon as you realize you really, really want to join us!

For any further questions about the position, you can turn to Product+ Talent Acquisition Partner - Fernanda Prado at [email protected]

This is a job ad titled "(senior) python engineer, data group" from the company Wolt Sverige AB, published on webbjobb.io on 3 April 2024 at 09:58.
