Mid-Level Data Engineer

OnHires
Canton de Saint-Mihiel, France
19 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Remote
Canton de Saint-Mihiel, France

Tech stack

API
Airflow
Amazon Web Services (AWS)
Azure
Cloud Computing
Information Engineering
ETL
Relational Databases
Python
NoSQL
Raw Data
SQL Databases
SQLAlchemy
Business Intelligence Development Studio
Backend
Pandas

Job description

This is a mid-level role designed for engineers who already have solid Python + ETL experience and want to deepen their expertise with workflow orchestration, data quality, and modern BI tooling.

You will build reliable pipelines, improve data accuracy, and support the team in turning raw data into clean, usable datasets and dashboards. You won't be expected to lead projects or mentor others, but you must be able to handle well-defined engineering tasks independently. You will work alongside senior engineers and product stakeholders to maintain and evolve the data platform. Your work will include:

ETL Pipelines

  • Develop and maintain ETL pipelines using Python and Prefect
  • Write clean, tested code for transformations, ingestion jobs, and integrations
  • Optimize pipeline performance and reliability
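
To give candidates a concrete feel for this work, here is a minimal sketch of an ETL flow, assuming Prefect 2.x, pandas, and SQLAlchemy (the file path, table name, and SQLite connection are illustrative placeholders, not the client's actual setup):

  import pandas as pd
  from prefect import flow, task
  from sqlalchemy import create_engine

  @task(retries=2, retry_delay_seconds=30)
  def extract(path: str) -> pd.DataFrame:
      # Ingest a raw CSV export into a DataFrame
      return pd.read_csv(path)

  @task
  def transform(df: pd.DataFrame) -> pd.DataFrame:
      # Normalize column names and drop exact duplicate rows
      df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
      return df.drop_duplicates()

  @task
  def load(df: pd.DataFrame, table: str) -> None:
      # Write the cleaned data to a relational store via SQLAlchemy
      engine = create_engine("sqlite:///warehouse.db")  # placeholder; any SQLAlchemy URL works
      df.to_sql(table, engine, if_exists="replace", index=False)

  @flow
  def daily_etl(path: str = "exports/events.csv", table: str = "events_clean") -> None:
      load(transform(extract(path)), table)

  if __name__ == "__main__":
      daily_etl()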

Data Quality

  • Implement validation rules, sanity checks, and lightweight monitoring
  • Investigate and resolve data issues in collaboration with the team
  • Contribute to improving data consistency and transparency
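
In practice, "lightweight monitoring" means code-level checks rather than a heavy framework. A minimal sketch of the kind of validation involved, assuming a pandas pipeline (the column names and the 1% threshold are illustrative assumptions):

  import pandas as pd

  def validate(df: pd.DataFrame) -> pd.DataFrame:
      # Run basic sanity checks on a cleaned dataset; raise on hard failures
      errors: list[str] = []
      if df.empty:
          errors.append("dataset is empty")
      if df["id"].duplicated().any():              # uniqueness check
          errors.append("duplicate ids found")
      if df["amount"].lt(0).any():                 # range check
          errors.append("negative amounts found")
      null_rate = df["customer_id"].isna().mean()  # completeness check
      if null_rate > 0.01:
          errors.append(f"customer_id null rate {null_rate:.1%} exceeds threshold")
      if errors:
          raise ValueError("; ".join(errors))
      return df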
Data Modeling & BI

  • Help develop new data models and contribute to schema design
  • Build and maintain dashboards in Metabase to support internal stakeholders
  • Support analytical reporting by preparing clean datasets
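
Dashboard support here mostly means giving Metabase clean, pre-aggregated tables to chart. A hypothetical example of preparing such a dataset (table and column names are assumed, not taken from the client's schema):

  import pandas as pd
  from sqlalchemy import create_engine

  engine = create_engine("sqlite:///warehouse.db")  # placeholder connection

  # Roll cleaned events up into a daily summary that Metabase can chart directly
  events = pd.read_sql_table("events_clean", engine, parse_dates=["created_at"])
  daily = (
      events.assign(day=events["created_at"].dt.date)
            .groupby(["day", "event_type"], as_index=False)
            .agg(event_count=("id", "count"))
  )
  daily.to_sql("daily_event_summary", engine, if_exists="replace", index=False)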

Requirements

  • 3-5+ years in Data Engineering or Python Engineering
  • Strong proficiency in Python (pandas, SQLAlchemy, typing, testing)
  • Hands-on experience with ETL orchestration (Prefect preferred; Airflow acceptable)
  • Solid SQL skills and familiarity with relational databases
  • Experience with data quality checks, validation, or monitoring
  • Experience with Metabase or similar BI tools
  • Understanding of cloud-based data workflows (AWS, GCP, or Azure)
  • Ability to work independently on assigned tasks
  • English proficiency and clear communication

Nice to have

  • dbt or similar transformation tools
  • Experience with NoSQL databases
  • Basic analytics or statistics knowledge
  • Exposure to APIs or light backend work
  • Experience in distributed remote teams
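
As a rough gauge of the expected Python level: a candidate should be comfortable writing and testing typed pandas code along these lines (a hypothetical illustration, not a take-home task; column names are invented):

  import pandas as pd

  def deduplicate_orders(df: pd.DataFrame) -> pd.DataFrame:
      # Keep the latest row per order_id, assuming an updated_at timestamp column
      return (
          df.sort_values("updated_at")
            .drop_duplicates(subset="order_id", keep="last")
            .sort_values("order_id")
            .reset_index(drop=True)
      )

  def test_deduplicate_orders() -> None:
      # Runnable with pytest: the later of two updates to order 1 should win
      df = pd.DataFrame({
          "order_id": [1, 1, 2],
          "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
      })
      out = deduplicate_orders(df)
      assert list(out["order_id"]) == [1, 2]
      assert out.loc[out["order_id"] == 1, "updated_at"].iloc[0] == pd.Timestamp("2024-01-02")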

About the company

Our client is a remote-first SaaS product company based in Berlin, building data-intensive solutions that power advanced analytics and operational decision-making. They work with complex, high-volume data flows and rely on a modern, well-structured engineering stack to keep things reliable, transparent, and scalable.

What makes this environment attractive for engineers:

  • You work end-to-end with real data products, not abstract pipelines.
  • You collaborate with a small, experienced team that values clean code, simplicity, and engineering discipline.
  • You use modern tooling - Python, Prefect, SQL, Metabase, cloud services - without legacy overload or bureaucratic overhead.
  • You get clear requirements, manageable scope, and the ability to make meaningful improvements.
  • You grow by solving practical, non-trivial data challenges that directly impact the product.

Apply for this position