Remote Senior Data Engineer
Job description
Full-time; employment type: not specified
You'll work on production-grade data pipelines that power booking curves, occupancy rates, pickup analytics, and other core reports relied on daily by multiple teams and customers.

Responsibilities
- Design, build, and maintain scalable and reliable data pipelines using our modern data stack (Snowflake, Dagster, and dbt); a minimal sketch follows this list.
- Own end-to-end data flows, from ingestion services (Django & Celery) to analytics-ready models in the data warehouse.
- Contribute to the migration of legacy Django/Celery-based pipelines toward our modern data platform architecture.
- Collaborate closely with a Product Manager, Data Engineers, and Backend Engineers to prioritize integrations and deliver high-impact data capabilities.
- Ensure data quality, reliability, and observability through testing, monitoring, and clear documentation.
- Support multiple internal teams by providing accurate, timely, and well-documented reservation data they can trust.
- Continuously improve scalability, automation, and operational efficiency as data volume and integrations grow.
- Take ownership of features and improvements from design to production, including post-deployment monitoring and iteration.
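To make the stack above concrete, here is a minimal sketch of two Dagster assets feeding an analytics-ready occupancy model, in the spirit of the pipelines described. The asset names (raw_reservations, occupancy_by_day) and the fabricated rows are illustrative assumptions, not the company's actual pipeline.

```python
import pandas as pd
from dagster import asset, materialize


@asset
def raw_reservations() -> pd.DataFrame:
    # Stand-in for data landed by the Django/Celery ingestion services;
    # in production this would read from a Snowflake staging table.
    return pd.DataFrame(
        {
            "reservation_id": [1, 2, 3],
            "check_in": pd.to_datetime(["2024-06-01", "2024-06-01", "2024-06-02"]),
            "nights": [2, 1, 3],
        }
    )


@asset
def occupancy_by_day(raw_reservations: pd.DataFrame) -> pd.DataFrame:
    # Analytics-ready model: expand each stay to one row per occupied night,
    # then count occupied units per calendar day.
    stays = raw_reservations.loc[
        raw_reservations.index.repeat(raw_reservations["nights"])
    ].reset_index(drop=True)
    stays["night"] = stays["check_in"] + pd.to_timedelta(
        stays.groupby("reservation_id").cumcount(), unit="D"
    )
    return stays.groupby("night").size().rename("occupied").reset_index()


if __name__ == "__main__":
    # Materialize both assets in dependency order, as the orchestrator would.
    materialize([raw_reservations, occupancy_by_day])
```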
Desired skills & knowledge

Airflow, Amazon Redshift, Amazon S3, Amazon Web Services, architecture, automation, backend, big data, BigQuery, Celery, cloud computing, data warehousing, Databricks, Datadog, data pipelines, data modeling, data quality, Django, ETL, FastAPI, Flask, information engineering, modeling skills, NumPy, pandas, performance tuning, PostgreSQL, Python, RabbitMQ, writing documentation, Sentry, scalability, Snowflake, software development, streaming, Terraform, testing, web application frameworks, cost efficiency, workflows

Personal skills
- Decision-making, communication, teamwork, sense of responsibility, reliability
Requirements
- 4 years of professional Python experience, ideally in data engineering and/or backend systems.
- Strong experience building and maintaining ETL/ELT pipelines in production environments on modern cloud data warehouses (e.g., Snowflake, Databricks, BigQuery, Redshift).
- Strong experience in data modeling, including analytics-ready schema design, fact/dimension modeling, and performance optimization, along with data testing and documentation practices that ensure long-term data quality, trust, and maintainability.
- Experience working with orchestrated data pipelines (e.g., Dagster, Airflow, or similar tools).
- Experience building backend or ingestion services using Python web frameworks such as Django, FastAPI, or Flask.
- Familiarity with background task processing and asynchronous workflows (e.g., Celery or similar systems); a short sketch follows this list.
- Experience working with cloud infrastructure, preferably AWS (e.g., S3, RDS/Aurora).
- Strong understanding of software design principles and data pipeline architecture.
- Experience working with large datasets using tools such as pandas, polars, or NumPy.
- An analytical mindset with an interest in data quality, KPIs, and data-driven decision-making.
- Excellent communication skills: you can explain complex technical topics clearly to both technical and non-technical stakeholders.
- A high-ownership mentality: you take responsibility for reliability, quality, and long-term maintainability.
- A collaborative, egoless team player who thrives in cross-functional environments.
- Fluent in English and comfortable participating in technical discussions.
- Based in, or able to work within, European time zones (UTC+0 to UTC+2).
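As a hedged illustration of the background-processing requirement above, here is a minimal Celery task sketch. The app name, broker URL, and the feed/loader stubs are invented for this example and do not come from the posting.

```python
import random

from celery import Celery

# RabbitMQ broker URL is a placeholder for local development.
app = Celery("ingestion", broker="amqp://guest@localhost//")


class TransientFeedError(Exception):
    """Stand-in for a flaky upstream partner-feed failure."""


def fetch_reservations(partner_id: int) -> list[dict]:
    # Stub for an HTTP client against a partner booking feed.
    if random.random() < 0.1:
        raise TransientFeedError("feed timed out")
    return [{"partner": partner_id, "reservation_id": 42, "nights": 2}]


def load_into_staging(rows: list[dict]) -> None:
    # Stub: in production this might COPY into a warehouse staging table.
    print(f"loaded {len(rows)} rows into staging")


@app.task(bind=True, max_retries=3)
def ingest_partner_feed(self, partner_id: int) -> int:
    """Pull one partner's feed and load it; retry transient failures."""
    try:
        rows = fetch_reservations(partner_id)
        load_into_staging(rows)
        return len(rows)
    except TransientFeedError as exc:
        # Linear backoff: 60s, 120s, 180s between attempts.
        raise self.retry(exc=exc, countdown=60 * (self.request.retries + 1))
```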
Nice to have

- Hands-on experience with dbt (modeling, testing, documentation, performance tuning).
- Experience with Dagster specifically.
- Experience with, or interest in, modernizing legacy data pipelines.
- Hands-on experience with:
  - Celery & RabbitMQ
  - PostgreSQL
  - Django/FastAPI
  - Infrastructure as Code (OpenTofu/Terraform)
  - Datadog and/or Sentry
- Experience building data observability and monitoring solutions; a small illustration follows this list.
- Familiarity with product-oriented data teams serving multiple stakeholders.
- Located near Mannheim, Germany (bonus points!).
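For the dbt testing and observability items above, a rough sketch of the underlying idea, expressed in plain pandas rather than dbt's own YAML tests since this is only an illustration; the table and column names are made up.

```python
import pandas as pd


def check_not_null(df: pd.DataFrame, column: str) -> list[str]:
    # Analogue of dbt's not_null test: flag missing values in a column.
    n = int(df[column].isna().sum())
    return [f"{column}: {n} null values"] if n else []


def check_unique(df: pd.DataFrame, column: str) -> list[str]:
    # Analogue of dbt's unique test: flag duplicated key values.
    n = int(df[column].duplicated().sum())
    return [f"{column}: {n} duplicate values"] if n else []


# Fabricated reservations table with one null and one duplicate key.
reservations = pd.DataFrame(
    {"reservation_id": [1, 2, 2], "check_in": ["2024-06-01", None, "2024-06-02"]}
)

failures = check_not_null(reservations, "check_in") + check_unique(
    reservations, "reservation_id"
)
for failure in failures:
    print("FAILED:", failure)  # e.g., forward these to Datadog/Sentry alerting
```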