Principal Data Engineer

BestSecret Group
Berlin, Germany
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English, German

Job location

Remote
Berlin, Germany

Tech stack

Airflow
Amazon Web Services (AWS)
Azure
Cloud Engineering
Code Generation
Continuous Integration
Data Architecture
Information Engineering
Data Infrastructure
ETL
Data Systems
Data Warehousing
Relational Databases
Database Theory
Linux
Python
Machine Learning
SQL Databases
Data Processing
Gitlab
Microsoft Fabric
Kubernetes
Information Technology
Deployment Automation
Real Time Data
Kafka
Looker Analytics

Job description

  • Technical Leadership: Provide strategic and technical guidance to the data engineering team. Lead initiatives to improve data quality, reliability, and infrastructure efficiency.

  • Develop & Maintain Advanced Data Systems: Architect and maintain scalable ETL pipelines for batch and near-real-time data, including analytical and machine learning workflows built on a modern technology stack (Airflow, Kubernetes, Starburst, dbt, Kafka); a brief orchestration sketch follows this list.

  • Enterprise Data Architecture: Design and continuously extend our enterprise data warehouse to align with business goals, supporting the company's analytics-driven business philosophy.

  • Optimization and Enhancement: Focus on optimizing and enhancing our data infrastructure using Starburst, Iceberg, Microsoft Fabric, and Looker. Ensure these tools work seamlessly together to deliver high-performance data processing and analytics capabilities.

  • Integration and Collaboration: Collaborate closely with different teams to ensure the integration of these tools aligns with business goals and operational objectives.

  • Performance Improvement: Lead efforts to improve the performance of existing systems, ensuring robust, efficient, and scalable data solutions that meet the needs of diverse stakeholders.

  • Result-Driven Initiatives: Drive initiatives that leverage the full potential of our data stack to deliver actionable insights and support data-driven decision-making processes across the business.

  • Automation & CI/CD Pipeline Management: Lead automation initiatives that enable code generation for pipelines and expand our CI/CD capabilities using GitLab pipelines to strengthen testing and deployment automation.

  • Cross-Team Collaboration: Work closely with cross-functional teams to understand business requirements and produce data solutions that aid in impactful decision making.

  • Engagement and Consultation: Engage with business units to ensure technical solutions support business objectives and improve decision-making processes.
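
As a rough illustration of the orchestration work described above, the following is a minimal sketch of a config-driven daily batch pipeline, assuming Airflow 2.4+ and the BashOperator; the table names, the dbt command, and the schedule are hypothetical placeholders rather than details taken from this posting.

```python
# Minimal sketch: config-driven daily batch ETL DAG (assumes Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical list of source tables; in practice this could be generated from
# metadata, which is the kind of pipeline code generation the role mentions.
SOURCE_TABLES = ["orders", "customers", "inventory"]

with DAG(
    dag_id="daily_batch_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # One extract/load task per configured table.
    extract_tasks = [
        BashOperator(
            task_id=f"extract_{table}",
            bash_command=f"echo 'extracting {table}'",  # placeholder for a real EL job
        )
        for table in SOURCE_TABLES
    ]

    # A single transformation step, e.g. running dbt models after all loads finish.
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="echo 'dbt run --select staging+'",  # placeholder dbt invocation
    )

    # All extract tasks must complete before the transformation runs.
    extract_tasks >> transform
```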

Requirements

  • Advanced Experience in Data Engineering: Proven expertise in designing and implementing data solutions, with a minimum of a Bachelor's degree in a quantitative subject (computer science, physics, mathematics, engineering, etc.).

  • Technical Proficiency: Extensive knowledge of SQL, particularly with dbt, along with an understanding of RDBMS and Columnar Database concepts. Proficiency in Python and experience with Linux platforms are essential.

  • Cloud Architecture Experience: Experience with cloud data architectures (AWS, Azure) preferred.

  • Adaptability and Innovation: Continuous learner with a strong inclination to stay updated with emerging technologies. Capable of pioneering innovative data solutions to maintain competitive advantage.

  • Communication and Leadership: Excellent communication skills in English (German is a plus), with the ability to work independently and collaboratively, leading by example and enhancing team synergy and performance.

