Data Engineer
Job description
In this role, you take ownership of building and scaling a reliable, high-performing data platform that turns complex data into trusted, business-ready insights. You work closely with stakeholders across Analytics, Product, and Engineering, and you make a significant contribution to enabling data-driven decision-making across the company. If you're excited about designing modern data systems, solving complex data challenges, and using data to create real business impact, this is the right place for you.

Your Tasks
You take ownership of designing, building, and scaling robust ETL/ELT pipelines, ensuring reliable and efficient data flow into our BigQuery data warehouse.
You design and maintain scalable data models (e.g., Star/Snowflake schemas) that ensure high performance, data integrity, and usability for downstream analytics.
You transform and structure data into trusted, production-ready datasets, using SQL and dbt to ensure quality, transparency, and maintainability.
You build and operate data-centric microservices in Python and manage cloud-native data infrastructure in GCP, ensuring scalable, reliable, and cost-efficient systems.
You collaborate closely with stakeholders and leverage AI-augmented engineering practices to deliver high-quality datasets and continuously improve data workflows, observability, and system intelligence.

Diversity is important to us. We welcome applications from people regardless of gender, nationality, ethnic and social background, religion, beliefs, disability, age, or sexual orientation and identity.
Requirements
Relevant experience: You have proven experience (4-5 years) building and operating ETL/ELT pipelines and data platforms in a production environment, ideally within a cloud-based ecosystem such as GCP.
Industry knowledge: You have a solid understanding of modern data architectures (e.g., data lakes, data warehouses, streaming systems) and are able to design scalable and reliable data solutions.
Core competency: You have a strong ability to ensure data quality, reliability, and maintainability, applying best practices in data modeling, testing, and observability.
Key technical skills: You have hands-on experience with SQL, dbt, BigQuery, and Python, as well as cloud-native data services (e.g., Pub/Sub, Dataproc) and building data-centric microservices.
Languages: You are proficient in Python and SQL; experience with additional data