Senior Data Engineer
Job description
We are looking for a Senior Data Engineer to shape and scale our open lakehouse data platform built on Snowflake (compute) and Apache Iceberg tables on AWS S3. You'll own end-to-end design and evolution of robust, scalable pipelines that power analytics, ML, and customer-facing features across our SaaS digital advertising products.
You'll collaborate with Product and Engineering to keep systems performant, reliable, and business-aligned. You'll ship hands-on, lead technical design, run code reviews, and mentor others in modern data engineering practices.
To thrive here, you need:
- Deep expertise in open lakehouse architectures - Snowflake as compute + Apache Iceberg on cloud object storage
- Hands-on production experience with:
  - Iceberg catalog management (Apache Polaris, Glue, or Hive)
  - Time-travel / snapshot queries
  - Partition evolution & schema evolution safety
  - Snowflake Iceberg external tables, query tuning, clustering, cost control, RBAC/masking
- Strong production proficiency in dbt - authoring complex models, incremental logic, snapshots, exposures, custom tests, and CI/CD integration using dbt Core + Snowflake/Iceberg adapters
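To make the "schema evolution safety" requirement above concrete, here is a minimal, hedged sketch of the idea: a pipeline guard that accepts only additive changes between schema versions and rejects dropped or retyped columns. The function name, schemas, and type strings are invented for illustration; real Iceberg catalogs enforce richer, type-aware promotion rules.

```python
# Illustrative sketch (not a real Iceberg API): a schema is modeled as a
# simple {column_name: type_string} dict, and evolution is "safe" only if
# it adds columns without dropping or retyping existing ones.

def is_safe_evolution(old_schema: dict[str, str], new_schema: dict[str, str]) -> bool:
    """Return True if new_schema only adds columns relative to old_schema."""
    for col, dtype in old_schema.items():
        if col not in new_schema:      # dropped column -> breaking change
            return False
        if new_schema[col] != dtype:   # retyped column -> breaking change
            return False
    return True                        # extra columns in new_schema are fine

v1 = {"ad_id": "long", "impressions": "long"}
v2 = {"ad_id": "long", "impressions": "long", "clicks": "long"}  # additive: safe
v3 = {"ad_id": "string", "impressions": "long"}                  # retyped: unsafe

print(is_safe_evolution(v1, v2))  # True
print(is_safe_evolution(v1, v3))  # False
```

In practice this kind of check runs in CI against the table's current schema before a deploy, so a breaking change is caught before it reaches production readers.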
Highly valuable
- Experience working with AWS Cloud Platform
- DataOps / IaC (Terraform, dbt Cloud)
- Real-time streaming (Apache Kafka/Flink, AWS Kinesis)
You're passionate about clean, efficient architecture and have a proven track record of building and operating production-grade open lakehouses. You raise the bar through high-quality delivery, team collaboration, and lasting improvements in reliability and engineering culture.
Responsibilities
- Design, build, and maintain scalable data pipelines, architectures, and platforms with a focus on reliability and efficiency
- Implement ETL/ELT processes with rigorous quality checks and governance to ensure data accuracy and consistency
- Mentor data engineers, share best practices, and foster a culture of learning and ownership
- Partner with Engineering, Product, and Business to translate requirements into high-impact data solutions
- Own project execution end to end: scoping, estimation, delivery, and communication
- Champion testing, documentation, and observability through design reviews and technical leadership
- Stay ahead of industry trends in cloud data, big data processing, and real-time analytics
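The "rigorous quality checks" mentioned above can be sketched as a small, declarative validator that runs against a batch before loading and fails fast on violations. The check names, row shape, and helper below are all hypothetical; in production this role would typically express the same idea as dbt tests.

```python
# Hypothetical sketch: row-level data-quality checks declared as named
# predicates, returning human-readable violations for observability.
from typing import Callable

Row = dict[str, object]

def run_checks(rows: list[Row], checks: dict[str, Callable[[Row], bool]]) -> list[str]:
    """Return a list of violations; an empty list means the batch passes."""
    violations = []
    for i, row in enumerate(rows):
        for name, check in checks.items():
            if not check(row):
                violations.append(f"row {i}: failed {name}")
    return violations

batch = [
    {"ad_id": 1, "spend": 12.5},
    {"ad_id": None, "spend": -3.0},  # violates both checks below
]
checks = {
    "ad_id_not_null": lambda r: r["ad_id"] is not None,
    "spend_non_negative": lambda r: r["spend"] >= 0,
}
print(run_checks(batch, checks))
```

Keeping checks as named, composable predicates makes the failure messages themselves observable artifacts that can be logged or alerted on.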
Requirements
- 5+ years in data engineering, with hands-on production experience building open lakehouses using Snowflake + Apache Iceberg
- Strong production track record with dbt - complex models, dependencies, incremental logic, custom tests, CI/CD
- Advanced SQL + Python; you build idempotent, observable, schema-safe pipelines
- Deep knowledge of data modelling trade-offs, distributed systems, and big data frameworks
- Excellent communicator - you distil complex topics for technical and non-technical audiences with empathy
- Proven collaborator with strong problem-solving, mentoring, and project management skills
- (Bonus) Built and maintained a production-grade open lakehouse from scratch (Iceberg + catalog + compute)
- (Bonus) Familiar with DataOps, IaC, or real-time streaming pipelines.
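The "idempotent pipelines" requirement above can be illustrated with a minimal sketch: loading is an upsert keyed on a primary key, so replaying the same batch leaves the target unchanged. The in-memory dict stands in for a real warehouse table, and all names are invented for the example; a production pipeline would express this as a keyed MERGE.

```python
# Hedged sketch of idempotent loading: insert-or-overwrite by key, so a
# retried or replayed batch is a no-op rather than a source of duplicates.

def upsert(target: dict[int, dict], batch: list[dict], key: str = "ad_id") -> dict[int, dict]:
    """Merge batch into target by key; re-running the same batch changes nothing."""
    for row in batch:
        target[row[key]] = row  # insert or overwrite -> safe to replay
    return target

target: dict[int, dict] = {}
batch = [{"ad_id": 1, "clicks": 10}, {"ad_id": 2, "clicks": 4}]

upsert(target, batch)
upsert(target, batch)  # replay: no duplicates, same final state
print(len(target))     # 2
```

The same property is what dbt's incremental models with a unique key aim for: reruns converge to one row per key instead of appending duplicates.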