Senior Data Engineer
Job description
The Senior Data Engineer will build and maintain data platform infrastructure, ensure data reliability across the business, drive architecture decisions, and enhance monitoring of data systems while partnering cross-functionally. This is a unique opportunity to work on infrastructure that sits at the center of how dbt Labs runs as a business - with executive visibility, deep cross-functional reach, and the added dimension of dogfooding the very products we build. If you're excited by the challenge of solving hard platform problems with cutting-edge tooling and making a direct, lasting impact on company growth, this role is for you. In this role, you can expect to:
- Own the architecture and operations of our data lakehouse, including object storage, table formats, maintenance, and query engine integrations
- Build and maintain the infrastructure layer that transforms and serves data reliably at scale, from raw landing zones through to curated, queryable datasets
- Partner with product engineering to establish data contracts and schema standards around event telemetry, ensuring data arrives in the lakehouse in a form that's reliable and ready for downstream use
- Drive decisions on data platform architecture, tooling, and engineering best practices across storage, compute, and access layers
- Enhance observability and monitoring of data infrastructure, including pipeline reliability, data freshness, and system performance
- Partner cross-functionally with teams across Analytics, Infrastructure, and Product to understand data needs and deliver impactful platform solutions
- Provide product feedback by dogfooding new data infrastructure and AI technology
Requirements
- Expert-level SQL and Python skills
- 5+ years of experience as a data engineer, and 8+ years of total experience in software engineering (including data engineering roles)
- Strong knowledge of data lakehouse architecture, including storage layer design, table formats, and compute/query engine integration
- Experience defining and enforcing data contracts or schema standards in collaboration with upstream engineering teams
- Hands-on experience with modern orchestration tools like Airflow, Dagster, or Prefect
- Working knowledge of cloud infrastructure tooling, including Terraform, Helm, and Kubernetes
- Hands-on experience running Apache Spark in production, including job tuning, cluster sizing, and managing failures at scale
- A bias for action: able to stay focused and prioritize effectively in an ambiguous environment
You'll stand out if you have:
- Experience developing and scaling dbt projects
- Hands-on experience with Apache Iceberg or other open table formats in production, including multi-region or multi-cloud deployments
- Experience designing platform infrastructure that serves multiple downstream teams and use cases
- Experience working in a SaaS or high-growth tech environment
Benefits & conditions
We offer competitive compensation packages commensurate with experience, including salary, equity, and, where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs' total rewards during your interview process. In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.
- The typical starting salary range for this role is:
- $147,000 - $178,000
- The typical starting salary range for this role in the select locations listed is:
- $163,000 - $198,000
- Unlimited vacation time with a culture that actively encourages time off
- 401k plan with 3% guaranteed company contribution
- Comprehensive healthcare coverage
- Generous paid parental leave
- Flexible stipends for:
- Health & Wellness
- Home Office Setup
- Cell Phone & Internet
- Learning & Development
- Office Space