Data Engineer / Databricks / End-to-End Ownership

Motion Recruitment Partners LLC.
Chicago, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Chicago, United States of America

Tech stack

Artificial Intelligence
Cloud Database
Continuous Integration
Information Engineering
Data Infrastructure
Python
Operational Databases
Raw Data
SQL Databases
Data Streaming
Cloud Platform System
Spark
Git
Data Lake
Data Pipelines
Databricks

Job description

A well-established, global professional services organization is expanding its Data & AI engineering team and hiring a Data Engineer to support a large-scale cloud data platform initiative. This role is based in Chicago and focuses on building modern data pipelines using Azure Databricks, Python, SQL, and Spark in a production environment.

This is a hands-on engineering opportunity for someone who enjoys owning data pipelines end-to-end and wants exposure to enterprise-scale data challenges. You'll work alongside experienced engineers and business partners to turn raw data into trusted, analytics-ready datasets that power reporting, insights, and AI use cases.

This team is early in its Databricks journey, which means your work will matter immediately. You'll help shape how pipelines are built, how data is governed, and how the platform evolves over time. It's ideal for a strong mid-level engineer who wants to deepen their Databricks expertise while working on a highly visible, long-term transformation initiative.

Tech Breakdown

  • 70% Azure Databricks / Spark / Delta Lake
  • 20% Python & SQL development
  • 10% CI/CD, testing, and optimization

Daily Responsibilities

  • 80% Hands-On Engineering
  • 10% Collaboration with BI, AI, and product teams
  • 10% Documentation, reviews, and platform improvements

Requirements

  • 6+ years of hands-on data engineering experience
  • Strong experience building pipelines using Databricks
  • Proficiency in Python and SQL for production data processing
  • Working knowledge of Apache Spark and Delta Lake
  • Experience with cloud data storage (ADLS, object storage)
  • Experience with data modeling and schema design
  • Familiarity with Git, CI/CD concepts, and basic testing/monitoring

Desired Skills & Experience

  • Azure Databricks in an enterprise environment
  • Experience with Unity Catalog and Databricks Jobs/Workflows
  • Exposure to streaming or CDC pipelines
  • Experience supporting BI or AI use cases
  • Familiarity with Infrastructure as Code

Apply for this position