Data Pipeline & Connectivity Engineer

Databricks
Charlotte, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$180K

Job location

Charlotte, United States of America

Tech stack

API
Amazon Web Services (AWS)
Information Engineering
Microsoft Dynamics GP
Netsuite
SAP Applications
Data Streaming
Data Ingestion
Azure
Data Lake
PySpark
Infor
Data Pipelines
API Management
Legacy Systems
Databricks

Job description

Connectivity & Data Ingestion

  • Design and implement robust ingestion pipelines across highly fragmented environments (multiple ERPs, isolated networks, legacy systems)

  • Build connectivity across:

    • ERP systems (e.g., Great Plains, SAP, Infor, custom systems)

    • APIs, flat files, streaming sources, and third-party platforms
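The connectivity scope above (ERPs, APIs, flat files, streams) can be sketched as a source-agnostic connector pattern. All names here (Source, FlatFileSource, ApiSource, ingest) are illustrative, not from the posting; in a real Databricks build, each connector would feed PySpark and land records in Delta Lake rather than an in-memory list.

```python
# Minimal sketch of a source-agnostic ingestion framework (hypothetical names).
# One connector class per upstream system; the pipeline treats them uniformly.
from abc import ABC, abstractmethod
from typing import Dict, Iterable, List


class Source(ABC):
    """Common interface for every upstream system (ERP, API, flat file, stream)."""

    @abstractmethod
    def extract(self) -> Iterable[dict]:
        """Yield raw records from the upstream system."""


class FlatFileSource(Source):
    """Stand-in for a flat-file export (e.g., a nightly CSV drop)."""

    def __init__(self, rows: List[dict]):
        self.rows = rows

    def extract(self) -> Iterable[dict]:
        return iter(self.rows)


class ApiSource(Source):
    """Stand-in for a paginated REST API."""

    def __init__(self, pages: List[List[dict]]):
        self.pages = pages

    def extract(self) -> Iterable[dict]:
        for page in self.pages:
            yield from page


def ingest(sources: Dict[str, Source]) -> List[dict]:
    """Pull from every registered source and tag each record with its origin,
    so downstream layers can trace lineage back to the source system."""
    records = []
    for name, source in sources.items():
        for rec in source.extract():
            records.append({**rec, "_source": name})
    return records


if __name__ == "__main__":
    sources = {
        "great_plains": FlatFileSource([{"invoice": 1}]),
        "netsuite": ApiSource([[{"invoice": 2}], [{"invoice": 3}]]),
    }
    out = ingest(sources)
    print(len(out))  # 3
```

Keeping extraction behind one interface is what lets a single pipeline span dozens of disconnected systems: adding a new ERP means writing one connector class, not a new pipeline.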

Requirements

Must-Have Experience

  • 5-10+ years in data engineering / pipeline engineering

  • Deep hands-on experience with:

    • Databricks (required)

    • PySpark

    • Delta Lake

Proven experience building:

  • Data pipelines across multiple disconnected systems
  • Scalable ingestion frameworks

Strongly Preferred

  • Experience in complex, multi-entity environments (PE-backed, M&A, roll-ups)

  • ERP data integration experience:

    • Great Plains, SAP, Infor, NetSuite, etc.

Experience with:

  • AWS or Azure data ecosystems
  • API integrations and event-based pipelines
  • Data orchestration tools

Builder Mindset

  • Thrives in greenfield and messy environments
  • Comfortable operating with incomplete data and evolving requirements
  • Can own problems end-to-end, not just execute tickets
  • Balances speed with scalable architecture

About the company

We are a rapidly growing, private equity-backed services platform operating across dozens of acquired business units. Our environment consists of 50+ disconnected systems and ERPs, each operating independently across separate networks. We are building a modern, Databricks-first data and AI platform to unify operations, enable real-time insights, and power next-generation AI use cases.

This is a foundational engineering role focused on solving one of the hardest problems in modern data: connecting fragmented systems into a scalable, governed data platform. You will own the design and build of data pipelines, ingestion frameworks, and connectivity patterns that bring disparate environments together in Databricks, laying the groundwork for data products, analytics, and AI. This is not a support role. This is a hands-on builder role for someone who wants to architect and implement at scale.

Apply for this position