Data Pipeline & Connectivity Engineer
Job description
Connectivity & Data Ingestion
- Design and implement robust ingestion pipelines across highly fragmented environments (multiple ERPs, isolated networks, legacy systems)
- Build connectivity across:
  - ERP systems (e.g., Great Plains, SAP, Infor, custom systems)
  - APIs, flat files, streaming sources, and third-party platforms
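As a flavor of the ingestion work described above, here is a minimal sketch of normalizing records from two disconnected sources (a CSV flat file, as a legacy ERP might export, and a JSON API payload) into one common schema. It uses only the Python standard library; all field names and the schema itself are hypothetical, not the team's actual pipeline:

```python
import csv
import io
import json

# Hypothetical common schema used downstream: source, order_id, amount

def from_flat_file(text):
    """Parse a CSV export (e.g. from a legacy ERP) into common records."""
    return [
        {"source": "erp_csv", "order_id": row["OrderID"], "amount": float(row["Amount"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

def from_api_payload(payload):
    """Parse a JSON API response into the same common records."""
    return [
        {"source": "api", "order_id": item["id"], "amount": float(item["total"])}
        for item in json.loads(payload)["orders"]
    ]

# Example inputs from two disconnected systems
csv_text = "OrderID,Amount\nA-1,100.50\nA-2,20.00\n"
api_text = '{"orders": [{"id": "B-9", "total": "75.25"}]}'

records = from_flat_file(csv_text) + from_api_payload(api_text)
```

At production scale this normalization step would typically run on Databricks with PySpark, landing the unified records in Delta Lake tables rather than in-memory lists.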
Requirements
Must-Have Experience
- 5-10+ years in data engineering / pipeline engineering
- Deep hands-on experience with:
  - Databricks (required)
  - PySpark
  - Delta Lake
- Proven experience building:
  - Data pipelines across multiple disconnected systems
  - Scalable ingestion frameworks
Strongly Preferred
- Experience in complex, multi-entity environments (PE-backed, M&A, roll-ups)
- ERP data integration experience: Great Plains, SAP, Infor, NetSuite, etc.
- Experience with:
  - AWS or Azure data ecosystems
  - API integrations and event-based pipelines
  - Data orchestration tools
- Builder mindset: thrives in greenfield and messy environments
- Comfortable operating with incomplete data and evolving requirements
- Can own problems end-to-end, not just execute tickets
- Balances speed with scalable architecture