TELECOMMUTE Data Engineer III

VACO LLC

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote

Tech stack

Data analysis
Test Automation
Azure
Business Intelligence
Big Data
Information Systems
Data Architecture
Information Engineering
Data Governance
Data Infrastructure
ETL
Data Security
Data Warehousing
Hive
Python
SQL Azure
Object-Oriented Software Development
Performance Tuning
Power BI
SQL Server Integration Services
Unstructured Data
Microsoft Power Automate
SQL Optimization
Snowflake
Spark
Data Lake
PySpark
Information Technology
Data Lineage
Data Management
Data Delivery
Data Pipelines
Databricks
Web API

Job description

A leading healthcare services organization is seeking a Senior Data Engineer to join a high-performing data and analytics team. This role is central to the company's data infrastructure: designing, building, and maintaining scalable data pipelines and models that power analytics and reporting across the enterprise. You'll work closely with BI developers, product owners, and business stakeholders to ensure data is reliable, well-governed, and built to scale.

Data Pipeline Design & Development

  • Design, build, and maintain scalable ETL/ELT pipelines using Databricks and Azure technologies
  • Ingest data from diverse source systems, applying transformation and data quality logic
  • Automate manual processes and optimize data delivery for reliability and performance
  • Architect solutions for greater scalability and long-term maintainability

Data Modeling & Analytics Enablement

  • Design and implement analytics-ready data models (star, snowflake, dimensional) to support BI and reporting consumption
  • Build and maintain Delta tables, Spark Declarative Pipelines, and Jobs/Workflows in Databricks
  • Assemble large, complex datasets that meet functional and non-functional business requirements

Data Quality & Governance

  • Create automated tests to continuously monitor data model quality
  • Manage metadata, data lineage, and pipeline dependencies across development and production environments
  • Ensure data security, separation, and compliance with HIPAA, HITECH, and applicable regulations

Collaboration & Stakeholder Support

  • Partner with data architects, DBAs, BI developers, and data scientists to align solutions with enterprise architecture standards
  • Support stakeholders with data-related technical issues and infrastructure needs
  • Mentor junior data engineers and contribute to team technical standards and best practices

Requirements

  • 10+ years in a Data Engineering or BI Developer role
  • Bachelor's degree in Computer Science, Statistics, Information Systems, or a related quantitative field (or equivalent experience)

Technical Requirements

  • Advanced SQL proficiency, complex query development, performance tuning, and optimization in Azure SQL and distributed query environments, including SSIS
  • Hands-on experience building ETL/ELT pipelines in Databricks using Apache Spark (PySpark / Spark SQL)
  • Strong working knowledge of Azure cloud services: Databricks, Azure Data Factory, Azure SQL DB, Azure Data Lake Storage Gen2, Logic Apps
  • Experience with Python or Scala for object-oriented and scripting workflows
  • Demonstrated experience with structured, semi-structured, and unstructured data

Architecture & Design

  • Deep understanding of data lake and lakehouse architecture, including Delta Lake, partitioning, schema enforcement, and incremental ingestion patterns
  • Proven ability to design dimensional data models that support Power BI and other BI consumption layers
  • Experience with data pipeline architecture design at enterprise scale
  • Familiarity with Spark Declarative Pipelines and Databricks Jobs/Workflows beyond basic implementation
  • Experience developing or maintaining APIs and web services
  • Background working in regulated healthcare environments (HIPAA/HITECH compliance experience)
  • Experience with workload orchestration and pipeline dependency management across environments
  • Familiarity with data quality frameworks and automated testing patterns for pipeline validation
