Data Engineer - Remote

UnitedHealth Group Inc
Washington, United States of America
7 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$91,700 – $163,700

Job location

Remote
Washington, United States of America

Tech stack

Airflow
Amazon Web Services (AWS)
Data analysis
Application Integration Architecture
Automation of Tests
Azure
Cloud Computing
Code Review
Continuous Integration
Information Engineering
Data Governance
ETL
Data Security
Data Systems
Data Warehousing
DevOps
Distributed Systems
Python
Metadata Standards
Online Analytical Processing
Online Transaction Processing
Operational Databases
Performance Tuning
Standard SQL
Software Engineering
SQL Stored Procedures
Workflow Management Systems
Enterprise Data Management
Google Cloud Platform
Cloud Platform System
SQL Optimization
Fast Healthcare Interoperability Resources
Snowflake
Containerization
Data Lake
PySpark
Kubernetes
Data Lineage
Collibra
Health Level Seven International
Integration Frameworks
Machine Learning Operations
Software Version Control
Data Pipelines
Docker
Databricks
Programming Languages

Job description

  • Design, develop, and maintain scalable data pipelines using Python, PySpark, and other modern programming languages to support both batch and streaming workloads
  • Build and optimize data processing frameworks on cloud platforms such as Databricks or Snowflake, ensuring performance, reliability, and cost efficiency
  • Design and implement robust data models, including transactional (OLTP) and dimensional (OLAP) schemas, to support analytics, reporting, and application integration
  • Develop high-quality SQL code, including complex queries, stored procedures, and views, with a focus on performance tuning and efficient data access patterns
  • Create and manage workflow orchestration using Apache Airflow or similar tools, ensuring reliable scheduling, dependency management, and monitoring
  • Implement and enforce data governance and metadata standards through tools such as Microsoft Purview, including data lineage, classification, cataloging, and security policies
  • Build automated data quality and validation frameworks to ensure accuracy, completeness, and reliability of production datasets
  • Collaborate with cross-functional teams, including data architects, analysts, scientists, and business stakeholders, to understand requirements and deliver scalable, well-designed data solutions
  • Lead technical design sessions and code reviews, promoting engineering best practices, reusability, and maintainability
  • Support cloud infrastructure and DevOps practices, including CI/CD pipelines, version control, testing automation, and environment management
  • Monitor and troubleshoot production data pipelines, proactively addressing issues, performance bottlenecks, and system failures
  • Contribute to the evolution of the enterprise data platform, recommending tools, frameworks, and architectures to improve scalability and efficiency

To support our mission, OSIT has initiated a multi-year modernization program aimed at updating and enhancing enterprise technology systems in accordance with modern design standards.

You'll be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as provide development for other roles you may be interested in.

Requirements

  • 7+ years of experience in data engineering, software engineering, or similar disciplines
  • Hands-on experience with Databricks or Snowflake
  • Experience with orchestration tools such as Apache Airflow
  • Experience working with cloud ecosystems (Azure preferred; AWS/Google Cloud Platform acceptable)
  • Advanced SQL skills and experience with OLTP and OLAP data modeling
  • Solid understanding of modern data warehousing, data lake, and ELT/ETL design patterns
  • Solid programming expertise in Python, PySpark, or similar languages
  • If you are offered this position, you will be required to provide extensive personal information to obtain and maintain a suitability or determination of eligibility for a Confidential/Secret or Top Secret security clearance as a condition of your employment
  • United States Citizenship
  • Healthcare industry experience, including claims, clinical, FHIR, HL7, or provider data
  • Experience with containerization (Docker, Kubernetes) for data workloads
  • Experience supporting machine learning workflows or analytical data science pipelines
  • Familiarity with data governance tools, especially Microsoft Purview
  • Knowledge of distributed computing concepts and performance tuning

*All employees working remotely will be required to adhere to UnitedHealth Group's Telecommuter Policy

Benefits & conditions

Pay is based on several factors, including but not limited to local labor markets, education, work experience, and certifications. In addition to your salary, we offer a comprehensive benefits package, incentive and recognition programs, equity stock purchase, and 401(k) contribution (all benefits are subject to eligibility requirements). No matter where or when you begin a career with us, you'll find a far-reaching choice of benefits and incentives. The salary for this role will range from $91,700 to $163,700 annually based on full-time employment. We comply with all minimum wage laws as applicable.

Apply for this position