Lead Data Engineer

ECHOSTAR
Englewood, United States of America
29 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$110K – $157K

Job location

Englewood, United States of America

Tech stack

Query Performance
Third Normal Form
Agile Methodologies
Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Cloud Engineering
Data as a Service
Information Engineering
ETL
Data Systems
Data Warehousing
DevOps
Hive
Identity and Access Management
SQL Databases
Automated Data Processing (ADP)
Data Processing
Snowflake
Generative AI
GitLab
Git
CloudFormation
Data Lake
PySpark
Information Technology
Data Management
CloudWatch
Terraform
Software Version Control
Data Pipelines
Databricks
Control-M

Job description

Candidates must be willing to participate in at least one in-person interview, which may include a live whiteboarding or technical assessment session.

Your mission as a Lead Data Engineer involves maintaining and optimizing petabyte-scale data architectures within a complex enterprise cloud environment. This role ensures the continuous reliability and quality of data used for high-stakes business reporting by leading operational support and root cause analysis for critical ETL jobs. By integrating Generative AI tools and Infrastructure-as-Code, you will drive the next generation of data engineering efficiency across our company.

What Success Looks Like (Objectives)

  • Monitor and provide high-level operational support for large enterprise data warehouse systems, resolving complex ETL failures and ensuring data quality
  • Maintain and optimize scalable batch and streaming data pipelines on the AWS platform, leveraging S3, Glue, and Snowflake/Databricks for peak performance
  • Lead incident management and root cause analysis initiatives to develop robust operational metrics and drive continuous improvement of production systems
  • Partner with cross-functional Agile teams, including Data Scientists and DevOps, to implement sophisticated ETL/ELT transformation logic
  • Manage CI/CD pipelines and Infrastructure-as-Code using GitLab while exploring Generative AI integration points like Amazon Q for pipeline optimization
  • Participate in shift-based working hours and on-call support to guarantee the continuous reliability and performance of enterprise data systems

Requirements

  • Expertise in architecting high-performance data pipelines using PySpark and Spark SQL with a focus on cost-optimization and query performance tuning
  • Advanced proficiency in developing parameterized SQL scripts and orchestration logic to automate sophisticated end-to-end data processing and business reporting
  • Strategic mastery of AWS data services (EC2, EMR, Glue, S3) and core cloud architecture components including VPC, IAM, CloudWatch, and Data Lake frameworks
  • Professional expertise in designing and managing workflow orchestration using tools such as Control-M or Apache Airflow to support automated data processing
  • Deep understanding of dimensional and 3NF data modeling concepts to ensure high-quality and performant data solutions
  • AI Innovation skills to evaluate findings from POC initiatives and apply tools like Amazon Q, Gemini, or Databricks Genie Rooms to data engineering workflows
  • Critical experience supporting the production operations of large-scale Enterprise Data Warehouses and architecting petabyte-scale ingestion pipelines
  • Proven ability to evaluate and communicate findings from Proof-of-Concept (POC) initiatives to stakeholders
  • Strong analytical and problem-solving skills with a track record of driving continuous improvement in production systems
  • Bachelor's degree in Computer Science or a related technical field
  • 5+ years of professional experience in the operation and production support of large Enterprise Data Warehouses
  • Hands-on experience with AWS data services and data platforms (Snowflake or Databricks)
  • Proficiency with Git/GitLab for version control and CI/CD processes
  • Knowledge of Infrastructure-as-Code tools such as Terraform or AWS CloudFormation

Benefits & conditions

Compensation: $110,100.00 – $157,300.00 per year. We offer versatile health perks, including flexible spending accounts, an HSA, a 401(k) plan with company match, an ESPP, career development opportunities, and a flexible time-away plan. All benefits can be viewed here: EchoStar Benefits.

The base pay range shown is a guideline. Individual total compensation will vary based on factors such as qualifications, skill level, and competencies; compensation is based on the role's location and is subject to change based on work location.

Candidates need to successfully complete a pre-employment screen, which may include a drug test and DMV check. Our company is committed to fostering an inclusive and equitable workplace where every individual has the opportunity to succeed. We are dedicated to providing individuals with criminal or arrest records a fair chance of employment in accordance with local, state, and federal laws.

The posting will remain active for a minimum of 3 days and will be extended in 3-day increments until the position is filled.

About the company

EchoStar is reimagining the future of connectivity. Our business spans satellite television service, live-streaming and on-demand programming, smart home installation services, and mobile plans and products.

Apply for this position