Sr Data Engineer

HR RECRUITING SERVICES LLC
Glendale, United States of America

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Shift work
Languages
English
Experience level
Senior

Job location

Glendale, United States of America

Tech stack

Java
API
Airflow
Amazon Web Services (AWS)
Data analysis
Google BigQuery
Cloud Computing
Databases
Information Engineering
Data Governance
Data Infrastructure
Data Systems
Distributed Computing Environment
Python
Online Analytical Processing
Online Transaction Processing
Software Engineering
SQL Databases
Data Streaming
Snowflake
Spark
Data Lake
Core Data
GraphQL
API Design
Data Pipelines
Databricks

Job description

We are seeking a Senior Data Engineer based in Glendale, CA, to play a pivotal role in building and maintaining a robust, scalable, high-performance Core Data platform. This position requires a hands-on expert who thrives in a dynamic, innovation-driven environment and contributes to the development of real-time and batch data pipelines using modern cloud-native technologies. You will collaborate closely with product managers, architects, and cross-functional engineering teams to deliver reliable, high-quality data solutions that power critical business functions across Engineering, Data Science, Analytics, and Operations. Your work will directly impact the integrity, performance, and scalability of our data infrastructure, ensuring adherence to SLAs and industry best practices. The role is onsite four days per week, offering the opportunity to be deeply embedded in a collaborative, high-impact technical culture.

Contract terms: 12 months
Working hours: 8 hours per day, 40 hours per week

Responsibilities

  • Design, build, and maintain scalable data pipelines using Python, AWS, Spark, Databricks, and Airflow

  • Develop and manage real-time streaming data pipelines leveraging Delta Lake and other modern data streaming technologies

  • Build and maintain APIs (including GraphQL) to expose data assets to downstream applications and services

  • Collaborate with product managers, architects, and engineers to define platform requirements and drive technical execution

  • Establish and enforce internal and external standards for pipeline configuration, naming conventions, and data governance

  • Ensure operational excellence, data accuracy, and reliability across all datasets to meet SLAs and stakeholder expectations

  • Contribute to documentation and continuous improvement of data platform architecture and best practices

Requirements

  • 5+ years of data engineering experience, specifically developing large-scale data pipelines with Spark, Airflow, Databricks or Snowflake, SQL, and Python

  • Proficiency in Python, Java, or Scala, with a strong foundation in software engineering principles

  • Hands-on experience with distributed processing systems such as Apache Spark in production

  • Proven experience with data pipeline orchestration tools, particularly Apache Airflow

  • Demonstrated experience with cloud-based MPP databases such as Snowflake, BigQuery, or Databricks

  • Experience developing APIs using GraphQL or similar technologies

  • Deep understanding of OLTP vs OLAP systems and their respective use cases

  • Strong background in distributed data processing, data service software engineering, or data modeling

Benefits & conditions

  • Three levels of medical insurance for you and your family

  • Dental insurance (family)

  • 401K

  • Overtime

  • Sick leave: under California's policy, you accrue 1 hour for every 30 hours worked, up to 48 hours. If you are based in a different state, please inquire about that state's sick leave policy.
