Software Engineers
Job description
The Media and Session Data Product (MSDP) team, part of the Data organization within DEEPT, is in search of a Senior Software Engineer. As a member of the MSDP team you will build a set of data pipelines and datasets that are vital to our success, enabling dozens of engineering and analytical teams to unlock the power of media session data to drive key business decisions, and giving engineering, analytics, and operational teams the critical information they need to scale the largest streaming service. Expanding, scaling, and standardizing our core foundational principles through consistent observability, lineage, data quality, logging, and alerting across all engineering teams in the Data organization is essential to creating a single pane of glass. The MSDP team is looking to grow its team of world-class Software Engineers who share its charisma and enthusiasm for making a positive impact!
Responsibilities and Duties of the Role:
- Design, develop, and optimize large-scale batch and real-time data pipelines using Spark Structured Streaming on Databricks (a minimal sketch of such a pipeline follows this list).
- Write production-grade, maintainable code primarily in Scala and PySpark (Scala preferred).
- Implement complex data transformations using Spark SQL and core Spark APIs.
- Own the end-to-end lifecycle of data products, from ingestion and transformation through orchestration to consumption by analytics, ML, and reporting teams.
- Design and maintain workflow orchestration using Apache Airflow.
- Build and manage infrastructure on AWS, applying expertise in cloud-native data architectures.
- Collaborate with Data Scientists, Analysts, and Software Engineers to productionize machine learning models and analytical dashboards.
- Implement data quality, monitoring, alerting, and lineage solutions.
- Tune Spark job performance, optimize clusters on Databricks, and drive cost optimization on AWS.
- Lead technical design discussions, code reviews, and mentor mid/junior engineers.
- (Good to have) Work with Snowflake for data warehousing, cost optimization, and modern data sharing use cases.
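To ground the first responsibility above, here is a minimal PySpark Structured Streaming sketch of the kind of pipeline this role involves: reading events from a stream, parsing them, and appending to a Delta table. The broker, topic, schema, and paths are hypothetical placeholders for illustration, not details of the actual MSDP stack.

```python
# Minimal, illustrative sketch only: topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("session-events-sketch").getOrCreate()

# Hypothetical session-event schema; the real media session schema is internal.
event_schema = StructType([
    StructField("session_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts_ms", LongType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "session-events")             # placeholder topic
    .load()
    # Kafka values arrive as bytes; decode and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_time", (F.col("ts_ms") / 1000).cast("timestamp"))
)

(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/session-events")  # placeholder
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .start("/tmp/delta/session_events")  # placeholder table path
    .awaitTermination()
)
```

The checkpoint location is what lets the query recover its progress across restarts and achieve exactly-once delivery into Delta, which is why each production stream gets its own durable checkpoint path.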
Requirements
- 5+ years of hands-on data engineering experience.
- Proficient in Spark Structured Streaming and the Databricks platform (Unity Catalog, Delta Lake, Workflows, cluster management).
- Strong programming skills in Scala (must) and Python/PySpark (must).
- Advanced SQL skills with experience in writing complex, optimized queries.
- Hands-on experience building and scheduling workflows in Apache Airflow (a minimal DAG sketch follows this list).
- Working knowledge of AWS services for data engineering.
- Solid understanding of data modeling (star schema, slowly changing dimensions, data vault, etc.).
- Experience with CI/CD for data pipelines (Git, Jenkins, GitHub Actions, etc.).
- Familiarity with Snowflake is a strong plus.
- Experience with streaming technologies beyond Spark (Kinesis, Flink) is a plus.
- Excellent problem-solving, communication, and leadership skills.
- BA/BS degree required.
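As a small illustration of the Airflow requirement above, the sketch below shows a daily DAG that triggers a Spark job via spark-submit. The DAG id, schedule, and script path are assumptions for the example (Airflow 2.x style); in practice a managed Databricks operator might be used instead.

```python
# Illustrative Airflow 2.x DAG; dag_id, schedule, and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="session_data_daily",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # one run per day
    catchup=False,                 # skip backfilling past runs
) as dag:
    # A bash call to spark-submit keeps the sketch self-contained; a
    # Databricks-specific operator would be the likelier production choice.
    run_pipeline = BashOperator(
        task_id="run_session_pipeline",
        bash_command="spark-submit /opt/jobs/session_pipeline.py",  # placeholder
    )
```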
Benefits & conditions
The hiring range for this position in Burbank, CA is $141,900 - $190,300 per year. The base pay actually offered will take into account internal equity and may vary depending on the candidate's geographic region, job-related knowledge, skills, and experience, among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.