Senior Software Engineer - Real-Time Ingestion

Yahoo
Sunnyvale, United States of America
16 days ago

Role details

Contract type
Permanent contract
Employment type
Part-time (≤ 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$267K

Job location

Remote
Sunnyvale, United States of America

Tech stack

Java
Amazon Web Services (AWS)
Apache HTTP Server
Google BigQuery
Cloud Computing
Information Engineering
Data Governance
Information Leak Prevention
Serialization
Data Systems
Database Queries
Distributed Data Store
Amazon DynamoDB
Data Flow Control
Protocol Buffers
Identity and Access Management
JSON
Python
Enterprise Messaging Systems
Open Source Technology
Operational Databases
Software Engineering
Data Streaming
Parquet
Data Processing
Google Cloud Platform
Spark
Amazon EMR
Change Data Capture
Event Driven Architecture
Information Technology
Apache Flink
Cassandra
Avro
Kafka
Data Management
Event Sourcing
Stream Processing
Data Pipelines
Apache Beam
Confluent

Job description

As a Senior Data Engineer in the Consumer Data Organization (CDO), you will design and implement streaming data pipelines that process billions of user signals daily, maintaining a real-time view of 2.5B+ profiles. Your pipelines handle critical third-party ID mutations, behavioral signals, and identity updates with sub-second latency, ensuring data freshness for downstream activation and monetization use cases worth hundreds of millions in annual revenue.

You will build scalable Kafka-based streaming infrastructure processing millions of events per second, implementing Apache Beam/Dataflow jobs for stream processing, enrichment, and validation. Your work requires balancing extreme throughput requirements, data quality guarantees, and operational reliability while ensuring privacy-compliant handling of sensitive user data.

This role demands expertise in real-time streaming architectures, distributed messaging systems (Kafka, Pub/Sub), and production data engineering at massive scale. You will collaborate closely with Storage, Privacy, and Platform teams to ensure efficient data flow from ingestion to activation.

Responsibilities

  • Develop and optimize real-time streaming pipelines for third-party ID mutations, behavioral signals, and user event ingestion
  • Build scalable Kafka-based data pipelines handling millions of events per second with exactly-once processing semantics
  • Implement Apache Beam jobs on Google Cloud Dataflow for stream processing, enrichment, validation, and transformation of user signals
  • Design comprehensive monitoring and data quality checks ensuring pipeline reliability, data freshness, and SLA compliance
  • Collaborate with Storage team on efficient Cloud Spanner write patterns, schema design, and high-throughput mutation strategies
  • Optimize pipeline performance to reduce lag, improve throughput, and minimize processing costs in GCP infrastructure
  • Implement dead letter queues, retry logic, and error handling strategies ensuring data loss prevention
  • Troubleshoot production data issues including pipeline failures, data quality problems, and performance degradation
  • Work with Privacy team to ensure compliant data handling, PII protection, and sensitive data detection in real-time streams
  • Create comprehensive documentation for pipeline architecture, operational runbooks, and on-call procedures
  • Participate in on-call rotation supporting production streaming pipelines with 99.9% uptime SLA
  • Partner with upstream data producers to ensure consistent event schemas and data quality
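
The dead-letter-queue and retry pattern named in the responsibilities above can be sketched in plain Python. This is a minimal illustration, not Yahoo's implementation: the event shape, the `process_event` validation step, and the `MAX_RETRIES` budget are all assumptions made for the example.

```python
MAX_RETRIES = 3  # illustrative retry budget before an event is dead-lettered


def process_event(event: dict) -> dict:
    """Toy enrichment step: rejects events missing a user_id, standing in
    for the validation failures a real pipeline would encounter."""
    if "user_id" not in event:
        raise ValueError("missing user_id")
    return {**event, "enriched": True}


def run_with_dlq(events):
    """Process events with bounded retries; events that exhaust their
    retries are captured in a dead-letter list with the failure reason,
    rather than being dropped, so no data is silently lost."""
    processed, dead_letter = [], []
    for event in events:
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                processed.append(process_event(event))
                break
            except ValueError as err:
                if attempt == MAX_RETRIES:
                    # Preserve the original payload and error for replay/triage.
                    dead_letter.append({"event": event, "error": str(err)})


    return processed, dead_letter


ok, dlq = run_with_dlq([{"user_id": "u1"}, {"signal": "click"}])
```

In a production Beam or Kafka pipeline the dead-letter list would instead be a side output written to a separate topic or table; the key property shown here is that failures are preserved with context, not discarded.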

Requirements

  • Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or related technical field
  • 5+ years data engineering experience building production data systems
  • 3+ years hands-on experience with real-time/streaming data processing systems at scale
  • 2+ years with GCP (Dataflow, Pub/Sub, BigQuery, Spanner, GCS) or AWS equivalents (Kinesis, EMR, DynamoDB)

Technical Skills

  • Strong proficiency in Python, Java, or Scala for data pipeline development
  • Hands-on experience with Apache Kafka, Google Pub/Sub, or other distributed messaging platforms
  • Experience with Apache Beam, Google Cloud Dataflow, or Apache Spark Streaming for stream processing
  • Understanding of stream processing patterns: windowing, watermarks, exactly-once semantics, state management
  • SQL proficiency and experience with distributed databases (Spanner, Cassandra, DynamoDB)
  • Familiarity with data serialization formats: Avro, Protobuf, JSON, Parquet
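
The stream-processing patterns listed above (windowing, watermarks, late-data handling) can be sketched without any framework. This plain-Python illustration uses assumed values for the window size and allowed lateness; real Beam pipelines express the same idea declaratively with `FixedWindows` and allowed-lateness settings.

```python
from collections import defaultdict

WINDOW_SECS = 60       # tumbling window size (assumed for illustration)
ALLOWED_LATENESS = 30  # events this far behind the watermark are routed aside


def window_counts(events):
    """Assign (event_time, user_id) pairs to tumbling windows and count
    signals per window. The watermark is tracked as the max event time
    seen; events arriving more than ALLOWED_LATENESS behind it are
    collected as late instead of mutating already-closed windows."""
    counts = defaultdict(int)
    late = []
    watermark = 0
    for ts, user_id in events:
        watermark = max(watermark, ts)
        if ts < watermark - ALLOWED_LATENESS:
            late.append((ts, user_id))
            continue
        window_start = (ts // WINDOW_SECS) * WINDOW_SECS
        counts[window_start] += 1
    return dict(counts), late


# Out-of-order input: the event at t=10 arrives after the watermark has
# advanced to 70, so it falls outside the allowed lateness and is set aside.
counts, late = window_counts([(5, "a"), (70, "b"), (10, "c"), (130, "a")])
```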

Competencies

  • Strong problem-solving skills and operational excellence mindset in production environments
  • Demonstrated ability to deliver reliable data pipelines on schedule with minimal guidance
  • Excellent collaboration across engineering, product, and infrastructure teams
  • Team-level impact with ability to influence technical decisions within immediate team
  • Understanding of data governance and privacy compliance (GDPR, CCPA) in data pipelines

Preferred Qualifications

  • Experience with Cloud Spanner writes at high throughput (millions of writes per second)
  • Knowledge of data governance frameworks, privacy compliance, and PII handling best practices
  • Prior experience in adtech, identity platforms, or consumer data systems processing user behavioral data
  • Familiarity with data quality frameworks: Great Expectations, Deequ, or custom validation systems
  • Understanding of event-driven architectures, change data capture (CDC), and event sourcing patterns
  • Experience with schema evolution, schema registries (Confluent Schema Registry, Apicurio)
  • Contributions to open-source streaming projects (Kafka, Beam, Flink) or data engineering communities
  • Self-driven and detail-oriented, with excellent multitasking abilities in fast-paced environments
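
Schema evolution with reader-side defaults, mentioned above, can be sketched in plain Python. The field names and the "v2" schema here are hypothetical; real systems would express this through Avro schemas and a registry rather than a dict.

```python
# Avro-style reader-schema resolution, sketched in plain Python: a field
# added in schema v2 gets a default when decoding records written under v1.
# A value of None marks a required field with no default (a simplification
# made for this sketch).
READER_SCHEMA = {
    "user_id": None,      # required, no default
    "signal": None,       # required, no default
    "region": "unknown",  # added in v2, defaulted for older records
}


def decode(record: dict) -> dict:
    """Resolve a record against READER_SCHEMA: fill defaults for missing
    evolved fields, and reject records missing required ones."""
    out = {}
    for field, default in READER_SCHEMA.items():
        if field in record:
            out[field] = record[field]
        elif default is not None:
            out[field] = default
        else:
            raise KeyError(f"required field missing: {field}")
    return out


v1_record = {"user_id": "u1", "signal": "click"}  # written before 'region' existed
resolved = decode(v1_record)
```

This is the backward-compatibility guarantee a schema registry enforces automatically: new fields must carry defaults so old data remains readable.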

The material job duties and responsibilities of this role include those listed above as well as adhering to Yahoo policies; exercising sound judgment; working effectively, safely and inclusively with others; exhibiting trustworthiness and meeting expectations; and safeguarding business operations and brand integrity.

Benefits & conditions

The compensation for this position ranges from $128,250.00 - $266,875.00/yr and will vary depending on factors such as your location, skills, and experience. The compensation package may also include incentive compensation opportunities in the form of a discretionary annual bonus or commissions. Our comprehensive benefits include healthcare, a great 401k, backup childcare, education stipends and much (much) more.

About the company

Yahoo serves as a trusted guide for hundreds of millions of people globally, helping them achieve their goals online through our portfolio of iconic products. For advertisers, Yahoo Advertising offers omnichannel solutions and powerful data to engage with our brands and deliver results.

About the team

Our platform is the foundational identity and data layer for 900M+ monthly active users, serving 2.5B+ profiles at massive scale. We are building a predictive, identity-centric insights engine, ensuring our audience is understood with precision to deliver hyper-personalized experiences and advertising solutions across all our digital properties. Our mission centers on first-party data strategy: capturing, enriching, and activating audience signals to build a 360-degree view of every user. We operate under a Privacy-by-Design philosophy, adhering to global regulations (GDPR, CCPA) and industry security standards, while leveraging a cloud-native stack across GCP (BigQuery, Spanner, Dataflow, Composer, GKE) and AWS, with modern MLOps practices to deliver measurable business impact.

Apply for this position