Senior Software Engineer

ZoomInfo Technologies LLC
Waltham, United States of America
27 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$220K

Job location

Waltham, United States of America

Tech stack

Java
Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Batch Processing
Google BigQuery
Code Review
Information Engineering
ETL
Data Flow Control
Python
Operational Databases
Cloudera
Software Engineering
Data Streaming
Data Processing
Snowflake
Spark
Multi-Cloud
Kubernetes
Information Technology
Kafka
Terraform
Stream Processing
Data Pipelines
Apache Beam

Job description

  • Own and drive the design and implementation of large-scale data pipelines that ingest, validate, transform, and enrich first-party contributed data from CRM systems, email providers, and recording platforms
  • Architect resilient ETL/ELT pipelines handling massive volumes of contact data, opportunity metadata, engagement signals, and activity patterns
  • Take initiative on complex technical challenges - identify problems proactively, propose solutions, and execute with urgency
  • Build streaming and batch processing systems for real-time and scheduled data flows using Kafka, Pub/Sub, Apache Beam, or similar
  • Establish data quality frameworks, ensuring accuracy, consistency, and completeness across contributed data
  • Define and implement observability, monitoring, and alerting for pipeline health, throughput, cost, and data quality metrics
  • Drive technical design decisions and guide implementations from concept to production
  • Mentor and elevate other engineers on the team through code reviews, pairing, and knowledge sharing
  • Partner with product, platform, and data science teams to deliver high-impact features on tight timelines
  • Influence technical direction across the Contributory Network initiative and the broader Data Acquisition organization

Requirements

  • 5+ years of professional software engineering experience with a strong focus on data engineering
  • Proven track record of building and operating production data pipelines at scale
  • Deep experience with Python and/or Java
  • Hands-on expertise with data processing technologies: Apache Beam, Apache Airflow, Spark, Google Dataflow, or DataProc
  • Strong experience with streaming systems (Apache Kafka, Google Pub/Sub, or similar)
  • Experience with cloud platforms, preferably GCP (BigQuery, GKE, Dataflow)
  • Solid understanding of data modeling, schema evolution, and data quality management
  • Experience designing and operating large-scale ETL/ELT pipelines processing terabytes of data

Technical Leadership & Drive

  • Demonstrated ability to drive complex technical initiatives end to end - from scoping through delivery
  • Track record of operating with high autonomy and a bias toward action
  • Ability to push through ambiguity, make pragmatic decisions under uncertainty, and unblock progress
  • Experience influencing technical direction within a team or across teams
  • Strong code review and technical mentorship skills
  • Proven ability to balance quality with velocity - you ship, iterate, and improve

General

  • Bachelor's degree in Computer Science, Software Engineering, or a related field
  • Exceptional interpersonal skills with a proven ability to build productive cross-departmental relationships
  • Strong communicator who can explain complex systems to diverse audiences
  • Entrepreneurial mindset - comfortable pioneering new capabilities and wearing multiple hats

Nice to have

  • Experience with Kubernetes (GKE/EKS) for running distributed workloads
  • Familiarity with multi-cloud environments (GCP + AWS)
  • Experience with Snowflake, BigQuery, Starburst/Trino, or similar query engines
  • Experience with Terraform or infrastructure-as-code
  • Knowledge of data integration patterns with CRM systems, email providers, or recording platforms
  • Exposure to AI/LLM-based data processing approaches
  • Experience in a B2B data company or data-as-a-product environment
  • Experience with Apache Spark at expert level

About the role

We're looking for a Senior Software Engineer to join the Contributory Network team within Data Acquisition - one of ZoomInfo's most strategically important initiatives. The Contributory Network is building the platform that ingests, transforms, and processes first-party data contributed by thousands of customers through their CRM, email, and recording provider integrations. This data powers a suite of intelligence products - from competitive benchmarking and buyer committee insights to predictive market timing - that are impossible to build any other way.

This role demands a driver. We need someone who takes ownership, pushes through ambiguity, unblocks themselves and others, and relentlessly moves work forward. You won't wait for perfect specs or complete clarity - you'll carve the path, make decisions, and deliver. If you thrive when given a hard problem and the autonomy to solve it, this is your role.