Senior Data Engineer

Edelman
10 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote

Tech stack

API
Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Cloud Computing
Code Review
Continuous Integration
Information Engineering
Data Infrastructure
Data Systems
Decision Support Systems
DevOps
Distributed Data Store
Amazon DynamoDB
Python
Metadata
Operational Databases
SQL Databases
Data Streaming
Software Version Management
Data Logging
Snowflake
Spark
Electronic Medical Records
Generative AI
PySpark
Infrastructure Automation Frameworks
Kafka
Data Management
Terraform
Data Pipelines
Databricks

Job description

You'll serve as a technical anchor for data initiatives: owning complex pipelines, shaping platform direction, and guiding engineers through best practices in data, cloud, and AI-enabled systems. You'll partner closely with the Data Engineering lead, focusing on technical direction and delivery.

Why You'll Love Working with Us: At Edelman, we believe in fostering a collaborative and open environment where every team member's voice is valued. Our data engineering team thrives on building robust, scalable, and efficient data systems to power insightful decision-making.

We are at an exciting point in our journey, and you'll work at the intersection of data, AI, and real-world business impact. Our data team is building modern platforms that enable insight, innovation, and responsible AI adoption across the organization. You'll have the autonomy to shape solutions, the trust to lead technically, and the support to keep pushing the platform forward.

  • Lead the design and evolution of scalable data architectures, supporting batch, streaming, and AI-driven workloads.
  • Own end-to-end data pipelines, from ingestion and transformation through to serving analytics and ML/GenAI use cases.
  • Define and enforce data engineering standards across modelling, orchestration, observability, and reliability.

Technical Leadership & Collaboration

  • Mentor and guide data engineers through code reviews, design discussions, and architectural decisions.
  • Translate business problems into scalable technical solutions, balancing speed, quality, and long-term maintainability.
  • Drive the use of agent-based solutions across the development lifecycle, designing autonomous and semi-autonomous workflows that deliver measurable business value.
  • Clearly document architectures and workflows to support shared understanding and operational excellence.

Data Engineering & Cloud Execution

  • Build and optimize data pipelines using Databricks, Spark (PySpark), Snowflake, Apache Airflow, and Terraform.
  • Design performant data models and lakehouse structures (Delta, Unity Catalog) for analytics and downstream AI consumption.
  • Leverage AWS-native services (e.g. S3, EMR, DynamoDB) to deliver cost-efficient, production-grade solutions.
  • Implement robust data quality, testing, and monitoring (e.g. Great Expectations, logging, alerting).

Generative AI

  • Design data pipelines that power Generative AI applications, including data preparation, enrichment, and feature generation.
  • Integrate 3rd-party APIs into data workflows for use cases such as:
    • Automated data enrichment and classification
    • Intelligent summarization and insight generation
    • Metadata generation and semantic search enablement
    • AI-assisted reporting and decision support
  • Collaborate with ML and Product teams on prompt design, evaluation, and governance, ensuring responsible and reliable AI usage.
  • Support production AI systems through data versioning, lineage, and lifecycle management.

Requirements

  • 4+ years building and operating enterprise-scale data platforms, with ownership across the full lifecycle.
  • Strong hands-on experience with Databricks, Snowflake, Airflow, and distributed data processing.
  • Advanced Python and SQL, with production-quality engineering standards.
  • Proven experience designing and maintaining cloud-native data infrastructure on AWS.
  • Experience integrating Generative AI models (OpenAI, Claude or similar) into production data or analytics workflows.
  • Solid understanding of CI/CD, Infrastructure as Code, DevOps practices, and operating reliable data systems at scale.
  • Stay current on advances in code agents and automation, and guide their responsible adoption across the development lifecycle.
  • Exposure to streaming architectures (Kafka or equivalent) is advantageous.
  • A leadership mindset: proactive, pragmatic, and comfortable influencing technical direction.
  • Excellent communication skills and the ability to work effectively across disciplines.

About the company

Edelman is a voice synonymous with trust, reimagining a future where the currency of communication is action. Our culture thrives on three promises: boldness is possibility, empathy is progress, and curiosity is momentum. At Edelman, we understand that diversity, equity, inclusion and belonging (DEIB) transform our colleagues, our company, our clients, and our communities. We are in relentless pursuit of an equitable and inspiring workplace that is respectful of all, reflects and represents the world in which we live, and fosters trust, collaboration and belonging.

Apply for this position